Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanisms have long been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Tschentscher, Nadja; Hauk, Olaf
2015-01-01
Mental arithmetic is a powerful paradigm to study problem solving using neuroimaging methods. However, the evaluation of task complexity varies significantly across neuroimaging studies. Most studies have parameterized task complexity by objective features such as number size. Only a few studies have used subjective rating procedures. Using fMRI, we provided evidence that strategy self-reports control for task complexity across arithmetic conditions better than objective features do (Tschentscher and Hauk, 2014). Here, we analyzed the relative predictive value of self-reported strategies and objective features for performance in addition and multiplication tasks, using a paradigm designed for neuroimaging research. We found a superiority of strategy ratings over objective features as a predictor of performance. In a Principal Component Analysis on reaction times, the first component explained over 90 percent of the variance, and factor loadings reflected percentages of self-reported strategies well. In multiple regression analyses on reaction times, self-reported strategies performed equally well or better than objective features, depending on the operation type. A Receiver Operating Characteristic (ROC) analysis confirmed this result: reaction times classified task complexity better when complexity was defined by individual ratings. This suggests that participants' strategy ratings are reliable predictors of arithmetic complexity and should be taken into account in neuroimaging research. PMID:26321997
ERIC Educational Resources Information Center
Valdez, Pablo; Reilly, Thomas; Waterhouse, Jim
2008-01-01
Cognitive performance is affected by an individual's characteristics and the environment, as well as by the nature of the task and the amount of practice at it. Mental performance tests range in complexity and include subjective estimates of mood, simple objective tests (reaction time), and measures of complex performance that require decisions to…
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected overall performance (Experiment 2). Participants made more errors on invalid than on valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid-cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
[Influence of mental rotation of objects on psychophysiological functions of women].
Chikina, L V; Fedorchuk, S V; Trushina, V A; Ianchuk, P I; Makarchuk, M Iu
2012-01-01
An integral part of modern working life is work with computer systems, which in turn produces nervous-emotional tension. Hence, monitoring the psychophysiological state of workers, with the aims of preserving health and sustaining successful activity, and applying rehabilitative measures are pressing problems. It is currently known that the efficiency of rehabilitative procedures rises when a complex of restorative programs is applied. Our previous investigation showed that mental rotation can compensate for the consequences of nervous-emotional tension. In the present work we therefore investigated how a complex of spatial tasks that we developed influences the psychophysiological performance of women, for whom psycho-emotional tension arising from the use of computer technologies is more pronounced, and for whom the mental rotation procedure is a more complex task, than for men. The complex of spatial tasks included mental rotation of simple objects (letters and digits), mental rotation of complex objects (geometric figures), and mental rotation of complex objects involving short-term memory. Performing the complex of spatial tasks reduced the times of simple and complex sensorimotor responses, improved short-term memory parameters and mental work capacity, and improved the balance of nervous processes. Collectively, mental rotation of objects can be recommended as a rehabilitative resource for compensating the consequences of psycho-emotional strain, in both men and women.
Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei
2016-01-01
Productivity can be greatly improved by converting a traditional assembly line to a seru system, especially in business environments with short product life cycles, uncertain product types, and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule for seru load. We select ten scheduling rules commonly used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexity of line-seru conversion for the ten scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting for solving the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
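The Pareto-optimality criterion underlying such multi-objective algorithms can be illustrated with a minimal sketch (this is not the authors' improved algorithm; the candidate solutions and the two minimized objectives below are hypothetical):

```python
def pareto_front(solutions):
    """Return the non-dominated solutions from a list of objective
    tuples, where every objective is to be minimized."""
    front = []
    for s in solutions:
        # s is dominated if some other solution is at least as good
        # in every objective (and differs in at least one).
        dominated = any(
            other != s and all(o <= v for o, v in zip(other, s))
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical candidates scored on (cost, makespan), both minimized:
candidates = [(3, 5), (4, 4), (2, 7), (5, 3), (4, 6)]
print(pareto_front(candidates))  # -> [(3, 5), (4, 4), (2, 7), (5, 3)]
```

This brute-force filter is quadratic in the number of candidates; the improved exact algorithms the abstract describes reduce time or space complexity rather than change the Pareto criterion itself.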
Software Techniques for Non-Von Neumann Architectures
1990-01-01
[Hardware summary: communication topology, programmable Benes network / hypercubic lattice for QCD; centralized control; static assignment; shared memory; universal synchronization; up to 566 CPUs; processor boards each comprising 4 floating-point units and 2 multipliers; 32-bit floating-point CPU chips; 11.4 Gflops performance; target market, quantum chromodynamics (QCD).] For complex functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects.
Shape and color conjunction stimuli are represented as bound objects in visual working memory.
Luria, Roy; Vogel, Edward K
2011-05-01
The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building blocks of visual WM, so that adding an extra feature to an object does not incur any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity, suggesting that they may not be represented as bound objects. Additionally, it has been argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (CDA) as an electrophysiological marker of WM capacity to test these alternatives to the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance across displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.
Improved methods of performing coherent optical correlation
NASA Technical Reports Server (NTRS)
Husain-Abidi, A. S.
1972-01-01
Coherent optical correlators are described in which complex spatial filters are recorded by a quasi-Fourier-transform method. The high-pass spatial filtering effects (due to the dynamic range of photographic films) normally encountered in Vander Lugt type complex filters are not present in this system. Experimental results for both transmissive and reflective objects are presented. Experiments were also performed by illuminating the object with diffused light. A correlator using paraboloidal mirror segments as the Fourier-transforming element is also described.
ERIC Educational Resources Information Center
Housen, Alex, Ed.; Kuiken, Folkert, Ed.; Vedder, Ineke, Ed.
2012-01-01
Research into complexity, accuracy and fluency (CAF) as basic dimensions of second language performance, proficiency and development has received increased attention in SLA. However, the larger picture in this field of research is often obscured by the breadth of scope, multiple objectives and lack of clarity as to how complexity, accuracy and…
Community detection in complex networks by using membrane algorithm
NASA Astrophysics Data System (ADS)
Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren
Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve community detection in complex networks. The proposed algorithm is based on membrane systems, which consist of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules comprise evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects and employ the differential evolution algorithm to evolve them. Communication rules implement the information exchange among membranes. Finally, the proposed algorithm is evaluated on synthetic and real-world networks with known real partitions, and on large-scale networks whose real partitions are unknown. The experimental results indicate the superior performance of the proposed algorithm in comparison with the other algorithms tested.
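The modularity score used to evaluate candidate partitions can be sketched in pure Python. The two-triangle toy graph below is an illustrative assumption, not from the paper:

```python
def modularity(nodes, edges, community):
    """Newman modularity Q of a partition: the fraction of edges falling
    within communities minus the expectation under degree-preserving
    random rewiring.  `community` maps node -> community label;
    `edges` is a list of undirected (u, v) pairs."""
    m = len(edges)
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    edge_set = {frozenset(e) for e in edges}
    q = 0.0
    for i in nodes:
        for j in nodes:
            if community[i] != community[j]:
                continue
            # A_ij is 1 for an edge between distinct nodes, else 0.
            a_ij = 1.0 if i != j and frozenset((i, j)) in edge_set else 0.0
            q += a_ij - degree[i] * degree[j] / (2.0 * m)
    return q / (2.0 * m)

# Two triangles joined by a single bridge edge (toy example):
nodes = range(6)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
parts = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(nodes, edges, parts), 3))  # -> 0.357
```

In the membrane algorithm described above, a score like this would serve as the fitness that the evolutionary rules try to improve.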
Battling Arrow's Paradox to Discover Robust Water Management Alternatives
NASA Astrophysics Data System (ADS)
Kasprzyk, J. R.; Reed, P. M.; Hadka, D.
2013-12-01
This study explores whether or not Arrow's Impossibility Theorem, a theory of social choice, affects the formulation of water resources systems planning problems. The theorem concerns creating an aggregation function for voters choosing among three or more alternatives for society. The Impossibility Theorem is also called Arrow's Paradox because, when aggregating the preferences of multiple voters, a single individual's preference can end up dictating the group decision. In the context of water resources planning, our study is motivated by recent theoretical work that has generalized the insights of Arrow's Paradox to the design of complex engineered systems. In this framing of the paradox, states of society are equivalent to water planning or design alternatives, and the voters are equivalent to multiple planning objectives (e.g., minimizing cost or maximizing performance). Seen from this point of view, multi-objective water planning problems are functionally equivalent to the social choice problem described above. Traditional solutions to such multi-objective problems aggregate multiple performance measures into a single mathematical objective. The Theorem implies that, under such an aggregation, a subset of performance concerns will inadvertently dictate the overall design evaluations in unpredictable ways. We suggest that instead of aggregation, an explicitly many-objective approach to water planning can help overcome the challenges posed by Arrow's Paradox. Many-objective planning explicitly disaggregates measures of performance while supporting the discovery of planning tradeoffs, employing multiobjective evolutionary algorithms (MOEAs) to find solutions. Using MOEA-based search to address Arrow's Paradox requires that the MOEAs perform robustly with increasing problem complexity, such as additional objectives and/or decisions.
This study uses comprehensive diagnostic evaluation of MOEA search performance across multiple problem formulations (both aggregated and many-objective) to show whether or not aggregating performance measures biases decision making. In this study, we explore this hypothesis using an urban water portfolio management case study in the Lower Rio Grande Valley. The diagnostic analysis shows that modern self-adaptive MOEA search is efficient, effective, and reliable for the more complex many-objective LRGV planning formulations. Results indicate that although many classical water systems planning frameworks seek to account for multiple objectives, the common practice of reducing the problem into one or more highly aggregated performance measures can severely and negatively bias planning decisions.
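The aggregation bias the study warns about can be seen in a toy sketch (the plans, scores, and weights below are hypothetical, not the LRGV case study): two equally defensible weightings of the same objectives select different "best" plans.

```python
# Two hypothetical water-plan alternatives, each scored on two
# objectives (say, affordability and reliability), higher is better:
plans = {"A": (0.9, 0.2), "B": (0.5, 0.6)}

def aggregate(scores, weights):
    """Collapse multiple objectives into one number via a weighted sum,
    the classical aggregation the abstract argues against."""
    return sum(s * w for s, w in zip(scores, weights))

# The 'best' plan flips depending on the (arbitrary) weights chosen:
best_cost_heavy = max(plans, key=lambda p: aggregate(plans[p], (0.8, 0.2)))
best_balanced = max(plans, key=lambda p: aggregate(plans[p], (0.4, 0.6)))
print(best_cost_heavy, best_balanced)  # -> A B
```

A many-objective formulation would instead keep both plans on the table as non-dominated alternatives and expose the tradeoff to the decision maker.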
Richardson, Miles; Hunt, Thomas E; Richardson, Cassandra
2014-12-01
This paper presents a methodology to control construction task complexity and examines the relationships between construction performance and spatial and mathematical abilities in children. The study included three groups of children (N = 96): ages 7-8, 10-11, and 13-14 years. Each group constructed seven pre-specified objects. The study replicated and extended previous findings indicating that the extent of component symmetry and variety, and the number of components per object and available for selection, significantly predicted construction task difficulty. Results showed that this methodology is a valid and reliable technique for assessing and predicting construction play task difficulty. Furthermore, construction play performance predicted mathematical attainment independently of spatial ability.
Remediation management of complex sites using an adaptive site management approach.
Price, John; Spreng, Carl; Hawley, Elisabeth L; Deeb, Rula
2017-12-15
Complex sites require a disproportionate amount of resources for environmental remediation and long timeframes to achieve remediation objectives, due to their complex geologic, hydrogeologic, geochemical, and contaminant-related conditions, the large scale of contamination, and/or non-technical challenges. A team of state and federal environmental regulators, federal agency representatives, industry experts, community stakeholders, and academics recently worked together as an Interstate Technology & Regulatory Council (ITRC) team to compile resources and create new guidance on the remediation management of complex sites. This article summarizes the ITRC team's recommended process for addressing complex sites through an adaptive site management approach. The team provided guidance for site managers and other stakeholders to evaluate site complexities and determine site remediation potential, i.e., whether an adaptive site management approach is warranted. Adaptive site management was described as a comprehensive, flexible approach to iteratively evaluate and adjust the remedial strategy in response to remedy performance. Key aspects of adaptive site management were described, including tools for revising and updating the conceptual site model (CSM); the importance of setting interim objectives to define short-term milestones on the journey to achieving site objectives; establishing a performance model and metrics to evaluate progress toward meeting interim objectives; comparing actual with predicted progress during scheduled periodic evaluations; and establishing decision criteria for when and how to adapt, modify, or revise the remedial strategy in response to remedy performance. Key findings will be published in an ITRC Technical and Regulatory guidance document in 2017, and free training webinars will be conducted. More information is available at www.itrc-web.org. Copyright © 2017 Elsevier Ltd. All rights reserved.
Energy absorption capabilities of complex thin walled structures
NASA Astrophysics Data System (ADS)
Tarlochan, F.; AlKhatib, Sami
2017-10-01
Thin-walled structures have been used for energy absorption during a crash event. A lot of work has been done on tubular structures. Due to limitations of manufacturing processes, complex geometries were previously dismissed as potential solutions; with the advancement of metal additive manufacturing, such geometries can now be realized. Motivated by this, the objective of this study is to investigate computationally the crash performance of complex tubular structures. Five designs were considered. It was found that complex geometries have better crashworthiness performance than the standard tubular structures currently in use.
Gaze entropy reflects surgical task load.
Di Stasi, Leandro L; Diaz-Piedra, Carolina; Rieiro, Héctor; Sánchez Carrión, José M; Martin Berrido, Mercedes; Olivares, Gonzalo; Catena, Andrés
2016-11-01
Task (over-)load imposed on surgeons is a main contributing factor to surgical errors. Recent research has shown that gaze metrics represent a valid and objective index to assess operator task load in non-surgical scenarios. Thus, gaze metrics have the potential to improve workplace safety by providing accurate measurements of task load variations. However, the direct relationship between gaze metrics and surgical task load has not been investigated yet. We studied the effects of surgical task complexity on the gaze metrics of surgical trainees. We recorded the eye movements of 18 surgical residents, using a mobile eye tracker system, during the performance of three high-fidelity virtual simulations of laparoscopic exercises of increasing complexity: the Clip Applying exercise, the Cutting Big exercise, and the Translocation of Objects exercise. We also measured performance accuracy and subjective ratings of complexity. Gaze entropy and velocity increased linearly with task complexity: visual exploration patterns became less stereotyped (i.e., more random) and faster during the more complex exercises. Residents performed the Clip Applying and Cutting Big exercises better than the Translocation of Objects exercise, and their perceived task complexity differed accordingly. Our data show that gaze metrics are a valid and reliable surgical task load index. These findings have the potential to improve patient safety by providing accurate measurements of surgeon task (over-)load and might provide future indices to assess residents' learning curves, independently of expensive virtual simulators or time-consuming expert evaluation.
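A gaze-entropy metric of this kind can be sketched as the Shannon entropy of binned fixation positions. The bin size and coordinates below are illustrative assumptions, not the authors' exact metric:

```python
from collections import Counter
from math import log2

def gaze_entropy(fixations, bin_size=100):
    """Shannon entropy (bits) of the spatial distribution of gaze
    fixations.  Each fixation is an (x, y) pixel position; the display
    is discretized into bin_size x bin_size cells.  Stereotyped scanning
    concentrates fixations in few cells (low entropy); random
    exploration spreads them out (high entropy)."""
    cells = Counter((x // bin_size, y // bin_size) for x, y in fixations)
    n = len(fixations)
    return sum(-(c / n) * log2(c / n) for c in cells.values())

# Stereotyped gaze: every fixation lands in the same cell.
print(gaze_entropy([(10, 10)] * 8))                                # -> 0.0
# Exploratory gaze: fixations spread evenly over four cells.
print(gaze_entropy([(10, 10), (150, 10), (10, 150), (150, 150)]))  # -> 2.0
```

On this toy definition, the "less stereotyped" exploration reported for the more complex exercises corresponds to a higher entropy value.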
Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.
2014-01-01
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294
Factors influencing visual search in complex driving environments.
DOT National Transportation Integrated Search
2016-10-01
The objective of this study was to describe and model the effects of varied roadway environment factors on drivers perceived complexity, with the goal of further understanding conditions for optimal driver behavior and performance. This was invest...
KBGIS-II: A knowledge-based geographic information system
NASA Technical Reports Server (NTRS)
Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj
1986-01-01
The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.
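The quadtree representation described above can be illustrated with a minimal point quadtree supporting rectangular range queries (a toy version, not KBGIS-II's implementation; the coordinates and capacity are illustrative assumptions):

```python
class QuadTree:
    """Minimal point quadtree over a square region, supporting
    rectangular range queries with subtree pruning."""

    def __init__(self, x0, y0, size, capacity=4):
        self.x0, self.y0, self.size = x0, y0, size
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, x, y):
        if not (self.x0 <= x < self.x0 + self.size and
                self.y0 <= y < self.y0 + self.size):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()
        return any(c.insert(x, y) for c in self.children)

    def _split(self):
        # Divide the square into four quadrants and push points down.
        h = self.size / 2
        self.children = [QuadTree(self.x0 + dx * h, self.y0 + dy * h, h,
                                  self.capacity)
                         for dx in (0, 1) for dy in (0, 1)]
        for p in self.points:
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        # Prune subtrees whose bounding square misses the query window.
        if (qx1 < self.x0 or qx0 >= self.x0 + self.size or
                qy1 < self.y0 or qy0 >= self.y0 + self.size):
            return []
        hits = [p for p in self.points
                if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1]
        if self.children:
            for c in self.children:
                hits += c.query(qx0, qy0, qx1, qy1)
        return hits

# Build a toy index and query the lower-left window.
qt = QuadTree(0, 0, 10)
for p in [(1, 1), (2, 2), (8, 8), (9, 1), (3, 7)]:
    qt.insert(*p)
print(sorted(qt.query(0, 0, 4, 4)))  # -> [(1, 1), (2, 2)]
```

The pruning step is what keeps the average-case cost of spatial search low: whole quadrants that cannot contain a match are never visited.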
Online Sentence Reading in People With Aphasia: Evidence From Eye Tracking
Knilans, Jessica; DeDe, Gayle
2015-01-01
Purpose: There is a lot of evidence that people with aphasia have more difficulty understanding structurally complex sentences (e.g., object clefts) than simpler sentences (subject clefts). However, subject clefts also occur more frequently in English than object clefts. Thus, it is possible that both structural complexity and frequency affect how people with aphasia understand these structures. Method: Nine people with aphasia and 8 age-matched controls participated in the study. The stimuli consisted of 24 object cleft and 24 subject cleft sentences. The task was eye tracking during reading, which permits a more fine-grained analysis of reading performance than measures such as self-paced reading. Results: As expected, controls had longer reading times for critical regions in object cleft sentences compared with subject cleft sentences. People with aphasia showed the predicted effects of structural frequency. Effects of structural complexity in people with aphasia did not emerge on their first pass through the sentence but were observed when they were rereading critical regions of complex sentences. Conclusions: People with aphasia are sensitive to both structural complexity and structural frequency when reading. However, people with aphasia may use different reading strategies than controls when confronted with relatively infrequent and complex sentence structures. PMID:26383779
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex-6 ML605 evaluation board with an XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with less than a 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
R&D 100, 2016: Pyomo 4.0 – Python Optimization Modeling Objects
Hart, William; Laird, Carl; Siirola, John
2018-06-13
Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
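Calibration objective functions of the kind mentioned here compare simulated to observed discharge. The Nash-Sutcliffe efficiency below is one standard example; the abstract does not say it was among the four the authors used.

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean
    of the observations, and negative values are worse than that mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

obs = [1.0, 3.0, 5.0, 3.0, 1.0]   # toy hydrograph, invented values
perfect = nse(obs, obs)
```

A consistency check of the sort the study argues for would supplement such a fit measure with signature-based constraints rather than rely on it alone.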
2014-01-01
This paper analyses how different coordination modes and different multiobjective decision making approaches interfere with each other in hierarchical organizations. The investigation is based on an agent-based simulation. We apply a modified NK-model in which we map multiobjective decision making as adaptive walk on multiple performance landscapes, whereby each landscape represents one objective. We find that the impact of the coordination mode on the performance and the speed of performance improvement is critically affected by the selected multiobjective decision making approach. In certain setups, the performances achieved with the more complex multiobjective decision making approaches turn out to be less sensitive to the coordination mode than the performances achieved with the less complex multiobjective decision making approaches. Furthermore, we present results on the impact of the nature of interactions among decisions on the achieved performance in multiobjective setups. Our results give guidance on how to control the performance contribution of objectives to overall performance and answer the question of how effectively certain multiobjective decision making approaches perform under certain circumstances (coordination mode and interdependencies among decisions). PMID:25152926
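The adaptive walk on multiple performance landscapes can be sketched with random landscapes standing in for the authors' modified NK-model. The weighted-sum scalarization is just one of the multiobjective approaches such a study might compare, and all parameters here are invented.

```python
import itertools
import random

random.seed(0)
N = 6  # number of binary decisions
configs = list(itertools.product([0, 1], repeat=N))
# Two objectives, each a random fitness landscape over the 2^N configurations.
landscapes = [{c: random.random() for c in configs} for _ in range(2)]

def fitness(cfg, weights=(0.5, 0.5)):
    # Weighted-sum scalarization: one simple multiobjective approach.
    return sum(w * land[cfg] for w, land in zip(weights, landscapes))

def adaptive_walk(cfg, steps=100):
    # Flip one decision at a time; keep the move only if fitness improves.
    for _ in range(steps):
        i = random.randrange(N)
        neighbour = cfg[:i] + (1 - cfg[i],) + cfg[i + 1:]
        if fitness(neighbour) > fitness(cfg):
            cfg = neighbour
    return cfg

start = configs[0]
end = adaptive_walk(start)
```

A true NK-model would derive each landscape from K-wise interactions among decisions rather than independent random draws; the walk logic is the same.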
Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback
NASA Astrophysics Data System (ADS)
Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai
2012-01-01
With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large quantity of these scenes are never retrieved and used. Thus automatic retrieval of desired image-data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze all the image information mining (IIM) methods so that it is easier for a user to select a method depending upon his/her requirements. We concentrate our study only on high-resolution SAR images, and we propose to use InSAR observations instead of only single look complex (SLC) images for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with less coherence, such as areas with vegetation cover, SLC images exhibit better performance. We demonstrate IIM performance comparison using complex-Gauss Markov Random Fields as texture descriptor for image patches and SVM relevance feedback.
ERIC Educational Resources Information Center
Balthazar, Catherine H.; Scott, Cheryl M.
2018-01-01
Purpose: This study investigated the effects of a complex sentence treatment at 2 dosage levels on language performance of 30 school-age children ages 10-14 years with specific language impairment. Method: Three types of complex sentences (adverbial, object complement, relative) were taught in sequence in once or twice weekly dosage conditions.…
Wong, Chi Wah; Olafsson, Valur; Plank, Markus; Snider, Joseph; Halgren, Eric; Poizner, Howard; Liu, Thomas T.
2014-01-01
In the real world, learning often proceeds in an unsupervised manner without explicit instructions or feedback. In this study, we employed an experimental paradigm in which subjects explored an immersive virtual reality environment on each of two days. On day 1, subjects implicitly learned the location of 39 objects in an unsupervised fashion. On day 2, the locations of some of the objects were changed, and object location recall performance was assessed and found to vary across subjects. As prior work had shown that functional magnetic resonance imaging (fMRI) measures of resting-state brain activity can predict various measures of brain performance across individuals, we examined whether resting-state fMRI measures could be used to predict object location recall performance. We found a significant correlation between performance and the variability of the resting-state fMRI signal in the basal ganglia, hippocampus, amygdala, thalamus, insula, and regions in the frontal and temporal lobes, regions important for spatial exploration, learning, memory, and decision making. In addition, performance was significantly correlated with resting-state fMRI connectivity between the left caudate and the right fusiform gyrus, lateral occipital complex, and superior temporal gyrus. Given the basal ganglia's role in exploration, these findings suggest that tighter integration of the brain systems responsible for exploration and visuospatial processing may be critical for learning in a complex environment. PMID:25286145
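The brain-behavior findings above rest on correlating a resting-state measure with recall performance across subjects. A minimal Pearson correlation on invented toy numbers (not the authors' fMRI data) illustrates the computation.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy across-subject data: resting-state variability vs. recall score.
variability = [0.8, 1.1, 1.4, 1.9, 2.3]
recall = [0.35, 0.42, 0.55, 0.61, 0.78]
r = pearson_r(variability, recall)
```

In the actual study the significance of such correlations across many voxels or regions would also need multiple-comparison control.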
EEG signatures accompanying auditory figure-ground segregation
Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István
2017-01-01
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185
Networks consolidation program: Maintenance and Operations (M&O) staffing estimates
NASA Technical Reports Server (NTRS)
Goodwin, J. P.
1981-01-01
The Mark IV-A consolidates deep space and highly elliptical Earth orbiter (HEEO) mission tracking and implements centralized control and monitoring at the deep space communications complexes (DSCC). One of the objectives of the network design is to reduce maintenance and operations (M&O) costs. To determine if the system design meets this objective, an M&O staffing model for Goldstone was developed, which was used to estimate the staffing levels required to support the Mark IV-A configuration. The study was performed for the Goldstone complex, and the program office translated these estimates for the overseas complexes to derive the network estimates.
NASA Astrophysics Data System (ADS)
Graham, James; Ternovskiy, Igor V.
2013-06-01
We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human-autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low-level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.
Using measures of information content and complexity of time series as hydrologic metrics
USDA-ARS?s Scientific Manuscript database
The information theory has been previously used to develop metrics that allowed to characterize temporal patterns in soil moisture dynamics, and to evaluate and to compare performance of soil water flow models. The objective of this study was to apply information and complexity measures to characte...
1988-09-30
the symbolic racism research has directly assessed value attributions or measured their relationship to attitudes or behavior, which seems to be a...of attitude, evaluation, or emotion, while others involve the ability to make complex judgments, perform complex behaviors, or be characterized by a...adequate choice of the proper behavior to perform. Attitudes, values. Attitudes are evaluations of things, including people, objects, or behaviors
Are we under-utilizing the talents of primary care personnel? A job analytic examination
Hysong, Sylvia J; Best, Richard G; Moore, Frank I
2007-01-01
Background Primary care staffing decisions are often made unsystematically, potentially leading to increased costs, dissatisfaction, turnover, and reduced quality of care. This article aims to (1) catalogue the domain of primary care tasks, (2) explore the complexity associated with these tasks, and (3) examine how tasks performed by different job titles differ in function and complexity, using Functional Job Analysis to develop a new tool for making evidence-based staffing decisions. Methods Seventy-seven primary care personnel from six US Department of Veterans Affairs (VA) Medical Centers, representing six job titles, participated in two-day focus groups to generate 243 unique task statements describing the content of VA primary care. Certified job analysts rated tasks on ten dimensions representing task complexity, skills, autonomy, and error consequence. Two hundred and twenty-four primary care personnel from the same clinics then completed a survey indicating whether they performed each task. Tasks were catalogued using an adaptation of an existing classification scheme; complexity differences were tested via analysis of variance. Results Objective one: Task statements were categorized into four functions: service delivery (65%), administrative duties (15%), logistic support (9%), and workforce management (11%). Objective two: Consistent with expectations, 80% of tasks received ratings at or below the mid-scale value on all ten scales. Objective three: Service delivery and workforce management tasks received higher ratings on eight of ten scales (multiple functional complexity dimensions, autonomy, human error consequence) than administrative and logistic support tasks. Similarly, tasks performed by more highly trained job titles received higher ratings on six of ten scales than tasks performed by lower trained job titles. Contrary to expectations, the distribution of tasks across functions did not significantly vary by job title. 
Conclusion Primary care personnel are not being utilized to the extent of their training; most personnel perform many tasks that could reasonably be performed by personnel with less training. Primary care clinics should use evidence-based information to optimize job-person fit, adjusting clinic staff mix and allocation of work across staff to enhance efficiency and effectiveness. PMID:17397534
An analysis of relational complexity in an air traffic control conflict detection task.
Boag, Christine; Neal, Andrew; Loft, Shayne; Halford, Graeme S
2006-11-15
Theoretical analyses of air traffic complexity were carried out using the Method for the Analysis of Relational Complexity. Twenty-two air traffic controllers examined static air traffic displays and were required to detect and resolve conflicts. Objective measures of performance included conflict detection time and accuracy. Subjective perceptions of mental workload were assessed by a complexity-sorting task and subjective ratings of the difficulty of different aspects of the task. A metric quantifying the complexity of pair-wise relations among aircraft was able to account for a substantial portion of the variance in the perceived complexity and difficulty of conflict detection problems, as well as reaction time. Other variables that influenced performance included the mean minimum separation between aircraft pairs and the amount of time that aircraft spent in conflict.
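One objective ingredient reported above, the minimum separation between an aircraft pair, has a closed form for straight-line constant-speed motion. The sketch below is illustrative only and is not the paper's relational complexity metric.

```python
import math

def min_separation(p1, v1, p2, v2):
    """Closest approach of two aircraft flying straight at constant speed.
    Positions p and velocities v are 2-D tuples; time is clamped to t >= 0."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    # Time of closest approach: minimize |dp + t*dv|, never look backwards.
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    return math.hypot(dp[0] + t * dv[0], dp[1] + t * dv[1])

# Head-on pair on the same track: separation shrinks to zero.
head_on = min_separation((0, 0), (1, 0), (10, 0), (-1, 0))
# Parallel pair at the same speed: separation never changes.
parallel = min_separation((0, 0), (1, 0), (0, 5), (1, 0))
```

A conflict is then flagged when this minimum falls below the applicable separation standard; relational complexity concerns how many such pairwise relations must be integrated at once.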
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
A Study of the Congruency of Competencies and Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Jones, John Wilbur, Jr.
The job of the 4-H extension agent involves fairly complex levels of performance. The curriculum for the extension agent program should produce youth workers who have the ability to perform competently and who possess the basic concepts and values required to function effectively. Performance objectives were written for each competency considered…
EEG signatures accompanying auditory figure-ground segregation.
Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P; Szerafin, Ágnes; Shinn-Cunningham, Barbara G; Winkler, István
2016-11-01
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. Copyright © 2016. Published by Elsevier Inc.
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf
2017-08-01
A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions but only a few on how to transfer planning data generated in external software. Patients' computed tomography and/or magnetic resonance volume data sets of the affected spinal segments were imported to Amira software, reconstructed to 3D images and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures such as trajectories, implants or surgical instruments were either digitally constructed or computed tomography scanned and virtually positioned within the 3D model as required. As a crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system. 
The presented method not only allows virtual planning of complex surgical procedures, but to export objects and surgical plans to any navigation or guidance system able to read DICOM data sets, expanding the possibilities of embedded planning software.
Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John N.
1997-01-01
A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple objective functions problem into an unconstrained problem which then is solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced during the transformation process to each objective function. This enhanced procedure will provide the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance as well as sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
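The Kreisselmeier-Steinhauser function referenced above aggregates several objectives or constraints into one smooth envelope, KS(f) = max(f) + (1/rho) ln(sum_i exp(rho (f_i - max(f)))), which bounds max(f) from above and tightens as rho grows. The rho value below is an arbitrary choice for illustration.

```python
import math

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative upper
    bound on max(values). Subtracting the max keeps exp() from overflowing."""
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

f = [0.2, 0.9, 0.5]   # toy objective values
envelope = ks(f)
```

Because KS is differentiable where max() is not, a gradient method such as the BFGS algorithm mentioned in the abstract can minimize it directly.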
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
INSTRUCTIONAL PERFORMANCE OBJECTIVES FOR A COURSE IN GENERAL BIOLOGY.
ERIC Educational Resources Information Center
MAFFETT, JAMES E.
THE INSTRUCTIONAL OBJECTIVES OF A FRESHMAN COURSE IN GENERAL BIOLOGY ARE ORGANIZED FOR THE STUDENT'S EASE OF REFERENCE. THE COURSE IS OUTLINED, BY DEGREE OF COMPLEXITY, AS FOLLOWS--(1) ORIENTATION AND INTRODUCTION, (2) ORIGIN AND ORGANIZATION OF LIFE, (3) CYTOLOGY, (4) METABOLISM AND BIOCHEMISTRY, (5) PLANT LIFE (VASCULAR AND NON-VASCULAR), (6)…
Improving CNN Performance Accuracies With Min-Max Objective.
Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning
2017-06-09
We propose a novel method for improving performance accuracies of convolutional neural network (CNN) without the need to increase the network complexity. We accomplish the goal by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and able to be applied to different CNNs with insignificant increases in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets with both the image classification and face verification tasks reveal that employing the proposed Min-Max objective in the training process can remarkably improve performance accuracies of a CNN model in comparison with the same model trained without using this objective.
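A simplified reading of the Min-Max idea can be shown on toy 2-D "features": penalize the largest within-manifold distance and reward the smallest between-manifold distance. This captures only the geometric intuition, not the paper's exact loss or its integration into CNN training.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def min_max_objective(manifolds):
    """Toy Min-Max criterion (lower is better): largest within-class
    distance minus smallest between-class distance. A simplified reading
    of the idea, not the authors' formulation."""
    within = max(dist(a, b)
                 for pts in manifolds for a in pts for b in pts)
    between = min(dist(a, b)
                  for i, p in enumerate(manifolds)
                  for q in manifolds[i + 1:]
                  for a in p for b in q)
    return within - between

tight = [[(0, 0), (0, 1)], [(5, 0), (5, 1)]]   # compact, well-separated
loose = [[(0, 0), (0, 3)], [(2, 0), (2, 3)]]   # spread-out, overlapping
```

Feature maps resembling `tight` rather than `loose` are exactly what the objective pushes the trained layer toward.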
Object as a model of intelligent robot in the virtual workspace
NASA Astrophysics Data System (ADS)
Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.
2015-11-01
Contemporary industry requires that every element of a production line fit into a global schema, which is connected with the global structure of the business. There is a need to find practical and effective ways to design and manage the production process. The term “effective” should be understood to mean that there exists a method which allows building a system of nodes and relations in order to describe the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks involving handling, transporting or orienting objects in a workspace, and even performing simple machining processes such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs activities connected with automatic tool changing and with operating the equipment mounted on its wrist. Because it has a programmable control system, the robot additionally performs activities connected with sensors, vision systems, operating the storage of manipulated objects, tools or grippers, measuring stands, etc. For this reason, the description of the robot as a part of a production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of "employment" sufficiently well. One possible approach to this problem is to treat the robot as an object, in the sense often used in computer science. This allows one both to describe operations performed on the object and to describe operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with other possible descriptions. 
The results can be further used during designing of the complete manufacturing system, which takes into account all the involved machines and has the form of an object-oriented model.
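The object view argued for above can be made concrete as a class that both performs operations and has operations performed on it. All names below are illustrative, not drawn from the paper.

```python
class Robot:
    """Illustrative object model of an industrial robot: it performs
    operations (handle, change_tool) and is operated on (assign_task)."""

    def __init__(self, name, tools):
        self.name = name
        self.tools = tools
        self.tool = tools[0]
        self.log = []

    def assign_task(self, task):
        # Operation performed ON the object by the production system.
        self.log.append(("assigned", task))

    def change_tool(self, tool):
        # Operation performed BY the object (automatic tool changing).
        if tool not in self.tools:
            raise ValueError("unknown tool: " + tool)
        self.tool = tool
        self.log.append(("tool", tool))

    def handle(self, item):
        # A handling task of the kind listed in the abstract.
        self.log.append(("handled", item, self.tool))
        return "{} moved {} with {}".format(self.name, item, self.tool)

r = Robot("R1", ["gripper", "deburring-head"])
r.assign_task("palletize")
r.change_tool("gripper")
msg = r.handle("casting")
```

Composing many such objects into a system of nodes and relations is then the design task the paper describes.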
KBGIS-2: A knowledge-based geographic information system
NASA Technical Reports Server (NTRS)
Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.
1986-01-01
The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William; Laird, Carl; Siirola, John
Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.
Meshing complex macro-scale objects into self-assembling bricks
Hacohen, Adar; Hanniel, Iddo; Nikulshin, Yasha; Wolfus, Shuki; Abu-Horowitz, Almogit; Bachelet, Ido
2015-01-01
Self-assembly provides an information-economical route to the fabrication of objects at virtually all scales. However, there is no known algorithm to program self-assembly in macro-scale, solid, complex 3D objects. Here such an algorithm is described, which is inspired by the molecular assembly of DNA, and based on bricks designed by tetrahedral meshing of arbitrary objects. Assembly rules are encoded by topographic cues imprinted on brick faces while attraction between bricks is provided by embedded magnets. The bricks can then be mixed in a container and agitated, leading to properly assembled objects at high yields and zero errors. The system and its assembly dynamics were characterized by video and audio analysis, enabling the precise time- and space-resolved characterization of its performance and accuracy. Improved designs inspired by our system could lead to successful implementation of self-assembly at the macro-scale, allowing rapid, on-demand fabrication of objects without the need for assembly lines. PMID:26226488
Online fully automated three-dimensional surface reconstruction of unknown objects
NASA Astrophysics Data System (ADS)
Khalfaoui, Souhaiel; Aigueperse, Antoine; Fougerolle, Yohan; Seulin, Ralph; Fofi, David
2015-04-01
This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. An advantage of our procedure is that it is generic: it is not tied to a specific scanning technology, nor does it depend on the methods used to perform the tasks associated with each elementary process. A comparison between manual and automatic scanning of complex objects shows that our digitization strategy is very efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.
Optimization of wastewater treatment plant operation for greenhouse gas mitigation.
Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C
2015-11-01
This study deals with determining the optimal operation of a wastewater treatment system to minimize greenhouse gas emissions, operating costs, and pollutant loads in the effluent. To do this, an integrated performance index comprising the three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy the three objectives. The complex nonlinear optimization problem was solved using the Nelder-Mead simplex algorithm. A sensitivity analysis was performed to identify the operational parameters most influential on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base-case operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
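The Nelder-Mead simplex method used above needs no gradients, which suits simulation-based objectives. The sketch below is a generic textbook implementation applied to a toy weighted-sum index; the three stand-in objective terms and their weights are our own illustration, not the ASMN_G model or the study's actual index:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    # standard coefficients: reflection, expansion, contraction, shrink
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    n = len(x0)
    # initial simplex: x0 plus one vertex perturbed per dimension
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        refl = [centroid[j] + alpha * (centroid[j] - worst[j]) for j in range(n)]
        if f(refl) < f(best):
            exp = [centroid[j] + gamma * (refl[j] - centroid[j]) for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            cont = [centroid[j] + rho * (worst[j] - centroid[j]) for j in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:  # shrink every vertex toward the best one
                simplex = [best] + [
                    [best[j] + sigma * (v[j] - best[j]) for j in range(n)]
                    for v in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

def integrated_index(x):
    # toy scalarization of three conflicting objectives (illustrative only)
    ghg = (x[0] - 1.0) ** 2               # stand-in for emissions
    cost = (x[1] - 2.0) ** 2              # stand-in for operating cost
    effluent = (x[0] - x[1] + 1.0) ** 2   # stand-in for effluent quality
    return 0.4 * ghg + 0.4 * cost + 0.2 * effluent
```

Minimizing the weighted index collapses the multi-objective trade-off into a single number, which is why the choice of weights matters as much as the optimizer.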
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a single performance criterion, the entropy rate index, based on Shannon's information theory, which takes both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can, in turn, be used to optimize the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established criterion. Finally, in order to predict the resources needed for optimization, an objective-function temperament factor is defined that indicates the degree of difficulty of the objective function.
Object-processing neural efficiency differentiates object from spatial visualizers.
Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria
2008-11-19
The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned, while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed that allows a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI, reusing CAR-compliant components in Android applications in a seamless and efficient way. A metadata-injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data-type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. Experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and with no JNI bridging code required.
Experience moderates overlap between object and face recognition, suggesting a common ability
Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.
2014-01-01
Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
Deane-Coe, Kirsten K; Sarvary, Mark A; Owens, Thomas G
2017-01-01
In an undergraduate introductory biology laboratory course, we used a summative assessment to directly test the learning objective that students will be able to apply course material to increasingly novel and complex situations. Using a factorial framework, we developed multiple true-false questions to fall along axes of novelty and complexity, which resulted in four categories of questions: familiar content and low complexity (category A); novel content and low complexity (category B); familiar content and high complexity (category C); and novel content and high complexity (category D). On average, students scored more than 70% on all questions, indicating that the course largely met this learning objective. However, students scored highest on questions in category A, likely because they were most similar to course content, and lowest on questions in categories C and D. While we anticipated students would score equally on questions for which either novelty or complexity was altered (but not both), we observed that student scores in category C were lower than in category B. Furthermore, students performed equally poorly on all questions for which complexity was higher (categories C and D), even those containing familiar content, suggesting that application of course material to increasingly complex situations is particularly challenging to students. © 2017 K. K. Deane-Coe et al.
NASA Astrophysics Data System (ADS)
Domercant, Jean Charles
The combination of today's national security environment and mandated acquisition policies makes it necessary for military systems to interoperate with each other to greater degrees. This growing interdependency results in complex Systems-of-Systems (SoS) that only continue to grow in complexity to meet evolving capability needs. Thus, timely and affordable acquisition becomes more difficult, especially in the face of mounting budgetary pressures. To counter this, architecting principles must be applied to SoS design. The research objective is to develop an Architecture Real Options Complexity-Based Valuation Methodology (ARC-VM) suitable for acquisition-level decision making, where there is a stated desire for more informed tradeoffs between cost, schedule, and performance during the early phases of design. First, a framework is introduced to measure architecture complexity as it directly relates to military SoS. Development of the framework draws upon a diverse set of disciplines, including Complexity Science, software architecting, measurement theory, and utility theory. Next, a Real Options based valuation strategy is developed using techniques established for financial stock options that have recently been adapted for use in business and engineering decisions. The derived measure provides architects with an objective quantification of complexity that focuses on relevant complex-system attributes. These attributes are related to the organization and distribution of SoS functionality and the sharing and processing of resources. The use of Real Options provides the necessary conceptual and visual framework to quantifiably and traceably combine measured architecture complexity, time-valued performance levels, and programmatic risks and uncertainties. An example suppression of enemy air defenses (SEAD) capability demonstrates the development and usefulness of the resulting architecture-complexity and Real Options based valuation methodology.
Different portfolios of candidate system types are used to generate an array of architecture alternatives that are then evaluated using an engagement model. This performance data is combined with both measured architecture complexity and programmatic data to assign an acquisition value to each alternative. This proves useful when selecting alternatives most likely to meet current and future capability needs.
Short temporal asynchrony disrupts visual object recognition
Singer, Jedediah M.; Kreiman, Gabriel
2014-01-01
Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738
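The paper's concise model is described here only qualitatively. As a purely hypothetical illustration of the idea that accuracy falls off smoothly with asynchrony while staying above chance, one might write the following; `tau`, `chance`, and `ceiling` are assumed values for illustration, not the authors' fitted parameters:

```python
import math

def recognition_accuracy(asynchrony_ms, tau=80.0, chance=0.25, ceiling=0.95):
    # hypothetical parameters: tau sets how quickly spatiotemporal
    # integration degrades with fragment asynchrony, chance is the
    # guessing rate, ceiling is accuracy at simultaneity
    integration = math.exp(-asynchrony_ms / tau)
    return chance + (ceiling - chance) * integration
```

A model of this shape reproduces the two qualitative findings above: a measurable drop already at ~30 ms of asynchrony, yet performance still above chance at 100 ms.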
NASA Astrophysics Data System (ADS)
Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.
2016-09-01
A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of wind turbine blades. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population rapidly toward the Pareto front while maintaining population diversity with high efficiency. As a demonstration, two- and three-objective designs of a 1.5 MW wind turbine blade are carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto-optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining population diversity. In comparison to conventional evolutionary algorithms, VD-MOEA displays dramatic improvement in both convergence and diversity preservation when handling complex problems with many variables, objectives, and constraints. This provides a reliable, high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.
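The notion of Pareto-optimal solutions above can be made concrete with a minimal dominance filter. This is a generic sketch of the Pareto-front concept, not VD-MOEA itself; the sample points are toy (mass, thrust)-style pairs under a minimization convention:

```python
def dominates(a, b):
    # minimization convention: a dominates b if it is no worse in every
    # objective and strictly better in at least one
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    # keep only the points that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a real MOEA, this non-dominated filter is applied each generation; the guide vectors in VD-MOEA additionally spread the survivors evenly across the front.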
The architecture of Newton, a general-purpose dynamics simulator
NASA Technical Reports Server (NTRS)
Cremer, James F.; Stewart, A. James
1989-01-01
The architecture of Newton, a general-purpose system for simulating the dynamics of complex physical objects, is described. The system automatically formulates and analyzes equations of motion, and automatically modifies the system of equations when changes in kinematic relationships between objects require it. Impact and temporary contact are handled, although only with simple models. User-directed influence over simulations is achieved using Newton's module, which can be used to experiment with the control of many-degree-of-freedom articulated objects.
Human interaction with robotic systems: performance and workload evaluations.
Reinerman-Jones, L; Barber, D J; Szalma, J L; Hancock, P A
2017-10-01
We first tested the effect of differing tactile informational forms (i.e. directional cues vs. static cues vs. dynamic cues) on objective performance and perceived workload in a collaborative human-robot task. A second experiment evaluated the influence of task load and informational message type (i.e. single words vs. grouped phrases) on the same collaborative task. In both experiments, the relationship of personal characteristics (attentional control and spatial ability) to performance and workload was also measured. In addition to objective performance and self-reported cognitive load, we evaluated different physiological responses in each experiment. Results showed a performance-workload association for directional cues, message type, and task load. EEG measures, however, proved generally insensitive to the task-load manipulations. Where significant EEG effects were observed, right-hemisphere amplitude differences predominated, although unexpectedly these relationships were negative. Although EEG measures were partially associated with performance, they appear to possess limited utility as measures of workload in association with tactile displays. Practitioner Summary: As practitioners look to take advantage of innovative tactile displays in complex operational realms such as human-robot interaction, the associated performance effects are mediated by cognitive workload. Despite some patterns of association, reliable reflections of operator state can be difficult to discern and employ as the number, complexity, and sophistication of the measures themselves increase.
Overview of Intelligent Systems and Operations Development
NASA Technical Reports Server (NTRS)
Pallix, Joan; Dorais, Greg; Penix, John
2004-01-01
To achieve NASA's ambitious mission objectives for the future, aircraft and spacecraft will need intelligence to take the correct action in a variety of circumstances. Vehicle intelligence can be defined as the ability to "do the right thing" when faced with a complex decision-making situation. It will be necessary to implement integrated autonomous operations and low-level adaptive flight control technologies to direct actions that enhance the safety and success of complex missions despite component failures, degraded performance, operator errors, and environment uncertainty. This paper will describe the array of technologies required to meet these complex objectives. This includes the integration of high-level reasoning and autonomous capabilities with multiple subsystem controllers for robust performance. Future intelligent systems will use models of the system, its environment, and other intelligent agents with which it interacts. They will also require planners, reasoning engines, and adaptive controllers that can recommend or execute commands enabling the system to respond intelligently. The presentation will also address the development of highly dependable software, which is a key component to ensure the reliability of intelligent systems.
NASA Astrophysics Data System (ADS)
Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.
2017-05-01
In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource-allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This objective is secured by adopting a pricing scheme to develop a power-allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-based CR networks. Simulation results show that the proposed algorithm approaches the performance of the optimal algorithm at a lower computational complexity, i.e., O(N log N), which makes it suitable for more practical applications.
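The pricing-based algorithm itself is not specified in the abstract. The classic water-filling allocation shares the sort-based O(N log N) structure noted above and can serve as a reference sketch; the subcarrier gains and power budget below are toy values, and this is not the authors' algorithm:

```python
def water_filling(gains, total_power):
    # classic water-filling across N subcarriers: p_i = max(0, mu - 1/g_i),
    # with the water level mu chosen so the power budget is exactly spent.
    # The sort is the O(N log N) step; the rest is linear.
    inv = sorted(1.0 / g for g in gains)
    mu = 0.0
    for k in range(len(inv), 0, -1):
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:        # the k best channels all get positive power
            break
    return mu, [max(0.0, mu - 1.0 / g) for g in gains]
```

In a CR setting, an interference constraint toward primary users would cap the per-subcarrier powers on top of this budget constraint; pricing schemes fold that cap into the objective instead.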
Integrated modeling tool for performance engineering of complex computer systems
NASA Technical Reports Server (NTRS)
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Mood states modulate complexity in heartbeat dynamics: A multiscale entropy analysis
NASA Astrophysics Data System (ADS)
Valenza, G.; Nardelli, M.; Bertschy, G.; Lanata, A.; Scilingo, E. P.
2014-07-01
This paper demonstrates that complex heartbeat dynamics are modulated by different pathological mental states. Multiscale entropy analysis was performed on R-R interval series gathered from the electrocardiograms of eight bipolar patients who exhibited mood states among depression, hypomania, and euthymia (i.e., good affective balance). Three different methodologies for choosing the sample-entropy radius were also compared. We show that the complexity level can be used as a marker of mental state, discriminating among the three pathological mood states, and we suggest using heartbeat complexity as a more objective clinical biomarker for mental disorders.
Advances in Modal Analysis Using a Robust and Multiscale Method
NASA Astrophysics Data System (ADS)
Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.
2010-12-01
This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.
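The sparse-grid voxelization step can be illustrated with a dictionary keyed by integer cell coordinates. This is a generic sketch only; the actual system goes further, tuning hexahedral finite-element parameters from the material distribution in each occupied cell:

```python
def voxelize(points, cell=1.0):
    # sparse regular grid: only occupied cells are stored, keyed by integer
    # cell coordinates; the per-cell point lists stand in for the material
    # distribution used to tune element parameters
    grid = {}
    for (x, y, z) in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        grid.setdefault(key, []).append((x, y, z))
    return grid
```

Because empty cells are never materialized, the same routine at a larger `cell` size yields the plausible lower-resolution approximations of the modal model mentioned above.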
Development of Fully-Integrated Micromagnetic Actuator Technologies
2015-07-13
nonexistent because of certain design and fabrication challenges: primarily the inability to integrate high-performance, permanent-magnet (magnetically ... efficiency necessary for certain applications. To enable the development of high-performance magnetic actuator technologies, the original research plan ... developed permanent-magnet materials in more complex microfabrication process flows. Objective 2: Design, model, and optimize a novel multi-magnet
Vijayakumar, A; Rosen, Joseph
2017-06-12
Recording digital holograms without wave interference simplifies optical systems, increases their power efficiency, and avoids complicated alignment procedures. We propose and demonstrate a new technique of digital hologram acquisition without two-wave interference. Incoherent light emitted from an object propagates through a random-like coded phase mask and is recorded directly, without interference, by a digital camera. In the training stage of the system, a point-spread hologram (PSH) is first recorded by modulating the light diffracted from a point object with the coded phase masks. At least two different masks should be used to record two different intensity distributions at all possible axial locations. The various patterns recorded at every axial location are superposed in the computer to obtain a complex-valued PSH library cataloged by axial location. Following the training stage, an object is placed within the axial boundaries of the PSH library and the light diffracted from the object is once again modulated by the same phase masks. The intensity patterns are recorded and superposed exactly as for the PSH, to yield a complex hologram of the object. The object information at any particular plane is reconstructed by cross-correlation between the complex-valued hologram and the appropriate element of the PSH library. The characteristics and performance of the proposed system were compared with an equivalent regular imaging system.
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and heightened interest in the problem of space debris, the number of tracked objects is expected to grow by an order of magnitude in the near future. This research aims to provide a method that treats the correlation and orbit-determination problems simultaneously and can efficiently process large data sets with minimal manual intervention; this is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem is solvable in polynomial time. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information, and in ambiguous situations (e.g., satellite clusters) this leads to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to solve this problem approximately, and it was shown that the EGA can find a good approximate solution with polynomial time complexity. The EGA relies on solving the Lambert problem to perform the necessary orbit determinations, which restricts the algorithm to orbits described by Keplerian motion. The work presented in this paper focuses on the impact of this restriction on algorithm performance.
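An elitist GA, as described, carries the best individuals unchanged into the next generation. The skeleton below shows the scheme on a toy bitstring fitness; the real EGA instead operates on observation-association chromosomes scored via Lambert-problem orbit determinations, and the operator choices here (truncation selection, one-point crossover) are assumptions:

```python
import random

def elitist_ga(fitness, genome_len, pop_size=30, generations=60,
               elite=2, mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:elite]                        # elitism: best survive intact
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                  # one-point crossover
            child = [g ^ 1 if rng.random() < mutation_rate else g
                     for g in child]                   # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Elitism guarantees the best fitness never decreases between generations, which is what makes the approximate solution quality of the EGA monotone in the number of generations spent.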
Neil, Amanda; Pfeffer, Sally; Burnett, Leslie
2013-01-01
This paper details the development of a new type of pathology-laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of how much complex pathology a laboratory performs, and the identification of peer organisations for comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was calculated dynamically from actual participant-site data. Given this weighting, a complexity value for each test, at each site, was calculated; the median complexity value (number of BCUs) for a test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU provided an unbiased comparison unit and test listing that proved a robust indicator of the relative complexity of each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth, and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations: it corrects for differences in test mix and workload complexity and allows objective stratification into peer groups.
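A simplified reading of the BCU construction can be sketched as follows. This is our own interpretation: the wage-weighted staff times and the rates are hypothetical stand-ins for the program's dynamically calculated medical/technical weighting:

```python
def bcu_value(med_minutes, tech_minutes, med_rate, tech_rate):
    # hypothetical per-site complexity for one test: staff time weighted by
    # wage rates, following the theory that wage rates reflect productivity
    # at the margin
    return med_minutes * med_rate + tech_minutes * tech_rate

def benchmark_complexity(site_values):
    # the median per-site value across participants becomes the test's BCU,
    # damping the influence of outlier sites
    vals = sorted(site_values)
    n, mid = len(vals), len(vals) // 2
    return vals[mid] if n % 2 else (vals[mid - 1] + vals[mid]) / 2
```

Because every test is expressed in the same unit, per-organisation totals of BCUs become comparable even when the underlying test mixes differ, which is what the KPIs above build on.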
Righi, Angela Weber; Wachs, Priscila; Saurin, Tarcísio Abreu
2012-01-01
Complexity theory has been adopted by a number of studies as a benchmark to investigate the performance of socio-technical systems, especially those characterized by relevant cognitive work. However, there is little guidance on how to assess, systematically, the extent to which a system is complex. The main objective of this study is to carry out a systematic analysis of a SAMU (Mobile Emergency Medical Service) Medical Regulation Center in Brazil, based on the core characteristics of complex systems presented by previous studies. The assessment was based on direct observations and nine interviews: three with medical doctors acting as emergency regulators, three with radio operators, and three with telephone attendants. The results indicated that, to a great extent, the core characteristics of complexity are magnified due to basic shortcomings in the design of the work system. Thus, some recommendations are put forward with a view to reducing unnecessary complexity that hinders the performance of the socio-technical system.
Robotic pancreaticoduodenectomy in a case of duodenal gastrointestinal stromal tumor.
Parisi, Amilcare; Desiderio, Jacopo; Trastulli, Stefano; Grassi, Veronica; Ricci, Francesco; Farinacci, Federico; Cacurri, Alban; Castellani, Elisa; Corsi, Alessia; Renzi, Claudio; Barberini, Francesco; D'Andrea, Vito; Santoro, Alberto; Cirocchi, Roberto
2014-12-04
Laparoscopic pancreaticoduodenectomy is rarely performed, and it has not been particularly successful due to its technical complexity. The objective of this study is to highlight how robotic surgery could improve a minimally invasive approach and to demonstrate the usefulness of robotic surgery even in complex surgical procedures. We report the surgical technique employed in our center to perform a pancreaticoduodenectomy by means of the da Vinci™ robotic system in order to remove a duodenal gastrointestinal stromal tumor. Robotic technology has improved significantly on the traditional laparoscopic approach, representing an evolution of minimally invasive techniques and allowing procedures that are still considered scarcely feasible or reproducible to be safely performed.
Medical Image Compression Using a New Subband Coding Method
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug
1995-01-01
A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.
Image space subdivision for fast ray tracing
NASA Astrophysics Data System (ADS)
Yu, Billy T.; Yu, William W.
1999-09-01
Ray-tracing is notorious for its computational requirements. There have been a number of techniques to speed up the process. However, a well-known statistic indicates that ray-object intersections occupy over 95% of the total image generation time; thus, it is most beneficial to work on this bottleneck. The many ray-object intersection reduction techniques can be classified into three major categories: bounding volume hierarchies, space subdivision, and directional subdivision. This paper introduces a technique falling into the third category. To further speed up the process, it takes advantage of hierarchy by adopting an MX-CIF quadtree in the image space. This special kind of quadtree provides simple object allocation and ease of implementation. The text also includes a theoretical proof of the expected performance. For ray-polygon comparison, the technique reduces the order of complexity from linear to square-root, O(n) → O(2√n). Experiments with various shapes, sizes and complexities were conducted to verify the expectation. Results showed that the computational improvement grew with the complexity of the scenes. The experimental improvement was more than 90% and it agreed with the theoretical value when the number of polygons exceeded 3000. The more complex the scene, the more efficient the acceleration. The algorithm described was implemented at the polygonal level; however, it could easily be enhanced and extended to the object or higher levels.
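The MX-CIF quadtree mentioned above stores each object at the smallest quadrant that fully contains its bounding box, which is what makes allocation simple: no object is ever split across cells. A minimal sketch under assumed 2D bounding boxes (a hypothetical `Node` class, not the paper's implementation):

```python
class Node:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size  # square cell, lower-left corner
        self.objects = []                       # objects anchored at this cell
        self.children = None                    # sub-quadrants, built lazily

    def insert(self, obj, min_size=1.0):
        """MX-CIF rule: push the object down to the smallest quadrant that
        fully contains its bounding box; anchor it here otherwise."""
        ox, oy, ow, oh = obj["bbox"]
        if self.size > min_size:
            half = self.size / 2.0
            for dx in (0, half):
                for dy in (0, half):
                    if (ox >= self.x + dx and oy >= self.y + dy and
                            ox + ow <= self.x + dx + half and
                            oy + oh <= self.y + dy + half):
                        if self.children is None:
                            self.children = {}
                        key = (dx, dy)
                        if key not in self.children:
                            self.children[key] = Node(self.x + dx,
                                                      self.y + dy, half)
                        self.children[key].insert(obj, min_size)
                        return
        self.objects.append(obj)

root = Node(0.0, 0.0, 16.0)
root.insert({"name": "small", "bbox": (1.0, 1.0, 2.0, 2.0)})  # sinks two levels
root.insert({"name": "big", "bbox": (6.0, 6.0, 6.0, 6.0)})    # straddles center
```

An object straddling a quadrant boundary stays at the coarser node ("big" above remains at the root), so a ray only tests the objects anchored along the cells it actually traverses.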
Culture & Cognition Laboratory
2011-05-01
life: Real world social-interaction cooperative tasks are inherently unequal in difficulty. Re-scoring performance on unequal tasks in order to enable...real- world situations to which this model is intended to apply, it is possible for calls for help to not be heard, or for a potential help-provider to...not have clear, well-defined objectives. Since many complex real- worlds tasks are not well-defined, defining a realistic objective can be considered a
Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping
NASA Astrophysics Data System (ADS)
Ignakov, Dmitri
A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code. PMID:25110745
Thermal control surfaces experiment flight system performance
NASA Technical Reports Server (NTRS)
Wilkes, Donald R.; Hummer, Leigh L.; Zwiener, James M.
1991-01-01
The Thermal Control Surfaces Experiment (TCSE) is the most complex system, other than the LDEF itself, retrieved after long-term space exposure. The TCSE is a microcosm of the complex electro-optical payloads being developed and flown by NASA and the DoD, including SDI. The objective of TCSE was to determine the effects of the near-Earth orbital environment and the LDEF-induced environment on spacecraft thermal control surfaces. The TCSE was a comprehensive experiment that combined in-space measurements with extensive post-flight analyses of thermal control surfaces to determine the effects of exposure to the low Earth orbit space environment. The TCSE was the first space experiment to measure the optical properties of thermal control surfaces the way they are routinely measured in a laboratory. The performance of the TCSE confirms that low-cost, complex experiment packages can be developed that perform well in space.
Anti-Emetic Drug Effects on Pilot Performance, Phase 2: Simulation Test.
1996-04-01
The objectives of this study were to evaluate the effects of two anti-emetic drugs, granisetron (2 mg oral dose) and ondansetron (8 mg oral dose), on...and produce no cognitive, psychomotor or subjective state changes. In this study, there was no evidence of performance degradation caused by either granisetron or ondansetron when tested in a complex military task environment.
ERIC Educational Resources Information Center
Holzinger, Andreas; Kickmeier-Rust, Michael D.; Wassertheurer, Sigi; Hessinger, Michael
2009-01-01
Objective: Since simulations are often accepted uncritically, with excessive emphasis being placed on technological sophistication at the expense of underlying psychological and educational theories, we evaluated the learning performance of simulation software, in order to gain insight into the proper use of simulations for application in medical…
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
NASA Astrophysics Data System (ADS)
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of the future performance of watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, objective functions, hydrologic model structures and optimization methods. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance for different calibration periods, from one year to seven years, was hard to generalize because the four hydrologic models have different levels of complexity and different years have different information content of hydrological observation. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
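The objective functions compared above are standard goodness-of-fit measures. A minimal sketch of three of them, using common textbook definitions (the percent-bias sign convention varies between authors, so the one below is an assumption):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus squared error relative to the
    variance of observations about their mean; 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def nrmse(obs, sim):
    """Root mean square error normalized by the mean of observations."""
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)
    return (mse ** 0.5) / (sum(obs) / len(obs))

def pbias(obs, sim):
    """Percent bias; positive here when the model underestimates on
    average (assumed convention)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

observed = [1.0, 2.0, 3.0, 4.0, 5.0]    # hypothetical flows
simulated = [1.1, 1.9, 3.2, 3.8, 5.1]
```

Percent bias can be near zero for a simulation whose errors cancel, which is one reason squared-error measures such as NSE make stricter calibration objectives, consistent with the study's finding.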
OB3D, a new set of 3D objects available for research: a web-based study
Buffat, Stéphane; Chastres, Véronique; Bichot, Alain; Rider, Delphine; Benmussa, Frédéric; Lorenceau, Jean
2014-01-01
Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets, which cannot always be modified and adapted to meet the specific goals of each study. Here we present OB3D, a new set of 3D scans of real objects available online as ASCII files. These files are lists of dots, each defined by a triplet of spatial coordinates and their normal, which allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc. PMID:25339920
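The dot-list format described above (a triplet of coordinates plus a normal per dot) is what makes the set easy to transform. A sketch assuming a whitespace-separated `x y z nx ny nz` layout per line; the actual OB3D file layout may differ:

```python
import math

def parse_dots(text):
    """Parse an assumed OB3D-style ASCII dot list: one dot per line,
    'x y z nx ny nz' (position triplet followed by its normal)."""
    dots = []
    for line in text.strip().splitlines():
        x, y, z, nx, ny, nz = map(float, line.split())
        dots.append(((x, y, z), (nx, ny, nz)))
    return dots

def rotate_z(dots, angle):
    """Rotate positions and normals about the z axis, the kind of simple
    adaptation the dot-list representation makes trivial."""
    c, s = math.cos(angle), math.sin(angle)
    rot = lambda v: (c * v[0] - s * v[1], s * v[0] + c * v[1], v[2])
    return [(rot(p), rot(n)) for p, n in dots]

sample = "1 0 0 1 0 0\n0 1 0 0 1 0"   # two hypothetical dots
dots = parse_dots(sample)
turned = rotate_z(dots, math.pi / 2)  # 90-degree rotation
```

Subsampling the dot list (e.g. keeping every k-th dot) is equally direct, which is presumably how a minimal-dot-count threshold experiment like the one described can be parameterized.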
Robust multiperson tracking from a mobile platform.
Ess, Andreas; Leibe, Bastian; Schindler, Konrad; van Gool, Luc
2009-10-01
In this paper, we address the problem of multiperson tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. The complexity of the problem calls for an integrated solution that extracts as much visual information as possible and combines it through cognitive feedback cycles. We propose such an approach, which jointly estimates camera position, stereo depth, object detection, and tracking. The interplay between those components is represented by a graphical model. Since the model has to incorporate object-object interactions and temporal links to past frames, direct inference is intractable. We, therefore, propose a two-stage procedure: for each frame, we first solve a simplified version of the model (disregarding interactions and temporal continuity) to estimate the scene geometry and an overcomplete set of object detections. Conditioned on these results, we then address object interactions, tracking, and prediction in a second step. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver robust tracking performance in scenes of realistic complexity.
Hysong, Sylvia J; Thomas, Candice L; Spitzmüller, Christiane; Amspoker, Amber B; Woodard, LeChauncy; Modi, Varsha; Naik, Aanand D
2016-01-15
Team coordination within clinical care settings is a critical component of effective patient care. Less is known about the extent, effectiveness, and impact of coordination activities among professionals within VA Patient-Aligned Care Teams (PACTs). This study will address these gaps by describing the specific, fundamental tasks and practices involved in PACT coordination, their impact on performance measures, and the role of coordination task complexity. First, we will use a web-based survey of coordination practices among 1600 PACTs in the national VHA. Survey findings will characterize PACT coordination practices and assess their association with clinical performance measures. Functional job analysis, using 6-8 subject matter experts who are 3rd and 4th year residents in VA Primary Care rotations, will be utilized to identify the tasks involved in completing clinical performance measures to standard. From this, expert ratings of coordination complexity will be used to determine the level of coordinative complexity required for each of the clinical performance measures drawn from the VA External Peer Review Program (EPRP). For objective 3, data collected from the first two methods will be used to evaluate the effect of clinical complexity on the relationships between measures of PACT coordination and their ratings on the clinical performance measures. Results from this study will support successful implementation of coordinated team-based work in clinical settings by providing knowledge regarding which aspects of care require the most complex levels of coordination and how specific coordination practices impact clinical performance.
Java Performance for Scientific Applications on LLNL Computer Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapfer, C; Wissink, A
2002-05-10
Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.
Koen, Joshua D; Borders, Alyssa A; Petzold, Michael T; Yonelinas, Andrew P
2017-02-01
The medial temporal lobe (MTL) plays a critical role in episodic long-term memory, but whether the MTL is necessary for visual short-term memory is controversial. Some studies have indicated that MTL damage disrupts visual short-term memory performance whereas other studies have failed to find such evidence. To account for these mixed results, it has been proposed that the hippocampus is critical in supporting short-term memory for high resolution complex bindings, while the cortex is sufficient to support simple, low resolution bindings. This hypothesis was tested in the current study by assessing visual short-term memory in patients with damage to the MTL and controls for high resolution and low resolution object-location and object-color associations. In the location tests, participants encoded sets of two or four objects in different locations on the screen. After each set, participants performed a two-alternative forced-choice task in which they were required to discriminate the object in the target location from the object in a high or low resolution lure location (i.e., the object locations were very close or far away from the target location, respectively). Similarly, in the color tests, participants were presented with sets of two or four objects in a different color and, after each set, were required to discriminate the object in the target color from the object in a high or low resolution lure color (i.e., the lure color was very similar or very different, respectively, to the studied color). The patients were significantly impaired in visual short-term memory, but importantly, they were more impaired for high resolution object-location and object-color bindings. The results are consistent with the proposal that the hippocampus plays a critical role in forming and maintaining complex, high resolution bindings. © 2016 Wiley Periodicals, Inc.
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
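The key model assumption above, cortical neurons encoding the difference between the observed signal and an internal estimate, can be illustrated with a toy residual computation. This is a sketch of the general error-signal idea under made-up data, not the paper's thalamocortical circuit:

```python
def reconstruct(dictionary, coefficients):
    """Internal estimate: weighted sum of dictionary elements, each
    standing in for one candidate sound source."""
    n = len(dictionary[0])
    est = [0.0] * n
    for a, atom in zip(coefficients, dictionary):
        for i in range(n):
            est[i] += a * atom[i]
    return est

def error_signal(observed, dictionary, coefficients):
    """Activity of the hypothesized error-coding neurons: the observed
    signal minus the current internal estimate."""
    est = reconstruct(dictionary, coefficients)
    return [o - e for o, e in zip(observed, est)]

# Two hypothetical spectral 'sources' and a scene that mixes both
dictionary = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
scene = [1.0, 1.0, 2.0]                       # source 0 + source 1
residual = error_signal(scene, dictionary, [1.0, 1.0])
```

When the coefficient estimate accounts for all concurrently active sources the residual vanishes; a missing source leaves a structured error, which is the signature the reanalysis of awake cortex recordings looked for.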
How children aged 2;6 tailor verbal expressions to interlocutor informational needs.
Abbot-Smith, Kirsten; Nurmsoo, Erika; Croll, Rebecca; Ferguson, Heather; Forrester, Michael
2016-11-01
Although preschoolers are pervasively underinformative in their actual usage of verbal reference, a number of studies have shown that they nonetheless demonstrate sensitivity to listener informational needs, at least when environmental cues to this are obvious. We investigated two issues. The first concerned the types of visual cues to interlocutor informational needs which children aged 2;6 can process whilst producing complex referring expressions. The second was whether performance in experimental tasks related to naturalistic conversational proficiency. We found that 2;6-year-olds used fewer complex expressions when the objects were dissimilar compared to highly similar objects, indicating that they tailor their verbal expressions to the informational needs of another person, even when the cue to the informational need is relatively opaque. We also found a correlation between conversational skills as rated by the parents and the degree to which 2;6-year-olds could learn from feedback to produce complex referring expressions.
Complexity analysis of dual-channel game model with different managers' business objectives
NASA Astrophysics Data System (ADS)
Li, Ting; Ma, Junhai
2015-01-01
This paper considers a dual-channel game model with bounded rationality, using the theory of bifurcations of dynamical systems. The business objectives of the retailers are assumed to be different, which is closer to reality than in previous studies. We study the local stable region of the Nash equilibrium point and find that business objectives can expand the stable region and play an important role in price strategy. One interesting finding is that fiercer competition tends to stabilize the Nash equilibrium. Simulations show the complex behavior of the two-dimensional dynamic system; we find period-doubling bifurcations and chaos. We measure the performance of the model in different periods using average profit as an index. The results show that unstable behavior in an economic system is often an unfavorable outcome. This paper therefore discusses the application of an adaptive adjustment mechanism when the model exhibits chaotic behavior, which allows the retailers to eliminate the negative effects.
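A textbook bounded-rationality price-adjustment map illustrates the kind of dynamics analyzed above: each retailer raises or lowers its price in proportion to its current marginal profit. This is a standard illustrative duopoly form, not the paper's dual-channel model; the demand function and every parameter below are assumptions:

```python
def step(p1, p2, alpha, a=10.0, b=0.5, c=1.0):
    """One iteration of a bounded-rationality price map for two retailers
    with assumed linear demand q_i = a - p_i + b*p_j and unit cost c."""
    g1 = a - 2.0 * p1 + b * p2 + c   # marginal profit of retailer 1
    g2 = a - 2.0 * p2 + b * p1 + c   # marginal profit of retailer 2
    return p1 + alpha * p1 * g1, p2 + alpha * p2 * g2

def simulate(alpha, steps=500):
    """Iterate the map and report late-run prices and average profit,
    mirroring the abstract's average-profit performance index."""
    p1, p2 = 2.0, 3.0
    profits = []
    for _ in range(steps):
        p1, p2 = step(p1, p2, alpha)
        q1 = 10.0 - p1 + 0.5 * p2
        profits.append((p1 - 1.0) * q1)
    return p1, p2, sum(profits[-100:]) / 100.0

# Small adjustment speed: prices settle on the Nash equilibrium p* = 22/3
p1, p2, avg_profit = simulate(alpha=0.05)
```

With a small adjustment speed the map converges to the Nash equilibrium; increasing `alpha` is the standard route to period-doubling and chaos in maps of this type, which is the regime where an adaptive adjustment mechanism becomes useful.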
EMU Suit Performance Simulation
NASA Technical Reports Server (NTRS)
Cowley, Matthew S.; Benson, Elizabeth; Harvill, Lauren; Rajulu, Sudhakar
2014-01-01
Introduction: Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for research and development are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques that focus on a human-centric design paradigm. These new techniques make use of virtual prototype simulations and fully adjustable physical prototypes of suit hardware. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process. Objectives: The primary objective was to test modern simulation techniques for evaluating the human performance component of two EMU suit concepts, pivoted and planar style hard upper torso (HUT). Methods: This project simulated variations in EVA suit shoulder joint design and subject anthropometry and then measured the differences in shoulder mobility caused by the modifications. These estimations were compared to human-in-the-loop test data gathered during past suited testing using four subjects (two large males, two small females). Results: Results demonstrated that EVA suit modeling and simulation are feasible design tools for evaluating and optimizing suit design based on simulated performance. The suit simulation model was found to be advantageous in its ability to visually represent complex motions and volumetric reach zones in three dimensions, giving designers a faster and deeper comprehension of suit component performance vs. human performance. 
Suit models were able to discern differing movement capabilities between EMU HUT configurations, generic suit fit concerns, and specific suit fit concerns for crewmembers based on individual anthropometry.
Gerstle, Melissa; Beebe, Dean W.; Drotar, Dennis; Cassedy, Amy; Marino, Bradley S.
2016-01-01
Objective To investigate the presence and severity of real-world impairments in executive functioning – responsible for children's regulatory skills (metacognition, behavioral regulation) – and its potential impact on school performance among pediatric survivors of complex congenital heart disease (CHD). Study design Survivors of complex CHD aged 8–16 years (n=143) and their parents/guardians from a regional CHD survivor registry participated (81% participation rate). Parents completed proxy measures of executive functioning, school competency, and school-related quality of life (QOL). Patients also completed a measure of school QOL and underwent IQ testing. Patients were categorized into two groups based on heart lesion complexity: two-ventricle or single-ventricle. Results Survivors of complex CHD performed significantly worse than norms for executive functioning, IQ, school competency, and school QOL. Metacognition was more severely affected than behavioral regulation, and metacognitive deficits were more often present in older children. Even after taking into account demographic factors, disease severity, and IQ, metacognition uniquely and strongly predicted poorer school performance. In exploratory analyses, patients with single-ventricle lesions were rated as having lower school competency and school QOL, and patients with two-ventricle lesions were rated as having poorer behavioral regulation. Conclusions Survivors of complex CHD experience greater executive functioning difficulties than healthy peers, with metacognition particularly impacted and particularly relevant for day-to-day school performance. Especially in older children, clinicians should watch for metacognitive deficits, such as problems with organization, planning, self-monitoring, and follow-through on tasks. PMID:26875011
Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?
Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni
2015-09-01
The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. 20 adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on 3 high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
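Sobol's method, used above as the screening tool, estimates how much of the output variance each decision variable explains on its own; insensitive variables can then be fixed to shrink the search space. A minimal Monte Carlo sketch of first-order indices using Saltelli's estimator, with a toy model standing in for a reservoir simulation:

```python
import random

def sobol_first_order(func, dim, n=20000, seed=1):
    """Monte Carlo estimate of Sobol first-order indices via Saltelli's
    estimator, for inputs uniform on [0, 1]. A sketch of the screening
    idea only, not the study's implementation."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [func(x) for x in A]
    fB = [func(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # AB_i: matrix A with column i replaced by column i of B
        AB = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fAB = [func(x) for x in AB]
        s1 = sum(fb * (fab - fa)
                 for fb, fab, fa in zip(fB, fAB, fA)) / n / var
        indices.append(s1)
    return indices

# Toy 'reservoir performance' dominated by one decision variable
model = lambda x: 10.0 * x[0] + 0.1 * x[1] + 0.1 * x[2]
s = sobol_first_order(model, dim=3)
```

A result like the case studies' (performance controlled by a small proportion of variables) shows up here as one index near 1 and the rest near 0, which is exactly the signal used to justify dimension reduction before the full multi-objective search.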
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
Gamifying Video Object Segmentation.
Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela
2017-10-01
Video object segmentation can be considered one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; these limitations appear more evident when we compare the performance of automated methods with that of humans. However, manually segmenting objects in videos is largely impractical, as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to correctly identify objects in visual scenes, and on the other hand, collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
A deep learning approach for fetal QRS complex detection.
Zhong, Wei; Liao, Lijuan; Guo, Xuemei; Wang, Guoli
2018-04-20
Non-invasive fetal electrocardiography (NI-FECG) has the potential to provide additional clinical information for detecting and diagnosing fetal diseases. We propose and demonstrate a deep learning approach for fetal QRS complex detection from raw NI-FECG signals using a convolutional neural network (CNN) model. The main objective is to investigate whether reliable fetal QRS complex detection performance can still be obtained from features of single-channel NI-FECG signals, without canceling maternal ECG (MECG) signals. First, we collect data from set-a of the PhysioNet/Computing in Cardiology Challenge database. The sample entropy method is used for signal quality assessment, and part of the poor-quality signals are excluded from further analysis. Second, in the proposed method, the features of raw NI-FECG signals are normalized before they are fed to a CNN classifier that performs fetal QRS complex detection. We use precision, recall, F-measure, and accuracy as the evaluation metrics. The proposed deep learning method achieves relatively high precision (75.33%), recall (80.54%), and F-measure (77.85%) compared with three well-known pattern classification methods, namely KNN, naive Bayes, and SVM. Thus, the proposed deep learning method can attain reliable fetal QRS complex detection performance from raw NI-FECG signals without canceling MECG signals. In addition, the influence of different activation functions and of the signal quality assessment step on classification performance is evaluated; results show that ReLU outperforms Sigmoid and Tanh on this task, and that better classification performance is obtained with the signal quality assessment step.
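The evaluation metrics reported above follow standard definitions, which can be sketched as follows; note that the abstract's reported precision (75.33%) and recall (80.54%) are consistent with its F-measure (~77.85%) under the usual harmonic-mean definition. The example counts are hypothetical:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (F1), the summary score
    used for QRS-detection performance."""
    return 2 * precision * recall / (precision + recall)

def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative detection counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, f_measure(p, r)

# Consistency check of the abstract's reported scores:
f = f_measure(0.7533, 0.8054)   # ~0.7785, i.e. 77.85%

# Hypothetical counts for one test record:
p, r, f1 = detection_metrics(tp=8, fp=2, fn=2)
```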
Prediction of Human Activity by Discovering Temporal Sequence Patterns.
Li, Kang; Fu, Yun
2014-08-01
Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Different from early detection of short-duration simple actions, we propose a novel framework for long-duration complex activity prediction by discovering three key aspects of activity: causality, context-cue, and predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) a probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large- and small-order Markov dependencies between action units are captured; (3) the context-cue, especially interacting-object information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrences are encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets for each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.
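The causality component can be illustrated with a much-simplified stand-in for a probabilistic suffix tree: count which action follows each context (suffix) of bounded length, then back off from the longest matching context when predicting. The action names and sequences below are hypothetical, and a real PST additionally prunes low-probability contexts:

```python
from collections import defaultdict

def train_suffix_counts(sequences, max_order=3):
    """Count next-action frequencies after every context of length
    0..max_order, capturing both small- and larger-order Markov
    dependencies between action units."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for t in range(len(seq)):
            for k in range(0, max_order + 1):
                if t - k < 0:
                    break
                counts[tuple(seq[t - k:t])][seq[t]] += 1
    return counts

def predict_next(counts, history, max_order=3):
    """Back off from the longest matching suffix of the observed
    history to shorter ones until a known context is found."""
    for k in range(min(max_order, len(history)), -1, -1):
        context = tuple(history[len(history) - k:])
        if context in counts:
            nxt = counts[context]
            return max(nxt, key=nxt.get)
    return None

# Hypothetical action sequences from a "making tea" activity.
seqs = [["reach", "grasp", "pour", "drink"],
        ["reach", "grasp", "pour", "stir"],
        ["reach", "grasp", "pour", "drink"]]
model = train_suffix_counts(seqs)
```

Given the partial observation `["grasp", "pour"]`, the model predicts the most frequent continuation seen after that context.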
Research on measurement method of optical camouflage effect of moving object
NASA Astrophysics Data System (ADS)
Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen
2016-10-01
Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current optical-band camouflage effectiveness measurement is aimed mainly at static targets and therefore cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage-effect detection, taking the digital camouflage of a moving object as the research subject. The adaptive background-update algorithm of Surendra was improved, and a method for measuring optical camouflage effect during moving-object detection using the Lab color space is presented. With this measurement technique, the binary image of the moving object is extracted, and in the image sequence characteristic parameters such as dispersion, eccentricity, complexity, and moment invariants are used to construct the feature-vector space. The Euclidean distance for the moving target with digital camouflage was calculated; the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity, and moment invariants of the digital camouflage pattern differ greatly from those of the moving target without sprayed digital camouflage. The measurement results show that the camouflage effect was good. Meanwhile, in the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of target and background under dynamic conditions. As a next step, in view of existing infrared camouflage technology, we plan to develop camouflage-effect measurement of moving targets in the infrared band.
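The comparison step reduces to a Euclidean distance between shape-feature vectors, which can be sketched as below. The feature values are purely illustrative; the paper's actual features (dispersion, eccentricity, complexity, moment invariants) are computed from the extracted binary silhouettes:

```python
import math

def shape_feature_distance(v1, v2):
    """Euclidean distance between two shape-feature vectors
    (e.g. dispersion, eccentricity, complexity, moment invariants).
    A large distance suggests the camouflaged silhouette differs
    strongly from the uncamouflaged one."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

# Hypothetical per-frame feature vectors (illustrative values only).
camouflaged = [0.42, 0.88, 1.35, 0.07]
plain       = [0.90, 0.31, 2.10, 0.55]
d = shape_feature_distance(camouflaged, plain)
```

In the paper this distance is averaged over all frames of the sequence (189.45 over 375 frames) to summarize the camouflage effect.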
Least-squares luma-chroma demultiplexing algorithm for Bayer demosaicking.
Leung, Brian; Jeon, Gwanggil; Dubois, Eric
2011-07-01
This paper addresses the problem of interpolating missing color components at the output of a Bayer color filter array (CFA), a process known as demosaicking. A luma-chroma demultiplexing algorithm is presented in detail, using a least-squares design methodology for the required bandpass filters. A systematic study of objective demosaicking performance and system complexity is carried out, and several system configurations are recommended. The method is compared with other benchmark algorithms in terms of CPSNR and S-CIELAB ∆E∗ objective quality measures and demosaicking speed. It was found to provide excellent performance and the best quality-speed tradeoff among the methods studied.
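The CPSNR figure used in the comparison can be sketched as follows: a PSNR computed over the mean squared error of all three color channels jointly. This is a generic illustration with nested-list 8-bit images and hypothetical pixel values, not the authors' code:

```python
import math

def cpsnr(img1, img2, peak=255.0):
    """Color PSNR: PSNR over the pooled mean squared error of all
    channels, commonly used to score demosaicking quality.
    Images are nested lists indexed [row][col][channel]."""
    se, n = 0.0, 0
    for row1, row2 in zip(img1, img2):
        for px1, px2 in zip(row1, row2):
            for c1, c2 in zip(px1, px2):
                se += (c1 - c2) ** 2
                n += 1
    mse = se / n
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

# Tiny hypothetical 1x2 RGB images differing in one channel by 5.
a = [[[255, 0, 0], [0, 255, 0]]]
b = [[[250, 0, 0], [0, 255, 0]]]
score = cpsnr(a, b)
```

Identical images score infinity; small per-channel errors yield scores around 40 dB, the range where demosaicking algorithms are typically compared.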
Alternate Waveforms for a Low-Cost Civil Global Positioning System Receiver
DOT National Transportation Integrated Search
1980-06-01
This report examines the technical feasibility of alternate waveforms to perform the GPS functions and to result in less complex receivers than is possible with the GPS C/A waveform. The approach taken to accomplish this objective is (a) to identify,...
Effects of complex aural stimuli on mental performance.
Vij, Mohit; Aghazadeh, Fereydoun; Ray, Thomas G; Hatipkarasulu, Selen
2003-06-01
The objective of this study is to investigate the effect of complex aural stimuli on mental performance. A series of experiments was designed to obtain data for two different analyses. The first analysis is a "stimulus" versus "no-stimulus" comparison for each of the four dependent variables, i.e., quantitative ability, reasoning ability, spatial ability, and memory of an individual, comparing the control treatment with the rest of the treatments. The second analysis is a multivariate analysis of variance for component-level main effects and interactions. The two component factors are tempo of the complex aural stimuli and sound volume level, each administered at three discrete levels for all four dependent variables. Ten experiments were conducted on eleven subjects. It was found that complex aural stimuli influence the quantitative and spatial aspects of the mind, while reasoning ability was unaffected by the stimuli. Although memory showed a trend toward being worse in the presence of complex aural stimuli, the effect was statistically insignificant. Variation in tempo and sound volume level of an aural stimulus did not significantly affect the mental performance of an individual. The results of these experiments can be effectively used in designing work environments.
Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure
2016-01-01
Background Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. Objective The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. Methods We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. Results We identified 5 high-level macrocognitive processes affecting medication management—sensemaking, planning, coordination, monitoring, and decision making—and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Conclusions Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation. PMID:27733331
DOE Office of Scientific and Technical Information (OSTI.GOV)
MYERS DA
This report documents the results of preliminary surface geophysical exploration activities performed between October and December 2006 at the B, BX, and BY tank farms (B Complex). The B Complex is located in the 200 East Area of the U.S. Department of Energy's Hanford Site in Washington State. The objective of the preliminary investigation was to collect background characterization information with magnetic gradiometry and electromagnetic induction to understand the spatial distribution of metallic objects that could potentially interfere with the results of a high-resolution resistivity survey. Results of the background characterization show there are several areas located around the site with large metallic subsurface debris or metallic infrastructure.
Sex and cultural differences in spatial performance between Japanese and North Americans.
Sakamoto, Maiko; Spiers, Mary V
2014-04-01
Previous studies have suggested that Asians perform better than North Americans on spatial tasks but show smaller sex differences. In this study, we evaluated the relationship between long-term experience with a pictorial written language and spatial performance. It was hypothesized that native Japanese Kanji (a complex pictorial written language) educated adults would show smaller sex differences on spatial tasks than Japanese Americans or North Americans without Kanji education. A total of 80 young healthy participants (20 native Japanese speakers, 20 Japanese Americans-non Japanese speaking, and 40 North Americans-non Japanese speaking) completed the Rey Complex Figure Test (RCFT), the Mental Rotations Test (MRT), and customized 2D and 3D spatial object location memory tests. As predicted, main effects revealed men performed better on the MRT and RCFT and women performed better on the spatial object location memory tests. Also, as predicted, native Japanese performed better on all tests than the other groups. In contrast to the other groups, native Japanese showed a decreased magnitude of sex differences on aspects of the RCFT (immediate and delayed recall) and no significant sex difference on the efficiency of the strategy used to copy and encode the RCFT figure. This study lends support to the idea that intensive experience over time with a pictorial written language (i.e., Japanese Kanji) may contribute to increased spatial performance on some spatial tasks as well as diminish sex differences in performance on tasks that most resemble Kanji.
An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.
Magen, Hagit
2017-03-01
Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.
Object-oriented Technology for Compressor Simulation
NASA Technical Reports Server (NTRS)
Drummond, C. K.; Follen, G. J.; Cannon, M. R.
1994-01-01
An object-oriented basis for interdisciplinary compressor simulation can, in principle, overcome several barriers associated with the traditional structured (procedural) development approach. This paper presents the results of a research effort whose objective was to explore the repercussions on design, analysis, and implementation of a compressor model in an object-oriented (OO) language, and to examine the ability of the OO system design to accommodate computational fluid dynamics (CFD) code for compressor performance prediction. Three fundamental results are that: (1) the selection of the object-oriented language is not the central issue; enhanced (interdisciplinary) analysis capability derives from a broader focus on object-oriented technology; (2) object-oriented designs will produce more effective and reusable computer programs when the technology is applied to issues involving complex system inter-relationships (more so than when addressing the complex physics of an isolated discipline); and (3) the concept of disposable prototypes is effective for exploratory research programs, but this requires organizations to have a commensurate long-term perspective. This work also suggests that interdisciplinary simulation can be effectively accomplished (over several levels of fidelity) with a mixed-language treatment (i.e., FORTRAN-C++), reinforcing the notion that OO technology implementation in simulations is a 'journey' in which the syntax can, by design, continuously evolve.
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Effect of tDCS on task relevant and irrelevant perceptual learning of complex objects.
Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies
2016-01-01
During perceptual learning the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning is dependent on the relevance of the objects for the learning task. Participants were trained in two tasks: object recognition using a backward masking paradigm, and an orientation judgment task. During both tasks, an object with a red line on top of it was presented in each trial. The crucial difference between the tasks was the relevance of the object: the object was relevant for the object recognition task, but not for the orientation judgment task. During training, half of the participants received anodal tDCS stimulation targeted at LO. Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task, and a set of completely new objects. Participants stimulated with tDCS during training showed larger improvements in performance than participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. To conclude, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Further, mere exposure is not sufficient to train object recognition in our paradigm.
Similarity, not complexity, determines visual working memory performance.
Jackson, Margaret C; Linden, David E J; Roberts, Mark V; Kriegeskorte, Nikolaus; Haenschel, Corinna
2015-11-01
A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased perceptual similarity between complex items as a result of a large amount of overlapping information. Increased similarity is thought to lead to greater comparison errors between items encoded into WM and the test item(s) presented at retrieval. However, previous studies have used different object categories to manipulate complexity and similarity, raising questions as to whether these effects are simply due to cross-category differences. For the first time, the relationship between complexity and similarity in WM is here investigated using the same stimulus category (abstract polygons). The authors used a delayed discrimination task to measure WM for 1-4 complex versus simple simultaneously presented items and manipulated the similarity between the single test item at retrieval and the sample items at encoding. WM was poorer for complex than simple items only when the test item was similar to 1 of the encoding items, and not when it was dissimilar or identical. The results provide clear support for reinterpretation of the complexity effect in WM as a similarity effect and highlight the importance of the retrieval stage in governing WM performance. The authors discuss how these findings can be reconciled with current models of WM capacity limits.
Performance of Geno-Fuzzy Model on rainfall-runoff predictions in claypan watersheds
USDA-ARS?s Scientific Manuscript database
Fuzzy logic provides a relatively simple approach to simulate complex hydrological systems while accounting for the uncertainty of environmental variables. The objective of this study was to develop a fuzzy inference system (FIS) with genetic algorithm (GA) optimization for membership functions (MF...
Information and complexity measures for hydrologic model evaluation
USDA-ARS?s Scientific Manuscript database
Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
An Effective 3D Shape Descriptor for Object Recognition with RGB-D Sensors
Liu, Zhong; Zhao, Changchen; Wu, Xingming; Chen, Weihai
2017-01-01
RGB-D sensors have been widely used in various areas of computer vision and graphics. A good descriptor can effectively improve recognition performance. This article further analyzes the recognition performance of shape features extracted from multi-modality source data using RGB-D sensors. A hybrid shape descriptor is proposed as a representation of objects for recognition. We first extracted five 2D shape features from contour-based images and five 3D shape features over point cloud data to capture the global and local shape characteristics of an object. The recognition performance was tested for category recognition and instance recognition. Experimental results show that the proposed shape descriptor outperforms several common global-to-global shape descriptors and is comparable to some partial-to-global shape descriptors that achieved the best accuracies in category and instance recognition. The contribution of partial features and the computational complexity were also analyzed. The results indicate that the proposed shape features are strong cues for object recognition and can be combined with other features to boost accuracy. PMID:28245553
NASA Astrophysics Data System (ADS)
Erener, A.
2013-04-01
Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely first principal component (1st PC) and the intensity image, for original data for multi classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods: pixel based and object based approaches, which reflect the third aim of the study. The objective here is to demonstrate the differences in the evaluation of accuracies of classification methods. Considering consistency, the same set of ground truth data which is produced by labeling the building boundaries in the GIS environment is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on results. The method is applied to Quickbird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops. 
The complex surface type involves almost all kinds of challenges, such as high-density built-up areas, regions with bare soil, and small and large buildings with different rooftops, such as concrete, brick, and metal. Using pixel-based accuracy assessment, it was shown that the percent building detection (PBD) and quality percent (QP) of the MLC and SVM depend on the complexity and texture variation of the region. Generally, PBD values range between 70% and 90% for the MLC and SVM, respectively. No substantial improvements were observed when additional variables were included in the SVM and MLC classifications instead of only the original four bands. In the object-based accuracy assessment, it was demonstrated that while the MLC and SVM provide higher rates of correct detection, they also provide higher rates of false alarms.
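The pixel-based scores used above can be sketched under one common set of definitions, in which PBD (also called detection rate or completeness) ignores false alarms while QP penalizes both missed and falsely detected building pixels. The definitions and pixel counts below are an assumption for illustration, not taken from the paper:

```python
def building_detection_scores(tp, fp, fn):
    """Percent building detection (PBD) and quality percent (QP) from
    pixel counts, under common definitions: PBD = TP/(TP+FN),
    QP = TP/(TP+FP+FN), both as percentages."""
    pbd = 100.0 * tp / (tp + fn)
    qp = 100.0 * tp / (tp + fp + fn)
    return pbd, qp

# Hypothetical pixel counts for one classified scene.
pbd, qp = building_detection_scores(tp=8500, fp=1500, fn=1500)
```

Note that QP is always at most PBD, which is why a classifier with many false alarms can score well on PBD yet poorly on QP.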
Perceptual Learning of Object Shape
Golcu, Doruk; Gilbert, Charles D.
2009-01-01
Recognition of objects is accomplished through the use of cues that depend on internal representations of familiar shapes. We used a paradigm of perceptual learning during visual search to explore what features human observers use to identify objects. Human subjects were trained to search for a target object embedded in an array of distractors, until their performance improved from near-chance levels to over 80% of trials in an object specific manner. We determined the role of specific object components in the recognition of the object as a whole by measuring the transfer of learning from the trained object to other objects sharing components with it. Depending on the geometric relationship of the trained object with untrained objects, transfer to untrained objects was observed. Novel objects that shared a component with the trained object were identified at much higher levels than those that did not, and this could be used as an indicator of which features of the object were important for recognition. Training on an object also transferred to the components of the object when these components were embedded in an array of distractors of similar complexity. These results suggest that objects are not represented in a holistic manner during learning, but that their individual components are encoded. Transfer between objects was not complete, and occurred for more than one component, regardless of how well they distinguish the object from distractors. This suggests that a joint involvement of multiple components was necessary for full performance. PMID:19864574
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Gascuel-Odoux, Chantal; Savenije, Hubert
2014-05-01
Hydrological models are frequently characterized by what is often considered to be adequate calibration performance. In many cases, however, these models experience substantial uncertainty and a performance decrease in validation periods, thus resulting in poor predictive power. Besides the likely presence of data errors, this observation can point towards wrong or insufficient representations of the underlying processes and their heterogeneity. In other words, right results are generated for the wrong reasons. Ways are therefore sought to increase model consistency while satisfying the contrasting priorities of (a) increasing model complexity and (b) limiting model equifinality. In this study a stepwise model development approach is chosen to test the value of an exhaustive, systematic, combined use of hydrological signatures, expert knowledge and readily available, yet anecdotal and rarely exploited, hydrological information for increasing model consistency towards generating the right answer for the right reasons. A simple 3-box, 7-parameter, conceptual HBV-type model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph with comparatively high values for the 4 objective functions in the 5-year calibration period. However, closer inspection of the results showed a dramatic decrease of model performance in the 5-year validation period. In addition, assessing the model's skill to reproduce a range of 20 hydrological signatures including, amongst others, the flow duration curve, the autocorrelation function and the rising limb density, showed that it could not adequately reproduce the vast majority of these signatures, indicating a lack of model consistency. Subsequently, model complexity was increased in a stepwise way to allow for more process heterogeneity. 
To limit model equifinality, the increase in complexity was counter-balanced by a stepwise application of "realism constraints", inferred from expert knowledge (e.g. the unsaturated storage capacity of hillslopes should exceed that of wetlands) and anecdotal hydrological information (e.g. long-term estimates of actual evaporation obtained from the Budyko framework and long-term estimates of baseflow contribution), to ensure that the model is well behaved with respect to the modeller's perception of the system. A total of 11 model set-ups with increasing complexity and an increasing number of realism constraints were tested. It could be shown that, in spite of largely unchanged calibration performance compared to the simplest set-up, the most complex set-up (12 parameters, 8 constraints) exhibited significantly increased performance in the validation period while uncertainty did not increase. In addition, the most complex model was characterized by a substantially increased skill to reproduce all 20 signatures, indicating a more suitable representation of the system. The results suggest that a model "well" constrained by 4 calibration objective functions may still be an inadequate representation of the system; that increasing model complexity, if counter-balanced by realism constraints, can indeed increase the predictive performance of a model and its skill to reproduce a range of hydrological signatures; and that this does not necessarily result in increased uncertainty. The results also strongly illustrate the need to move away from automated model calibration towards a more general expert-knowledge-driven strategy of constraining models if a certain level of model consistency is to be achieved.
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2017-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws and geometric features were inspected using a 2-megavolt linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed to determine the impact of additive manufacturing on inspectability of objects with complex geometries.
The role of shape complexity in the detection of closed contours.
Wilder, John; Feldman, Jacob; Singh, Manish
2016-09-01
The detection of contours in noise has been extensively studied, but the detection of closed contours, such as the boundaries of whole objects, has received relatively little attention. Closed contours pose substantial challenges not present in the simple (open) case, because they form the outlines of whole shapes and thus take on a range of potentially important configural properties. In this paper we consider the detection of closed contours in noise as a probabilistic decision problem. Previous work on open contours suggests that contour complexity, quantified as the negative log probability (Description Length, DL) of the contour under a suitably chosen statistical model, impairs contour detectability; more complex (statistically surprising) contours are harder to detect. In this study we extended this result to closed contours, developing a suitable probabilistic model of whole shapes that gives rise to several distinct though interrelated measures of shape complexity. We asked subjects to detect either natural shapes (Exp. 1) or experimentally manipulated shapes (Exp. 2) embedded in noise fields. We found systematic effects of global shape complexity on detection performance, demonstrating how aspects of global shape and form influence the basic process of object detection. Copyright © 2015 Elsevier Ltd. All rights reserved.
Challenges to the development of complex virtual reality surgical simulations.
Seymour, N E; Røtnes, J S
2006-11-01
Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.
Reengineering the JPL Spacecraft Design Process
NASA Technical Reports Server (NTRS)
Briggs, C.
1995-01-01
This presentation describes the factors that have emerged in the evolved process of reengineering the unmanned spacecraft design process at the Jet Propulsion Laboratory in Pasadena, California. Topics discussed include: New facilities, new design factors, new system-level tools, complex performance objectives, changing behaviors, design integration, leadership styles, and optimization.
Complex Sentence Profiles in Children with Specific Language Impairment: Are They Really Atypical?
ERIC Educational Resources Information Center
Riches, Nick G.
2017-01-01
Children with Specific Language Impairment (SLI) have language difficulties of unknown origin. Syntactic profiles are atypical, with poor performance on non-canonical structures, e.g. object relatives, suggesting a localized deficit. However, existing analyses using ANOVAs are problematic because they do not systematically address unequal…
Humane Education: Resource Guide. A Guide for Elementary School Teachers.
ERIC Educational Resources Information Center
New York City Board of Education, Brooklyn, NY. Div. of Curriculum and Instruction.
Humane education promotes responsible behavior and improves the quality of life for animals and humans. Teaching the humane treatment of animals is a complex, philosophical, and values-oriented subject. Lessons for each grade level have performance objectives, materials, and activities. Student activity sheets are provided for follow-up…
Competency Tests and Graduation Requirements. Second Edition.
ERIC Educational Resources Information Center
Keefe, James W.
Interest in applied performance testing and concern about the quality of the high school diploma are finding a common ground: graduation requirements. A competency is a complex capability applicable in real life situations, and can be used as program objectives in a competency-based, criterion-referenced program. In such a program, applied…
Effects of Noun Phrase Type on Sentence Complexity
ERIC Educational Resources Information Center
Gordon, Peter C.; Hendrick, Randall; Johnson, Marcus
2004-01-01
A series of self-paced reading time experiments was performed to assess how characteristics of noun phrases (NPs) contribute to the difference in processing difficulty between object- and subject-extracted relative clauses. Structural semantic characteristics of the NP in the embedded clause (definite vs. indefinite and definite vs. generic) did…
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. 
The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
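The population-based search that the framework exploits can be illustrated with a minimal genetic algorithm over discrete (binary) design variables. This is a sketch under stated assumptions: the OneMax fitness function, elitist selection scheme, and all parameter values below are illustrative choices, not the author's spacecraft-design formulation.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      p_mut=0.05, seed=1):
    """Minimal GA over binary strings: elitist selection,
    one-point crossover, and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]            # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability p_mut (bool xors with 0/1 ints)
            children.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = elite + children
    return max(pop, key=fitness)

# OneMax: maximise the number of 1-bits in the string
best = genetic_algorithm(sum)
```

A multi-objective variant would replace the single `fitness` sort with non-dominated sorting, but the select-recombine-mutate loop above is the common core.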
Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images
NASA Astrophysics Data System (ADS)
Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang
2016-10-01
Considering the inevitable obstacles faced by the pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.
Informational analysis involving application of complex information system
NASA Astrophysics Data System (ADS)
Ciupak, Clébia; Vanti, Adolfo Alberto; Balloni, Antonio José; Espin, Rafael
The aim of the present research is to perform an informational analysis for internal audit involving the application of a complex information system based on fuzzy logic. The approach has been applied to internal audit work that integrates the accounting field with the information systems field. Technological advancements can improve the work performed by internal audit; we therefore aim to find, in complex information systems, priorities for the internal audit work of a highly important private institution of higher education. The method is quali-quantitative: from the definition of strategic linguistic variables it was possible to transform them into quantitative form via matrix intersection. By means of a case study, in which data were collected via an interview with the Administrative Pro-Rector, who takes part in the elaboration of the strategic planning of the institution, it was possible to draw conclusions concerning points that must be prioritized in the internal audit work. We emphasize that the priorities were identified when processed in a system (of academic use). From the study we conclude that, starting from these information systems, audit can identify priorities for its work program. Together with the plans and strategic objectives of the enterprise, the internal auditor can define operational procedures that work towards the attainment of the objectives of the organization.
Enhancing cognition with video games: a multiple game training study.
Oei, Adam C; Patterson, Michael D
2013-01-01
Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden-object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.
A musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.
Chang, Chuan-Hui; Chiao, Yu-Ching; Tsai, Yafang
2017-11-21
This study is based on competitive dynamics theory, and discusses competitive actions (including their implementation requirements, strategic orientation, and action complexity) that influence hospitals' performance, while also meeting the requirements of Taiwan's "global budget" insurance payment policy. In order to investigate the possible actions of hospitals, the study was conducted in two stages. The first stage investigated the actions of hospitals from March 1 to May 31, 2009. Semi-structured questionnaires were used, which included in-depth interviews with senior supervisors of 10 medium- and large-scale hospitals in central Taiwan. This stage collected data related to the types of actions adopted by the hospitals in previous years. The second stage was based on the data collected from the first stage and on developed questionnaires, which were distributed from June 29 to November 1, 2009. The questionnaires were given to 20 superintendents, deputy superintendents, and supervisors responsible for the management of a hospital, and focused on medical centers and regional hospitals in central Taiwan in order to determine the types and number of competitive actions. First, the strategic orientation of an action has a significantly positive influence on subjective performance. Second, action complexity has a significantly positive influence on the subjective and the objective performance of a hospital. Third, the implementation requirements of actions do not have a significantly positive impact on the subjective or the objective performance of a hospital. Managers facing a competitive healthcare environment should adopt competitive strategies to improve the performance of the hospital.
Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz A.
1985-01-01
A technique for the recognition of complex three dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants, algebraic expressions that remain invariant independent of the 3-D objects' orientations and locations in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past. In this work we have extended this method for the representation of more complex objects. Two complex objects are represented digitally; their 3-D moment invariants have been calculated, and then the invariancy of these 3-D invariant moment expressions is verified by changing the orientation and the location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
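A minimal sketch of the underlying idea (not the author's exact invariants): the trace of the second-order central moment tensor, J1 = mu200 + mu020 + mu002, is one of the simplest 3-D moment expressions that is unchanged by translation and rotation of an object, which can be checked numerically on a point set.

```python
import numpy as np

def central_moment(points, p, q, r):
    """Central moment mu_pqr of a 3-D point set (translation-invariant)."""
    d = points - points.mean(axis=0)
    return np.sum(d[:, 0] ** p * d[:, 1] ** q * d[:, 2] ** r)

def j1(points):
    """Second-order 3-D moment invariant: mu200 + mu020 + mu002."""
    return (central_moment(points, 2, 0, 0)
            + central_moment(points, 0, 2, 0)
            + central_moment(points, 0, 0, 2))

rng = np.random.default_rng(0)
obj = rng.normal(size=(500, 3))                   # stand-in "3-D object"
theta = 0.7                                       # arbitrary rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
rotated = obj @ R.T + np.array([5.0, -2.0, 3.0])  # rotate and translate
```

Here `j1(obj)` and `j1(rotated)` agree to within floating-point error, mirroring the verification by reorientation described in the abstract.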
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2016-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.
QCDLoop: A comprehensive framework for one-loop scalar integrals
NASA Astrophysics Data System (ADS)
Carrazza, Stefano; Ellis, R. Keith; Zanderighi, Giulia
2016-12-01
We present a new release of the QCDLoop library based on a modern object-oriented framework. We discuss the new features, such as the extension to complex masses, the possibility to perform computations in double and quadruple precision simultaneously, and useful caching mechanisms that improve computational speed. We benchmark the performance of the new library and provide practical examples of phenomenological implementations by interfacing the new library to Monte Carlo programs.
Performance-scalable volumetric data classification for online industrial inspection
NASA Astrophysics Data System (ADS)
Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.
2002-03-01
Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
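As an illustrative reduction of the Hough-voting idea (a sketch, not the paper's two-stage ellipse implementation), the snippet below recovers the centre of a circle of known radius: each edge point votes for every centre lying at that radius from it, and the accumulator peak marks the detected centre.

```python
import numpy as np

def hough_circle_center(points, radius, size=64, n_theta=90):
    """Vote for candidate circle centres at a fixed, known radius."""
    acc = np.zeros((size, size), dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for x, y in points:
        # each edge point votes for all centres at distance `radius`
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < size) & (cy >= 0) & (cy < size)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    row, col = np.unravel_index(acc.argmax(), acc.shape)
    return col, row                        # (x, y) of the strongest peak

# synthetic edge points of a circle centred at (30, 20), radius 10
t = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
edge = np.column_stack([30 + 10 * np.cos(t), 20 + 10 * np.sin(t)])
cx, cy = hough_circle_center(edge, radius=10)
```

The general ellipse case adds parameters (two radii and an orientation), which is why the paper splits detection into two stages to keep the accumulator tractable.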
Raised visual detection thresholds depend on the level of complexity of cognitive foveal loading.
Plainis, S; Murray, I J; Chauhan, K
2001-01-01
The objective of the study was to measure the interactions between visual thresholds for a simple light (the secondary task) presented peripherally and a simultaneously performed cognitive task (the primary task) presented foveally. The primary task was highly visible but varied according to its cognitive complexity. Interactions between the tasks were determined by measuring detection thresholds for the peripheral task and accuracy of performance of the foveal task. Effects were measured for 5, 10, 20, and 30 deg eccentricity of the peripherally presented light and for three levels of cognitive complexity. Mesopic conditions (0.5 lx) were used. As expected, the concurrent presentation of the foveal cognitive task reduced peripheral sensitivity. Moreover, performance of the foveal task was adversely affected when conducting the peripheral task. Performance on both tasks was reduced as the level of complexity of the cognitive task increased. There were qualitative differences in task interactions between the central 10 deg and at greater eccentricities. Within 10 deg there was a disproportionate effect of eccentricity, previously interpreted as the 'tunnel-vision' model of visual field narrowing. Interactions outside 10 deg were less affected by eccentricity. These results are discussed in terms of the known neurophysiological characteristics of the primary visual pathway.
3D shape measurement of moving object with FFT-based spatial matching
NASA Astrophysics Data System (ADS)
Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun
2018-03-01
This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
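The core of FFT-based spatial matching is cross-correlation evaluated in the frequency domain, where it costs O(n log n) instead of O(n^2). This 1-D sketch (an illustration of the principle, not the paper's exact method) recovers an integer displacement between a reference signal and a shifted copy:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the circular displacement of `moved` relative to `ref`
    via FFT cross-correlation."""
    cross = np.fft.fft(moved) * np.conj(np.fft.fft(ref))
    corr = np.fft.ifft(cross).real          # peak lies at the displacement
    shift = int(np.argmax(corr))
    if shift > len(ref) // 2:               # map wrap-around to signed shift
        shift -= len(ref)
    return shift

rng = np.random.default_rng(42)
ref = rng.normal(size=256)                  # reference profile
moved = np.roll(ref, 7)                     # same profile displaced by 7 samples
```

Applied row-wise to captured images, such estimates let successive fringe images of the moving object be aligned and coherently combined.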
Nursing care complexity in a psychiatric setting: results of an observational study.
Petrucci, C; Marcucci, G; Carpico, A; Lancia, L
2014-02-01
For nurses working in mental health service settings, it is a priority to assess patients using objective criteria, identifying general and behavioural risks and nursing care complexity, in order to meet the demand for care and to improve service quality by reducing conditions that threaten patients themselves or others (adverse events). This study highlights a relationship between the complexity of psychiatric patient care, which was assigned a numerical value after the nursing assessment, and the occurrence of psychiatric adverse events in the recent histories of the patients. The results suggest that nursing supervision should be enhanced for patients with high care complexity scores. © 2013 John Wiley & Sons Ltd.
The artist's advantage: Better integration of object information across eye movements
Perdreau, Florian; Cavanagh, Patrick
2013-01-01
Over their careers, figurative artists spend thousands of hours analyzing objects and scene layout. We examined what impact this extensive training has on the ability to encode complex scenes, comparing participants with a wide range of training and drawing skills on a possible versus impossible objects task. We used a gaze-contingent display to control the amount of information the participants could sample on each fixation either from central or peripheral visual field. Test objects were displayed and participants reported, as quickly as possible, whether the object was structurally possible or not. Our results show that when viewing the image through a small central window, performance improved with the years of training, and to a lesser extent with the level of skill. This suggests that the extensive training itself confers an advantage for integrating object structure into more robust object descriptions. PMID:24349697
On-line object feature extraction for multispectral scene representation
NASA Technical Reports Server (NTRS)
Ghassemian, Hassan; Landgrebe, David
1988-01-01
A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation, which must exist among the pixels of an object, was defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1999-01-01
The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.
Müller, Corsin A.; Riemer, Stefanie; Virányi, Zsófia; Huber, Ludwig; Range, Friederike
2016-01-01
Human infants develop an understanding of their physical environment through playful interactions with objects. Similar processes may influence also the performance of non-human animals in physical problem-solving tasks, but to date there is little empirical data to evaluate this hypothesis. In addition or alternatively to prior experiences, inhibitory control has been suggested as a factor underlying the considerable individual differences in performance reported for many species. Here we report a study in which we manipulated the extent of object-related experience for a cohort of dogs (Canis familiaris) of the breed Border Collie over a period of 18 months, and assessed their level of inhibitory control, prior to testing them in a series of four physical problem-solving tasks. We found no evidence that differences in object-related experience explain variability in performance in these tasks. It thus appears that dogs do not transfer knowledge about physical rules from one physical problem-solving task to another, but rather approach each task as a novel problem. Our results, however, suggest that individual performance in these tasks is influenced in a complex way by the subject’s level of inhibitory control. Depending on the task, inhibitory control had a positive or a negative effect on performance and different aspects of inhibitory control turned out to be the best predictors of individual performance in the different tasks. Therefore, studying the interplay between inhibitory control and problem-solving performance will make an important contribution to our understanding of individual and species differences in physical problem-solving performance. PMID:26863141
A. Smith, Nicholas; A. Folland, Nicholas; Martinez, Diana M.; Trainor, Laurel J.
2017-01-01
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
ERIC Educational Resources Information Center
Campo, Ana E.; Williams, Virginia; Williams, Redford B.; Segundo, Marisol A.; Lydston, David; Weiss, Stephen M.
2008-01-01
Objective: Sound clinical judgment is the cornerstone of medical practice and begins early during medical education. The authors consider the effect of personality characteristics (hostility, anger, cynicism) on clinical judgment and whether a brief intervention can affect this process. Methods: Two sophomore medical classes (experimental,…
Do Grades Tell Parents What They Want and Need to Know?
ERIC Educational Resources Information Center
Webber, Jim; Wilson, Maja
2012-01-01
Teachers' objections to an emphasis on narrative, descriptive evaluation and a de-emphasis on grades cannot rest on uninformed claims about what parents want. Decades of research show that grades don't lead to deeper understandings, increased intellectual risk-taking, or better performance on complex tasks. Similarly, conversations based around…
Effects of Communication Competence and Social Network Centralities on Learner Performance
ERIC Educational Resources Information Center
Jo, Il-Hyun; Kang, Stephanie; Yoon, Meehyun
2014-01-01
Collaborative learning has become a dominant learning apparatus for higher-level learning objectives. Many of the psychological and social mechanisms operating within this complex group activity, however, are not yet well understood. The purpose of this study was to investigate the effects of college students' communication competence and degree…
Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio
2011-11-01
We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption, subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and a minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual-objective optimization problem, which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both algorithms.
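The greedy admission idea behind such a joint scheduling and power-control algorithm can be illustrated with a toy loop. This is a hypothetical sketch, not the paper's actual formulation: the user tuples, the single aggregate EMI budget, and the least-power-first ordering are all illustrative assumptions.

```python
def greedy_schedule(users, emi_limit, max_power):
    """Admit users greedily at their minimum feasible power while the
    total EMI contribution stays within the limit.

    users: list of (name, min_power_watts, emi_per_watt) tuples.
    Returns (schedule, total_emi).
    """
    schedule = []
    emi_total = 0.0
    # Greedy order: users needing the least transmit power first.
    for name, min_power, emi_per_watt in sorted(users, key=lambda u: u[1]):
        if min_power > max_power:
            continue  # this user's throughput need is infeasible
        emi = min_power * emi_per_watt
        if emi_total + emi <= emi_limit:
            schedule.append((name, min_power))
            emi_total += emi
    return schedule, emi_total

users = [("u1", 0.2, 1.0), ("u2", 0.5, 2.0), ("u3", 0.1, 4.0)]
sched, emi = greedy_schedule(users, emi_limit=1.0, max_power=1.0)
```

A real STDMA scheduler would additionally track spatial reuse across time slots and distinguish EMI limits for active versus passive medical devices; the sketch shows only the greedy admission step.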
Object-oriented Approach to High-level Network Monitoring and Management
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2000-01-01
An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. The system is built using the APIs offered by the HP OpenView system. We are investigating methods to build high-level monitoring systems that are built on top of existing monitoring tools. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design. First, we use UML (Unified Modeling Language) to model users' requirements. Second, we identify the existing capabilities of the underlying monitoring system. Third, we try to map the former with the latter.
Visual short-term memory capacity for simple and complex objects.
Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto
2010-03-01
Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) and are not related to storage limitations of VSTM per se. We used ERPs to track neuronal activity specifically related to retention in VSTM by measuring the sustained posterior contralateral negativity during a change detection task (which required detecting whether an item had changed between a memory array and a test array). The sustained posterior contralateral negativity during the retention interval was larger for complex objects than for simple objects, suggesting that neurons mediating VSTM needed to work harder to maintain more complex objects. This, in turn, is consistent with the view that VSTM capacity depends on complexity.
Crew workload-management strategies - A critical factor in system performance
NASA Technical Reports Server (NTRS)
Hart, Sandra G.
1989-01-01
This paper reviews the philosophy and goals of the NASA/USAF Strategic Behavior/Workload Management Program. The philosophical foundation of the program is based on the assumption that an improved understanding of pilot strategies will clarify the complex and inconsistent relationships observed among objective task demands and measures of system performance and pilot workload. The goals are to: (1) develop operationally relevant figures of merit for performance, (2) quantify the effects of strategic behaviors on system performance and pilot workload, (3) identify evaluation criteria for workload measures, and (4) develop methods of improving pilots' abilities to manage workload extremes.
Quantifying the cognitive cost of laparo-endoscopic single-site surgeries: Gaze-based indices.
Di Stasi, Leandro L; Díaz-Piedra, Carolina; Ruiz-Rabelo, Juan Francisco; Rieiro, Héctor; Sanchez Carrion, Jose M; Catena, Andrés
2017-11-01
Despite the growing interest in the laparo-endoscopic single-site surgery (LESS) procedure, LESS presents multiple difficulties and challenges that are likely to increase the surgeon's cognitive cost, in terms of both cognitive load and performance. Nevertheless, there is currently no objective index capable of assessing the surgeon's cognitive cost while performing LESS. We assessed whether gaze-based indices might offer unique and unbiased measures to quantify LESS complexity and its cognitive cost. We expect the assessment of the surgeon's cognitive cost to improve patient safety by measuring fitness-for-duty and reducing surgeon overload. Using a wearable eye tracker device, we measured gaze entropy and velocity of surgical trainees and attending surgeons during two surgical procedures (LESS vs. multiport laparoscopy surgery [MPS]). None of the participants had previous experience with LESS. They performed two exercises with different complexity levels (Low: Pattern Cut vs. High: Peg Transfer). We also collected performance and subjective data. LESS caused higher cognitive demand than MPS, as indicated by increased gaze entropy in both surgical trainees and attending surgeons (the exploration pattern became more random). Furthermore, gaze velocity was higher (the exploration pattern became more rapid) for the LESS procedure independently of the surgeon's expertise. Perceived task complexity and laparoscopic accuracy confirmed the gaze-based results. Gaze-based indices have great potential as objective and non-intrusive measures to assess surgeons' cognitive cost and fitness-for-duty. Furthermore, gaze-based indices might play a relevant role in defining future guidelines on surgeons' examinations to mark their achievements during the entire training (e.g. analyzing surgical learning curves). Copyright © 2017 Elsevier Ltd. All rights reserved.
Groups of adjacent contour segments for object detection.
Ferrari, V; Fevrier, L; Jurie, F; Schmid, C
2008-01-01
We present a family of scale-invariant local shape features formed by chains of k connected, roughly straight contour segments (kAS), and their use for object class detection. kAS are able to cleanly encode pure fragments of an object boundary, without including nearby clutter. Moreover, they offer an attractive compromise between information content and repeatability, and encompass a wide variety of local shape structures. We also define a translation and scale invariant descriptor encoding the geometric configuration of the segments within a kAS, making kAS easy to reuse in other frameworks, for example as a replacement or addition to interest points. Software for detecting and describing kAS is released on lear.inrialpes.fr/software. We demonstrate the high performance of kAS within a simple but powerful sliding-window object detection scheme. Through extensive evaluations, involving eight diverse object classes and more than 1400 images, we 1) study the evolution of performance as the degree of feature complexity k varies and determine the best degree; 2) show that kAS substantially outperform interest points for detecting shape-based classes; 3) compare our object detector to the recent, state-of-the-art system by Dalal and Triggs [4].
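The translation- and scale-invariance property of such a segment-chain descriptor can be sketched in a few lines: normalise the segment endpoints by the chain's centroid and scale, so that shifted and uniformly scaled copies of the same local shape map to the same descriptor. The normalisation choices below are illustrative assumptions, not the actual kAS encoding.

```python
def chain_descriptor(segments):
    """Translation- and scale-invariant encoding of a chain of contour
    segments, each given as ((x1, y1), (x2, y2)) endpoint pairs."""
    pts = [p for seg in segments for p in seg]
    # Centre the chain on its endpoint centroid.
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    # Normalise by the largest coordinate deviation (the chain's scale).
    scale = max(max(abs(x - cx), abs(y - cy)) for x, y in pts) or 1.0
    return [((x1 - cx) / scale, (y1 - cy) / scale,
             (x2 - cx) / scale, (y2 - cy) / scale)
            for (x1, y1), (x2, y2) in segments]

d1 = chain_descriptor([((0, 0), (2, 0)), ((2, 0), (2, 2))])
# Same L-shaped pair of segments, shifted by (10, 10) and scaled by 2:
d2 = chain_descriptor([((10, 10), (14, 10)), ((14, 10), (14, 14))])
```

Both calls produce identical descriptors, which is the property that makes such features reusable across detection windows.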
Performance of office workers under various enclosure conditions in state-of-the-art open workplaces
NASA Astrophysics Data System (ADS)
Yoon, Heakyung Cecilia
The objective of this thesis is to more firmly establish the importance of the physical attributes of workstations for the performance of workers undertaking a range of complex tasks while subjected to the visual and noise distractions prevalent in state-of-the-art North American office settings. This study investigates objective and subjective evaluations of noise and performance given a range of current physical work environments. The study provides criteria for architects, interior designers and managers to select distraction-free office environments that deliver better performance. The concluding chapter helps to establish the importance of designing more acoustically responsible work settings in state-of-the-art office projects. Controlled experiments with 102 subjects (23 native speakers of English for each of the three workstation types) were completed over a six-month testing period in three different work settings (four-foot partitions on two sides; seated privacy with six-foot partitions on three sides; and a closed office with eight-foot partitions, a door and a ceiling) under two acoustic environments (office sounds with and without speech at a controlled 45 dBA level at the receiver); the experimental results were statistically significant. Another finding was the lack of a significant effect of background sound variations on simple or complex task performance. This implies that the current acoustical evaluation tool, the Articulation Index, may not be an appropriate tool to adequately and conclusively assess the acoustic impact of open workplaces on individual performance. Concerning the impact of acoustic conditions on occupant performance in the experiments, Articulation Index values do not reflect the potential relation between workstation designs and subjects' performance and moods. However, NIC combined with a speech privacy rating has the potential to be a better evaluation tool than AI for open workplaces.
From the results of this thesis, it is predicted that fully enclosed workstations will improve the individual performance of knowledge workers whose main tasks are complex, as well as improve the moods of occupants towards collaborations with their co-workers.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr
2016-03-01
The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.
Performance Evaluation of Communication Software Systems for Distributed Computing
NASA Technical Reports Server (NTRS)
Fatoohi, Rod
1996-01-01
In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
Spectral Characteristics of Young Stars Associated with the Sh2-296 Nebula
NASA Astrophysics Data System (ADS)
Fernandes, Beatriz; Gregorio-Hetem, Jane
Aiming to contribute to the understanding of star formation and evolution in the Canis Major (CMa R1) Molecular Clouds Complex, we analyze the spectral characteristics of a population of young stars associated with the arc-shaped nebula Sh2-296. Our XMM/Newton observations detected 109 X-ray sources in the region and optical spectroscopy was performed with Gemini telescope for 85 optical counterparts. We identified and characterized 51 objects that present features typically found in young objects, such as Hα emission and strong absorption on the Li I line.
Parallel Flux Tensor Analysis for Efficient Moving Object Detection
2011-07-01
computing as well as parallelization to enable real time performance in analyzing complex video [3, 4]. There are a number of challenging computer vision… We use the trace of the flux tensor matrix, referred to as Tr J_F, defined as \operatorname{Tr} J_F = \int_{\Omega} W(x - y)\,\left( I_{xt}^{2}(y) + I_{yt}^{2}(y) + I_{tt}^{2}(y) \right) dy.
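Discretely, the flux-tensor trace above is a windowed sum of squared spatiotemporal derivatives of the image sequence. The sketch below is a minimal numpy approximation under assumed finite-difference stencils and a uniform box window W; it is not the report's implementation.

```python
import numpy as np

def flux_tensor_trace(frames, win=3):
    """frames: (T, H, W) array of grayscale frames.
    Returns the flux-tensor trace for the middle frame."""
    f = np.asarray(frames, dtype=float)
    # Temporal derivative I_t, then its spatial/temporal derivatives.
    it = np.gradient(f, axis=0)
    ixt = np.gradient(it, axis=2)   # d/dx of I_t
    iyt = np.gradient(it, axis=1)   # d/dy of I_t
    itt = np.gradient(it, axis=0)   # d/dt of I_t
    mid = f.shape[0] // 2
    energy = ixt[mid] ** 2 + iyt[mid] ** 2 + itt[mid] ** 2
    # Box-window integration over the spatial neighbourhood (uniform W).
    pad = win // 2
    padded = np.pad(energy, pad, mode="edge")
    out = np.zeros_like(energy)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + energy.shape[0], dx:dx + energy.shape[1]]
    return out
```

Static backgrounds yield a zero trace while moving structure yields a positive response, which is what makes the trace usable as a motion-detection map.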
Fire hazard analysis for Plutonium Finishing Plant complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
MCKINNIS, D.L.
1999-02-23
A fire hazards analysis (FHA) was performed for the Plutonium Finishing Plant (PFP) Complex at the Department of Energy (DOE) Hanford site. The scope of the FHA focuses on the nuclear facilities/structures in the Complex. The analysis was conducted in accordance with RLID 5480.7, [DOE Directive RLID 5480.7, 1/17/94] and DOE Order 5480.7A, ''Fire Protection'' [DOE Order 5480.7A, 2/17/93] and addresses each of the sixteen principal elements outlined in paragraph 9.a(3) of the Order. The elements are addressed in terms of the fire protection objectives stated in paragraph 4 of DOE 5480.7A. In addition, the FHA also complies with WHC-CM-4-41, Fire Protection Program Manual, Section 3.4 [1994] and WHC-SD-GN-FHA-30001, Rev. 0 [WHC, 1994]. Objectives of the FHA are to determine: (1) the fire hazards that expose the PFP facilities, or that are inherent in the building operations, (2) the adequacy of the fire safety features currently located in the PFP Complex, and (3) the degree of compliance of the facility with specific fire safety provisions in DOE orders, related engineering codes, and standards.
Experiments in cooperative-arm object manipulation with a two-armed free-flying robot. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Koningstein, Ross
1990-01-01
Developing computed-torque controllers for complex manipulator systems using current techniques and tools is difficult because those tools address the issues pertinent to simulation, as opposed to control. A new formulation of computed-torque (CT) control that leads to an automated computed-torque robot controller program is presented. This automated tool is used for simulations and experimental demonstrations of endpoint and object control from a free-flying robot. The new computed-torque formulation states the multibody control problem in an elegant, homogeneous, and practical form. A recursive dynamics algorithm is presented that numerically evaluates kinematics and dynamics terms for multibody systems given a topological description. Manipulators may be free-flying and may have closed-chain constraints. With the exception of object squeeze-force control, the algorithm does not deal with actuator redundancy. The algorithm is used to implement an automated 2D computed-torque dynamics and control package that allows joint, endpoint, orientation, momentum, and object squeeze-force control. This package obviates the need for hand-derivation of kinematics and dynamics, and is used for both simulation and experimental control. Endpoint control experiments are performed on a laboratory robot that has two arms to manipulate payloads and uses an air bearing to achieve very low drag characteristics. Simulations and experimental data for endpoint and object controllers are presented for the experimental robot, a complex dynamic system. There is a rather wide set of conditions under which CT endpoint controllers can neglect robot base accelerations (but not motions) and achieve performance comparable to controllers that include base accelerations in the model. The regime over which this simplification holds is explored by simulation and experiment.
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-01-01
Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.
Best, Virginia; Mejia, Jorge; Freeston, Katrina; van Hoesel, Richard J.; Dillon, Harvey
2016-01-01
Objective Binaural beamformers are super-directional hearing aids created by combining microphone outputs from each side of the head. While they offer substantial improvements in SNR over conventional directional hearing aids, the benefits (and possible limitations) of these devices in realistic, complex listening situations have not yet been fully explored. In this study we evaluated the performance of two experimental binaural beamformers. Design Testing was carried out using a horizontal loudspeaker array. Background noise was created using recorded conversations. Performance measures included speech intelligibility, localisation in noise, acceptable noise level, subjective ratings, and a novel dynamic speech intelligibility measure. Study sample Participants were 27 listeners with bilateral hearing loss, fitted with BTE prototypes that could be switched between conventional directional or binaural beamformer microphone modes. Results Relative to the conventional directional microphones, both binaural beamformer modes were generally superior for tasks involving fixed frontal targets, but not always for situations involving dynamic target locations. Conclusions Binaural beamformers show promise for enhancing listening in complex situations when the location of the source of interest is predictable. PMID:26140298
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
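At the heart of NSGA-II is the notion of Pareto (non-dominated) solutions, which for two minimisation objectives such as average length of stay and resource waste cost can be sketched as follows. The candidate points below are made-up illustrations, not hospital data, and the filter shown is only the dominance test, not the full sorting-and-crowding machinery of NSGA-II.

```python
def dominates(a, b):
    """a dominates b (minimisation) if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (average length of stay in hours, resource waste cost) per allocation
candidates = [(4.0, 10.0), (5.0, 6.0), (6.0, 5.0), (6.0, 9.0), (3.5, 12.0)]
front = pareto_front(candidates)
```

Here (6.0, 9.0) is dropped because (5.0, 6.0) beats it on both objectives; every remaining allocation trades one objective against the other, which is exactly the set of solutions a decision maker would then choose among.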
Mickelson, Jennifer J; Macneily, Andrew E; Samarasekera, Dinesh; Beiko, Darren; Afshar, Kourosh
2008-06-01
We aimed to clarify the scope of pediatric urological procedures that Canadian urology residents are perceived to be competent to perform upon graduation. We conducted a survey from April 2005 to June 2006 of urology residency program directors (UPDs), senior urology residents (SURs) and Pediatric Urologists of Canada (PUC) members from all 12 Canadian training programs. Questions focused on which of 23 pediatric urological procedures the 3 study groups perceived urology residents would be competent to perform upon completion of residency without further fellowship training. Procedures were based on the "A," "B" and "C" lists of procedures (least complex to most complex) as outlined in the Royal College of Physicians and Surgeons of Canada Objectives of Training in Urology. Response rates were 12/12 (100%), 41/53 (77%) and 17/23 (74%) for UPDs, SURs and PUC members, respectively. Average exposure to pediatric urology during residency was 5.4 (range 3-9) months and considered sufficient by 75% of UPDs and 69% of SURs, but only 41% of PUC members (p = 0.05). Overall, the 3 groups disagreed on the level of competence for performing level "A" and "B" procedures, with significant disagreement between PUC members and UPDs as well as SURs (p < 0.005). PUC members perceive Canadian urology residents' exposure to pediatric urology as insufficient and their competence for procedures of low to moderate complexity as inadequate. Further investigation regarding exposure to and competence in other emerging subspecialty spheres of urology may be warranted. Ongoing assessment of the objectives for training in pediatric urology is required.
Automatic QRS complex detection using two-level convolutional neural network.
Xiang, Yande; Lin, Zhitao; Meng, Jianyi
2018-01-29
The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted features and parameters, which may introduce significant computational complexity, especially in transform domains. In addition, fixed features and parameters are not suitable for detecting various kinds of QRS complexes under different circumstances. In this study, an accurate QRS complex detection method based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs that automatically extract ECG morphological features at different granularities. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique consisting only of a difference operation in the temporal domain is adopted. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. Performance is also evaluated under different signal-to-noise ratio (SNR) values. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method, using a two-level 1-D CNN and a simple preprocessing technique, achieves comparable accuracy.
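The reported figures follow the usual QRS-detection metric definitions: sensitivity Sen = TP/(TP+FN), positive predictivity PPR = TP/(TP+FP), and detection error rate DER = (FP+FN)/(TP+FN). A small sketch with made-up beat counts (not the MIT-BIH results):

```python
def qrs_metrics(tp, fp, fn):
    """Standard beat-detection metrics from true/false positive and
    false negative counts."""
    sen = tp / (tp + fn)          # fraction of true beats found
    ppr = tp / (tp + fp)          # fraction of detections that are real
    der = (fp + fn) / (tp + fn)   # errors per true beat
    return sen, ppr, der

# Illustrative counts only:
sen, ppr, der = qrs_metrics(tp=990, fp=5, fn=10)
```

With these hypothetical counts, Sen = 99.0%, PPR ≈ 99.5%, and DER = 1.5%; plugging in a detector's actual confusion counts reproduces figures of the kind quoted above.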
Preserved Haptic Shape Processing after Bilateral LOC Lesions.
Snow, Jacqueline C; Goodale, Melvyn A; Culham, Jody C
2015-10-07
The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. The lateral occipital complex (LOC) is a brain region regarded to be critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects, in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch. Copyright © 2015 the authors.
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove
2016-01-01
The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which demand (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
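The refine-until-error-bound loop can be sketched in miniature. The snippet below is a hypothetical one-dimensional stand-in: it bisects cells until a crude midpoint-versus-trapezoid error estimate meets a prescribed tolerance, whereas the actual method uses output-based (adjoint) error estimates on 3D Cartesian meshes.

```python
def adapt_refine(f, a, b, tol, max_cells=10000):
    """Bisect cells until a simple midpoint-vs-endpoint-average error
    estimate meets the prescribed bound (toy stand-in for output-based
    adjoint error estimation)."""
    cells = [(a, b)]   # cells still to be checked
    done = []          # cells meeting the error bound
    while cells and len(done) + len(cells) < max_cells:
        x0, x1 = cells.pop()
        mid = 0.5 * (x0 + x1)
        # local error indicator: deviation from linearity, scaled by cell size
        err = abs(f(mid) - 0.5 * (f(x0) + f(x1))) * (x1 - x0)
        if err < tol:
            done.append((x0, x1))
        else:
            cells.extend([(x0, mid), (mid, x1)])
    return sorted(done + cells)

# Refinement concentrates where the function curves most (near x = 1 here).
mesh = adapt_refine(lambda x: x ** 3, 0.0, 1.0, 1e-4)
```

No user supervision is involved: the loop runs every case adaptively, which is the automation objective listed above.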
The Effects of Text Analysis on Drafting and Justifying Research Questions
ERIC Educational Resources Information Center
Padilla, Maria Antonia; Solorzano, Wendy Guadalupe; Pacheco, Virginia
2009-01-01
Introduction: A correspondence has been observed between the level at which one can read scientific texts and one's performance in writing texts of this type. Besides being able to read at the most complex levels, formulating research problems requires explicit training in writing. The objective of the present study was to evaluate whether…
ERIC Educational Resources Information Center
Alvarez, Nahum; Sanchez-Ruiz, Antonio; Cavazza, Marc; Shigematsu, Mika; Prendinger, Helmut
2015-01-01
The use of three-dimensional virtual environments in training applications supports the simulation of complex scenarios and realistic object behaviour. While these environments have the potential to provide an advanced training experience to students, it is difficult to design and manage a training session in real time due to the number of…
Dependence of behavioral performance on material category in an object grasping task with monkeys.
Yokoi, Isao; Tachibana, Atsumichi; Minamimoto, Takafumi; Goda, Naokazu; Komatsu, Hidehiko
2018-05-02
Material perception is an essential part of our cognitive function that enables us to properly interact with our complex daily environment. One important aspect of material perception is its multimodal nature. When we see an object, we generally recognize its haptic properties as well as its visual properties. Consequently, one must examine behavior using real objects that are perceived both visually and haptically to fully understand the characteristics of material perception. As a first step, we examined whether there is any difference in the behavioral responses to different materials in monkeys trained to perform an object grasping task in which they saw and grasped rod-shaped real objects made of various materials. We found that the monkeys' behavior in the grasping task, measured based on the success rate and the pulling force, differed depending on the material category. Monkeys easily and correctly grasped objects of some materials, such as metal and glass, but failed to grasp objects of other materials. In particular, monkeys avoided grasping fur-covered objects. The differences in the behavioral responses to the material categories cannot be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where their biological significance is an important factor. In addition, a monkey that avoided touching real fur-covered objects readily touched images of the same objects presented on a CRT display. This suggests employing real objects is important when studying behaviors related to material perception.
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades the image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. Multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by the linearly combined sinogram of the prior images. Then, an additional correction in the metal trace region is performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
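The pre-correction step the method builds on, MAR with linear interpolation, amounts to filling the metal trace of each projection row by interpolating from uncorrupted detector bins. A minimal NumPy sketch (`inpaint_metal_trace` and the toy sinogram are illustrative, not from the paper):

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """For each projection row, linearly interpolate across detector bins
    flagged as metal trace -- the linear-interpolation MAR pre-correction."""
    out = sinogram.copy().astype(float)
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            # fill corrupted bins from the surrounding clean bins
            out[i, bad] = np.interp(bins[bad], bins[~bad], out[i, ~bad])
    return out

# Toy sinogram: three identical projections with a linear profile.
sino = np.tile(np.linspace(0.0, 1.0, 8), (3, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 3:5] = True  # pretend these detector bins see metal
fixed = inpaint_metal_trace(sino, mask)
```

Because the toy profile is linear, interpolation recovers the trace exactly; on real data the residual errors this leaves behind are what the paper's prior-image combination and residual compensation then correct.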
Park, Subok; Clarkson, Eric
2010-01-01
The Bayesian ideal observer is optimal among all observers and sets an absolute upper bound for the performance of any observer in classification tasks [Van Trees, Detection, Estimation, and Modulation Theory, Part I (Academic, 1968)]. Therefore, the ideal observer should be used for objective image quality assessment whenever possible. However, computation of ideal-observer performance is difficult in practice because this observer requires the full description of unknown, statistical properties of high-dimensional, complex data arising in real-life problems. Previously, Markov-chain Monte Carlo (MCMC) methods were developed by Kupinski et al. [J. Opt. Soc. Am. A 20, 430 (2003)] and by Park et al. [J. Opt. Soc. Am. A 24, B136 (2007) and IEEE Trans. Med. Imaging 28, 657 (2009)] to estimate the performance of the ideal observer and the channelized ideal observer (CIO), respectively, in classification tasks involving non-Gaussian random backgrounds. However, both algorithms had the disadvantage of long computation times. We propose a fast MCMC for real-time estimation of the likelihood ratio for the CIO. Our simulation results show that our method has the potential to speed up the computation of ideal-observer performance in tasks involving complex data when efficient channels are used for the CIO. PMID:19884916
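The MCMC idea can be shown on a toy scalar model: the likelihood ratio is an expectation of the background-known-exactly (BKE) likelihood ratio over the background posterior, which Metropolis-Hastings samples. The model, parameter values, and function below are illustrative assumptions, not the paper's channelized formulation.

```python
import math
import random

def estimate_lr(g, signal, sigma_b, sigma_n, n_steps=20000, seed=0):
    """Metropolis-Hastings estimate of LR(g) = E[p(g|b,H1)/p(g|b,H0)]
    over the background posterior p(b|g,H0), for a toy scalar model
    g = b + signal + noise (illustration only)."""
    rng = random.Random(seed)

    def log_post(b):  # log p(g|b,H0) + log p(b), up to a constant
        return -(g - b) ** 2 / (2 * sigma_n ** 2) - b ** 2 / (2 * sigma_b ** 2)

    b, acc = 0.0, 0.0
    for _ in range(n_steps):
        prop = b + rng.gauss(0.0, sigma_b)
        if math.log(rng.random()) < log_post(prop) - log_post(b):
            b = prop
        # BKE likelihood ratio at the current background sample
        acc += math.exp(((g - b) ** 2 - (g - b - signal) ** 2)
                        / (2 * sigma_n ** 2))
    return acc / n_steps

# For this linear-Gaussian toy, the exact answer is
# exp((g*signal - signal**2/2) / (sigma_b**2 + sigma_n**2)) = exp(0.3).
lr = estimate_lr(g=1.0, signal=0.5, sigma_b=1.0, sigma_n=0.5)
```

The long computation time the paper attacks comes from running such a chain for every image; channelizing the data shrinks the dimension over which the chain must mix.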
Chetcuti, Lacey; Hudry, Kristelle; Grant, Megan; Vivanti, Giacomo
2017-11-01
We examined the role of social motivation and motor execution factors in object-directed imitation difficulties in autism spectrum disorder. A series of to-be-imitated actions was presented to 35 children with autism spectrum disorder and 20 typically developing children on an Apple ® iPad ® by a socially responsive or aloof model, under conditions of low and high motor demand. There were no differences in imitation performance (i.e. the number of actions reproduced within a fixed sequence), for either group, in response to a model who acted socially responsive or aloof. Children with autism spectrum disorder imitated the high motor demand task more poorly than the low motor demand task, while imitation performance for typically developing children was equivalent across the low and high motor demand conditions. Furthermore, imitative performance in the autism spectrum disorder group was unrelated to social reciprocity, though positively associated with fine motor coordination. These results suggest that difficulties in object-directed imitation in autism spectrum disorder are the result of motor execution difficulties, not reduced social motivation.
Application of Intervention Mapping to the Development of a Complex Physical Therapist Intervention.
Jones, Taryn M; Dear, Blake F; Hush, Julia M; Titov, Nickolai; Dean, Catherine M
2016-12-01
Physical therapist interventions, such as those designed to change physical activity behavior, are often complex and multifaceted. In order to facilitate rigorous evaluation and implementation of these complex interventions into clinical practice, the development process must be comprehensive, systematic, and transparent, with a sound theoretical basis. Intervention Mapping is designed to guide an iterative and problem-focused approach to the development of complex interventions. The purpose of this case report is to demonstrate the application of an Intervention Mapping approach to the development of a complex physical therapist intervention, a remote self-management program aimed at increasing physical activity after acquired brain injury. Intervention Mapping consists of 6 steps to guide the development of complex interventions: (1) needs assessment; (2) identification of outcomes, performance objectives, and change objectives; (3) selection of theory-based intervention methods and practical applications; (4) organization of methods and applications into an intervention program; (5) creation of an implementation plan; and (6) generation of an evaluation plan. The rationale and detailed description of this process are presented using an example of the development of a novel and complex physical therapist intervention, myMoves-a program designed to help individuals with an acquired brain injury to change their physical activity behavior. The Intervention Mapping framework may be useful in the development of complex physical therapist interventions, ensuring the development is comprehensive, systematic, and thorough, with a sound theoretical basis. This process facilitates translation into clinical practice and allows for greater confidence and transparency when the program efficacy is investigated. © 2016 American Physical Therapy Association.
Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...
2014-10-23
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure a filter bank with a minimum number of filters can be constructed, for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
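One classical baseline for such composite filters is the equal-correlation-peak synthetic discriminant function (SDF), which combines training templates so that each produces a prescribed correlation peak. A hedged NumPy sketch (the paper's optimized filters are more elaborate):

```python
import numpy as np

def sdf_filter(templates, peaks=None):
    """Equal-correlation-peak synthetic discriminant function: a linear
    combination of training templates, h = X (X^T X)^{-1} u, so that the
    central correlation of h with each template equals the entry of u."""
    X = np.stack([t.ravel() for t in templates], axis=1)  # columns = templates
    u = np.ones(X.shape[1]) if peaks is None else np.asarray(peaks, float)
    h = X @ np.linalg.solve(X.T @ X, u)
    return h.reshape(templates[0].shape)

# Three made-up training templates (e.g. distorted views of one target).
rng = np.random.default_rng(1)
train = [rng.normal(size=(8, 8)) for _ in range(3)]
h = sdf_filter(train)
```

By construction the filter's inner product with every training template is exactly 1; the multi-objective design in the paper instead searches over which templates to include so that several competing metrics are optimized at once.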
Software systems for modeling articulated figures
NASA Technical Reports Server (NTRS)
Phillips, Cary B.
1989-01-01
Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet insure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.
Knowledge base rule partitioning design for CLIPS
NASA Technical Reports Server (NTRS)
Mainardi, Joseph D.; Szatkowski, G. P.
1990-01-01
This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The main objective of the Expert System advanced development project (ADP-2302) is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification and validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the necessary V&V testing by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
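The block load/unload bookkeeping can be sketched generically. The class below is a hypothetical Python analogue of the 'block' mechanism (names are invented for illustration): rules are clustered into named blocks, and only the blocks for the current phase stay active, shrinking the match network the inference engine must maintain.

```python
class BlockedKB:
    """Sketch of knowledge-base partitioning: cluster rules into named
    blocks and load/unload whole blocks as the mission phase changes."""

    def __init__(self):
        self.blocks = {}   # block name -> list of rule names
        self.active = {}   # rule name -> owning block (currently loaded)

    def define_block(self, name, rules):
        self.blocks[name] = list(rules)

    def load(self, name):
        for rule in self.blocks[name]:
            self.active[rule] = name

    def unload(self, name):
        self.active = {r: b for r, b in self.active.items() if b != name}

# Phase change: swap the ascent block out and the orbit block in.
kb = BlockedKB()
kb.define_block("ascent", ["check-thrust", "check-attitude"])
kb.define_block("orbit", ["check-thermal"])
kb.load("ascent")
kb.unload("ascent")
kb.load("orbit")
```

Because blocks are mutually exclusive across phases, V&V can also proceed block by block, as the abstract argues.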
Grids in topographic maps reduce distortions in the recall of learned object locations.
Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank
2014-01-01
To date, it has been shown that cognitive map representations based on cartographic visualisations are systematically distorted. The grid is a traditional element of map graphics that has rarely been considered in research on perception-based spatial distortions. Grids do not only support the map reader in finding coordinates or locations of objects, they also provide a systematic structure for clustering visual map information ("spatial chunks"). The aim of this study was to examine whether different cartographic kinds of grids reduce spatial distortions and improve recall memory for object locations. Recall performance was measured as both the percentage of correctly recalled objects (hit rate) and the mean distance errors of correctly recalled objects (spatial accuracy). Different kinds of grids (continuous lines, dashed lines, crosses) were applied to topographic maps. These maps were also varied in their type of characteristic areas (LANDSCAPE) and different information layer compositions (DENSITY) to examine the effects of map complexity. The study involving 144 participants shows that all experimental cartographic factors (GRID, LANDSCAPE, DENSITY) improve recall performance and spatial accuracy of learned object locations. Overlaying a topographic map with a grid significantly reduces the mean distance errors of correctly recalled map objects. The paper includes a discussion of a square grid's usefulness concerning object location memory, independent of whether the grid is clearly visible (continuous or dashed lines) or only indicated by crosses.
Chiou, Rocco; Sowman, Paul F; Etchell, Andrew C; Rich, Anina N
2014-05-01
Object recognition benefits greatly from our knowledge of typical color (e.g., a lemon is usually yellow). Most research on object color knowledge focuses on whether both knowledge and perception of object color recruit the well-established neural substrates of color vision (the V4 complex). Compared with the intensive investigation of the V4 complex, we know little about where and how neural mechanisms beyond V4 contribute to color knowledge. The anterior temporal lobe (ATL) is thought to act as a "hub" that supports semantic memory by integrating different modality-specific contents into a meaningful entity at a supramodal conceptual level, making it a good candidate zone for mediating the mappings between object attributes. Here, we explore whether the ATL is critical for integrating typical color with other object attributes (object shape and name), akin to its role in combining nonperceptual semantic representations. In separate experimental sessions, we applied TMS to disrupt neural processing in the left ATL and a control site (the occipital pole). Participants performed an object naming task that probes color knowledge and elicits a reliable color congruency effect as well as a control quantity naming task that also elicits a cognitive congruency effect but involves no conceptual integration. Critically, ATL stimulation eliminated the otherwise robust color congruency effect but had no impact on the numerical congruency effect, indicating a selective disruption of object color knowledge. Neither color nor numerical congruency effects were affected by stimulation at the control occipital site, ruling out nonspecific effects of cortical stimulation. Our findings suggest that the ATL is involved in the representation of object concepts that include their canonical colors.
Dynamic Primitives of Motor Behavior
Hogan, Neville; Sternad, Dagmar
2013-01-01
We present in outline a theory of sensorimotor control based on dynamic primitives, which we define as attractors. To account for the broad class of human interactive behaviors—especially tool use—we propose three distinct primitives: submovements, oscillations and mechanical impedances, the latter necessary for interaction with objects. Due to fundamental features of the neuromuscular system, most notably its slow response, we argue that encoding in terms of parameterized primitives may be an essential simplification required for learning, performance, and retention of complex skills. Primitives may simultaneously and sequentially be combined to produce observable forces and motions. This may be achieved by defining a virtual trajectory composed of submovements and/or oscillations interacting with impedances. Identifying primitives requires care: in principle, overlapping submovements would be sufficient to compose all observed movements but biological evidence shows that oscillations are a distinct primitive. Conversely, we suggest that kinematic synergies, frequently discussed as primitives of complex actions, may be an emergent consequence of neuromuscular impedance. To illustrate how these dynamic primitives may account for complex actions, we briefly review three types of interactive behaviors: constrained motion, impact tasks, and manipulation of dynamic objects. PMID:23124919
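A common concrete choice for the submovement primitive is the minimum-jerk profile; overlapping two of them composes a smooth observed trajectory, as the theory describes. A brief sketch (the specific profile is a standard modeling assumption, not the only option):

```python
import numpy as np

def min_jerk(t, t0, dur, amp):
    """Minimum-jerk submovement: smooth position profile of amplitude `amp`
    starting at time t0 with duration `dur` (a standard primitive model)."""
    s = np.clip((t - t0) / dur, 0.0, 1.0)   # normalized movement time
    return amp * (10 * s**3 - 15 * s**4 + 6 * s**5)

t = np.linspace(0.0, 2.0, 201)
# Two overlapping submovements superposed into one observed trajectory.
x = min_jerk(t, 0.0, 1.0, 1.0) + min_jerk(t, 0.5, 1.0, 0.5)
```

The superposition is smooth and monotone here, illustrating why overlapping submovements alone could in principle fit many observed movements, and hence why distinguishing them from oscillations requires the biological evidence the authors cite.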
Evaluation of microwave landing system approaches in a wide-body transport simulator
NASA Technical Reports Server (NTRS)
Summers, L. G.; Feather, J. B.
1992-01-01
The objective of this study was to determine the suitability of flying complex curved approaches using the microwave landing system (MLS) with a wide-body transport aircraft. Fifty pilots in crews of two participated in the evaluation using a fixed-base simulator that emulated an MD-11 aircraft. Five approaches, consisting of one straight-in approach and four curved approaches, were flown by the pilots using a flight director. The test variables include the following: (1) manual and autothrottles; (2) wind direction; and (3) type of navigation display. The navigation display was either a map or a horizontal situation indicator (HSI). A complex wind that changed direction and speed with altitude, and included moderate turbulence, was used. Visibility conditions were Cat 1 or better. Subjective test data included pilot responses to questionnaires and pilot comments. Objective performance data included tracking accuracy, position error at decision height, and control activity. Results of the evaluation indicate that flying curved MLS approaches with a wide-body transport aircraft is operationally acceptable, depending upon the length of the final straight segment and the complexity of the approach.
Distributed query plan generation using multiobjective genetic algorithm.
Panicker, Shina; Kumar, T V Vijay
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
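The comparison underlying NSGA-II's ranking is Pareto dominance over the two objectives (total LPC, total CC). A minimal sketch with hypothetical plan costs:

```python
def dominates(a, b):
    """True if plan a is at least as good on both objectives (LPC, CC)
    and strictly better on at least one -- the core NSGA-II comparison."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(costs):
    """Return the non-dominated (rank-0) query plans."""
    return [c for c in costs if not any(dominates(o, c) for o in costs if o != c)]

# (total LPC, total CC) for four hypothetical query plans
plans = [(10, 7), (8, 9), (12, 4), (11, 8)]
front = pareto_front(plans)
```

Plan (11, 8) is dominated by (10, 7) and drops out; the remaining three plans are incomparable trade-offs, which is exactly what a biobjective formulation preserves and a single weighted cost would collapse.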
Optimisation of flight dynamic control based on many-objectives meta-heuristic: a comparative study
NASA Astrophysics Data System (ADS)
Bureerat, Sujin; Pholdee, Nantiwat; Radpukdee, Thana
2018-05-01
Development of many-objective meta-heuristics (MnMHs) is currently a topic of interest, as they suit real-world optimisation problems, which usually involve many objectives. However, most MnMHs have been developed and tested on standard test functions, while their use in real applications is rare. Therefore, in this work, MnMHs are applied to the optimisation design of flight dynamic control. The design problem is posed to find control gains minimising the control effort, the spiral root, the damping-in-roll root, and sideslip angle deviation, and maximising the damping ratio of the dutch-roll complex pair, the dutch-roll frequency, and the bank angle at pre-specified times of 1 second and 2.8 seconds, subject to several constraints based on Military Specifications (1969) requirements. Several established many-objective meta-heuristics (MnMHs) are used to solve the problem while their performances are compared. With this work, the performance of several MnMHs for flight control is investigated. The results obtained will serve as a baseline for future development of flight dynamics and control.
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans
2018-04-01
Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
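The role of the L1-norm block in suppressing illumination and gain changes can be illustrated in software: dividing each cell histogram by its block L1 norm makes the descriptor nearly invariant to a global gain factor. A NumPy sketch with made-up histograms (the paper implements the normalization as a hardware circuit):

```python
import numpy as np

def l1_normalize_blocks(cell_hists, eps=1e-6):
    """L1 block normalization for cell-based descriptors (e.g. HOG cells):
    each histogram is divided by its L1 norm, so a global gain change in
    the image cancels out of the descriptor."""
    norms = np.abs(cell_hists).sum(axis=-1, keepdims=True)
    return cell_hists / (norms + eps)

# Two toy cell histograms (rows), e.g. gradient-orientation bins.
hist = np.array([[2.0, 4.0, 2.0], [1.0, 1.0, 0.0]])
normed = l1_normalize_blocks(hist)
# Scaling the input by 10 (a camera gain change) barely changes the result.
normed_gain = l1_normalize_blocks(10.0 * hist)
```

This is the invariance property that motivates spending silicon on an L1-norm circuit in the detection pipeline.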
NASA Astrophysics Data System (ADS)
Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore
2017-10-01
This study evaluates the impact of three Feature Selection (FS) algorithms in an Object Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation Based Selection (CFS), Mean Decrease in Accuracy (MDA) and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The results demonstrate that the accuracy of the SVM and KNN classifiers is the most sensitive to FS. The RF appeared to be more robust to high dimensionality, although a significant increase in accuracy was found by using the RFE method. In terms of classification accuracy, SVM performed the best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance using each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and simplifying interpretation.
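As a flavour of filter-style feature selection, the sketch below ranks features by absolute correlation with the class label. It is a lightweight stand-in for the CFS/MDA/RFE selectors compared in the study, not their actual pipeline; the data are synthetic.

```python
import numpy as np

def rank_features(X, y):
    """Rank features by absolute Pearson correlation with the label --
    a simple filter-style selector in the spirit of CFS."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(r))   # best feature first

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)   # binary class labels
X = rng.normal(size=(200, 5))                    # five candidate features
X[:, 2] += 2.0 * y                               # make feature 2 informative
order = rank_features(X, y)
```

Keeping only the top-ranked features is what allows the small feature subsets the study reports without sacrificing accuracy.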
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures such as good static-dynamic performance specifications and a smooth control process. A model of nonlinear thermodynamic laws between numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance through step responses such as small overshoot, fast settling time, and reduced rise time and steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that tuning with multi-objective optimization algorithms is an effective and promising method for complex greenhouse production. PMID:22163927
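The tuning loop can be sketched end to end: simulate the closed loop, score a gain vector by ITSE, and evolve a population toward lower cost. The first-order plant and the simple (mu+lambda)-style algorithm below are illustrative stand-ins for the greenhouse model and the paper's EA.

```python
import random

def itse(gains, setpoint=1.0, dt=0.05, steps=200):
    """Integrated time-squared error of a PID loop around a toy
    first-order plant dy/dt = -y + u (stand-in for the greenhouse model)."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for k in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + u)          # Euler step of the plant
        if abs(y) > 1e6:            # unstable gains: large fixed penalty
            return 1e9
        cost += (k * dt) * err * err * dt
    return cost

def evolve(pop_size=20, gens=30, seed=3):
    """(mu + lambda)-style evolutionary search over the three PID gains."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 5.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=itse)                      # rank by ITSE
        parents = pop[: pop_size // 2]
        children = [[max(0.0, g + rng.gauss(0.0, 0.3)) for g in p]
                    for p in parents]           # mutate the survivors
        pop = parents + children
    return min(pop, key=itse)

best = evolve()
```

Adding the control-increment term from the abstract as a second objective turns this scalar search into the multi-objective tuning the paper performs.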
Stereo vision tracking of multiple objects in complex indoor environments.
Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro
2010-01-01
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; it then achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution for those applications where similar multimodal data structures are found.
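The deterministic clustering stage can be illustrated with a greedy centroid-threshold scheme over 3D detections; this is a simplified stand-in for the paper's Bayesian-plus-clustering pipeline, with made-up points.

```python
def cluster_points(points, radius=0.5):
    """Greedy deterministic clustering of 3D obstacle detections: a point
    joins the first cluster whose centroid lies within `radius`, otherwise
    it seeds a new cluster (simplified stand-in for the paper's method)."""
    clusters = []   # each cluster is [centroid, list_of_members]
    for p in points:
        for c in clusters:
            cx, cy, cz = c[0]
            dist = ((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2) ** 0.5
            if dist < radius:
                c[1].append(p)
                n = len(c[1])
                c[0] = tuple(sum(q[i] for q in c[1]) / n for i in range(3))
                break
        else:
            clusters.append([p, [p]])
    return clusters

# Two nearby stereo detections (one obstacle) plus one far detection.
detections = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (3.0, 0.0, 1.0)]
groups = cluster_points(detections)
```

Each resulting cluster centroid would then feed one mode of the multimodal position/speed representation described above.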
Audible sonar images generated with proprioception for target analysis.
Kuc, Roman B
2017-05-01
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
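Interaural time differences of the kind the robot extracts can be estimated by cross-correlating the two ear signals and taking the lag of the peak. A NumPy sketch with a synthetic click (the signal and delay are illustrative):

```python
import numpy as np

def interaural_delay(left, right):
    """Estimate the interaural time difference (in samples) of an echo by
    cross-correlating the two ear signals, as a binaural sonar head might.
    Positive result means the left signal lags the right."""
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

# Synthetic palatal-click echo arriving 5 samples later at the left ear.
rng = np.random.default_rng(7)
click = rng.normal(size=64)
echo_left = np.concatenate([np.zeros(5), click])
echo_right = np.concatenate([click, np.zeros(5)])
lag = interaural_delay(echo_left, echo_right)
```

Combined with the robot's proprioceptive record of head position and orientation, such per-echo delays are what let the system place reflectors in a two-dimensional image.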
The Effects of Similarity on High-Level Visual Working Memory Processing.
Yang, Li; Mo, Lei
2017-01-01
Similarity has been observed to have opposite effects on visual working memory (VWM) for complex images. How can these discrepant results be reconciled? To answer this question, we used a change-detection paradigm to test visual working memory performance for multiple real-world objects. We found that working memory for moderate-similarity items was worse than that for either high- or low-similarity items. This pattern was unaffected by manipulations of stimulus type (faces vs. scenes), encoding duration (limited vs. self-paced), and presentation format (simultaneous vs. sequential). We also found that the similarity effects differed in strength across categories (scenes vs. faces). These results suggest that complex real-world objects are represented using a centre-surround inhibition organization. These results support the category-specific cortical resource theory and further suggest that centre-surround inhibition organization may differ by category.
Multi-objective engineering design using preferences
NASA Astrophysics Data System (ADS)
Sanchis, J.; Martinez, M.; Blasco, X.
2008-03-01
System design is a complex task when design parameters have to satisfy a number of specifications and objectives, which often conflict with one another. This challenging problem is called multi-objective optimization (MOO). The most common approach consists in optimizing a single cost index built as a weighted sum of objectives. However, once the weights are chosen, the solution does not guarantee the best compromise among specifications, because there is an infinite number of solutions. A new approach can be stated, based on the designer's experience regarding the required specifications and the associated problems. This valuable information can be translated into preferences for design objectives, which lead the search process to the best solution in terms of these preferences. This article presents a new method that enumerates these a priori objective preferences. As a result, a single objective is built automatically and no weight selection need be performed. Problems occurring because of the multimodal nature of the generated single cost index are managed with genetic algorithms (GAs).
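The weighted-sum scalarization the abstract criticizes can be sketched in a few lines; the two quadratic objectives, the weights, and the grid search (standing in for the GA) are illustrative assumptions, not the article's formulation.

```python
# Weighted-sum scalarization of a two-objective design problem.
# Different weight choices land on different Pareto-optimal compromises,
# which is why manual weight selection is hard to get right.

def f1(x):          # e.g. cost to minimize
    return (x - 1.0) ** 2

def f2(x):          # e.g. error to minimize, conflicting with f1
    return (x + 1.0) ** 2

def weighted_sum(x, w1, w2):
    return w1 * f1(x) + w2 * f2(x)

def minimize_1d(cost, lo=-3.0, hi=3.0, steps=6001):
    # A simple grid search stands in for the GA used in the article.
    xs = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(xs, key=cost)

x_a = minimize_1d(lambda x: weighted_sum(x, 0.5, 0.5))  # balanced weights
x_b = minimize_1d(lambda x: weighted_sum(x, 0.9, 0.1))  # pulled toward f1's optimum
```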
Parametric boundary reconstruction algorithm for industrial CT metrology application.
Yin, Zhye; Khare, Kedar; De Man, Bruno
2009-01-01
High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, however, generate a sinogram, which is transformed mathematically into pixel-based images; the dimensional information of the scanned object is extracted afterwards by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of CT images, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits to the proposed approach. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; the iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved.
Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. In this paper, the feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental industrial CT system data.
A Java application for tissue section image analysis.
Kamalov, R; Guillaud, M; Haskins, D; Harrison, A; Kemp, R; Chiu, D; Follen, M; MacAulay, C
2005-02-01
The medical industry has taken advantage of Java and Java technologies over the past few years, in large part due to the language's platform-independence and object-oriented structure. As such, Java provides powerful and effective tools for developing tissue section analysis software. The background and execution of this development are discussed in this publication. Object-oriented structure allows for the creation of "Slide", "Unit", and "Cell" objects to simulate the corresponding real-world objects. Different functions may then be created to perform various tasks on these objects, thus facilitating the development of the software package as a whole. At the current time, substantial parts of the initially planned functionality have been implemented. Getafics 1.0 is fully operational and currently supports a variety of research projects; however, there are certain features of the software that currently introduce unnecessary complexity and inefficiency. In the future, we hope to include features that obviate these problems.
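The Slide/Unit/Cell hierarchy the abstract describes can be sketched as nested container objects (here in Python rather than Java, for brevity); the `area` attribute and the aggregate method are invented placeholders, not Getafics 1.0's actual API.

```python
# Object-oriented model of a tissue section: a Slide holds Units
# (tissue regions), each Unit holds Cells, mirroring the real-world
# objects the abstract describes.

class Cell:
    def __init__(self, area):
        self.area = float(area)    # placeholder measurement

class Unit:
    def __init__(self, cells=None):
        self.cells = list(cells or [])

class Slide:
    def __init__(self, units=None):
        self.units = list(units or [])

    def mean_cell_area(self):
        # Aggregate a measurement across every cell on the slide.
        areas = [c.area for u in self.units for c in u.cells]
        return sum(areas) / len(areas)

slide = Slide([Unit([Cell(100), Cell(120)]), Unit([Cell(80)])])
```

Analysis functions then operate on these objects rather than on raw pixel data, which is the design benefit the abstract attributes to the object-oriented structure.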
Some single-machine scheduling problems with learning effects and two competing agents.
Li, Hongjie; Li, Zeyuan; Yin, Yunqiang
2014-01-01
This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and its actual processing time is defined as a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The objective is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the objectives of the two agents. The computational complexity of the problems is discussed and solution algorithms where possible are presented.
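One common reading of "a decreasing linear function of its starting time" is an actual processing time of the form p_j(t) = a_j - b*t; this functional form and the decay rate below are assumptions for illustration, as the paper may define the model differently.

```python
# Completion times on a single machine when a job's actual processing
# time shrinks linearly with its start time (a learning effect):
#   p_j(t) = max(alpha_j - beta * t, 0)

def schedule_completion_times(alphas, beta=0.1):
    t = 0.0
    completions = []
    for a in alphas:
        p = max(a - beta * t, 0.0)   # actual processing time at start t
        t += p
        completions.append(t)
    return completions

def total_completion_time(alphas, beta=0.1):
    # One of the agent objectives discussed is (weighted) total completion time.
    return sum(schedule_completion_times(alphas, beta))

cts = schedule_completion_times([4, 2])   # job 2 benefits from starting later
```

The two-agent problems in the paper then assign each job to one agent and evaluate each agent's objective over its own jobs within the shared schedule.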
Approaches, field considerations and problems associated with radio tracking carnivores
Sargeant, A.B.; Amlaner, C. J.; MacDonald, D.W.
1979-01-01
The adaptation of radio tracking to ecological studies was a major technological advance affecting field investigations of animal movements and behavior. Carnivores have received much attention with this new technology, and study approaches have varied from simple to complex. Equipment performance has much improved over the years, but users still face many difficulties. All radio tracking studies should begin with a precise definition of objectives; study objectives dictate the type of gear required and the field procedures. Field conditions affect equipment performance and the investigator's ability to gather data. Radio tracking carnivores is demanding and generally requires more time than anticipated. Problems should be expected and planned for in the study design. Radio tracking can be an asset in carnivore studies, but caution is needed in its application.
Learning viewpoint invariant object representations using a temporal coherence principle.
Einhäuser, Wolfgang; Hipp, Jörg; Eggert, Julian; Körner, Edgar; König, Peter
2005-07-01
Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge of the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles could explain many aspects of neuronal response properties. How can such schemes be transferred to system-level performance? In the present study we train cells on a particular variant of the general principle of temporal coherence, the "stability" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is viewed. This finding includes generalization to previously unseen viewpoints. The achieved representation is better suited for viewpoint-invariant object classification than the cells' input patterns. This facilitation of viewpoint-invariant classification is maintained even if training and classification take place in the presence of an--also unlabeled--distractor object. In summary, we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects that are not segmented from the background and undergo complex, non-isomorphic transformations.
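The "stability" objective belongs to the family of temporal-coherence (slowness) principles. A minimal sketch of one common normalized form, penalizing fast temporal change relative to output variance, is shown below; the exact objective used in the paper may differ in detail.

```python
import numpy as np

# Stability objective for one output unit: the mean squared temporal
# difference of its response, divided by its variance. Slowly varying,
# non-constant responses score low; the variance term rules out the
# trivial constant solution.

def stability_objective(y):
    y = np.asarray(y, dtype=float)
    temporal_change = np.mean(np.diff(y) ** 2)
    variance = np.var(y)
    return temporal_change / variance

slow = np.sin(np.linspace(0, 2 * np.pi, 200))         # smooth trajectory
fast = np.random.default_rng(0).standard_normal(200)  # white noise
# The smooth signal scores far lower (more "stable") than the noise.
```

Training a cell amounts to adjusting its input weights so that its response to a continuous image sequence minimizes this objective, which is what drives the viewpoint-tolerant representations reported.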
A Complex Systems Approach to More Resilient Multi-Layered Security Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nathanael J. K.; Jones, Katherine A.; Bandlow, Alisa
In July 2012, protestors cut through security fences and gained access to the Y-12 National Security Complex. This was believed to be a highly reliable, multi-layered security system. This report documents the results of a Laboratory Directed Research and Development (LDRD) project that created a consistent, robust mathematical framework using complex systems analysis algorithms and techniques to better understand the emergent behavior, vulnerabilities and resiliency of multi-layered security systems subject to budget constraints and competing security priorities. Because there are several dimensions to security system performance and a range of attacks that might occur, the framework is multi-objective, allowing a performance frontier to be estimated. This research explicitly uses probability of intruder interruption given detection (P_I) as the primary resilience metric. We demonstrate the utility of this framework with both notional and real-world examples of Physical Protection Systems (PPSs) and validate it using a well-established force-on-force simulation tool, Umbra.
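A metric like P_I is naturally estimated by Monte Carlo simulation over engagement scenarios; the detection probability and the response-time and task-time distributions below are invented for illustration and bear no relation to the report's force-on-force models.

```python
import random

# Monte Carlo sketch of P_I: the probability that an intruder is
# interrupted (responders arrive before the intruder finishes),
# conditioned on the intruder having been detected.

def estimate_p_i(trials=100_000, seed=7):
    rng = random.Random(seed)
    detected = interrupted = 0
    for _ in range(trials):
        if rng.random() < 0.9:                 # sensor layer detects intruder
            detected += 1
            response = rng.uniform(60, 180)    # guard response time (s), assumed
            task_time = rng.uniform(90, 240)   # intruder task time (s), assumed
            if response < task_time:
                interrupted += 1
    return interrupted / detected

p_i = estimate_p_i()
```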
Development of Three-Dimensional Completion of Complex Objects
ERIC Educational Resources Information Center
Soska, Kasey C.; Johnson, Scott P.
2013-01-01
Three-dimensional (3D) object completion, the ability to perceive the backs of objects seen from a single viewpoint, emerges at around 6 months of age. Yet, only relatively simple 3D objects have been used in assessing its development. This study examined infants' 3D object completion when presented with more complex stimuli. Infants…
NASA Astrophysics Data System (ADS)
Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria
2008-11-01
Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare-pattern detection is the main objective in such applications as fault/abnormality discovery in complex technical and biological systems, and fraud detection and risk management in the financial and insurance industries. Although many interdisciplinary approaches for rare-event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. An even more challenging and much less formalized problem is novelty detection in complex strategies and models, where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or the proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider-trading or other illegal transactions mixed with valid operations and distributed over a long time period according to a well-designed strategy than a single, large fraudulent transaction. Recently proposed boosting-based optimization was shown to be an effective generic tool for the discovery of stable multi-component strategies/models from existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.
Hong, Taehoon; Koo, Choongwan; Kim, Hyunjoong
2012-12-15
The number of deteriorated multi-family housing complexes in South Korea continues to rise, and consequently their electricity consumption is also increasing. This needs to be addressed as part of the nation's efforts to reduce energy consumption. The objective of this research was to develop a decision support model for determining the need to improve multi-family housing complexes. In this research, 1664 cases located in Seoul were selected for model development. The research team collected the characteristics and electricity energy consumption data of these projects in 2009-2010. The following were carried out in this research: (i) using the Decision Tree, multi-family housing complexes were clustered based on their electricity energy consumption; (ii) using Case-Based Reasoning, similar cases were retrieved from the same cluster; and (iii) using a combination of Multiple Regression Analysis, Artificial Neural Network, and Genetic Algorithm, the prediction performance of the developed model was improved. The results of this research can be used as follows: (i) as basic research data for continuously managing the energy consumption data of multi-family housing complexes; (ii) as advanced research data for predicting energy consumption based on project characteristics; (iii) as practical research data for selecting the multi-family housing complex with the most potential in terms of energy savings; and (iv) as consistent and objective criteria for incentives and penalties. Copyright © 2012 Elsevier Ltd. All rights reserved.
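The Case-Based Reasoning retrieval step, finding the most similar past complexes within a cluster, can be sketched as a nearest-neighbor search; the two-dimensional feature vectors and consumption values below are invented placeholders for the model's real project characteristics.

```python
import math

# CBR retrieval: rank stored cases (housing complexes) by Euclidean
# distance over normalized characteristics and keep the k nearest.

def retrieve_similar(query, cases, k=3):
    def dist(case):
        return math.dist(query, case["features"])
    return sorted(cases, key=dist)[:k]

cases = [
    {"id": 1, "features": [0.2, 0.5], "kwh": 1200.0},
    {"id": 2, "features": [0.8, 0.1], "kwh": 2100.0},
    {"id": 3, "features": [0.25, 0.45], "kwh": 1250.0},
    {"id": 4, "features": [0.9, 0.9], "kwh": 2600.0},
]
nearest = retrieve_similar([0.22, 0.48], cases, k=2)
# The consumption of the retrieved cases then anchors the prediction,
# which MRA/ANN/GA stages would refine in the full model.
```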
Object-Oriented Software Model for Battlefield Signal Transmission and Sensing
2009-12-01
The graphic on the title page of this report was created by Dr. Dale R. Hill (ERDC-CRREL); the authors thank Dr. George G. Koenig of ERDC-CRREL. Background: The performance and utility of battlefield and homeland security sensors depend on many complex factors, both environmental and mission-related.
The microorganisms used for working in microbial fuel cells
NASA Astrophysics Data System (ADS)
Konovalova, E. Yu.; Stom, D. I.; Zhdanova, G. O.; Yuriev, D. A.; Li, Youming; Barbora, Lepakshi; Goswami, Pranab
2018-04-01
We investigated the use of various microorganisms as biological agents in microbial fuel cells (MFCs), where they transport electrons while processing various substrates. Most MFCs use complex substrates; such MFCs are filled with associations of microorganisms. The article discusses particular types of microorganisms for use in MFCs and characterizes the molecular mechanisms by which microorganisms transfer electrons into the environment.
ERIC Educational Resources Information Center
Guerra, Nelson Pérez
2017-01-01
A laboratory experiment in which students study the kinetics of the Viscozyme-L-catalyzed hydrolysis of cellulose and starch comparatively was designed for an upper-division biochemistry laboratory. The main objective of this experiment was to provide an opportunity to perform enhanced enzyme kinetics data analysis using appropriate informatics…
Direct-to-digital holography reduction of reference hologram noise and fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
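The two noise-reduction steps the patent claims, averaging several reference complex image waves into one low-noise reference and then dividing the object wave by it, can be sketched with synthetic arrays; the flat reference phase, noise level, and image size below are placeholders, not real hologram reconstructions.

```python
import numpy as np

# (1) Average several noisy reference complex image waves into a
#     reduced-noise reference; (2) divide the object complex image wave
#     by it to obtain a reduced-smearing object wave.

rng = np.random.default_rng(1)
shape = (64, 64)
true_ref = np.exp(1j * 0.3)               # ideal flat reference phase

# Several noisy recordings of the reference wave:
refs = [true_ref + 0.05 * (rng.standard_normal(shape)
                           + 1j * rng.standard_normal(shape))
        for _ in range(16)]
reduced_noise_ref = np.mean(refs, axis=0)  # noise drops ~ 1/sqrt(16)

obj_wave = np.exp(1j * 0.8) * true_ref     # object wave carries the reference
corrected = obj_wave / reduced_noise_ref   # isolates the object's own phase
```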
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
Resolving future fire management conflicts using multicriteria decision making.
Driscoll, Don A; Bode, Michael; Bradstock, Ross A; Keith, David A; Penman, Trent D; Price, Owen F
2016-02-01
Management strategies to reduce the risks to human life and property from wildfire commonly involve burning native vegetation. However, planned burning can conflict with other societal objectives such as human health and biodiversity conservation. These conflicts are likely to intensify as fire regimes change under future climates and as growing human populations encroach farther into fire-prone ecosystems. Decisions about managing fire risks are therefore complex and warrant more sophisticated approaches than are typically used. We applied a multicriteria decision making approach (MCDA) with the potential to improve fire management outcomes to the case of a highly populated, biodiverse, and flammable wildland-urban interface. We considered the effects of 22 planned burning options on 8 objectives: house protection, maximizing water quality, minimizing carbon emissions and impacts on human health, and minimizing declines of 5 distinct species types. The MCDA identified a small number of management options (burning forest adjacent to houses) that performed well for most objectives, but not for one species type (arboreal mammal) or for water quality. Although MCDA made the conflict between objectives explicit, resolution of the problem depended on the weighting assigned to each objective. Additive weighting of criteria traded off the arboreal mammal and water quality objectives for other objectives. Multiplicative weighting identified scenarios that avoided poor outcomes for any objective, which is important for avoiding potentially irreversible biodiversity losses. To distinguish reliably among management options, future work should focus on reducing uncertainty in outcomes across a range of objectives. Considering management actions that have more predictable outcomes than landscape fuel management will be important. We found that, where data were adequate, an MCDA can support decision making in the complex and often conflicted area of fire management. 
© 2015 Society for Conservation Biology.
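The difference between additive and multiplicative weighting that the abstract highlights can be illustrated with normalized scores in [0, 1]; the options, scores, and weights below are invented, not the study's data.

```python
# Additive vs. multiplicative aggregation of normalized objective scores
# (higher is better). Option B fails one objective badly: additive
# weighting can still prefer it, while multiplicative weighting
# penalizes the near-failure, as the abstract describes.

def additive(scores, weights):
    return sum(w * s for s, w in zip(scores, weights))

def multiplicative(scores, weights):
    prod = 1.0
    for s, w in zip(scores, weights):
        prod *= s ** w
    return prod

weights = [0.4, 0.3, 0.3]
option_a = [0.7, 0.7, 0.7]    # balanced performance on all objectives
option_b = [1.0, 1.0, 0.2]    # excellent on two, near-failure on one
```

With these numbers the additive score ranks B above A, but the multiplicative score reverses the ranking, which is why the multiplicative form helps avoid irreversible losses on any single objective.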
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dverstorp, B.; Andersson, J.
1995-12-01
Performance Assessment of a nuclear waste repository implies an analysis of a complex system with many interacting processes. Even if some of these processes are known in great detail, problems arise when combining all the information, and means are needed of abstracting information from complex detailed models into models that couple different processes. Clearly, one of the major objectives of performance assessment, to calculate doses or other performance indicators, implies an enormous abstraction of information compared to all the information that is used as input. Other problems are that the knowledge of different parts or processes varies strongly, and adjustments and interpretations are needed when combining models from different disciplines. In addition, people as well as computers, even today, have a limited capacity to process information, and choices have to be made. However, because abstraction of information is clearly unavoidable in performance assessment, the validity of the choices made always needs to be scrutinized, and the judgements made need to be updated in an iterative process.
NASA Astrophysics Data System (ADS)
Quinn, J.; Reed, P. M.; Giuliani, M.; Castelletti, A.
2016-12-01
Optimizing the operations of multi-reservoir systems poses several challenges: 1) the high dimension of the problem's states and controls, 2) the need to balance conflicting multi-sector objectives, and 3) understanding how uncertainties impact system performance. These difficulties motivated the development of the Evolutionary Multi-Objective Direct Policy Search (EMODPS) framework, in which multi-reservoir operating policies are parameterized in a given family of functions and then optimized for multiple objectives through simulation over a set of stochastic inputs. However, properly framing these objectives remains a severe challenge and a neglected source of uncertainty. Here, we use EMODPS to optimize operating policies for a 4-reservoir system in the Red River Basin in Vietnam, exploring the consequences of optimizing to different sets of objectives related to 1) hydropower production, 2) meeting multi-sector water demands, and 3) providing flood protection to the capital city of Hanoi. We show how coordinated operation of the reservoirs can differ markedly depending on how decision makers weigh these concerns. Moreover, we illustrate how formulation choices that emphasize the mean, tail, or variability of performance across objective combinations must be evaluated carefully. Our results show that these choices can significantly improve attainable system performance, or yield severe unintended consequences. Finally, we show that satisfactory validation of the operating policies on a set of out-of-sample stochastic inputs depends as much or more on the formulation of the objectives as on effective optimization of the policies. These observations highlight the importance of carefully considering how we abstract stakeholders' objectives and of iteratively optimizing and visualizing multiple problem formulation hypotheses to ensure that we capture the most important tradeoffs that emerge from different stakeholder preferences.
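Direct policy search in the EMODPS spirit can be sketched with a toy single-reservoir, single-objective version: parameterize a release rule, simulate it over stochastic inflows, and keep the best parameters. The linear policy, reservoir dynamics, and random sampling (standing in for the multi-objective evolutionary algorithm) are illustrative assumptions, not the Red River study's models.

```python
import random

# Direct policy search sketch: a release policy parameterized as a
# linear function of storage is evaluated by simulation over a set of
# stochastic inflows; candidate parameters are sampled and the best kept.

def simulate(theta, inflows, capacity=100.0, demand=8.0):
    a, b = theta
    storage, deficit = 50.0, 0.0
    for q in inflows:
        release = max(0.0, min(a + b * storage, storage + q))
        storage = min(capacity, storage + q - release)
        deficit += max(0.0, demand - release) ** 2
    return deficit  # single toy objective: squared supply deficit (minimize)

rng = random.Random(42)
inflows = [rng.uniform(2.0, 14.0) for _ in range(500)]
candidates = [(rng.uniform(0, 10), rng.uniform(0, 0.2)) for _ in range(200)]
best = min(candidates, key=lambda th: simulate(th, inflows))
```

The framing questions the abstract raises enter exactly here: whether `deficit` aggregates the mean, the tail, or the variability of performance changes which policies look best, and validating `best` requires re-simulating on out-of-sample inflows.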
Enhancing Cognition with Video Games: A Multiple Game Training Study
Oei, Adam C.; Patterson, Michael D.
2013-01-01
Background Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. Methodology/Principal Findings We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden-object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Conclusion/Significance Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play.
Thus, many video game training related improvements to cognition may be attributed to near-transfer effects. PMID:23516504
Price, Margaux M; Crumley-Branyon, Jessica J; Leidheiser, William R
2016-01-01
Background Technology gains have improved tools for evaluating complex tasks by providing environmental supports (ES) that increase ease of use and improve performance outcomes through the use of information visualizations (info-vis). Complex info-vis emphasize the need to understand individual differences in abilities of target users, the key cognitive abilities needed to execute a decision task, and the graphical elements that can serve as the most effective ES. Older adults may be one such target user group that would benefit from increased ES to mitigate specific declines in cognitive abilities. For example, choosing a prescription drug plan is a necessary and complex task that can impact quality of life if the wrong choice is made. The decision to enroll in one plan over another can involve comparing over 15 plans across many categories. Within this context, the large amount of complex information and reduced working memory capacity puts older adults’ decision making at a disadvantage. An intentionally designed ES, such as an info-vis that reduces working memory demand, may assist older adults in making the most effective decision among many options. Objective The objective of this study is to examine whether the use of an info-vis can lower working memory demands and positively affect complex decision-making performance of older adults in the context of choosing a Medicare prescription drug plan. Methods Participants performed a computerized decision-making task in the context of finding the best health care plan. Data included quantitative decision-making performance indicators and surveys examining previous history with purchasing insurance. Participants used a colored info-vis ES or a table (no ES) to perform the decision task. Task difficulty was manipulated by increasing the number of selection criteria used to make an accurate decision. A repeated measures analysis was performed to examine differences between the two table designs. 
Results Twenty-three older adults between the ages of 66 and 80 completed the study. There was a main effect for accuracy such that older adults made more accurate decisions in the color info-vis condition than in the table condition. In the low difficulty condition, participants were more successful at choosing the correct answer when the question was about the gap coverage attribute in the info-vis condition. Participants also made significantly faster decisions in the info-vis condition than in the table condition. Conclusions Reducing the working memory demand of the task through the use of an ES can improve decision accuracy, especially when the selection criteria focus on only a single attribute of the insurance plan. PMID:27251110
Cellular automata with object-oriented features for parallel molecular network modeling.
Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan
2005-06-01
Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond the large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include object structure to define molecular programs within a cell, floating-point data type and mathematical functions to perform quantitative computation, message passing capability to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at cell, tissue, organ, or even organism levels, the program will help improve our understanding of how complex and dynamic biological activities are generated and controlled by parallel functioning of molecular networks. Index Terms-Cellular automata, modeling, molecular network, object-oriented.
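The object-oriented extensions described (cells as objects with internal floating-point state, interacting through message passing with neighbors) can be sketched as follows; the diffusion-like averaging rule is a toy stand-in for a real molecular-network program, and the class design is illustrative, not Cellang syntax.

```python
# Minimal object-oriented cellular automaton: each cell is an object
# holding floating-point state; cells interact by exchanging messages
# with their ring neighbors, then update synchronously.

class Cell:
    def __init__(self, value):
        self.value = float(value)
        self.inbox = []

    def send(self, neighbor):
        neighbor.inbox.append(self.value)   # message passing

    def update(self):
        if self.inbox:
            # Average own state with received messages (diffusion-like rule);
            # the ensemble mean is conserved by this update.
            self.value = (self.value + sum(self.inbox)) / (len(self.inbox) + 1)
        self.inbox = []

def step(cells):
    n = len(cells)
    for i, c in enumerate(cells):
        c.send(cells[(i - 1) % n])          # message to left neighbor
        c.send(cells[(i + 1) % n])          # message to right neighbor
    for c in cells:
        c.update()                          # synchronous update phase

cells = [Cell(1.0 if i == 0 else 0.0) for i in range(10)]
for _ in range(50):
    step(cells)
# The initial point mass spreads out; states converge toward the
# conserved mean (0.1 here).
```

Heterogeneity, as in the paper, would amount to giving different Cell subclasses different update rules while keeping the same message-passing protocol.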
MAIN software for density averaging, model building, structure refinement and validation
Turk, Dušan
2013-01-01
MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458
Deficits of long-term memory in ecstasy users are related to cognitive complexity of the task.
Brown, John; McKone, Elinor; Ward, Jeff
2010-03-01
Despite animal evidence that methylenedioxymethamphetamine (ecstasy) causes lasting damage in brain regions related to long-term memory, results regarding human memory performance have been variable. This variability may reflect the cognitive complexity of the memory tasks. However, previous studies have tested only a limited range of cognitive complexity. Furthermore, comparisons across different studies are made difficult by regional variations in ecstasy composition and patterns of use. The objective of this study is to evaluate ecstasy-related deficits in human verbal memory over a wide range of cognitive complexity using subjects drawn from a single geographical population. Ecstasy users were compared to non-drug using controls on verbal tasks with low cognitive complexity (stem completion), moderate cognitive complexity (stem-cued recall and word list learning) and high cognitive complexity (California Verbal Learning Test, Verbal Paired Associates and a novel Verbal Triplet Associates test). Where significant differences were found, both groups were also compared to cannabis users. More cognitively complex memory tasks were associated with clearer ecstasy-related deficits than low complexity tasks. In the most cognitively demanding task, ecstasy-related deficits remained even after multiple learning opportunities, whereas the performance of cannabis users approached that of non-drug using controls. Ecstasy users also had weaker deliberate strategy use than both non-drug and cannabis controls. Results were consistent with the proposal that ecstasy-related memory deficits are more reliable on tasks with greater cognitive complexity. This could arise either because such tasks require a greater contribution from the frontal lobe or because they require greater interaction between multiple brain regions.
Visual object tracking by correlation filters and online learning
NASA Astrophysics Data System (ADS)
Zhang, Xin; Xia, Gui-Song; Lu, Qikai; Shen, Weiming; Zhang, Liangpei
2018-06-01
Due to the complexity of background scenarios and the variation of target appearance, it is difficult to achieve high accuracy and fast speed in object tracking. Currently, correlation-filter-based trackers (CFTs) show promising performance in object tracking. CFTs estimate the target's position by applying correlation filters to different kinds of features. However, most CFTs can hardly re-detect the target after long-term tracking drifts. In this paper, a feature-integration object tracker named correlation filters and online learning (CFOL) is proposed. CFOL estimates the target's position and its corresponding correlation score using the same discriminative correlation filter with multiple features. To reduce tracking drifts, a new sampling and updating strategy for online learning is proposed. Experiments conducted on 51 image sequences demonstrate that the proposed algorithm is superior to state-of-the-art approaches.
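The core of a discriminative correlation filter can be illustrated with a minimal MOSSE-style sketch: the filter is solved in closed form in the Fourier domain, and the peak of the correlation response locates the target. This is a generic illustration of the correlation-filter idea, not the CFOL tracker itself; the regularization constant, patch sizes, and function names are arbitrary choices made here.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-3):
    # Closed-form correlation filter in the Fourier domain:
    # H = (G * conj(F)) / (F * conj(F) + lam)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, patch):
    # Correlate a new patch with the filter; the response peak
    # gives the (row, col) estimate of the target position
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Because both training and detection are element-wise products of FFTs, each step costs O(n log n) in the number of pixels, which is what makes correlation-filter trackers fast.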
A generalized association test based on U statistics.
Wei, Changshuai; Lu, Qing
2017-07-01
Second-generation sequencing technologies are being increasingly used for genetic association studies, where the main research interest is to identify sets of genetic variants that contribute to various phenotypes. The phenotype can be univariate disease status, multivariate responses, or even high-dimensional outcomes. Considering the genotype and phenotype as two complex objects, this also poses the general statistical problem of testing association between complex objects. Here we propose a similarity-based test, generalized similarity U (GSU), that can test the association between complex objects. We first studied the theoretical properties of the test in a general setting and then focused on its application to sequencing association studies. Based on theoretical analysis, we propose using Laplacian-kernel-based similarity for GSU to boost power and enhance robustness. Through simulation, we found that GSU has advantages over existing methods in terms of power and robustness. We further performed a whole genome sequencing (WGS) scan of Alzheimer's Disease Neuroimaging Initiative data, identifying three genes, APOE, APOC1 and TOMM40, associated with the imaging phenotype. We developed a C++ package for analysis of WGS data using GSU. The source code can be downloaded at https://github.com/changshuaiwei/gsu. weichangshuai@gmail.com; qlu@epi.msu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
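The kernel-similarity idea behind such a test can be sketched in a few lines: compute Laplacian-kernel similarity matrices for genotypes and phenotypes, average the product of centered similarities over distinct pairs, and calibrate the statistic by permutation. This is a toy analogue of a similarity U-statistic, not the authors' GSU implementation; the kernel scale, the centering, and the permutation calibration are assumptions made for illustration.

```python
import numpy as np

def laplacian_kernel(X, scale=1.0):
    # Pairwise similarity exp(-||xi - xj||_1 / scale) between rows of X
    d = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)
    return np.exp(-d / scale)

def gsu_stat(G, Y, scale=1.0):
    # Average product of centered genotype/phenotype similarities
    # over distinct pairs (diagonal excluded)
    n = len(G)
    mask = ~np.eye(n, dtype=bool)
    Kg = laplacian_kernel(G, scale)
    Ky = laplacian_kernel(Y, scale)
    Kg = Kg - Kg[mask].mean()
    Ky = Ky - Ky[mask].mean()
    return (Kg * Ky)[mask].mean()

def permutation_pvalue(G, Y, n_perm=300, seed=0):
    # Break the genotype-phenotype pairing to approximate the null
    rng = np.random.default_rng(seed)
    obs = gsu_stat(G, Y)
    hits = sum(gsu_stat(G, Y[rng.permutation(len(Y))]) >= obs
               for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)
```

When genotype similarity predicts phenotype similarity, the centered products are systematically positive and the permutation p-value becomes small.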
ERIC Educational Resources Information Center
Brady, Timothy F.; Alvarez, George A.
2015-01-01
A central question for models of visual working memory is whether the number of objects people can remember depends on object complexity. Some influential "slot" models of working memory capacity suggest that people always represent 3-4 objects and that only the fidelity with which these objects are represented is affected by object…
NASA Technical Reports Server (NTRS)
Ouzts, Peter J.; Soloway, Donald I.; Moerder, Daniel D.; Wolpert, David H.; Benavides, Jose Victor
2009-01-01
Airbreathing hypersonic systems offer distinct performance advantages over rocket-based systems for space access vehicles. However, these performance advantages are dependent upon advances in current state-of-the-art technologies in many areas such as ram/scramjet propulsion integration, high-temperature materials, aero-elastic structures, thermal protection systems, transition to hypersonics, and hypersonic control elements within the framework of complex physics and new design methods. The complex interactions between elements of an airbreathing hypersonic vehicle represent a new paradigm in vehicle design to achieve the optimal performance necessary to meet space access mission objectives. In the past, guidance, navigation, and control (GNC) analysis often followed completion of the vehicle conceptual design process. Individual component groups designed subsystems which were then integrated into a vehicle configuration. GNC was presented with the task of developing control approaches to meet vehicle performance objectives given that configuration. This approach may be sufficient for vehicles where significant performance margins exist. However, for higher-performance vehicles, engaging the GNC discipline too late in the design cycle has been costly. For example, the X-29 experimental flight vehicle was built as a technology demonstrator. One of the many technologies to be demonstrated was the use of light-weight material composites for structural components. The use of light-weight materials increased the flexibility of the X-29 beyond that of conventional metal-alloy aircraft. This effect was not considered when the vehicle control system was designed and built. As a result, the control system did not have enough control authority to compensate for the effects of the first fundamental structural mode of the vehicle: the pitch rate response was below specification, and no post-design changes could recover the desired capability.
Lan, Ruixia; Tran, Hoainam; Kim, Inho
2017-03-01
Probiotics can serve as alternatives to antibiotics to increase the performance of weaning pigs, and the intake of probiotics is affected by dietary nutrient density. The objective of this study was to evaluate the effects of a probiotic complex in different nutrient density diets on growth performance, digestibility, blood profiles, fecal microflora and noxious gas emission in weaning pigs. From day 22 to day 42, both high-nutrient-density and probiotic complex supplementation diets increased (P < 0.05) the average daily gain. On day 42, the apparent total tract digestibility (ATTD) of dry matter, nitrogen and gross energy (GE), blood urea nitrogen concentration and NH3 and H2S emissions were increased (P < 0.05) in pigs fed high-nutrient-density diets. Pigs fed probiotic complex supplementation diets had higher (P < 0.05) ATTD of GE than pigs fed non-supplemented diets. Fecal Lactobacillus counts were increased whereas Escherichia coli counts and NH3 and H2S emissions were decreased (P < 0.05) in pigs fed probiotic complex supplementation diets. Interactive effects on average daily feed intake (ADFI) were observed from day 22 to day 42 and overall, where the probiotic complex improved ADFI more dramatically in low-nutrient-density diets. The beneficial effect of probiotic complex (Bacillus coagulans, Bacillus licheniformis, Bacillus subtilis and Clostridium butyricum) supplementation on ADFI is more dramatic with low-nutrient-density diets. © 2016 Society of Chemical Industry.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to test whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based metrics of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
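The symbolization-plus-entropy procedure can be sketched as follows: streamflow values are mapped to quantile symbols, and mean information gain is computed as the increase in block entropy from words of length L-1 to words of length L. This is a minimal illustration under assumed choices (four symbols, word length 2, lognormal surrogate data), not the study's exact implementation.

```python
import numpy as np

def symbolize(series, n_symbols=4):
    # Map each value to its quantile bin: 0 .. n_symbols-1
    qs = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.searchsorted(qs, series)

def block_entropy(symbols, L):
    # Shannon entropy (bits) of the length-L words in the symbol string
    words = np.array([symbols[i:i + L] for i in range(len(symbols) - L + 1)])
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mean_information_gain(symbols, L=2):
    # New information per symbol: H(L) - H(L-1); lower = more predictable
    return block_entropy(symbols, L) - block_entropy(symbols, L - 1)

# Surrogate "streamflow": iid lognormal draws (an assumption for illustration)
flow = np.random.default_rng(0).lognormal(size=1000)
mig = mean_information_gain(symbolize(flow), L=2)
```

For a memoryless four-symbol string the mean information gain approaches 2 bits per symbol; a real streamflow series, being more predictable, yields a smaller value.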
Automated object-based classification of topography from SRTM data
Drăguţ, Lucian; Eisank, Clemens
2012-01-01
We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation, respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most classes satisfy the regionalization requirement of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060
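The two-parameter partitioning step (mean elevation and elevation standard deviation) can be sketched as below. The segment arrays, the use of dataset-wide means as thresholds, and the label names are illustrative assumptions for this sketch, not the eCognition® rule set itself.

```python
import numpy as np

def classify_segments(segments):
    """Partition elevation objects into four sub-domains using the
    dataset means of per-object mean elevation and per-object elevation
    standard deviation as the two thresholds."""
    means = np.array([s.mean() for s in segments])
    stds = np.array([s.std() for s in segments])
    t_elev, t_std = means.mean(), stds.mean()
    return [("high" if m >= t_elev else "low") + "-" +
            ("rough" if s >= t_std else "smooth")
            for m, s in zip(means, stds)]
```

Each segment thus receives one of four labels (low/high crossed with smooth/rough), mirroring how a two-parameter rule splits objects into sub-domains.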
Rasmussen's model of human behavior in laparoscopy training.
Wentink, M; Stassen, L P S; Alwayn, I; Hosman, R J A W; Stassen, H G
2003-08-01
Compared to aviation, where virtual reality (VR) training has been standardized and simulators have proven their benefits, the objectives, needs, and means of VR training in minimally invasive surgery (MIS) still have to be established. The aim of the study presented is to introduce Rasmussen's model of human behavior as a practical framework for the definition of the training objectives, needs, and means in MIS. Rasmussen distinguishes three levels of human behavior: skill-, rule-, and knowledge-based behavior. The training needs of a laparoscopic novice can be determined by identifying the specific skill-, rule-, and knowledge-based behavior that is required for performing safe laparoscopy. Future objectives of VR laparoscopy trainers should address all three levels of behavior. Although most commercially available simulators for laparoscopy aim at training skill-based behavior, especially the training of knowledge-based behavior during complications in surgery will improve safety levels. However, the cost and complexity of a training means increase when the training objectives proceed from the training of skill-based behavior to the training of complex knowledge-based behavior. In aviation, human behavior models have been used successfully to integrate the training of skill-, rule-, and knowledge-based behavior in a full flight simulator. Understanding surgeon behavior is one of the first steps towards a future full-scale laparoscopy simulator.
Slow feature analysis: unsupervised learning of invariances.
Wiskott, Laurenz; Sejnowski, Terrence J
2002-04-01
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
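The linear core of SFA can be sketched directly from the description above: center and whiten the signal, then rotate into the eigenbasis of the time-derivative covariance, keeping the directions of smallest derivative variance as the slowest features. This sketch covers only the linear case; the full method first applies a nonlinear expansion, and the test signal below is an illustrative assumption.

```python
import numpy as np

def sfa(x):
    """Linear slow feature analysis on a (T, n) signal.

    Steps: center, whiten (unit covariance), then diagonalize the
    covariance of the time derivative. Output columns are ordered
    slowest first (smallest derivative variance)."""
    x = x - x.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (E / np.sqrt(d))                     # whitened signal
    dd, dE = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ dE                                # eigh sorts ascending

# Two mixtures of one slow sinusoid and one fast noise source
t = np.arange(2000)
slow = np.sin(2 * np.pi * t / 400)
fast = np.random.default_rng(1).normal(size=t.size)
y = sfa(np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast]))
```

The first output column recovers the slow source from the mixtures, which is exactly the invariance-ordering property the abstract describes.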
Classifier-Guided Sampling for Complex Energy System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
Multigrid contact detection method
NASA Astrophysics Data System (ADS)
He, Kejing; Dong, Shoubin; Zhou, Zhaoyao
2007-03-01
Contact detection is a general problem of many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD). The multigrid idea is integrated with contact detection problems. Both the time complexity and the memory consumption of the MGCD are O(N). Unlike other methods, whose efficiencies are influenced strongly by the object size distribution, the performance of the MGCD is insensitive to the object size distribution. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method regarding memory consumption. For objects of diverse size, the MGCD outperforms both the NBS method and the multilevel boxing method. We used the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtained the packing density of monosize packing and of binary packing with a size ratio of 10. The packing density for monosize particles is 0.636. For binary packing with a size ratio of 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.
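The cell-binning idea that grid-based contact detection builds on can be illustrated with a single-level uniform grid: each object is hashed to a cell sized to the largest object, and only same-cell and neighbor-cell pairs are distance-checked. This single-grid sketch is the baseline that degrades for diverse object sizes, which is precisely what the multiple grid levels of a multigrid method address; function and variable names here are illustrative, not the MGCD implementation.

```python
import numpy as np
from collections import defaultdict

def find_contacts(centers, radii):
    """Uniform-grid broad phase for spheres: hash objects to cells sized
    to the largest object, then distance-check only same-cell and
    neighbor-cell pairs. Expected O(N) for similar-sized objects."""
    centers = np.asarray(centers, dtype=float)
    cell = 2.0 * max(radii)
    grid = defaultdict(list)
    for i, c in enumerate(centers):
        grid[tuple((c // cell).astype(int))].append(i)
    contacts = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                        for i in members:
                            if i < j and np.linalg.norm(
                                    centers[i] - centers[j]) <= radii[i] + radii[j]:
                                contacts.add((i, j))
    return contacts
```

With one grid level, many small objects fall into one large cell and the narrow phase degenerates toward O(N²); using a hierarchy of grid levels keeps the per-object work bounded regardless of the size distribution.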
Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues.
Kraft, James M; Maloney, Shannon I; Brainard, David H
2002-01-01
Two experiments were conducted to study how scene complexity and cues to depth affect human color constancy. Specifically, two levels of scene complexity were compared. The low-complexity scene contained two walls with the same surface reflectance and a test patch which provided no information about the illuminant. In addition to the surfaces visible in the low-complexity scene, the high-complexity scene contained two rectangular solid objects and 24 paper samples with diverse surface reflectances. Observers viewed illuminated objects in an experimental chamber and adjusted the test patch until it appeared achromatic. Achromatic settings made under two different illuminants were used to compute an index that quantified the degree of constancy. Two experiments were conducted: one in which observers viewed the stimuli directly, and one in which they viewed the scenes through an optical system that reduced cues to depth. In each experiment, constancy was assessed for two conditions. In the valid-cue condition, many cues provided valid information about the illuminant change. In the invalid-cue condition, some image cues provided invalid information. Four broad conclusions are drawn from the data: (a) constancy is generally better in the valid-cue condition than in the invalid-cue condition; (b) for the stimulus configuration used, increasing image complexity has little effect in the valid-cue condition but leads to increased constancy in the invalid-cue condition; (c) for the stimulus configuration used, reducing cues to depth has little effect for either constancy condition; and (d) there is moderate individual variation in the degree of constancy exhibited, particularly in the degree to which the complexity manipulation affects performance.
Frame sequences analysis technique of linear objects movement
NASA Astrophysics Data System (ADS)
Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.
2017-12-01
Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. In doing so, quantitative analysis of the movement of the objects being studied becomes an important component of the research. This work discusses analysis of the motion of linear objects on the two-dimensional plane. The complexity of this problem increases when the frame contains numerous objects whose images may overlap. This study uses a sequence containing 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. It was required to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects, with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro, with subsequent approximation of the data obtained using the Hill equation.
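The final approximation step, fitting a Hill equation to measured average velocities, can be sketched with standard nonlinear least squares. The synthetic data, parameter values, and SciPy-based fit below are illustrative assumptions, not the study's data or software (the study used GMimPro).

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, vmax, k, n):
    # Hill equation: v = Vmax * x^n / (K^n + x^n)
    return vmax * x**n / (k**n + x**n)

# Synthetic velocity-vs-control-parameter data (illustrative values)
x = np.linspace(0.1, 10, 20)
rng = np.random.default_rng(0)
v = hill(x, 3.0, 2.0, 2.0) + rng.normal(scale=0.05, size=x.size)

# Recover (Vmax, K, n) by least squares from a rough initial guess
popt, _ = curve_fit(hill, x, v, p0=[2.0, 1.5, 1.5])
```

The fitted Vmax gives the saturating velocity and K the half-saturation value of the control parameter, which is typically what such dependencies are summarized by.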
DeCaro, Renee; Peelle, Jonathan E; Grossman, Murray; Wingfield, Arthur
2016-01-01
Reduced hearing acuity is among the most prevalent of chronic medical conditions among older adults. An experiment is reported in which comprehension of spoken sentences was tested for older adults with good hearing acuity or with a mild-to-moderate hearing loss, and young adults with age-normal hearing. Comprehension was measured by participants' ability to determine the agent of an action in sentences that expressed this relation with a syntactically less complex subject-relative construction or a syntactically more complex object-relative construction. Agency determination was further challenged by inserting a prepositional phrase into sentences between the person performing an action and the action being performed. As a control, prepositional phrases of equivalent length were also inserted into sentences in a non-disruptive position. Effects on sentence comprehension of age, hearing acuity, prepositional phrase placement and sound level of stimulus presentations appeared only for comprehension of sentences with the more syntactically complex object-relative structures. Working memory as tested by reading span scores accounted for a significant amount of the variance in comprehension accuracy. Once working memory capacity and hearing acuity were taken into account, chronological age among the older adults contributed no further variance to comprehension accuracy. Results are discussed in terms of the positive and negative effects of sensory-cognitive interactions in comprehension of spoken sentences and lend support to a framework in which domain-general executive resources, notably verbal working memory, play a role in both linguistic and perceptual processing.
NASA Astrophysics Data System (ADS)
Katpatal, Yashwant B.; Rishma, C.; Singh, Chandan K.
2018-05-01
The Gravity Recovery and Climate Experiment (GRACE) satellite mission is aimed at assessing groundwater storage under different terrestrial conditions. The main objective of the presented study is to highlight the significance of aquifer complexity in improving the performance of GRACE in monitoring groundwater. The Vidarbha region of Maharashtra, central India, was selected as the study area, since the region comprises a simple aquifer system in the west and a complex aquifer system in the east. Groundwater-level trends of the different aquifer systems and the spatial and temporal variation of the terrestrial water storage anomaly were analyzed to understand the groundwater scenario. The field application involved selecting four pixels from the GRACE output with different aquifer systems, where each GRACE pixel encompasses 50-90 monitoring wells. Groundwater storage anomalies (GWSA) were derived for each pixel for the period 2002 to 2015 using the Release 05 (RL05) monthly GRACE gravity models and the Global Land Data Assimilation System (GLDAS) land-surface models (GWSAGRACE), as well as the actual field data (GWSAActual). Correlation analysis between GWSAGRACE and GWSAActual was performed using linear regression. The Pearson and Spearman methods show that the performance of GRACE is good in the region with simple aquifers but poorer in the region with multiple aquifer systems. The study highlights the importance of incorporating the sensitivity of GRACE in the estimation of groundwater storage in complex aquifer systems in future studies.
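The two agreement measures used in such comparisons differ in one step: Spearman's coefficient is Pearson's correlation applied to ranks, which makes it robust to a monotone but nonlinear relation between the two anomaly series. A minimal sketch (assuming no tied values; not the study's code):

```python
import numpy as np

def pearson(a, b):
    # Linear correlation of the raw values
    return np.corrcoef(a, b)[0, 1]

def spearman(a, b):
    # Rank correlation: Pearson correlation of the ranks
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(a), rank(b))
```

For two series related by a monotone curve, Spearman returns 1 while Pearson stays below 1, which is why reporting both gives a fuller picture of GRACE-versus-field agreement.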
Peterson, Lenna X; Shin, Woong-Hee; Kim, Hyungrae; Kihara, Daisuke
2018-03-01
We report our group's performance for protein-protein complex structure prediction and scoring in Round 37 of the Critical Assessment of PRediction of Interactions (CAPRI), an objective assessment of protein-protein complex modeling. We demonstrated noticeable improvement in both prediction and scoring compared to previous rounds of CAPRI, with our human predictor group near the top of the rankings and our server scorer group at the top. This is the first time in CAPRI that a server has been the top scorer group. To predict protein-protein complex structures, we used both multi-chain template-based modeling (TBM) and our protein-protein docking program, LZerD. LZerD represents protein surfaces using 3D Zernike descriptors (3DZD), which are based on a mathematical series expansion of a 3D function. Because 3DZD are a soft representation of the protein surface, LZerD is tolerant to small conformational changes, making it well suited to docking unbound and TBM structures. The key to our improved performance in CAPRI Round 37 was to combine multi-chain TBM and docking. As opposed to our previous strategy of performing docking for all target complexes, we used TBM when multi-chain templates were available and docking otherwise. We also describe the combination of multiple scoring functions used by our server scorer group, which achieved the top rank for the scorer phase. © 2017 Wiley Periodicals, Inc.
Ogourtsova, Tatiana; Archambault, Philippe; Sangani, Samir; Lamontagne, Anouk
2018-01-01
Unilateral spatial neglect (USN) is a highly prevalent and disabling poststroke impairment. USN is traditionally assessed with paper-and-pencil tests that lack ecological validity, generalization to real-life situations and are easily compensated for in chronic stages. Virtual reality (VR) can, however, counteract these limitations. We aimed to examine the feasibility of a novel assessment of USN symptoms in a functional shopping activity, the Ecological VR-based Evaluation of Neglect Symptoms (EVENS). EVENS is immersive and consists of simple and complex 3-dimensional scenes depicting grocery shopping shelves, where joystick-based object detection and navigation tasks are performed while seated. Effects of virtual scene complexity on navigational and detection abilities in patients with (USN+, n = 12) and without (USN-, n = 15) USN following a right hemisphere stroke and in age-matched healthy controls (HC, n = 9) were determined. Longer detection times, larger mediolateral deviations from ideal paths and longer navigation times were found in USN+ versus USN- and HC groups, particularly in the complex scene. EVENS detected lateralized and nonlateralized USN-related deficits, performance alterations that were dependent or independent of USN severity, and performance alterations in 3 USN- subjects versus HC. EVENS' environmental changing complexity, along with the functional tasks of far space detection and navigation can potentially be clinically relevant and warrant further empirical investigation. Findings are discussed in terms of attentional models, lateralized versus nonlateralized deficits in USN, and tasks-specific mechanisms.
Design and Use of a Learning Object for Finding Complex Polynomial Roots
ERIC Educational Resources Information Center
Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime
2013-01-01
Complex numbers are essential in many fields of engineering, but students often fail to have a natural insight of them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomials has a root and, furthermore, is useful to find the approximate roots of a complex polynomial. Moreover, we…
Achieving realistic performance and decision-making capabilities in computer-generated air forces
NASA Astrophysics Data System (ADS)
Banks, Sheila B.; Stytz, Martin R.; Santos, Eugene, Jr.; Zurita, Vincent B.; Benslay, James L., Jr.
1997-07-01
For a computer-generated force (CGF) system to be useful in training environments, it must be able to operate at multiple skill levels, exhibit competency at assigned missions, and comply with current doctrine. Because of the rapid rate of change in distributed interactive simulation (DIS) and the expanding set of performance objectives for any computer-generated force, the system must also be modifiable at reasonable cost and incorporate mechanisms for learning. Therefore, CGF applications must have adaptable decision mechanisms and behaviors and perform automated incorporation of past reasoning and experience into its decision process. The CGF must also possess multiple skill levels for classes of entities, gracefully degrade its reasoning capability in response to system stress, possess an expandable modular knowledge structure, and perform adaptive mission planning. Furthermore, correctly performing individual entity behaviors is not sufficient. Issues related to complex inter-entity behavioral interactions, such as the need to maintain formation and share information, must also be considered. The CGF must also be able to acceptably respond to unforeseen circumstances and be able to make decisions in spite of uncertain information. Because of the need for increased complexity in the virtual battlespace, the CGF should exhibit complex, realistic behavior patterns within the battlespace. To achieve these necessary capabilities, an extensible software architecture, an expandable knowledge base, and an adaptable decision making mechanism are required. Our lab has addressed these issues in detail. The resulting DIS-compliant system is called the automated wingman (AW). The AW is based on fuzzy logic, the common object database (CODB) software architecture, and a hierarchical knowledge structure. We describe the techniques we used to enable us to make progress toward a CGF entity that satisfies the requirements presented above.
We present our design and implementation of an adaptable decision making mechanism that uses multi-layered, fuzzy logic controlled situational analysis. Because our research indicates that fuzzy logic can perform poorly under certain circumstances, we combine fuzzy logic inferencing with adversarial game tree techniques for decision making in strategic and tactical engagements. We describe the approach we employed to achieve this fusion. We also describe the automated wingman's system architecture and knowledge base architecture.
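The fuzzy-logic situational analysis described above can be illustrated with a minimal sketch. The membership shapes, variable names, and the single rule below are assumptions for illustration only, not details of the automated wingman's actual knowledge base:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def shoulder_up(x, a, b):
    """Right-shoulder membership: 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def threat_level(distance_km, closing_speed):
    """One hypothetical rule: IF target is near AND closing fast THEN threat is high.
    Fuzzy AND is modeled with min, as in classical Mamdani-style inference."""
    near = triangular(distance_km, 0.0, 0.0, 20.0)   # "near" fades out by 20 km
    fast = shoulder_up(closing_speed, 100.0, 300.0)  # fully "fast" above 300 m/s
    return min(near, fast)

level = threat_level(5.0, 250.0)
```

In a layered scheme like the one described, outputs of such low-level rules would feed higher-level situational assessments before any game-tree search over maneuvers.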
Experiments in autonomous robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, W.R.
1987-01-01
The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can only generate low-resolution visual percepts, constituted of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
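As a rough illustration of the baseline these strategies improve on, the sketch below block-averages a grayscale image onto a coarse phosphene grid (plain Direct Pixelization only; the paper's saliency segmentation and two enhancement strategies are not reproduced here):

```python
def pixelize(image, block):
    """Downsample a grayscale image (list of rows) by averaging block x block
    tiles, approximating the limited phosphene resolution of simulated
    prosthetic vision."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            tile = [image[y][x]
                    for y in range(i, min(i + block, h))
                    for x in range(j, min(j + block, w))]
            row.append(sum(tile) / len(tile))
        out.append(row)
    return out

# Toy 4x4 image reduced to a 2x2 phosphene grid
img = [[0, 0, 2, 2],
       [0, 0, 2, 2],
       [4, 4, 6, 6],
       [4, 4, 6, 6]]
coarse = pixelize(img, 2)
```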
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.; Chew, Y. T.
2008-07-01
In this paper, an object-oriented, quadrilateral-mesh-based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured mesh. It is also extended to second-order accuracy by using MUSCL extrapolation. The node, edge and cell are arranged in such an object-oriented manner that each of them inherits from a basic object. A custom doubly linked list is designed to manage these objects so that inserting new objects and removing existing objects (nodes, edges and cells) are independent of the number of objects, with only O(1) complexity. In addition, the cells with different levels are further stored in different lists. This avoids the recursive calculation of the solutions of mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, as compared to other cell-edge adaptive methods, the separation of nodes reduces the memory required for redundant nodes, especially in cases where the level number is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include the vortex evolution problem, the interface-only problem under structured mesh and unstructured mesh, bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure and velocity across the interface and that it is feasible to apply the method to compressible multi-fluid flows with a large density ratio (1000) and a strong shock wave (pressure ratio of 10,000) interacting with the interface.
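The O(1) insert/remove behaviour of the object lists can be sketched in a few lines. This is a generic doubly linked list, not the authors' actual C++ implementation; removal costs O(1) because the caller holds a direct reference to the node being unlinked:

```python
class Node:
    """Wrapper for a mesh object (node, edge, or cell)."""
    __slots__ = ("item", "prev", "next")
    def __init__(self, item):
        self.item, self.prev, self.next = item, None, None

class DoublyLinkedList:
    def __init__(self):
        self.head = self.tail = None
        self.size = 0
    def append(self, item):
        """O(1) insertion at the tail; returns the node for later O(1) removal."""
        node = Node(item)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev, self.tail.next = self.tail, node
            self.tail = node
        self.size += 1
        return node
    def remove(self, node):
        """O(1) unlink, independent of list length."""
        if node.prev: node.prev.next = node.next
        else: self.head = node.next
        if node.next: node.next.prev = node.prev
        else: self.tail = node.prev
        self.size -= 1
    def to_list(self):
        items, n = [], self.head
        while n:
            items.append(n.item)
            n = n.next
        return items

# Usage: three cells, remove the middle one in O(1)
cells = DoublyLinkedList()
n0, n1, n2 = cells.append("c0"), cells.append("c1"), cells.append("c2")
cells.remove(n1)
```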
Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.
Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M
2015-08-01
Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.
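For context on the model-performance numbers above, the c-statistic is the concordance probability of a binary-outcome risk model. A minimal sketch with toy scores (not the study's models or data):

```python
def c_statistic(pairs):
    """Concordance index over (score, died) pairs: the probability that the
    model scores a patient with the outcome higher than one without it,
    counting ties as half. Requires at least one event and one non-event."""
    events = [s for s, died in pairs if died]
    nonevents = [s for s, died in pairs if not died]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Toy example: two deaths, three survivors
auc = c_statistic([(0.9, True), (0.8, True),
                   (0.3, False), (0.4, False), (0.85, False)])
```

A value of 0.5 is chance-level discrimination and 1.0 is perfect separation, which is why the small gains reported (0.853 to 0.858) are described as slight.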
Optimization Techniques for Clustering, Connectivity, and Flow Problems in Complex Networks
2012-10-01
discrete optimization and for analysis of performance of algorithm portfolios; introducing a metaheuristic framework of variable objective search that...The results of empirical evaluation of the proposed algorithm are also included. 1.3 Theoretical analysis of heuristics and designing new metaheuristic ...analysis of heuristics for inapproximable problems and designing new metaheuristic approaches for the problems of interest; (IV) Developing new models
NASA Astrophysics Data System (ADS)
Tabrizian, P.; Petrasova, A.; Baran, P.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.
2017-12-01
Viewshed modelling, the process of defining, parsing, and analysing the structure of landscape visual space within GIS, has been commonly used in applications ranging from landscape planning and ecosystem services assessment to geography and archaeology. However, less effort has been made to understand whether and to what extent these objective analyses predict the actual on-the-ground perception of a human observer. Moreover, viewshed modelling at the human-scale level requires incorporation of fine-grained landscape structure (e.g., vegetation) and patterns (e.g., landcover) that are typically omitted from visibility calculations or unrealistically simulated, leading to significant error in predicting visual attributes. This poster illustrates how photorealistic Immersive Virtual Environments and high-resolution geospatial data can be used to integrate objective and subjective assessments of visual characteristics at the human-scale level. We performed viewshed modelling for a systematically sampled set of viewpoints (N=340) across an urban park using open-source GIS (GRASS GIS). For each point, a binary viewshed was computed on a 3D surface model derived from high-density leaf-off LIDAR (QL2) points. The viewshed map was combined with high-resolution landcover (0.5 m) derived through fusion of orthoimagery, lidar vegetation, and vector data. Geostatistics and landscape structure analysis were performed to compute topological and compositional metrics for visual scale (e.g., openness), complexity (pattern, shape and object diversity), and naturalness. Based on the viewshed model output, a sample of 24 viewpoints representing the variation of visual characteristics was selected and geolocated. For each location, 360° imagery was captured using a DSLR camera mounted on a GigaPan robot.
We programmed a virtual reality application through which human subjects (N=100) immersively experienced a random representation of selected environments via a head-mounted display (Oculus Rift CV1), and rated each location on perceived openness, naturalness and complexity. Regression models were performed to correlate model outputs with participants' responses. The results indicated strong, significant correlations for openness, and naturalness and moderate correlation for complexity estimations.
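The visual-scale and compositional metrics can be illustrated on a toy binary viewshed. The study derived its metrics with GRASS GIS landscape-structure analysis, so the two formulas below are simplified stand-in assumptions:

```python
def openness(viewshed):
    """Fraction of raster cells visible from the viewpoint in a binary
    viewshed (1 = visible, 0 = hidden); a crude visual-scale metric."""
    cells = [v for row in viewshed for v in row]
    return sum(cells) / len(cells)

def patch_richness(viewshed, landcover):
    """Number of distinct landcover classes inside the visible area; a crude
    compositional-complexity metric."""
    return len({landcover[i][j]
                for i, row in enumerate(viewshed)
                for j, v in enumerate(row) if v})

# Toy 2x3 viewshed and matching landcover raster
vs = [[1, 1, 0],
      [0, 1, 0]]
lc = [["grass", "water", "grass"],
      ["tree",  "grass", "tree"]]
open_frac = openness(vs)
richness = patch_richness(vs, lc)
```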
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning of a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem leading from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimized cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system.
Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for multi-constraint, multi-objective fault diagnosis and reasoning of complex systems.
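One step of the ant's path construction can be sketched as a pseudo-random-proportional choice restricted by a reachability constraint. The scoring exponents, data layout, and parameter values below are generic ACO conventions assumed for illustration, not the paper's exact formulation:

```python
import random

def next_node(current, pheromone, heuristic, reachable,
              q0=0.9, alpha=1.0, beta=2.0, rng=random):
    """Pseudo-random-proportional choice of the next reasoning node, restricted
    to successors allowed by the reachability structure. With probability q0
    the best-scoring node is exploited; otherwise a node is sampled with
    probability proportional to pheromone^alpha * heuristic^beta."""
    candidates = list(reachable[current])
    scores = [pheromone[(current, j)] ** alpha * heuristic[(current, j)] ** beta
              for j in candidates]
    if rng.random() < q0:
        # Exploitation: pick the highest-scoring reachable successor
        return candidates[max(range(len(candidates)), key=scores.__getitem__)]
    # Exploration: roulette-wheel sampling over the scores
    total = sum(scores)
    r, acc = rng.random() * total, 0.0
    for j, s in zip(candidates, scores):
        acc += s
        if acc >= r:
            return j
    return candidates[-1]

# Hypothetical one-step example: two candidate causes reachable from symptom "s"
pher = {("s", "a"): 1.0, ("s", "b"): 2.0}
heur = {("s", "a"): 1.0, ("s", "b"): 1.0}
reach = {"s": ["a", "b"]}
choice = next_node("s", pher, heur, reach, q0=1.0)  # pure exploitation
```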
National Wind Tunnel Complex (NWTC)
NASA Technical Reports Server (NTRS)
1996-01-01
The National Wind Tunnel Complex (NWTC) Final Report summarizes the work carried out by a unique Government/Industry partnership during the period of June 1994 through May 1996. The objective of this partnership was to plan, design, build and activate 'world class' wind tunnel facilities for the development of future-generation commercial and military aircraft. The basis of this effort was a set of performance goals defined by the National Facilities Study (NFS) Task Group on Aeronautical Research and Development Facilities which established two critical measures of improved wind tunnel performance; namely, higher Reynolds number capability and greater productivity. Initial activities focused upon two high-performance tunnels (low-speed and transonic). This effort was later descoped to a single multipurpose tunnel. Beginning in June 1994, the NWTC Project Office defined specific performance requirements, planned site evaluation activities, performed a series of technical/cost trade studies, and completed preliminary engineering to support a proposed conceptual design. Due to budget uncertainties within the Federal government, the NWTC project office was directed to conduct an orderly closure following the Systems Design Review in March 1996. This report provides a top-level status of the project at that time. Additional details of all work performed have been archived and are available for future reference.
EUV spectroscopy of high-redshift x-ray objects
NASA Astrophysics Data System (ADS)
Kowalski, M. P.; Wolff, M. T.; Wood, K. S.; Barbee, T. W., Jr.; Barstow, M. A.
2010-07-01
As astronomical observations are pushed to cosmological distances (z>3) the spectral energy distributions of X-ray objects, AGN for example, will be redshifted into the EUV waveband. Consequently, a wealth of critical spectral diagnostics, provided by, for example, the Fe L-shell complex and the O VII/VIII lines, will be lost to future planned X-ray missions (e.g., IXO, Gen-X) if operated at traditional X-ray energies. This opens up a critical gap in performance located at short EUV wavelengths, where critical X-ray spectral transitions occur in high-z objects. However, normal-incidence multilayer-grating technology, which performs best precisely at such wavelengths, together with advanced nanolaminate replication techniques have been developed and are now mature to the point where advanced EUV instrument designs with performance complementary to IXO and Gen-X are practical. Such EUV instruments could be flown either independently or as secondary instruments on these X-ray missions. We present here a critical examination of the limits placed on extragalactic EUV measurements by ISM absorption, the range where high-z measurements are practical, and the requirements this imposes on next-generation instrument designs. We conclude with a discussion of a breakthrough technology, nanolaminate replication, which enables such instruments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naumann, Axel; /CERN; Canal, Philippe
2008-01-01
High performance computing with a large code base and C++ has proved to be a good combination. But when it comes to storing data, C++ is a problematic choice: it offers no support for serialization, type definitions are amazingly complex to parse, and the dependency analysis (what does object A need to be stored?) is incredibly difficult. Nevertheless, the LHC data consists of C++ objects that are serialized with help from ROOT's reflection database and interpreter CINT. The fact that we can do it on that scale, and the performance with which we do it makes this approach unique and stirs interest even outside HEP. I will show how CINT collects and stores information about C++ types, what the current major challenges are (dictionary size), and what CINT and ROOT have done and plan to do about it.
Modeling Complex Cross-Systems Software Interfaces Using SysML
NASA Technical Reports Server (NTRS)
Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin
2013-01-01
The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the others. To determine if a model-based system engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the System Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
The neural basis of precise visual short-term memory for complex recognisable objects.
Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri
2017-10-01
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.
Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G
2017-01-01
Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging, because it is a 2D surface in a 3D volume which has strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. 
Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range of 10-20 μm.
Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.
2016-01-01
Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has strong but highly variable number of irregularly spaced and variably shaped “peaks and valleys”. In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. 
Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range of 10-20 µm. PMID:27723590
Simulation and testing of pyramid and barrel vault skylights
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGowan, A.G.; Desjarlais, A.O.; Wright, J.L.
1998-10-01
The thermal performance of fenestration in commercial buildings can have a significant effect on building loads--yet there is little information on the performance of these products. With this in mind, ASHRAE TC 4.5, Fenestration, commissioned a research project involving test and simulation of commercial fenestration systems. The objectives of ASHRAE Research Project 877 were: to evaluate the thermal performance (U-factors) of commonly used commercial glazed roof and wall assemblies; to obtain a better fundamental understanding of the heat transfer processes that occur in these specialty fenestration products; to develop correlations for natural-convection heat transfer in complex glazing cavities; to develop a methodology for evaluating complex fenestration products, suitable for inclusion in ASHRAE Standard 142P (ASHRAE 1996); and to generate U-factors for common commercial fenestration products, suitable for inclusion in the ASHRAE Handbook--Fundamentals. This paper describes testing and simulation of pyramid and barrel vault skylight specimens and provides guidelines for modeling these systems based on the validated results.
Smith, Marie L; Cesana, M Letizia; Farran, Emily K; Karmiloff-Smith, Annette; Ewing, Louise
2018-06-01
Few would argue that the unique insights brought by studying the typical and atypical development of psychological processes are essential to building a comprehensive understanding of the brain. Often, however, the associated challenges of working with non-standard adult populations result in the more complex psychophysical paradigms being rejected as too demanding. Recently we created a child- (and clinical group) friendly implementation of one such technique, the reverse-correlation Bubbles approach, and noted an associated performance boost in adult participants. Here, we compare the administration of three different versions of this participant-friendly task in the same adult participants to confirm empirically that introducing elements into the experiment with the sole purpose of improving the participant experience not only boosts the participant's engagement and motivation for the task but also yields significantly improved objective task performance and stronger statistical results.
NASA Technical Reports Server (NTRS)
Al-Jaar, Robert Y.; Desrochers, Alan A.
1989-01-01
The main objective of this research is to develop a generic modeling methodology with a flexible and modular framework to aid in the design and performance evaluation of integrated manufacturing systems using a unified model. After a thorough examination of the available modeling methods, the Petri Net approach was adopted. The concurrent and asynchronous nature of manufacturing systems are easily captured by Petri Net models. Three basic modules were developed: machine, buffer, and Decision Making Unit. The machine and buffer modules are used for modeling transfer lines and production networks. The Decision Making Unit models the functions of a computer node in a complex Decision Making Unit Architecture. The underlying model is a Generalized Stochastic Petri Net (GSPN) that can be used for performance evaluation and structural analysis. GSPN's were chosen because they help manage the complexity of modeling large manufacturing systems. There is no need to enumerate all the possible states of the Markov Chain since they are automatically generated from the GSPN model.
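The machine/buffer module idea can be sketched as a token game on an untimed place/transition net. The paper's modules are Generalized Stochastic Petri Nets with timed and immediate transitions, which this toy deliberately omits, and the place and transition names are invented for illustration:

```python
class PetriNet:
    """Minimal place/transition net: a transition fires when every input
    place holds at least the required number of tokens."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input arcs, output arcs)
    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)
    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())
    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# A one-machine cell: a raw part is loaded, machined, and unloaded
net = PetriNet({"raw": 1, "machine_idle": 1})
net.add_transition("start", {"raw": 1, "machine_idle": 1}, {"busy": 1})
net.add_transition("finish", {"busy": 1}, {"done": 1, "machine_idle": 1})
net.fire("start")
net.fire("finish")
```

Chaining such modules through shared buffer places is what lets transfer lines and production networks be composed from the same building blocks.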
Low-complexity object detection with deep convolutional neural network for embedded systems
NASA Astrophysics Data System (ADS)
Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong
2017-09-01
We investigate low-complexity convolutional neural networks (CNNs) for object detection in embedded vision applications. It is well known that deploying CNN-based object detection on an embedded system is more challenging than problems like image classification because of its computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional, so it is able to take input images of any size. We pick face detection as a use case and evaluate the proposed model on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping accuracy nearly as good as the floating-point model. Moreover, the fixed-point model achieves 20× faster inference than the floating-point model. Thus, the proposed method is promising for embedded implementations.
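The grid-cell bounding-box regression can be illustrated by its decode step. The parameterization below (cell-relative center, image-relative width and height) follows YOLO's general scheme and is an assumption for illustration, not the paper's exact detection head:

```python
def decode_box(cell_row, cell_col, pred, grid_h, grid_w):
    """Turn one grid cell's normalised prediction (cx, cy, w, h) into an
    absolute (x0, y0, x1, y1) box in unit image coordinates. cx, cy are
    offsets within the cell; w, h are fractions of the whole image."""
    cx, cy, w, h = pred
    x_center = (cell_col + cx) / grid_w
    y_center = (cell_row + cy) / grid_h
    return (x_center - w / 2, y_center - h / 2,
            x_center + w / 2, y_center + h / 2)

# Cell (1, 1) of a 2x2 grid predicts a box centred in the cell,
# half the image wide and tall
box = decode_box(1, 1, (0.5, 0.5, 0.5, 0.5), grid_h=2, grid_w=2)
```

Because the decode depends only on the grid shape, a fully convolutional backbone can emit a differently sized grid for each input resolution and reuse the same step.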
Development and evaluation of a musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.
Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.
Higginson, J S; Neptune, R R; Anderson, F C
2005-09-01
Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
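The SPAN implementation itself is not shown in the abstract; as a hedged sketch of the serial SA heuristic it parallelizes (the parameter values and the quadratic test problem below are illustrative, not those of the study):

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.995, step=0.5, iters=2000, seed=1):
    """Minimal serial SA: accept uphill moves with probability exp(-delta/T),
    cooling the temperature T geometrically each iteration."""
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best_x, best_f = list(x), fx
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f

quadratic = lambda v: sum(vi * vi for vi in v)  # simple quadratic test problem
xmin, fmin = simulated_annealing(quadratic, [5.0, -3.0])
print(fmin)  # a small value near the true minimum of 0
```

A parallel variant like SPAN distributes candidate evaluations across processors while preserving these acceptance heuristics.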
Metabolic and nutritional aspects of cancer.
Krawczyk, Joanna; Kraj, Leszek; Ziarkiewicz, Mateusz; Wiktor-Jędrzejczak, Wiesław
2014-08-22
Cancer, being in fact a generalized disease involving the whole organism, is most frequently associated with metabolic deregulation, a latent inflammatory state and anorexia of various degrees. The pathogenesis of this disorder is complex, with multiple dilemmas remaining unsolved. The clinical consequences of the above-mentioned disturbances include cancer-related cachexia and anorexia-cachexia syndrome. These complex clinical entities worsen the prognosis, and lead to deterioration of the quality of life and performance status, and thus require multimodal treatment. Optimal therapy should include nutritional support coupled with pharmacotherapy targeted at underlying pathomechanisms of cachexia. Nevertheless, many issues still need explanation, and efficacious and comprehensive therapy of cancer-related cachexia remains a future objective.
Imaging complex objects using learning tomography
NASA Astrophysics Data System (ADS)
Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri
2018-02-01
Optical diffraction tomography (ODT) can be described as a scattering process through an inhomogeneous medium. Multiple scattering introduces an inherent nonlinearity between the scattering medium and the scattered field. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption breaks down as the sample becomes more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of BPM are similar to neural networks; we therefore refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.
Evaluation of the Possible Utilization of 68Ga-DOTATOC in Diagnosis of Adenocarcinoma Breast Cancer
Zolghadri, Samaneh; Naderi, Mojdeh; Yousefnia, Hassan; Alirezapour, Behrouz; Beiki, Davood
2018-01-01
Objective(s): Studies have indicated advantageous properties of [DOTA-DPhe1, Tyr3]octreotide (DOTATOC) in tumor models and in labeling with gallium. Breast cancer is the second leading cause of cancer mortality in women, and most of these cancers are adenocarcinomas. Because target to non-target ratios are critical to diagnostic efficacy, we studied the pharmacokinetics of 68Ga-DOTATOC in an adenocarcinoma breast cancer animal model and determined the optimal time for imaging. Methods: 68Ga was obtained from a 68Ge/68Ga generator. The complex was prepared under optimized conditions, and its radiochemical purity was checked using both HPLC and ITLC. Biodistribution of the complex was studied in BALB/c mice bearing adenocarcinoma breast cancer, and PET/CT imaging was performed up to 120 min post-injection. Results: The complex was produced with radiochemical purity greater than 98% and specific activity of about 40 GBq/mM under optimized conditions. The biodistribution studies indicated fast blood clearance and significant uptake in the tumor. Significant tumor:blood and tumor:muscle uptake ratios were observed even at early times post-injection, and PET/CT images confirmed the considerable accumulation of the tracer in the tumor. Conclusion: Overall, the results support the application of the radiolabeled complex for the detection of adenocarcinoma breast cancer, and the pharmacokinetic data indicate that the suitable time for imaging is at least 30 min after injection. PMID:29333466
Imaging through turbulence using a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2015-09-01
Atmospheric turbulence can significantly affect imaging along paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity in the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under deep or strong turbulence, the object is hard to recognize through direct imaging, and conventional imaging methods cannot handle these conditions efficiently: the time required for lucky imaging can increase significantly, and image processing approaches require much more complex, iterative de-blurring algorithms. We propose an alternative approach that uses a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array to form a mini Keplerian telescope array. The image obtained by a conventional method is thereby separated into an array of images containing multiple copies of the object's image with less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be applied to the video collected by the plenoptic sensor. The algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the difference can be regarded as the turbulence effect. As a result, retrieval of the object's image and extraction of the turbulence effect can be performed simultaneously.
Visual Short-Term Memory Capacity for Simple and Complex Objects
ERIC Educational Resources Information Center
Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto
2010-01-01
Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) that is not…
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
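The project's coarse-grained KMC machinery is not reproduced here; as a minimal illustration of the underlying kinetic Monte Carlo idea, one Gillespie-style step selects an event with probability proportional to its rate and advances time by an exponential waiting time. The rate constants below are hypothetical:

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (Gillespie) step: pick an event with probability
    proportional to its rate, then draw an exponential inter-event time."""
    total = sum(rates)
    r = rng.random() * total
    acc, event = 0.0, 0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            event = i
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return event, dt

rng = random.Random(42)
rates = [2.0, 0.5, 1.5]  # hypothetical rate constants for three competing events
counts = [0, 0, 0]
t = 0.0
for _ in range(10000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
print(counts)  # event frequencies roughly proportional to the rates
```

Coarse-graining and the hybrid KMC-PDE-DFT couplings described above build on this elementary step.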
Navarro, Xavier
2016-02-01
Peripheral nerve injuries usually lead to severe loss of motor, sensory and autonomic functions in the patients. Due to the complex requirements for adequate axonal regeneration, functional recovery is often poorly achieved. Experimental models are useful to investigate the mechanisms related to axonal regeneration and tissue reinnervation, and to test new therapeutic strategies to improve functional recovery. Therefore, objective and reliable evaluation methods should be applied for the assessment of regeneration and function restitution after nerve injury in animal models. This review gives an overview of the most useful methods to assess nerve regeneration, target reinnervation and recovery of complex sensory and motor functions, their values and limitations. The selection of methods has to be adequate to the main objective of the research study, either enhancement of axonal regeneration, improving regeneration and reinnervation of target organs by different types of nerve fibres, or increasing recovery of complex sensory and motor functions. It is generally recommended to use more than one functional method for each purpose, and also to perform morphological studies of the injured nerve and the reinnervated targets. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset of CT images from 20 patients. A quantitative comparison against the corresponding reference vertebral segmentation yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance and completely eliminates inter-process overlap.
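The Dice similarity index reported above has a standard definition, 2|A∩B| / (|A| + |B|); a minimal sketch using toy 2D masks (not the study's CT data):

```python
import numpy as np

def dice_index(a, b):
    """Dice similarity index between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg = np.zeros((10, 10), dtype=bool); seg[2:8, 2:8] = True  # 36-pixel square
ref = np.zeros((10, 10), dtype=bool); ref[3:9, 3:9] = True  # same square, shifted
print(dice_index(seg, ref))  # 2*25/72 ≈ 0.694
```

A value of 95.47%, as reported above, thus indicates near-complete overlap between the automatic and reference segmentations.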
Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won
2014-01-01
In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members employed in the cooperation group need to share knowledge for mutual understanding. Although an ontology can be the right tool for this goal, building a suitable ontology raises several issues. Because the cost and complexity of managing knowledge grow with its scale, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, the method extracts an ontology module that is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge.
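The paper's extraction method considers both syntactic structure and semantic relations; as a deliberately simplified sketch of the signature-driven idea only (real module extraction, e.g. locality-based methods, is considerably more subtle), one can keep every concept reachable from the signature through a concept-dependency graph. The toy ontology below is hypothetical:

```python
from collections import deque

def extract_module(axiom_deps, signature):
    """Toy module extraction: keep every concept reachable from the signature
    through the dependency graph. A simplification of real ontology modularity."""
    module, queue = set(signature), deque(signature)
    while queue:
        concept = queue.popleft()
        for dep in axiom_deps.get(concept, ()):
            if dep not in module:
                module.add(dep)
                queue.append(dep)
    return module

# hypothetical ontology: concept -> concepts its axioms mention
deps = {
    "Car": ["Vehicle", "Engine"],
    "Vehicle": ["Object"],
    "Engine": ["Part"],
    "Boat": ["Vehicle"],
}
print(sorted(extract_module(deps, {"Car"})))  # "Boat" is excluded
```

A service whose signature is {"Car"} then reasons over five concepts instead of the full ontology, which is the load reduction the abstract describes.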
NASA Astrophysics Data System (ADS)
Yin, Y.; Sonka, M.
2010-03-01
A novel method is presented for the definition of search lines in a variety of surface segmentation approaches. The method is inspired by the properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently cover the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all vertex correspondence pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object, multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated in 60 MR images from the Osteoarthritis Initiative (OAI), in which it achieved very good performance as judged by surface positioning errors (averages of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).
Tactile agnosia. Underlying impairment and implications for normal tactile object recognition.
Reed, C L; Caselli, R J; Farah, M J
1996-06-01
In a series of experimental investigations of a subject with a unilateral impairment of tactile object recognition without impaired tactile sensation, several issues were addressed. First, is tactile agnosia secondary to a general impairment of spatial cognition? On tests of spatial ability, including those directed at the same spatial integration process assumed to be taxed by tactile object recognition, the subject performed well, implying a more specific impairment of high-level, modality-specific tactile perception. Secondly, within the realm of high-level tactile perception, is there a distinction between the ability to derive shape ('what') and spatial ('where') information? Our testing showed an impairment confined to shape perception. Thirdly, what aspects of shape perception are impaired in tactile agnosia? Our results indicate that, despite accurate encoding of metric length and normal manual exploration strategies, the subject's ability to perceive objects tactually with the impaired hand deteriorated as the complexity of shape increased. In addition, asymmetrical performance was not found for other body surfaces (e.g. her feet). Our results suggest that tactile shape perception can be disrupted independently of general spatial ability, tactile spatial ability, manual shape exploration, or even the precise perception of metric length in the tactile modality.
How high is visual short-term memory capacity for object layout?
Sanocki, Thomas; Sellers, Eric; Mittelstadt, Jeff; Sulman, Noah
2010-05-01
Previous research measuring visual short-term memory (VSTM) suggests that the capacity for representing the layout of objects is fairly high. In four experiments, we further explored the capacity of VSTM for layout of objects, using the change detection method. In Experiment 1, participants retained most of the elements in displays of 4 to 8 elements. In Experiments 2 and 3, with up to 20 elements, participants retained many of them, reaching a capacity of 13.4 stimulus elements. In Experiment 4, participants retained much of a complex naturalistic scene. In most cases, increasing display size caused only modest reductions in performance, consistent with the idea of configural, variable-resolution grouping. The results indicate that participants can retain a substantial amount of scene layout information (objects and locations) in short-term memory. We propose that this is a case of remote visual understanding, where observers' ability to integrate information from a scene is paramount.
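The abstract does not state how its capacity estimate was computed; one common estimator for change-detection tasks is Cowan's K, K = N × (hit rate − false alarm rate), used here purely as an assumed illustration with invented numbers:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of visual short-term memory capacity
    from change-detection hit and false alarm rates."""
    return set_size * (hit_rate - false_alarm_rate)

# hypothetical numbers: a 20-element display, 75% hits, 8% false alarms
print(round(cowan_k(20, 0.75, 0.08), 1))  # 13.4
```

Estimators of this family are how change-detection performance is typically converted into "number of stimulus elements retained."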
Kirjavainen, Minna; Kidd, Evan; Lieven, Elena
2017-01-01
We report three studies (one corpus, two experimental) that investigated the acquisition of relative clauses (RCs) in Finnish-speaking children. Study 1 found that Finnish children's naturalistic exposure to RCs consists predominantly of non-subject relatives (i.e. oblique, object), which typically have inanimate head nouns. Study 2 tested children's comprehension of subject, object, and two types of oblique relatives. No difference was found in the children's performance across structures, including a lack of the previously widely reported asymmetry between subject and object relatives. However, children's comprehension was modulated by the animacy of the head referent. Study 3 tested children's production of the same RC structures using sentence repetition. Again, we found no subject-object asymmetry. The pattern of results suggests that distributional frequency patterns and the relative complexity of the relativizer contribute to the difficulty associated with particular RC structures.
Enclosure Transform for Interest Point Detection From Speckle Imagery.
Yongjian Yu; Jue Wang
2017-03-01
We present a fast enclosure transform (ET) to localize complex objects of interest in speckle imagery. The approach exploits the spatial confinement of regional features in a sparse image feature representation. Unrelated, broken ridge features surrounding an object are organized collaboratively, giving rise to the enclosureness of the object. Three enclosure likelihood measures are constructed: the enclosure force, potential energy, and encloser count. In the transform domain, local maxima manifest the locations of objects of interest, for which only the intrinsic dimension is known a priori. The discrete ET algorithm is computationally efficient, running in O(MN) time for N measuring distances across an image of M ridge pixels, and requires only a few, easily set parameters. We demonstrate and assess the performance of ET on the automatic detection of prostate locations in supra-pubic ultrasound images. ET yields superior results in terms of positive detection rate, accuracy, and coverage.
Development of Piagetian object permanence in a grey parrot (Psittacus erithacus).
Pepperberg, I M; Willner, M R; Gravitz, L B
1997-03-01
The authors evaluated the ontogenetic performance of a grey parrot (Psittacus erithacus) on object permanence tasks designed for human infants. Testing began when the bird was 8 weeks old, prior to fledging and weaning. Because adult grey parrots understand complex invisible displacements (I. M. Pepperberg & F. A. Kozak, 1986), the authors continued weekly testing until the current subject completed all of I. C. Uzgiris and J. Hunt's (1975) Scale 1 tasks. Stage 6 object permanence with respect to these tasks emerged at 22 weeks, after the bird had fledged but before it was completely weaned. Although the parrot progressed more rapidly overall than other species that have been tested ontogenetically, the subject similarly exhibited a behavioral plateau part way through the study. Additional tests, administered at 8 and 12 months as well as to an adult grey parrot, demonstrated, respectively, that these birds have some representation of a hidden object and understand advanced invisible displacements.
NASA Technical Reports Server (NTRS)
Leibfried, T. F., Jr.; Davari, Sadegh; Natarajan, Swami; Zhao, Wei
1992-01-01
Two categories were chosen for study. The first was the use of a preprocessor on Ada code of application programs interfacing with the Run-Time Object Data Base Standard Services (RODB STSV); the intent was to catch and correct any mis-registration errors by the program coder between the user-declared objects, their types, their addresses, and the corresponding RODB definitions. The second was RODB STSV performance issues and the identification of problems with the planned methods for accessing primitive object attributes; this included the study of an alternate storage scheme to the 'store objects by attribute' scheme in the current RODB design. The study resulted in essentially three separate documents: an interpretation of the system requirements, an assessment of the preliminary design, and a detailing of the components of a detailed design.
Task 7: Endwall treatment inlet flow distortion analysis
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Heidegger, N. J.; McNulty, G. S.; Weber, K. F.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields, and to perform a series of detailed numerical predictions to assess the effectiveness of various endwall treatments for enhancing the efficiency and stall margin of modern high speed fan rotors. Particular attention was given to examining the effectiveness of endwall treatments in countering the undesirable effects of inflow distortion. Calculations were performed using three different gridding techniques based on the type of casing treatment being tested and the level of complexity desired in the analysis. In each case, the casing treatment itself is modeled as a discrete object in the overall analysis, and the flow through the casing treatment is determined as part of the solution. A series of calculations was performed for both treated and untreated modern fan rotors, both with and without inflow distortion. The effectiveness of the various treatments was quantified, and several physical mechanisms by which endwall treatments achieve their effectiveness are discussed.
Virtual tape measure for the operating microscope: system specifications and performance evaluation.
Kim, M Y; Drake, J M; Milgram, P
2000-01-01
The Virtual Tape Measure for the Operating Microscope (VTMOM) was created to assist surgeons in making accurate 3D measurements of anatomical structures seen in the surgical field under the operating microscope. The VTMOM employs augmented reality techniques by combining stereoscopic video images with stereoscopic computer graphics, and functions by relying on an operator's ability to align a 3D graphic pointer, which serves as the end-point of the virtual tape measure, with designated locations on the anatomical structure being measured. The VTMOM was evaluated for its baseline and application performances as well as its application efficacy. Baseline performance was determined by measuring the mean error (bias) and standard deviation of error (imprecision) in measurements of non-anatomical objects. Application performance was determined by comparing the error in measuring the dimensions of aneurysm models with and without the VTMOM. Application efficacy was determined by comparing the error in selecting the appropriate aneurysm clip size with and without the VTMOM. Baseline performance indicated a bias of 0.3 mm and an imprecision of 0.6 mm. Application bias was 3.8 mm and imprecision was 2.8 mm for aneurysm diameter. The VTMOM did not improve aneurysm clip size selection accuracy. The VTMOM is a potentially accurate tool for use under the operating microscope. However, its performance when measuring anatomical objects is highly dependent on complex visual features of the object surfaces. Copyright 2000 Wiley-Liss, Inc.
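Bias and imprecision as used above have standard definitions: the mean signed error and the standard deviation of the errors, respectively. A minimal sketch with hypothetical repeated measurements of a known test object:

```python
import statistics

def bias_and_imprecision(measured, true):
    """Bias = mean signed error; imprecision = standard deviation of the errors."""
    errors = [m - t for m, t in zip(measured, true)]
    return statistics.mean(errors), statistics.stdev(errors)

# hypothetical repeated measurements (mm) of a 10.0 mm non-anatomical object
measured = [10.2, 10.4, 9.9, 10.5, 10.0]
true = [10.0] * 5
bias, imprecision = bias_and_imprecision(measured, true)
print(round(bias, 2), round(imprecision, 3))
```

Separating the two quantities distinguishes a systematic offset (bias) from run-to-run scatter (imprecision), which is why the study reports both for baseline and application performance.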
Control Software for the VERITAS Cerenkov Telescope System
NASA Astrophysics Data System (ADS)
Krawczynski, H.; Olevitch, M.; Sembroski, G.; Gibbs, K.
2003-07-01
The VERITAS collaboration is developing a system of initially 4 and eventually 7 Čerenkov telescopes of the 12 m diameter class for high sensitivity gamma-ray astronomy in the >50 GeV energy range. In this contribution we describe the software that controls and monitors the various VERITAS subsystems. The software uses an object-oriented approach to cope with the complexities that arise from using sub-groups of the 7 VERITAS telescopes to observe several sources at the same time. Inter-process communication is based on the CORBA Object Request Broker protocol, and watch-dog processes monitor the sub-system performance.
Effective real-time vehicle tracking using discriminative sparse coding on local patches
NASA Astrophysics Data System (ADS)
Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei
2016-01-01
A visual tracking framework comprising an object detector and a tracker is proposed, focusing on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation called DSCLP, discriminative sparse coding on local patches, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, making our method more robust in complex, realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
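DSCLP's trained dictionary and classifier are not reproduced in the abstract; as a toy illustration of greedy sparse coding against a fixed dictionary (the orthonormal identity dictionary below is an assumption chosen for clarity, not a learned patch dictionary):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy sparse coding: repeatedly pick the dictionary atom (column) most
    correlated with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs

# Toy orthonormal dictionary: 4 atoms in R^4 (columns of the identity matrix)
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, 1.0])
print(matching_pursuit(x, D))  # -> [0. 3. 0. 1.]
```

In a discriminative setting like DSCLP, the sparse coefficient vector (rather than raw pixels) is what gets classified, which is where the robustness to degraded imagery comes from.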
The 'F-complex' and MMN tap different aspects of deviance.
Laufer, Ilan; Pratt, Hillel
2005-02-01
To compare the 'F(fusion)-complex' with the Mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in case of front-fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
A SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES
Evaluation and analysis of multiple objectives are very important in designing environmentally benign processes. They require a systematic procedure for solving multi-objective decision-making problems due to the complex nature of the problems and the need for complex assessment....
A neighboring structure reconstructed matching algorithm based on LARK features
NASA Astrophysics Data System (ADS)
Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-11-01
To address the low contrast and high noise of infrared images, as well as the randomness and ambient occlusion of objects within them, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of the local window are modeled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body dataset, with a lower false detection rate than conventional methods in complex natural scenes.
Action and object word writing in a case of bilingual aphasia.
Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil
2012-01-01
We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency, and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e. difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury; in bilingual speakers, such impairments affect both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.
Using Risk Assessment Methodologies to Meet Management Objectives
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2015-01-01
Corporate and program objectives focus on desired performance and results. Management decisions that affect how to meet these objectives now involve a complex mix of technology, safety issues, operations, process considerations, employee considerations, regulatory requirements, financial concerns, and legal issues. Risk assessments are a tool for decision makers to understand potential consequences and be in a position to reduce, mitigate, or eliminate costly mistakes or catastrophic failures. Using a risk assessment methodology is only a starting point: a risk assessment program provides management with important input to the decision-making process. A proactive organization looks to the future to avoid problems; a reactive organization can be blindsided by risks that could have been avoided. You get out what you put in, and how useful the program is will be up to the individual organization.
Connectionist model-based stereo vision for telerobotics
NASA Technical Reports Server (NTRS)
Hoff, William; Mathis, Donald
1989-01-01
Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
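The parameter-network idea described in this abstract can be illustrated with a minimal sketch: each unit stands for one (h, m_x, m_y) hypothesis for a planar patch z = h + m_x*x + m_y*y, and its activity accumulates soft votes from 3D points consistent with that plane. This is an assumption-laden illustration, not the authors' network; the function name, parameter grids and Gaussian voting rule are all hypothetical.

```python
import numpy as np

# Illustrative sketch of a "parameter network" for a planar patch
# z = h + mx*x + my*y (not the authors' implementation; the Gaussian
# voting rule is an assumption). Each unit corresponds to one
# (h, mx, my) combination; its activity is the confidence in that plane.
def parameter_network(points, h_vals, mx_vals, my_vals, sigma=0.1):
    H, MX, MY = np.meshgrid(h_vals, mx_vals, my_vals, indexing="ij")
    activity = np.zeros_like(H)
    for x, y, z in points:
        residual = z - (H + MX * x + MY * y)               # plane-fit error per unit
        activity += np.exp(-residual**2 / (2 * sigma**2))  # soft vote
    return activity

# 3D points sampled from the plane z = 1 + 0.5*x
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(50, 2))
points = [(x, y, 1.0 + 0.5 * x) for x, y in xy]

h_vals = np.linspace(0.0, 2.0, 21)
mx_vals = np.linspace(-1.0, 1.0, 21)
my_vals = np.linspace(-1.0, 1.0, 21)
act = parameter_network(points, h_vals, mx_vals, my_vals)
i, j, k = np.unravel_index(act.argmax(), act.shape)
h_hat, mx_hat, my_hat = h_vals[i], mx_vals[j], my_vals[k]
```

The most active unit recovers the generating plane; in the full scheme described above, unit activities would instead relax under gradient descent on an energy function rather than by a single voting pass.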
Bioavailability enhancement of curcumin by complexation with phosphatidyl choline.
Gupta, Nishant Kumar; Dixit, Vinod Kumar
2011-05-01
Curcumin is a major constituent of the rhizomes of Curcuma longa. Pharmacokinetic studies of curcumin reveal its poor absorption through the intestine. The objective of the present study was to enhance the bioavailability of curcumin by its complexation with phosphatidyl choline (PC). A complex of curcumin was prepared with PC and characterized on the basis of solubility, melting point, differential scanning calorimetry, thin layer chromatography, and infrared spectroscopic analysis. The everted intestine sac technique was used to study ex vivo drug absorption of the curcumin-PC (CU-PC) complex and plain curcumin. Pharmacokinetic studies were performed in rats, and the hepatoprotective activity of the CU-PC complex was also compared with curcumin and a CU-PC physical mixture in isolated rat hepatocytes. Analytical reports along with spectroscopic data revealed the formation of the complex. The results of the ex vivo study show that the CU-PC complex has significantly increased absorption compared with curcumin, when given in equimolar doses. The complex showed enhanced bioavailability, improved pharmacokinetics, and increased hepatoprotective activity as compared with curcumin or the CU-PC physical mixture. The enhanced bioavailability of the CU-PC complex may be due to the amphiphilic nature of the complex, which greatly enhances the water and lipid solubility of curcumin. The present study clearly indicates the superiority of the complex over curcumin in terms of better absorption, enhanced bioavailability, and improved pharmacokinetics. Copyright © 2010 Wiley-Liss, Inc.
Canaway, Rachel; Bismark, Marie; Dunt, David; Kelaher, Margaret
2017-06-07
Public reporting of government-funded (public) hospital performance data was mandated in Australia in 2011. Studies suggest some benefit associated with such public reporting, but also considerable scope to improve reporting systems. In 2015, a purposive sample of 41 expert informants was interviewed, representing consumer, provider and purchaser perspectives across Australia's public and private health sectors, to ascertain expert opinion on the utility and impact of public reporting of health service performance. Qualitative data were thematically analysed with a focus on reporting perceived strengths of and barriers to public reporting of hospital performance data (PR). Many more weaknesses and barriers to PR were identified than strengths. Barriers were: conceptual (unclear objective, audience and reporting framework); systems-level (including lack of consumer choice, lack of consumer and clinician involvement, jurisdictional barriers, lack of a mandate for private sector reporting); technical and resource-related (including data complexity; lack of data relevance, consistency and rigour); and socio-cultural (including provider resistance to public reporting, poor consumer health literacy, lack of consumer empowerment). Perceptions of the Australian experience of PR highlight important issues in its implementation that can provide lessons for Australia and elsewhere. A considerable weakness of PR in Australia is that the public are often not considered its major audience, resulting in information ineffectually framed to meet the objective of PR informing consumer decision-making about treatment options. Greater alignment is needed between the primary objective of PR, its audience and audience needs; more than one system of PR might be necessary to meet different audience needs and objectives. Further research is required to assess objectively the potency of the barriers to PR suggested by our panel of informants.
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, will show the characteristics that distinguish one object from another object in the image. Characteristics used in object recognition in an image can be color, shape, pattern, texture and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction that characterizes objects through the analysis of simple curves and a search over the object's chain code. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex-curve characteristics, with a maximum of four branches, to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
Plescia, Fulvio; Sardo, Pierangelo; Rizzo, Valerio; Cacace, Silvana; Marino, Rosa Anna Maria; Brancato, Anna; Ferraro, Giuseppe; Carletti, Fabio; Cannizzaro, Carla
2014-01-01
Neurosteroids can alter neuronal excitability by interacting with specific neurotransmitter receptors, thus affecting several functions such as cognition and emotionality. In this study we investigated, in adult male rats, the effects of the acute administration of pregnenolone-sulfate (PREGS) (10 mg/kg, s.c.) on cognitive processes using the Can test, a non-aversive spatial/visual task which allows the assessment of both spatial orientation-acquisition and object discrimination in a simple and in a complex version of the visual task. Electrophysiological recordings were also performed in vivo after acute PREGS systemic administration, in order to investigate neuronal activation in the hippocampus and the perirhinal cortex. Our results indicate that PREGS induces an improvement in spatial orientation-acquisition and in object discrimination in the simple and in the complex visual task; the behavioural responses were also confirmed by electrophysiological recordings showing a potentiation of the neuronal activity of the hippocampus and the perirhinal cortex. In conclusion, this study demonstrates that PREGS systemic administration in rats exerts cognitive-enhancing properties which involve both the acquisition and utilization of spatial information and object discrimination memory, and it correlates the observed behavioural potentiation with an increase in the neuronal firing of discrete cerebral areas critical for spatial learning and object recognition. This provides further evidence in support of the role of PREGS in exerting a protective and enhancing role on human memory. Copyright © 2013. Published by Elsevier B.V.
Fuzzy Adaptive Control for Intelligent Autonomous Space Exploration Problems
NASA Technical Reports Server (NTRS)
Esogbue, Augustine O.
1998-01-01
The principal objective of the research reported here is the re-design, analysis and optimization of our newly developed neural network fuzzy adaptive controller model for complex processes capable of learning fuzzy control rules using process data and improving its control through on-line adaptation. The learned improvement is according to a performance objective function that provides evaluative feedback; this performance objective is broadly defined to meet long-range goals over time. Although fuzzy control had proven effective for complex, nonlinear, imprecisely-defined processes for which standard models and controls are either inefficient, impractical or cannot be derived, the state of the art prior to our work showed that procedures for deriving fuzzy control were mostly ad hoc heuristics. The learning ability of neural networks was exploited to systematically derive fuzzy control, permit on-line adaptation and, in the process, optimize control. The operation of neural networks integrates very naturally with fuzzy logic. The neural networks, which were designed and tested using simulation software and simulated data followed by realistic industrial data, were reconfigured for application on several platforms as well as for the employment of improved algorithms. The statistical properties of the learning process were investigated and evaluated with standard statistical procedures (such as ANOVA, graphical analysis of residuals, etc.). The computational advantage of dynamic programming-like methods of optimal control was used to permit on-line fuzzy adaptive control. Tests for the consistency, completeness and interaction of the control rules were applied. Comparisons to other methods and controllers were made so as to identify the major advantages of the resulting controller model. Several specific modifications and extensions were made to the original controller. Additional modifications and explorations have been proposed for further study.
Some of these are in progress in our laboratory while others await additional support. All of these enhancements will improve the attractiveness of the controller as an effective tool for the on line control of an array of complex process environments.
Barta, András; Horváth, Gábor
2003-12-01
The apparent position, size, and shape of aerial objects viewed binocularly from water change as a result of the refraction of light at the water surface. Earlier studies of the refraction-distorted structure of the aerial binocular visual field of underwater observers were restricted to either vertically or horizontally oriented eyes. Here we calculate the position of the binocular image point of an aerial object point viewed by two arbitrarily positioned underwater eyes when the water surface is flat. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveae, the structure of the aerial binocular visual field is computed and visualized as a function of the relative positions of the eyes. We also analyze two erroneous representations of the underwater imaging of aerial objects that have occurred in the literature. It is demonstrated that the structure of the aerial binocular visual field of underwater observers distorted by refraction is more complex than has been thought previously.
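The geometry underlying this abstract can be sketched for a single underwater eye: for an eye at depth d below a flat surface and an aerial point at height h and horizontal distance D, the ray crosses the surface at the horizontal offset r that satisfies Snell's law. This is a hedged illustration of the standard refraction geometry, not the authors' binocular derivation; the function name and bisection approach are assumptions.

```python
import numpy as np

# Sketch (assumption, not the paper's method): find where the ray from an
# underwater eye refracts at a flat surface toward an aerial object point,
# using Snell's law n_w*sin(theta_w) = sin(theta_a) with n_w ~ 1.33.
N_WATER = 1.33

def surface_crossing(d, h, D, tol=1e-10):
    def snell_residual(r):
        sin_w = r / np.hypot(r, d)            # sine of underwater ray angle
        sin_a = (D - r) / np.hypot(D - r, h)  # sine of aerial ray angle
        return N_WATER * sin_w - sin_a
    # residual is monotone increasing in r, negative at 0, positive at D,
    # so bisection brackets the unique crossing point
    lo, hi = 0.0, D
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if snell_residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r = surface_crossing(d=1.0, h=1.0, D=2.0)
```

With d = h and r < D/2, the underwater ray is steeper than the aerial ray, which is why the aerial object appears displaced; the binocular image point in the paper is obtained by intersecting two such refracted rays, one per eye.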
An experiment on radio location of objects in near-Earth space with VLBI in 2012
NASA Astrophysics Data System (ADS)
Nechaeva, M.; Antipenko, A.; Bezrukovs, V.; Bezrukov, D.; Dementjev, A.; Dugin, N.; Konovalenko, A.; Kulishenko, V.; Liu, X.; Nabatov, A.; Nesteruk, V.; Pupillo, G.; Reznichenko, A.; Salerno, E.; Shmeld, I.; Shulga, O.; Sybiryakova, Y.; Tikhomirov, Yu.; Tkachenko, A.; Volvach, A.; Yang, W.-J.
An experiment on radar location of space debris objects using the VLBI method was carried out in April 2012. The radar VLBI experiment consisted in irradiating several space debris objects (4 rocket stages and 5 inactive satellites) with the signal of the RT-70 transmitter in Evpatoria, Ukraine. Reflected signals were received by a complex of radio telescopes in VLBI mode. The following VLBI stations took part in the observations: Ventspils (RT-32), Urumqi (RT-25), Medicina (RT-32) and Simeiz (RT-22). The experiment included measurements of the Doppler frequency shift and the delay for orbit refining, and measurements of the rotation period and sizes of objects from the amplitudes of the output interferometer signals. The cross-correlation of the VLBI data was performed on the NIRFI-4 correlator of the Radiophysical Research Institute (Nizhny Novgorod). Preliminary data processing resulted in series of Doppler frequency shifts, which comprise information on the radial velocities of the objects. Some results of the experiment are presented.
Global dynamic optimization approach to predict activation in metabolic pathways.
de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R
2014-01-06
During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives.
This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
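The multi-objective trade-off this abstract describes can be illustrated with a deliberately tiny toy model (an assumption, not the paper's global dynamic optimization method): a two-step pathway S -> I -> P where a fast conversion time competes against the cost of the total enzyme budget. The rate constants, the grid search and the characteristic-time model are all hypothetical.

```python
import numpy as np

# Toy sketch of the trade-off between conversion speed and enzyme cost in a
# two-step pathway with catalytic constants k1, k2 (assumed values; not the
# paper's model or solver).
k1, k2 = 2.0, 1.0

def conversion_time(e1, e2):
    # sum of the characteristic times of the two enzymatic steps (toy model)
    return 1.0 / (k1 * e1) + 1.0 / (k2 * e2)

def best_split(E, n=1000):
    # for a fixed enzyme budget E, grid-search the allocation e1 + e2 = E
    e1 = np.linspace(1e-6, E - 1e-6, n)
    t = conversion_time(e1, E - e1)
    i = t.argmin()
    return e1[i], t[i]

# Pareto-style trade-off: larger enzyme budgets buy faster conversion
pareto = [(E, best_split(E)[1]) for E in (0.5, 1.0, 2.0, 4.0)]
```

The best split satisfies e1/e2 = sqrt(k2/k1) in this toy, and sweeping the budget traces out the kind of trade-off curve between conflicting objectives (speed vs. enzyme investment) that the multi-objective formulation above makes explicit.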
Minimal-scan filtered backpropagation algorithms for diffraction tomography.
Pan, X; Anastasio, M A
1999-12-01
The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.
Diffraction efficiency of photothermoplastic layers for the recording of discrete holograms
NASA Technical Reports Server (NTRS)
Koreshev, S. N.; Cherkasov, Yu. A.; Kislovskiy, I. L.
1987-01-01
An experimental and theoretical study of the dependence of the diffraction efficiency eta of a digital phase Fourier hologram of a point object on the amount of deformation delta and on the parameters of the discrete structure representing the hologram is detailed. An expression is given for eta. Experiments were performed on photothermoplastic layers based on polyvinyl carbazole and trinitrofluorenone charge transfer complexes. The maximum eta, 2%, is found at delta = 0.56 micron.
The Marketing Audit as a Method of the Evaluation of the Marketing Plan
NASA Astrophysics Data System (ADS)
Vaňa, Kamil; Černá, Ľubica
2012-12-01
The growing complexity of the current market environment calls for a more systematic process for evaluating organizational marketing performance in a dynamic market. This paper presents the marketing audit as a comprehensive assessment of all facets of marketing operations in an organization, including the systematic evaluation of plans, objectives, strategies, activities and organizational structure as well as marketing staff.
Expertise for upright faces improves the precision but not the capacity of visual working memory.
Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank
2014-10-01
Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.
Corbett, Faye; Jefferies, Elizabeth; Ehsan, Sheeba
2009-01-01
Disorders of semantic cognition in different neuropsychological conditions result from diverse areas of brain damage and may have different underlying causes. This study used a comparative case-series design to examine the hypothesis that relatively circumscribed bilateral atrophy of the anterior temporal lobe in semantic dementia (SD) produces a gradual degradation of core semantic representations, whilst a deficit of cognitive control produces multi-modal semantic impairment in a subset of patients with stroke aphasia following damage involving the left prefrontal cortex or regions in and around the temporoparietal area; this condition, which transcends traditional aphasia classifications, is referred to as ‘semantic aphasia’ (SA). There have been very few direct comparisons of these patient groups to date and these previous studies have focussed on verbal comprehension. This study used a battery of object-use tasks to extend this line of enquiry into the non-verbal domain for the first time. A group of seven SA patients were identified who failed both word and picture versions of a semantic association task. These patients were compared with eight SD cases. Both groups showed significant deficits in object use but these impairments were qualitatively different. Item familiarity correlated with performance on object-use tasks for the SD group, consistent with the view that core semantic representations are degrading in this condition. In contrast, the SA participants were insensitive to the familiarity of the objects. Further, while the SD patients performed consistently across tasks that tapped different aspects of knowledge and object use for the same items, the performance of the SA participants reflected the control requirements of the tasks. Single object use was relatively preserved in SA but performance on complex mechanical puzzles was substantially impaired. 
Similarly, the SA patients were able to complete straightforward item matching tasks, such as word-picture matching, but performed more poorly on associative picture-matching tasks, even when the tests involved the same items. The two groups of patients also showed a different pattern of errors in object use. SA patients made substantial numbers of erroneous intrusions in their demonstrations, such as inappropriate object movements. In contrast, response omissions were more common in SD. This study provides converging evidence for qualitatively different impairments of semantic cognition in SD and SA, and uniquely demonstrates this pattern in a non-verbal expressive domain—object use. PMID:19506072
Petersen, Laura A; Woodard, Lechauncy D; Henderson, Louise M; Urech, Tracy H; Pietz, Kenneth
2009-06-16
There is concern that performance measures, patient ratings of their care, and pay-for-performance programs may penalize healthcare providers of patients with multiple chronic coexisting conditions. We examined the impact of coexisting conditions on the quality of care for hypertension and patient perception of overall quality of their health care. We classified 141 609 veterans with hypertension into 4 condition groups: those with hypertension-concordant (diabetes mellitus, ischemic heart disease, dyslipidemia) and/or -discordant (arthritis, depression, chronic obstructive pulmonary disease) conditions or neither. We measured blood pressure control at the index visit, overall good quality of care for hypertension, including a follow-up interval, and patient ratings of satisfaction with their care. Associations between condition type and number of coexisting conditions on receipt of overall good quality of care were assessed with logistic regression. The relationship between patient assessment and objective measures of quality was assessed. Of the cohort, 49.5% had concordant-only comorbidities, 8.7% had discordant-only comorbidities, 25.9% had both, and 16.0% had none. Odds of receiving overall good quality after adjustment for age were higher for those with concordant comorbidities (odds ratio, 1.78; 95% confidence interval, 1.70 to 1.87), discordant comorbidities (odds ratio, 1.32; 95% confidence interval, 1.23 to 1.41), or both (odds ratio, 2.25; 95% confidence interval, 2.13 to 2.38) compared with neither. Findings did not change after adjustment for illness severity and/or number of primary care and specialty care visits. Patient assessment of quality did not vary by the presence of coexisting conditions and was not related to objective ratings of quality of care. Contrary to expectations, patients with greater complexity had higher odds of receiving high-quality care for hypertension. 
Subjective ratings of care did not vary with the presence or absence of comorbid conditions. Our findings should be reassuring to those who care for the most medically complex patients and are concerned that they will be penalized by performance measures or patient ratings of their care.
NASA Astrophysics Data System (ADS)
Shaat, Musbah; Bader, Faouzi
2010-12-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
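The flavour of capacity maximization under a total-power budget plus per-subcarrier interference limits can be sketched with classic water-filling where each channel's power is capped. This is a generic illustration of the constraint structure, not the paper's algorithm; the gain values, cap values and function name are assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's suboptimal algorithm): sum-rate power
# allocation under a total-power budget, with per-subcarrier caps standing in
# for the interference limits toward the primary users. Solved by bisecting
# the water level mu, where p_i = clip(mu - 1/g_i, 0, cap_i).
def capped_waterfilling(gains, p_total, caps, iters=100):
    lo = 0.0
    hi = p_total + (1.0 / gains).max() + caps.max()  # upper bound on water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - 1.0 / gains, 0.0, caps)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    # lo is always feasible, so the returned allocation respects the budget
    return np.clip(lo - 1.0 / gains, 0.0, caps)

gains = np.array([2.0, 1.0, 0.5])   # per-subcarrier channel gains (assumed)
caps = np.array([0.6, 0.6, 0.6])    # interference-driven per-channel caps (assumed)
p = capped_waterfilling(gains, 1.0, caps)
rate = np.log2(1.0 + gains * p).sum()
```

Stronger subcarriers receive more power until they hit their interference caps, which is the qualitative behaviour the low-complexity allocation in the abstract must also exhibit.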
Modelling and Order of Acoustic Transfer Functions Due to Reflections from Augmented Objects
NASA Astrophysics Data System (ADS)
Kuster, Martin; de Vries, Diemer
2006-12-01
It is commonly accepted that the sound reflections from real physical objects are much more complicated than what usually is, or can be, modelled by room acoustics modelling software. The main reason for this limitation is the level of detail inherent in the physical object in terms of its geometrical and acoustic properties. In the present paper, the complexity of the sound reflections from a corridor wall is investigated by modelling the corresponding acoustic transfer functions at several receiver positions in front of the wall. The complexity of different wall configurations has been examined, with the changes achieved by altering the wall's acoustic image. The results show that for a homogeneous flat wall the complexity is significant, and for a wall including various smaller objects the complexity is highly dependent on the position of the receiver with respect to the objects.
Technical Efficiency and Organ Transplant Performance: A Mixed-Method Approach
de-Pablos-Heredero, Carmen; Fernández-Renedo, Carlos; Medina-Merodio, Jose-Amelio
2015-01-01
Mixed-methods research is valuable for understanding complex processes. Organ transplants are complex processes in need of improved final performance in times of budgetary restrictions. The main objective of this article is to use a mixed-method approach to quantify the technical efficiency and excellence achieved in organ transplant systems and to demonstrate the influence of organizational structures and internal processes on the observed technical efficiency. The results show that it is possible to implement mechanisms for the measurement of the different components by making use of quantitative and qualitative methodologies. The analyses show a positive relationship between the levels of the Baldrige indicators and the observed technical efficiency in the donation and transplant units of the 11 analyzed hospitals. It is therefore possible to conclude that high levels on the Baldrige indexes are a necessary condition for reaching an increased level of service. PMID:25950653
High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS
NASA Astrophysics Data System (ADS)
Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian
2017-09-01
In its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, its authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chain, and provide evaluation results on the test corpus selected by the JPEG committee.
Dynamic Simulation of VEGA SRM Bench Firing By Using Propellant Complex Characterization
NASA Astrophysics Data System (ADS)
Di Trapani, C. D.; Mastrella, E.; Bartoccini, D.; Squeo, E. A.; Mastroddi, F.; Coppotelli, G.; Linari, M.
2012-07-01
During the VEGA launcher development, from 2004 up to now, 8 firing tests have been performed at Salto di Quirra (Sardinia, Italy) and Kourou (French Guiana) with the objective of characterizing and qualifying the Zefiro and P80 Solid Rocket Motors (SRMs). The VEGA launcher configuration foresees 3 solid stages based on the P80, Z23 and Z9 Solid Rocket Motors respectively. One of the primary objectives of the firing tests is to correctly characterize the dynamic response of the SRM in order to apply such a characterization to the predictions and simulations of the VEGA launch dynamic environment. Considering that the solid propellant is around 90% of the SRM mass, it is very important to characterize it dynamically and to increase the confidence in the simulation of the dynamic levels transmitted to the LV upper part from the SRMs. The activity is articulated in three parts: • consolidation of an experimental method for the characterization of the complex dynamic elasticity modulus of visco-elastic materials applicable to the SRM propellant operative conditions • introduction of the complex dynamic elasticity modulus in a numerical FEM benchmark based on the MSC NASTRAN solver • analysis of the effect of the introduction of the complex dynamic elasticity modulus in the Zefiro FEM, focusing on reproducing experimental firing test data with the numerical approach.
Evaluation of partial coherence correction in X-ray ptychography
Burdet, Nicolas; Shi, Xiaowen; Parks, Daniel; ...
2015-02-23
Coherent X-ray Diffraction Imaging (CDI) and X-ray ptychography both heavily rely on the high degree of spatial coherence of the X-ray illumination for sufficient experimental data quality for reconstruction convergence. Nevertheless, the majority of the available synchrotron undulator sources have a limited degree of partial coherence, leading to reduced data quality and a lower speckle contrast in the coherent diffraction patterns. It is still an open question whether experimentalists should compromise the coherence properties of an X-ray source in exchange for a higher flux density at a sample, especially when some materials of scientific interest are relatively weak scatterers. A previous study has suggested that in CDI, the best strategy for the study of strong phase objects is to maintain a high degree of coherence of the illuminating X-rays because of the broadening of solution space resulting from the strong phase structures. In this article, we demonstrate the first systematic analysis of the effectiveness of partial coherence correction in ptychography as a function of the coherence properties, degree of complexity of illumination (degree of phase diversity of the probe) and sample phase complexity. We have also performed analysis of how well ptychographic algorithms refine X-ray probe and complex coherence functions when those variables are unknown at the start of reconstructions, for noise-free simulated data, in the case of both real-valued and highly-complex objects.
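The loss of speckle contrast under partial coherence, which this abstract takes as its starting point, is commonly modelled by convolving the coherent far-field intensity with a kernel set by the source coherence. The sketch below illustrates that standard model only; it is not the paper's correction algorithm, and the Gaussian kernel, array sizes and function names are assumptions.

```python
import numpy as np

# Sketch of the standard partial-coherence model (assumption, not the paper's
# method): measured intensity = coherent diffraction pattern convolved with a
# kernel whose width grows as the coherence of the illumination degrades.
def partially_coherent_intensity(obj, kernel_width_px):
    I = np.abs(np.fft.fftshift(np.fft.fft2(obj))) ** 2   # coherent intensity
    n = obj.shape[0]
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x)
    kern = np.exp(-(X**2 + Y**2) / (2.0 * kernel_width_px**2))
    kern /= kern.sum()                                   # preserve total flux
    # circular convolution via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(np.fft.ifftshift(kern))))

rng = np.random.default_rng(1)
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))   # strong-phase object

def contrast(I):
    return I.std() / I.mean()                            # speckle contrast

c_sharp = contrast(partially_coherent_intensity(obj, 0.3))
c_blur = contrast(partially_coherent_intensity(obj, 3.0))
```

A wider kernel (lower coherence) washes out the speckle, which is exactly the data-quality degradation that partial-coherence correction in ptychographic reconstruction attempts to undo.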
Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds
NASA Astrophysics Data System (ADS)
Roynard, X.; Deschaud, J.-E.; Goulette, F.
2016-06-01
Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The appeal of working on images is that processing is much faster, proven and robust. However, there may be a loss of information in complex 3D cases: for example when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region-growing using an octree for the segmentation, and specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in the complex 3D cases.
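The abstract does not give the paper's exact region-growing implementation; the sketch below illustrates the general idea under simplifying assumptions (a uniform voxel grid standing in for the octree, 26-connectivity as the growth criterion — both assumptions for illustration):

```python
from collections import deque

def voxel_key(p, size):
    """Map a 3D point to the integer index of the voxel containing it."""
    return tuple(int(c // size) for c in p)

def region_grow(points, voxel=1.0):
    """Group points into segments by flood-filling adjacent occupied voxels."""
    grid = {}
    for i, p in enumerate(points):
        grid.setdefault(voxel_key(p, voxel), []).append(i)
    seen, segments = set(), []
    for start in grid:
        if start in seen:
            continue
        seg, queue = [], deque([start])
        seen.add(start)
        while queue:
            k = queue.popleft()
            seg.extend(grid[k])
            # grow into the 26 neighbouring voxels, if occupied
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (k[0] + dx, k[1] + dy, k[2] + dz)
                        if n in grid and n not in seen:
                            seen.add(n)
                            queue.append(n)
        segments.append(seg)
    return segments
```

An octree replaces the flat dictionary when the cloud is very large, but the growth logic is the same: segments end where no occupied neighbouring cell remains.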
An investigation of reasoning by analogy in schizophrenia and autism spectrum disorder
Krawczyk, Daniel C.; Kandalaft, Michelle R.; Didehbani, Nyaz; Allen, Tandra T.; McClelland, M. Michelle; Tamminga, Carol A.; Chapman, Sandra B.
2014-01-01
Relational reasoning ability relies upon both cognitive and social factors. We compared analogical reasoning performance in healthy controls (HC) to performance in individuals with Autism Spectrum Disorder (ASD), and individuals with schizophrenia (SZ). The experimental task required participants to find correspondences between drawings of scenes. Participants were asked to infer which item within one scene best matched a relational item within the second scene. We varied relational complexity, presence of distraction, and type of objects in the analogies (living or non-living items). We hypothesized that the cognitive differences present in SZ would reduce relational inferences relative to ASD and HC. We also hypothesized that both SZ and ASD would show lower performance on living item problems relative to HC due to lower social function scores. Overall accuracy was higher for HC relative to SZ, consistent with prior research. Across groups, higher relational complexity reduced analogical responding, as did the presence of non-living items. Separate group analyses revealed that the ASD group was less accurate at making relational inferences in problems that involved mainly non-living items and when distractors were present. The SZ group showed differences in problem type similar to the ASD group. Additionally, we found significant correlations between social cognitive ability and analogical reasoning, particularly for the SZ group. These results indicate that differences in cognitive and social abilities impact the ability to infer analogical correspondences along with numbers of relational elements and types of objects present in the problems. PMID:25191240
Environmentally-Preferable Launch Coatings
NASA Technical Reports Server (NTRS)
Kessel, Kurt R.
2015-01-01
The Ground Systems Development and Operations (GSDO) Program at NASA Kennedy Space Center (KSC), Florida, has the primary objective of modernizing and transforming the launch and range complex at KSC to benefit current and future NASA programs along with other emerging users. Described as the launch support and infrastructure modernization program in the NASA Authorization Act of 2010, the GSDO Program will develop and implement shared infrastructure and process improvements to provide more flexible, affordable, and responsive capabilities to a multi-user community. In support of NASA and the GSDO Program, the objective of this project is to determine the feasibility of environmentally friendly corrosion protecting coatings for launch facilities and ground support equipment (GSE). The focus of the project is corrosion resistance and survivability with the goal to reduce the amount of maintenance required to preserve the performance of launch facilities while reducing mission risk. The project compares coating performance of the selected alternatives to existing coating systems or standards.
A Neurobehavioral Model of Flexible Spatial Language Behaviors
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224
DeCaro, Renee; Peelle, Jonathan E.; Grossman, Murray; Wingfield, Arthur
2016-01-01
Reduced hearing acuity is among the most prevalent of chronic medical conditions among older adults. An experiment is reported in which comprehension of spoken sentences was tested for older adults with good hearing acuity or with a mild-to-moderate hearing loss, and young adults with age-normal hearing. Comprehension was measured by participants’ ability to determine the agent of an action in sentences that expressed this relation with a syntactically less complex subject-relative construction or a syntactically more complex object-relative construction. Agency determination was further challenged by inserting a prepositional phrase into sentences between the person performing an action and the action being performed. As a control, prepositional phrases of equivalent length were also inserted into sentences in a non-disruptive position. Effects on sentence comprehension of age, hearing acuity, prepositional phrase placement and sound level of stimulus presentations appeared only for comprehension of sentences with the more syntactically complex object-relative structures. Working memory as tested by reading span scores accounted for a significant amount of the variance in comprehension accuracy. Once working memory capacity and hearing acuity were taken into account, chronological age among the older adults contributed no further variance to comprehension accuracy. Results are discussed in terms of the positive and negative effects of sensory–cognitive interactions in comprehension of spoken sentences and lend support to a framework in which domain-general executive resources, notably verbal working memory, play a role in both linguistic and perceptual processing. PMID:26973557
A Heuristic Bioinspired for 8-Piece Puzzle
NASA Astrophysics Data System (ADS)
Machado, M. O.; Fabres, P. A.; Melo, J. C. L.
2017-10-01
This paper investigates a mathematical model inspired by nature, and presents a metaheuristic that is efficient in improving the performance of an informed search that uses the A* strategy with a general search tree as its data structure. The working hypothesis suggests that the investigated metaheuristic is optimal in nature and may be promising for minimizing the computational resources required by a goal-based agent in solving problems of high computational complexity (the n-piece puzzle), as well as for the optimization of objective functions for local search agents. The objective of this work is to describe qualitatively the characteristics and properties of the investigated mathematical model, correlating the main concepts of the A* function with the significant variables of the metaheuristic used. The article shows that the amount of memory required to perform this search when using the metaheuristic is less than when using the A* function to evaluate the nodes of a general search tree for the eight-piece puzzle. It is concluded that the metaheuristic must be parameterized according to the chosen heuristic and the level of the tree that contains the possible solutions to the chosen problem.
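The metaheuristic itself is not specified in the abstract, but the baseline it compares against — A* on the eight-piece puzzle — can be sketched compactly. The Manhattan-distance heuristic below is a common textbook choice and an assumption, not necessarily the one used in the paper:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def manhattan(state):
    """Sum of tile distances from their goal positions (admissible heuristic)."""
    d = 0
    for i, v in enumerate(state):
        if v:
            gi = v - 1  # goal index of tile v
            d += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return d

def neighbours(state):
    """States reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start):
    """Return the length of an optimal solution, expanding by f = g + h."""
    open_set = [(manhattan(start), 0, start)]
    g_best = {start: 0}
    while open_set:
        f, g, s = heapq.heappop(open_set)
        if s == GOAL:
            return g
        for n in neighbours(s):
            if g + 1 < g_best.get(n, 1 << 30):
                g_best[n] = g + 1
                heapq.heappush(open_set, (g + 1 + manhattan(n), g + 1, n))
    return None
```

The memory cost the paper measures corresponds to the size of `open_set` and `g_best`; a metaheuristic that prunes or reorders expansions reduces exactly these structures.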
Reactive underwater object inspection based on artificial electric sense.
Lebastard, Vincent; Boyer, Frédéric; Lanneau, Sylvain
2016-07-26
Weakly electric fish can perform complex cognitive tasks based on extracting information from blurry electric images projected from their immediate environment onto their electro-sensitive skin. In particular they can be trained to recognize the intrinsic properties of objects such as their shape, size and electric nature. They do this by means of novel perceptual strategies that exploit the relations between the physics of a self-generated electric field, their body morphology, and the ability to perform specific movements termed probing motor acts (PMAs). In this article we artificially reproduce and combine these PMAs to build an autonomous control strategy that allows an artificial electric sensor to find electrically contrasted objects, and to orbit around them based on a minimum set of measurements and simple reactive feedback control laws of the probe's motion. The approach does not require any simulation models and could be implemented on an autonomous underwater vehicle (AUV) equipped with artificial electric sense. The AUV has only to satisfy certain simple geometric properties, such as bi-laterally (left/right) symmetrical electrodes, and possess a reasonably high aspect (length/width) ratio.
Disrupting frontal eye-field activity impairs memory recall.
Wantz, Andrea L; Martarelli, Corinna S; Cazzoli, Dario; Kalla, Roger; Müri, René; Mast, Fred W
2016-04-13
A large body of research demonstrated that participants preferably look back to the encoding location when retrieving visual information from memory. However, the role of this 'looking back to nothing' is still debated. The goal of the present study was to extend this line of research by examining whether an important area in the cortical representation of the oculomotor system, the frontal eye field (FEF), is involved in memory retrieval. To interfere with the activity of the FEF, we used inhibitory continuous theta burst stimulation (cTBS). Before stimulation was applied, participants encoded a complex scene and performed a short-term (immediately after encoding) or long-term (after 24 h) recall task, just after cTBS over the right FEF or sham stimulation. cTBS did not affect overall performance, but stimulation and statement type (object vs. location) interacted. cTBS over the right FEF tended to impair object recall sensitivity, whereas there was no effect on location recall sensitivity. These findings suggest that the FEF is involved in retrieving object information from scene memory, supporting the hypothesis that the oculomotor system contributes to memory recall.
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using six algorithms: Borg, MOEA/D, ε-MOEA, ε-NSGA-II, GDE3, and NSGA-II to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds.
We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
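The lake dynamics behind this benchmark are commonly written as a one-dimensional recurrence in the phosphorus level X: loading plus a sigmoidal natural recycling term minus linear outflow. The sketch below uses this standard shallow-lake formulation; the parameter values (b = 0.42, q = 2) are illustrative assumptions, not necessarily those used in the study:

```python
def lake_step(x, a, b=0.42, q=2.0):
    """One year of shallow-lake phosphorus dynamics:
    anthropogenic loading a, recycling x^q/(1+x^q), outflow b*x."""
    return x + a + x**q / (1.0 + x**q) - b * x

def simulate(loadings, x0=0.0, b=0.42, q=2.0):
    """Trajectory of phosphorus levels under a sequence of annual loadings."""
    xs = [x0]
    for a in loadings:
        xs.append(lake_step(xs[-1], a, b, q))
    return xs
```

The nonlinearity matters for benchmarking: with zero loading the clean state x = 0 is a fixed point, while sustained loading pushes the lake past the recycling threshold toward a high-phosphorus (eutrophic) equilibrium that outflow alone cannot undo.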
Topicality and Complexity in the Acquisition of Norwegian Object Shift
ERIC Educational Resources Information Center
Anderssen, Merete; Bentzen, Kristine; Rodina, Yulia
2012-01-01
This article investigates the acquisition of object shift in Norwegian child language. We show that object shift is complex derivationally, distributionally, and referentially, and propose a new analysis in terms of IP-internal topicalization. The results of an elicited production study with 27 monolingual Norwegian-speaking children (ages…
A Manual Segmentation Tool for Three-Dimensional Neuron Datasets.
Magliaro, Chiara; Callara, Alejandro L; Vanello, Nicola; Ahluwalia, Arti
2017-01-01
To date, automated or semi-automated software and algorithms for segmentation of neurons from three-dimensional imaging datasets have had limited success. The gold standard for neural segmentation is considered to be the manual isolation performed by an expert. To facilitate the manual isolation of complex objects from image stacks, such as neurons in their native arrangement within the brain, a new Manual Segmentation Tool (ManSegTool) has been developed. ManSegTool allows users to load an image stack, scroll through the images and manually draw the structures of interest stack-by-stack. Users can eliminate unwanted regions or split structures (i.e., branches from different neurons that are too close to each other but, to the experienced eye, clearly belong to a unique cell), view the object in 3D and save the results obtained. The tool can be used for testing the performance of a single-neuron segmentation algorithm or to extract complex objects, where the available automated methods still fail. Here we describe the software's main features and then show an example of how ManSegTool can be used to segment neuron images acquired using a confocal microscope. In particular, expert neuroscientists were asked to segment different neurons, from which morphometric variables were subsequently extracted as a benchmark for precision. In addition, a literature-defined index for evaluating the goodness of segmentation was used as a benchmark for accuracy. Neocortical layer axons from a DIADEM challenge dataset were also segmented with ManSegTool and compared with the manual "gold standard" generated for the competition.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design is becoming more and more multifaceted, integrated, and complex, the traditional single-objective optimization approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level metaheuristics, developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level metaheuristics so as to increase the certainty of reaching a globally optimal solution. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level metaheuristics in this study. The performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require the simultaneous optimization of multiple conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
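A multiobjective scheme like the one described returns a Pareto front rather than a single optimum, and the core of any such scheme is the dominance test. A minimal sketch (minimization convention assumed; not the paper's implementation):

```python
def dominates(u, v):
    """u dominates v (minimization): no worse in every objective, strictly
    better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In a hyper-heuristic, each low-level metaheuristic proposes candidate solutions, and this filter decides which survive into the reported front.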
Rotariu, Mariana; Filep, R; Turnea, M; Ilea, M; Arotăriţei, D; Popescu, Marilena
2015-01-01
Prosthetic application is a highly complex process. Modeling and simulation of biomechanical processes in orthopedics is certainly a field of interest in current medical research. Optimization of the socket in order to improve the patient's quality of life is a major objective in prosthetic rehabilitation. A variety of numerical methods for prosthetic applications have been developed and studied. An objective method is proposed to evaluate the performance of a prosthetic patient according to the surface pressure map over the residual limb. The friction coefficient due to the various liners used in transtibial and transfemoral prostheses is also taken into account. Bio-based modeling and mathematical simulation allow the design, construction and optimization of the contact between the prosthetic socket and the residual limb of the amputated patient, considering data collected and processed in real time and non-invasively. The von Mises stress distribution in the muscle flap tissue at the bone ends shows a larger region subjected to elevated von Mises stresses in the muscle tissue underlying longer truncated bones. The finite element method was used to conduct a stress analysis and show the force distribution along the device. The results contribute to a better understanding of the design of an optimized prosthesis that increases the patient's performance, along with a good choice of liner made of an appropriate material that fits a particular stump well. The study of prosthetic application is an exciting and important research topic and will profit considerably from theoretical input; these results also call for permanent collaboration between mathematics and medical orthopedics.
Attention and reach-to-grasp movements in Parkinson's disease.
Lu, Cathy; Bharmal, Aamir; Kiss, Zelma H; Suchowersky, Oksana; Haffenden, Angela M
2010-08-01
The role of attention in grasping movements directed at common objects has not been examined in Parkinson's disease (PD), though these movements are critical to activities of daily living. Our primary objective was to determine whether patients with PD demonstrate automaticity in grasping movements directed toward common objects. Automaticity is assumed when tasks can be performed with little or no interference from concurrent tasks. Grasping performance in three patient groups (newly diagnosed, moderate, and advanced/surgically treated PD) on and off of their medication or deep brain stimulation was compared to performance in an age-matched control group. Automaticity was demonstrated by the absence of a decrement in grasping performance when attention was consumed by a concurrent spatial-visualization task. Only the control group and newly diagnosed PD group demonstrated automaticity in their grasping movements. The moderate and advanced PD groups did not demonstrate automaticity. Furthermore, the well-known effects of pharmacotherapy and surgical intervention on movement speed and muscle activation patterns did not appear to reduce the impact of attention-demanding tasks on grasping movements in those with moderate to advanced PD. By the moderate stage of PD, grasping is an attention-demanding process; this change is not ameliorated by dopaminergic or surgical treatments. These findings have important implications for activities of daily living, as devoting attention to the simplest of daily tasks would interfere with complex activities and potentially exacerbate fatigue.
Motor cortical activity changes during neuroprosthetic-controlled object interaction.
Downey, John E; Brane, Lucas; Gaunt, Robert A; Tyler-Kabara, Elizabeth C; Boninger, Michael L; Collinger, Jennifer L
2017-12-05
Brain-computer interface (BCI) controlled prosthetic arms are being developed to restore function to people with upper-limb paralysis. This work provides an opportunity to analyze human cortical activity during complex tasks. Previously we observed that BCI control became more difficult during interactions with objects, although we did not quantify the neural origins of this phenomenon. Here, we investigated how motor cortical activity changed in the presence of an object independently of the kinematics that were being generated, using intracortical recordings from two people with tetraplegia. After identifying a population-wide increase in neural firing rates that corresponded with the hand being near an object, we developed an online scaling feature in the BCI system that operated without knowledge of the task. Online scaling increased the ability of two subjects to control the robotic arm when reaching to grasp and transport objects. This work suggests that neural representations of the environment, in this case the presence of an object, are strongly and consistently represented in motor cortex but can be accounted for to improve BCI performance.
The Impact of Different Environmental Conditions on Cognitive Function: A Focused Review
Taylor, Lee; Watkins, Samuel L.; Marshall, Hannah; Dascombe, Ben J.; Foster, Josh
2016-01-01
Cognitive function defines performance in objective tasks that require conscious mental effort. Extreme environments, namely heat, hypoxia, and cold, can all alter human cognitive function due to a variety of psychological and/or biological processes. The aims of this Focused Review were to discuss: (1) the current state of knowledge on the effects of heat, hypoxic and cold stress on cognitive function, (2) the potential mechanisms underpinning these alterations, and (3) plausible interventions that may maintain cognitive function upon exposure to each of these environmental stressors. The available evidence suggests that the effects of heat, hypoxia, and cold stress on cognitive function are both task and severity dependent. Complex tasks are particularly vulnerable to extreme heat stress, whereas both simple and complex task performance appear to be vulnerable even at moderate altitudes. Cold stress also appears to negatively impact both simple and complex task performance; however, the research in this area is sparse in comparison to heat and hypoxia. In summary, this focused review provides updated knowledge regarding the effects of extreme environmental stressors on cognitive function and their biological underpinnings. Tyrosine supplementation may help individuals maintain cognitive function in very hot, hypoxic, and/or cold conditions. However, more research is needed to clarify these and other postulated interventions. PMID:26779029
Park, George D; Reed, Catherine L
2015-02-01
Researchers acknowledge the interplay between action and attention, but typically consider action as a response to successful attentional selection or the correlation of performance on separate action and attention tasks. We investigated how concurrent action with spatial monitoring affects the distribution of attention across the visual field. We embedded a functional field of view (FFOV) paradigm with concurrent central object recognition and peripheral target localization tasks in a simulated driving environment. Peripheral targets varied across 20-60 deg eccentricity at 11 radial spokes. Three conditions assessed the effects of visual complexity and concurrent action on the size and shape of the FFOV: (1) with no background, (2) with driving background, and (3) with driving background and vehicle steering. The addition of visual complexity slowed task performance and reduced the FFOV size but did not change the baseline shape. In contrast, the addition of steering produced not only shrinkage of the FFOV, but also changes in the FFOV shape. Nonuniform performance decrements occurred in proximal regions used for the central task and for steering, independent of interference from context elements. Multifocal attention models should consider the role of action and account for nonhomogeneities in the distribution of attention. © 2015 SAGE Publications.
Quantitative assessment of 12-lead ECG synthesis using CAVIAR.
Scherer, J A; Rubel, P; Fayn, J; Willems, J L
1992-01-01
The objective of this study is to assess the performance of patient-specific segment-specific (PSSS) synthesis of QRST complexes using CAVIAR, a new method for the serial comparison of electrocardiograms and vectorcardiograms. A collection of 250 multi-lead recordings from the Common Standards for Quantitative Electrocardiography (CSE) diagnostic pilot study is employed. QRS and ST-T segments are independently synthesized using the PSSS algorithm so that the mean-squared error between the original and estimated waveforms is minimized. CAVIAR compares the recorded and synthesized QRS and ST-T segments and calculates the mean-quadratic deviation as a measure of error. The results of this study indicate that the estimated QRS complexes are good representatives of their recorded counterparts, and the integrity of the spatial information is maintained by the PSSS synthesis process. Analysis of the ST-T segments suggests that the deviations between recorded and synthesized waveforms are considerably greater than those associated with the QRS complexes. The poorer performance on the ST-T segments is attributed to magnitude normalization of the spatial loops, low-voltage passages, and noise interference. Using the mean-quadratic deviation and CAVIAR as methods of performance assessment, this study indicates that the PSSS-synthesis algorithm accurately maintains the signal information within the 12-lead electrocardiogram.
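The PSSS algorithm minimizes mean-squared error between recorded and synthesized waveforms, which at its core is a linear least-squares fit of a target lead to a set of measured leads. The sketch below shows that fit via the normal equations; the basis/target arrangement and the tiny example data are illustrative assumptions, not the paper's coefficient structure:

```python
def fit_lead(basis, target):
    """Least-squares coefficients c such that sum_j c[j]*basis[j][t]
    approximates target[t]. Solves the normal equations G c = r by
    Gaussian elimination with partial pivoting."""
    m, n = len(basis), len(target)
    G = [[sum(bi[t] * bj[t] for t in range(n)) for bj in basis] for bi in basis]
    r = [sum(bi[t] * target[t] for t in range(n)) for bi in basis]
    for k in range(m):
        p = max(range(k, m), key=lambda i: abs(G[i][k]))  # pivot row
        G[k], G[p] = G[p], G[k]
        r[k], r[p] = r[p], r[k]
        for i in range(k + 1, m):
            f = G[i][k] / G[k][k]
            for j in range(k, m):
                G[i][j] -= f * G[k][j]
            r[i] -= f * r[k]
    c = [0.0] * m
    for k in range(m - 1, -1, -1):  # back-substitution
        c[k] = (r[k] - sum(G[k][j] * c[j] for j in range(k + 1, m))) / G[k][k]
    return c
```

In a 12-lead setting the basis rows would be the measured-lead sample vectors and the target the lead to be synthesized, with one coefficient set per patient and per segment (QRS vs. ST-T).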
High Density Hydrogen Storage System Demonstration Using NaAlH4 Based Complex Compound Hydrides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniel A. Mosher; Xia Tang; Ronald J. Brown
2007-07-27
This final report describes the motivations, activities and results of the hydrogen storage independent project "High Density Hydrogen Storage System Demonstration Using NaAlH4 Based Complex Compound Hydrides" performed by the United Technologies Research Center under the Department of Energy Hydrogen Program, contract # DE-FC36-02AL67610. The objectives of the project were to identify and address the key systems technologies associated with applying complex hydride materials, particularly ones which differ from those for conventional metal hydride based storage. This involved the design, fabrication and testing of two prototype systems based on the hydrogen storage material NaAlH4. Safety testing, catalysis studies, heat exchanger optimization, reaction kinetics modeling, thermochemical finite element analysis, powder densification development and material neutralization were elements included in the effort.
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
Performance Review of Harmony Search, Differential Evolution and Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Mohan Pandey, Hari
2017-08-01
Metaheuristic algorithms are effective in the design of intelligent systems. These algorithms are widely applied to solve complex optimization problems, including image processing, big data analytics, language processing, pattern recognition and others. This paper presents a performance comparison of three metaheuristic algorithms, namely Harmony Search, Differential Evolution, and Particle Swarm Optimization. These algorithms originate from altogether different branches of metaheuristics, yet share a common objective. Standard benchmark functions are used for the simulation. Statistical tests are conducted to derive a conclusion on the performance. The key motivation for conducting this research is to categorize the computational capabilities of the algorithms, which might be useful to researchers.
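Of the three algorithms compared, Particle Swarm Optimization is the simplest to sketch. The version below, run on the sphere benchmark function, uses common textbook parameter values (w = 0.7, c1 = c2 = 1.5), which are an assumption rather than the paper's settings:

```python
import random

def sphere(x):
    """Classic benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim=2, swarm=20, iters=200, seed=1):
    """Minimal global-best PSO; returns (best position, best value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)
```

Benchmark comparisons of the kind the paper describes run each algorithm many times on such functions and apply statistical tests to the resulting best-value distributions.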
Parikh, Pranav J; Cole, Kelly J
2015-01-01
The contribution of poor finger force control to age-related decline in manual dexterity is above and beyond ubiquitous behavioral slowing. Altered control of the finger forces can impart unwanted torque on the object affecting its orientation, thus impairing manual performance. Anodal transcranial direct current stimulation (tDCS) over primary motor cortex (M1) has been shown to improve the performance speed on manual tasks in older adults. However, the effects of anodal tDCS over M1 on the finger force control during object manipulation in older adults remain to be fully explored. Here we determined the effects of anodal tDCS over M1 on the control of grip force in older adults while they manipulated an object with an uncertain mechanical property. Eight healthy older adults were instructed to grip and lift an object whose contact surfaces were unexpectedly made more or less slippery across trials using acetate and sandpaper surfaces, respectively. Subjects performed this task before and after receiving anodal or sham tDCS over M1 on two separate sessions using a cross-over design. We found that older adults used significantly lower grip force following anodal tDCS compared to sham tDCS. Friction measured at the finger-object interface remained invariant after anodal and sham tDCS. These findings suggest that anodal tDCS over M1 improved the control of grip force during object manipulation in healthy older adults. Although the cortical networks for representing objects and manipulative actions are complex, the reduction in grip force following anodal tDCS over M1 might be due to a cortical excitation yielding improved processing of object-specific sensory information and its integration with the motor commands for production of manipulative forces. Our findings indicate that tDCS has a potential to improve the control of finger force during dexterous manipulation in older adults.
NASA Astrophysics Data System (ADS)
Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.
2018-04-01
The main points in the development of numerical tools for simulating the deformation and failure of complex technical objects under nonstationary conditions of extreme loading are presented. The dynamic method for constructing difference grids is shown to extend to the 3D case. A 3D implementation of the discrete-continuum approach to the deformation and failure of complex technical objects is carried out, and the efficiency of the existing software package for 3D modelling is demonstrated.
Studies on combined model based on functional objectives of large scale complex engineering
NASA Astrophysics Data System (ADS)
Yuting, Wang; Jingchun, Feng; Jiabao, Sun
2018-03-01
Because large scale complex engineering includes various functions, and each function is realized through the completion of one or more projects, the combinations of projects affecting each function must be identified. Based on the types of project portfolio, the relationships between projects and their functional objectives were analyzed. On that premise, portfolio techniques based on the functional objectives of projects were introduced, and the principles for applying them were studied and proposed. The process of combining projects was also constructed. With the help of these portfolio techniques, our research findings lay a foundation for the portfolio management of large scale complex engineering.
Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke
2017-01-01
Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. 
Models derived from visual features other than object parts perform relatively poorly, perhaps because DNNs more comprehensively capture the colors, textures and contours which matter to human object perception. However, categorical models outperform DNNs, suggesting that further work may be needed to bring high-level semantic representations in DNNs closer to those extracted by humans. Modern DNNs explain similarity judgments remarkably well considering they were not trained on this task, and are promising models for many aspects of human cognition. PMID:29062291
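The representational similarity analysis used to fit models to the judgments can be sketched as follows; the data below are synthetic stand-ins for DNN layer activations and human similarity judgments, not the study's 92-image set:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images, n_units = 20, 50
layer_act = rng.standard_normal((n_images, n_units))   # stand-in DNN layer activations

# Representational dissimilarity matrix (RDM): pairwise correlation distance
# between the layer's responses to each image pair (condensed upper triangle).
layer_rdm = pdist(layer_act, metric="correlation")

# Stand-in "human" RDM: correlated with the layer RDM plus judgment noise.
human_rdm = layer_rdm + 0.1 * rng.standard_normal(layer_rdm.shape)

# RSA model evaluation: rank-correlate the two RDMs.
rho, _ = spearmanr(layer_rdm, human_rdm)
```

In the study, one such correlation would be computed per model (each DNN layer, each feature-based and categorical model) and compared against the noise ceiling of the human data.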
Stereotyped behavior of severely disabled children in classroom and free-play settings.
Thompson, T J; Berkson, G
1985-05-01
The relationships between stereotyped behavior, object manipulation, self-manipulation, teacher attention, and various developmental measures were examined in 101 severely developmentally disabled children in their classrooms and a free-play setting. Stereotyped behavior without objects was positively correlated with self-manipulation and CA and was negatively correlated with complex object manipulation, developmental age, developmental quotient, and teacher attention. Stereotyped behavior with objects was negatively correlated with complex object manipulation. Partial correlations showed that age, self-manipulation, and developmental age shared unique variance with stereotyped behavior without objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Stauffer, Philip H.; Reed, Donald T.
The primary objective of the experimental effort described here is to aid in understanding the complex nature of liquid, vapor, and solid transport occurring around heated nuclear waste in bedded salt. In order to gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that (a) hydrological and physiochemical parameters and (b) processes are correctly simulated. The experiments proposed here are designed to study aspects of the system that have not been satisfactorily quantified in prior work. In addition to exploring the complex coupled physical processes in support of numerical model validation, lessons learned from these experiments will facilitate preparations for larger-scale experiments that may utilize similar instrumentation techniques.
Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan
2017-07-01
The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated with Condition 4 of the DKEFS Color-Word Interference Test (r = -.425) and the Wechsler Test of Adult Reading (r = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are consistent with those of other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.
Range 7 Scanner Integration with PaR Robot Scanning System
NASA Technical Reports Server (NTRS)
Schuler, Jason; Burns, Bradley; Carlson, Jeffrey; Minich, Mark
2011-01-01
An interface bracket and coordinate transformation matrices were designed to allow the Range 7 scanner to be mounted on the PaR Robot detector arm for scanning the heat shield or other object placed in the test cell. A process was designed for using Rapid Form XOR to stitch data from multiple scans together to provide an accurate 3D model of the scanned object. An accurate model was required for the design and verification of an existing heat shield. The large physical size and complex shape of the heat shield do not allow for direct measurement of certain features in relation to other features, and any imaging device capable of capturing the heat shield in its entirety suffers reduced resolution and cannot image sections blocked from view. Prior methods involved tools such as commercial measurement arms, or taking images with cameras and then performing manual measurements. These methods were tedious, could not provide a 3D model of the scanned object, and were typically limited to a few tens of measurement points at prominent locations. Integration of the scanner with the robot allows large complex objects to be scanned at high resolution, and 3D Computer Aided Design (CAD) models to be generated for verification of items against the original design and for modeling previously undocumented items. The main components are the mounting bracket attaching the scanner to the robot and the coordinate transformation matrices used for stitching the scanner data into a 3D model. The steps involve mounting the interface bracket to the robot's detector arm, mounting the scanner to the bracket, and then scanning sections of the object while recording the location of the tool tip (in this case the center of the scanner's focal point). A novel feature is the ability to stitch images together by coordinates instead of requiring each scan data set to have overlapping identifiable features.
This setup allows models of complex objects to be developed even if the object is large and featureless, or has sections that do not have visibility to other parts of the object for use as a reference. In addition, millions of points can be used to create an accurate model [i.e., within 0.03 in. (≈0.8 mm) over a span of 250 in. (≈6,350 mm)].
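Stitching by coordinates can be sketched with homogeneous transforms: each scan is mapped from the scanner frame into the common robot frame using the recorded tool-tip pose, so overlapping features are unnecessary. The poses and points below are illustrative, not actual PaR robot data:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_world(points, T):
    """Map Nx3 scanner-frame points into the common robot/world frame."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

# Scan A: scanner at the world origin, axes aligned with the world frame.
T_a = make_transform(np.eye(3), np.zeros(3))
scan_a = np.array([[1.0, 0.0, 0.0]])          # one point on the part

# Scan B: scanner rotated 90 degrees about z; it sees the same physical
# point at different local coordinates.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_b = make_transform(Rz, np.zeros(3))
scan_b = np.array([[0.0, -1.0, 0.0]])         # same point, scanner-B frame

# Stitch by coordinates: both scans map to the same world-frame point.
stitched = np.vstack([to_world(scan_a, T_a), to_world(scan_b, T_b)])
```

Because the robot reports each pose directly, the transforms are known a priori rather than estimated from overlapping geometry.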
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo
2009-10-01
Digital holographic microscopy allows numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, a phase aberration in the object wavefront is also introduced, which affects the phase distribution of the reconstructed image. We propose a numerical method to compensate for the phase aberration of thin transparent objects using a single hologram. A least-squares surface fit, using fewer points than the full hologram matrix, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated on samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double-exposure method.
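The least-squares surface fit can be sketched on a synthetic unwrapped phase map; the quadratic basis, the subsampling step, and the synthetic specimen term are assumptions for illustration, not the paper's exact fitting model:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx].astype(float)

# Synthetic unwrapped phase: a specimen term plus a parabolic aberration
# standing in for the wavefront curvature introduced by the objective.
specimen = 0.5 * np.sin(2 * np.pi * x / nx)
aberration = 1e-3 * ((x - 30) ** 2 + (y - 40) ** 2)
phase = specimen + aberration

# Least-squares fit of a quadratic surface using a subsampled point set
# (fewer points than the full hologram matrix).
step = 8
xs = x[::step, ::step].ravel()
ys = y[::step, ::step].ravel()
zs = phase[::step, ::step].ravel()
A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs*ys, ys**2])
coef, *_ = np.linalg.lstsq(A, zs, rcond=None)

# Evaluate the fitted surface on the full grid and subtract it.
A_full = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(),
                          x.ravel()**2, (x*y).ravel(), y.ravel()**2])
fit = (A_full @ coef).reshape(ny, nx)
corrected = phase - fit
```

Since the parabolic aberration lies in the span of the quadratic basis, the fit removes it almost entirely while leaving the specimen phase largely intact.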
Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Xia-zhu; Xu, Ya-wei
2017-11-01
On the basis of the Dual-Tree Complex Wavelet Transform (DT-CWT), an approach based on the Multi-objective Particle Swarm Optimization (MOPSO) algorithm is proposed to objectively choose the fusion weights of the low-frequency sub-bands. High- and low-frequency sub-bands are produced by the DT-CWT. The maximum-absolute-value rule is adopted to fuse the high-frequency sub-bands. The fusion weights of the low-frequency sub-bands serve as the particles in MOPSO, with Spatial Frequency and Average Gradient adopted as the two fitness functions. Experimental results show that the proposed approach performs better than average fusion and than fusion methods based on local variance and local energy in brightness, clarity, and quantitative evaluation (Entropy, Spatial Frequency, Average Gradient, and QAB/F).
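The two fusion rules can be sketched as follows; the coefficient arrays and the weight `w` are illustrative, and in the proposed method `w` would be a particle position optimized by MOPSO rather than a fixed value:

```python
import numpy as np

def fuse_highpass(c1, c2):
    """Max-absolute-value rule for high-frequency sub-bands: keep, per
    coefficient, whichever source has the larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

# Toy high-frequency sub-band coefficients from two source images.
a = np.array([[ 0.9, -0.1], [ 0.2, -0.8]])
b = np.array([[-0.3,  0.5], [-0.6,  0.1]])
fused = fuse_highpass(a, b)

# Weighted fusion of low-frequency sub-bands; w is the quantity MOPSO
# would search for under the two fitness functions.
w = 0.6
low_a = np.array([10.0, 20.0])
low_b = np.array([30.0, 40.0])
low_fused = w * low_a + (1 - w) * low_b
```

The fused sub-bands would then be passed through the inverse DT-CWT to reconstruct the fused image.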
Simulating complex intracellular processes using object-oriented computational modelling.
Johnson, Colin G; Goldman, Jacki P; Gullick, William J
2004-11-01
The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.
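A minimal sketch of the object-oriented modelling style described, in which molecular species are objects whose interactions are simulated step by step; the receptor/ligand names and the binding probability are hypothetical, not taken from the paper:

```python
import random

class Molecule:
    """Base class: each molecular species is modelled as an object."""
    def __init__(self, name):
        self.name = name

class Receptor(Molecule):
    def __init__(self, name):
        super().__init__(name)
        self.bound = None

    def try_bind(self, ligand, p_bind, rng):
        """Stochastic binding step: objects interact, and population-level
        behaviour emerges from many such local interactions."""
        if self.bound is None and rng.random() < p_bind:
            self.bound = ligand
            return True
        return False

rng = random.Random(42)
receptors = [Receptor(f"EGFR_{i}") for i in range(100)]   # hypothetical names
ligand = Molecule("EGF")

for _ in range(10):                 # ten time steps of the simulation
    for r in receptors:
        r.try_bind(ligand, p_bind=0.05, rng=rng)

occupancy = sum(r.bound is not None for r in receptors) / len(receptors)
```

The emergent quantity here (receptor occupancy over time) is never programmed directly; it arises from the per-object interaction rules, which is the point of the approach.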
Representing and Learning Complex Object Interactions
Zhou, Yilun; Konidaris, George
2017-01-01
We present a framework for representing scenarios with complex object interactions, in which a robot cannot directly interact with the object it wishes to control, but must instead do so via intermediate objects. For example, a robot learning to drive a car can only indirectly change its pose, by rotating the steering wheel. We formalize such complex interactions as chains of Markov decision processes and show how they can be learned and used for control. We describe two systems in which a robot uses learning from demonstration to achieve indirect control: playing a computer game, and using a hot water dispenser to heat a cup of water. PMID:28593181
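The steering-wheel example can be sketched as a two-link chain of simple deterministic dynamics, in which the robot's action affects the car only through the intermediate object; the gains and the proportional policy are illustrative assumptions, not the paper's learned models:

```python
# A two-link chain: the robot acts on the steering wheel (intermediate
# object), and the wheel's state in turn drives the car's heading.
def wheel_step(wheel_angle, action, dt=0.1):
    """First link in the chain: the robot's action turns the wheel."""
    return wheel_angle + action * dt

def car_step(heading, wheel_angle, gain=0.5, dt=0.1):
    """Second link: the wheel's state, not the robot, drives the car."""
    return heading + gain * wheel_angle * dt

wheel, heading = 0.0, 0.0
target = 0.3                          # desired heading (radians)
for _ in range(200):
    # A simple cascaded proportional policy acting only through the
    # intermediate object: steer toward the wheel angle that reduces
    # the heading error.
    action = 2.0 * ((target - heading) * 4.0 - wheel)
    wheel = wheel_step(wheel, action)
    heading = car_step(heading, wheel)
```

In the paper's framework each link would be a Markov decision process whose dynamics are learned from demonstration rather than hand-written as here.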
2012-01-01
Background: Catching an object is a complex movement that involves not only motor programming but also effective motor coordination. Such behavior is related to the activation and recruitment of cortical regions that participate in the sensorimotor integration process. This study aimed to elucidate the cortical mechanisms involved in anticipatory actions during a task of catching an object in free fall. Methods: Quantitative electroencephalography (qEEG) was recorded with a 20-channel EEG system while 20 healthy right-handed participants performed the catching task. EEG coherence analysis was used to investigate subdivisions of the alpha (8-12 Hz) and beta (12-30 Hz) bands, which are related to cognitive processing and sensorimotor integration. Results: We found main effects for the factor block: for alpha-1, coherence decreased from the first to the sixth block, whereas the opposite occurred for alpha-2 and beta-2, with coherence increasing across blocks. Conclusion: We conclude that to perform our task successfully, which involved anticipatory processes (i.e., feedforward mechanisms), subjects showed strong involvement of sensorimotor and associative areas, possibly reflecting the organization of information needed to process visuospatial parameters and catch the falling object. PMID:22364485
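Inter-channel coherence in a frequency band can be computed with SciPy's `coherence`; the synthetic signals below (a shared 10 Hz alpha component plus independent noise) stand in for real EEG recordings, and the sampling rate is an assumption:

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Two "channels" sharing a 10 Hz (alpha-band) component plus independent noise.
common = np.sin(2 * np.pi * 10 * t)
ch1 = common + 0.5 * rng.standard_normal(t.size)
ch2 = common + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence via Welch's method.
f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=512)

alpha = (f >= 8) & (f <= 12)
beta = (f >= 18) & (f <= 30)
alpha_coh = Cxy[alpha].mean()                # high: shared alpha activity
beta_coh = Cxy[beta].mean()                  # low: only independent noise
```

In a study like this one, such band-averaged coherence values would be computed per electrode pair and block and then entered into the statistical analysis.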
NASA Astrophysics Data System (ADS)
Luo, Yugong; Chen, Tao; Li, Keqiang
2015-12-01
The paper presents a novel active distance control strategy for intelligent hybrid electric vehicles (IHEV) with the purpose of guaranteeing optimal performance of the driving functions together with safety, fuel economy and ride comfort. Considering the complexity of driving situations, the objectives of safety and ride comfort are decoupled from that of fuel economy, and a hierarchical control architecture is adopted to improve real-time performance and adaptability. The hierarchical control structure consists of four layers: active distance control objective determination, comprehensive driving and braking torque calculation, comprehensive torque distribution, and torque coordination. The safety distance control and emergency stop algorithms are designed to achieve the safety and ride comfort goals. An optimal rule-based energy management algorithm for the hybrid electric system is developed to improve fuel economy. The torque coordination control strategy is proposed to regulate engine torque, motor torque and hydraulic braking torque to improve ride comfort. This strategy is verified by simulation and experiment using a forward simulation platform and a prototype vehicle. The results show that the novel control strategy achieves integrated and coordinated control of its multiple subsystems, which guarantees top performance of the driving functions and optimum safety, fuel economy and ride comfort.
Reactome graph database: Efficient access to complex pathway data.
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
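The efficiency argument rests on traversals being direct neighbor lookups rather than repeated table joins. A toy Python adjacency-list traversal illustrates the idea (hypothetical node names; roughly what a Cypher variable-length match such as `MATCH (p:Pathway)-[*]->(n) RETURN n` expresses declaratively):

```python
# Toy pathway graph stored as adjacency lists, the way a graph database
# stores relationships natively: each traversal step is a pointer lookup,
# not a join over an association table.
pathway = {
    "Pathway:Signaling": ["Reaction:R1", "Reaction:R2"],
    "Reaction:R1": ["Protein:A", "Protein:B"],
    "Reaction:R2": ["Protein:B", "Complex:AB"],
    "Complex:AB": ["Protein:A", "Protein:B"],
}

def participants(start, graph):
    """Depth-first traversal collecting every node reachable from `start`,
    analogous to a variable-length relationship match in Cypher."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

nodes = participants("Pathway:Signaling", pathway)
```

In a relational schema the same query would require either recursive SQL or one join per level of nesting, which is where the reported 93% reduction in average query time comes from.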
Measurement Via Optical Near-Nulling and Subaperture Stitching
NASA Technical Reports Server (NTRS)
Forbes, Greg; De Vries, Gary; Murphy, Paul; Brophy, Chris
2012-01-01
A subaperture stitching interferometer system provides near-nulling of a subaperture wavefront reflected from an object of interest over a portion of a surface of the object. A variable optical element located in the radiation path adjustably provides near-nulling to facilitate stitching of subaperture interferograms, creating an interferogram representative of the entire surface of interest. This enables testing of aspheric surfaces without null optics customized for each surface prescription. The surface shapes of objects such as lenses and other precision components are often measured with interferometry. However, interferometers have a limited capture range, and thus the test wavefront cannot be too different from the reference or the interference cannot be analyzed. Furthermore, the performance of the interferometer is usually best when the test and reference wavefronts are nearly identical (referred to as a null condition). Thus, it is necessary when performing such measurements to correct for known variations in shape to ensure that unintended variations are within the capture range of the interferometer and accurately measured. This invention is a system for near-nulling within a subaperture stitching interferometer, although in principle, the concept can be employed by wavefront measuring gauges other than interferometers. The system employs a light source for providing coherent radiation of a subaperture extent. An object of interest is placed to modify the radiation (e.g., to reflect or pass the radiation), and a variable optical element is located to interact with, and nearly null, the affected radiation. A detector or imaging device is situated to obtain interference patterns in the modified radiation. Multiple subaperture interferograms are taken and are stitched, or joined, to provide an interferogram representative of the entire surface of the object of interest.
The primary aspect of the invention is the use of adjustable corrective optics in the context of subaperture stitching near-nulling interferometry, wherein a complex surface is analyzed via multiple, separate, overlapping interferograms. For complex surfaces, the problem of managing the identification and placement of corrective optics becomes even more pronounced, to the extent that in most cases the null corrector optics are specific to the particular asphere prescription and no others (i.e., another asphere requires completely different null correction optics). In principle, the near-nulling technique does not require subaperture stitching at all. Building a near-null system that is practically useful relies on two key features: simplicity and universality. If the system is too complex, it will be difficult to calibrate and model its manufacturing errors, rendering it useless as a precision metrology tool and/or prohibitively expensive. If the system is not applicable to a wide range of test parts, then it does not provide significant value over conventional null-correction technology. Subaperture stitching enables simpler and more universal near-null systems to be effective, because a fraction of a surface is necessarily less complex than the whole surface (excepting the extreme case of a fractal surface description). The technique of near-nulling can significantly enhance aspheric subaperture stitching capability by allowing the interferometer to capture a wider range of aspheres. Moreover, subaperture stitching is essential to a truly effective near-nulling system, since looking at a fraction of the surface keeps the wavefront complexity within the capability of a relatively simple near-null apparatus. Furthermore, by reducing the subaperture size, the complexity of the measured wavefront can be reduced until it is within the capability of the near-null design.
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
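The Minkowski-sum construction of a complex extension from simple sub-objects can be sketched for convex polygons; the shapes below are illustrative, not the paper's sub-object models:

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons given as Nx2 vertex arrays:
    form all pairwise vertex sums, then take their convex hull."""
    sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)
    hull = ConvexHull(sums)
    return sums[hull.vertices]

# Two simple sub-object extents whose sum describes a more complex shape.
square = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float)
bar = np.array([[-2, -0.2], [2, -0.2], [2, 0.2], [-2, 0.2]], dtype=float)

combined = minkowski_sum(square, bar)   # a 6 x 2.4 rectangle here
```

In the tracking model, each sub-object's extension evolves with its own simple dynamics, and the combined extension used against the range-extent measurements is their Minkowski sum; rotating the sub-objects jointly captures coupled turn maneuvers.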
Mastery motivation in children with complex communication needs: longitudinal data analysis.
Medeiros, Kara F; Cress, Cynthia J; Lambert, Matthew C
2016-09-01
This study compared longitudinal changes in mastery motivation during parent-child free play for 37 children with complex communication needs. Mastery motivation manifests as a willingness to work hard at tasks that are challenging, which is an important quality to overcoming the challenges involved in successful expressive communication using AAC. Unprompted parent-child play episodes were identified in three assessment sessions over an 18-month period and coded for nine categories of mastery motivation in social and object play. All of the object-oriented mastery motivation categories and one social mastery motivation category showed an influence of motor skills after controlling for receptive language. Object play elicited significantly more of all of the object-focused mastery motivation categories than social play, and social play elicited more of one type of social-focused mastery motivation behavior than object play. Mastery motivation variables did not differ significantly over time for children. Potential physical and interpersonal influences on mastery motivation for parents and children with complex communication needs are discussed, including broadening the procedures and definitions of mastery motivation beyond object-oriented measurements for children with complex communication needs.
Function follows form: combining nanoimprint and inkjet printing
NASA Astrophysics Data System (ADS)
Muehlberger, M.; Haslinger, M. J.; Kurzmann, J.; Ikeda, M.; Fuchsbauer, A.; Faury, T.; Koepplmayr, T.; Ausserhuber, H.; Kastner, J.; Woegerer, C.; Fechtig, D.
2017-06-01
We are investigating the possibilities and the technical requirements for nanopatterning arbitrarily curved surfaces, considering the opportunities offered by additive manufacturing. One of the key requirements is the ability to deposit material in well-defined areas of various complex 3D objects. To achieve this we are developing robot-based inkjet printing. We report on our progress in this respect, and on our efforts to perform nanoimprinting on curved, possibly 3D-printed, objects using materials that can be deposited by inkjet printing. In this article we provide an overview of our current status, the challenges, and an outlook.
Zapata-Fonseca, Leonardo; Dotov, Dobromir; Fossion, Ruben; Froese, Tom
2016-01-01
There is a growing consensus that a fuller understanding of social cognition depends on more systematic studies of real-time social interaction. Such studies require methods that can deal with the complex dynamics taking place at multiple interdependent temporal and spatial scales, spanning sub-personal, personal, and dyadic levels of analysis. We demonstrate the value of adopting an extended multi-scale approach by re-analyzing movement time-series generated in a study of embodied dyadic interaction in a minimal virtual reality environment (a perceptual crossing experiment). Reduced movement variability revealed an interdependence between social awareness and social coordination that cannot be accounted for by either subjective or objective factors alone: it picks out interactions in which subjective and objective conditions are convergent (i.e., elevated coordination is perceived as clearly social, and impaired coordination is perceived as socially ambiguous). This finding is consistent with the claim that interpersonal interaction can be partially constitutive of direct social perception. Clustering statistics (Allan Factor) of salient events revealed fractal scaling. Complexity matching defined as the similarity between these scaling laws was significantly more pronounced in pairs of participants as compared to surrogate dyads. This further highlights the multi-scale and distributed character of social interaction and extends previous complexity matching results from dyadic conversation to non-verbal social interaction dynamics. Trials with successful joint interaction were also associated with an increase in local coordination. Consequently, a local coordination pattern emerges on the background of complex dyadic interactions in the PCE task and makes joint successful performance possible. PMID:28018274
Characterization of real objects by an active electrolocation sensor
NASA Astrophysics Data System (ADS)
Metzen, Michael G.; Al Ghouz, Imène; Krueger, Sandra; Bousack, Herbert; von der Emde, Gerhard
2012-04-01
Weakly electric fish use a process called 'active electrolocation' to orientate in their environment and to localize objects based on their electrical properties. To do so, the fish discharge an electric organ which emits brief electrical current pulses (electric organ discharge, EOD) and in return sense the generated electric field which builds up surrounding the animal. Caused by the electrical properties of nearby objects, fish measure characteristic signal modulations with an array of electroreceptors in their skin. The fish are able to gain important information about the geometrical properties of an object as well as its complex impedance and its distance. Thus, active electrolocation is an interesting feature to be used in biomimetic approaches. We used this sensory principle to identify different insertions in the walls of Plexiglas tubes. The insertions tested were composed of aluminum, brass and graphite in sizes between 3 and 20 mm. A carrier signal was emitted and perceived with the poles of a commercial catheter for medical diagnostics. Measurements were performed with the poles separated by 6.3 to 55.3 mm. Depending on the length of the insertion in relation to the sender-receiver distance, we observed up to three peaks in the measured electric images. The first peak was affected by the material of the insertion, while the distance between the second and third peak strongly correlated with the length of the insertion. In a second experiment we tested whether various materials could be detected by using signals of different frequency compositions. Based on their electric images we were able to discriminate between objects having different resistive properties, but not between objects of complex impedances.
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision trees, and various supervised approaches have been proposed for land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high resolution data, owing to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the superior performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% overall accuracy) but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfying accuracy (94.2-96.4%). Thus, the approach composed of integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
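Of the classifiers compared, the Mahalanobis Distance classifier is simple enough to sketch directly: each pixel is assigned to the class whose mean is nearest under a pooled-covariance Mahalanobis metric. The two-class synthetic data below are illustrative, not remote-sensing spectra:

```python
import numpy as np

def fit_md(X, y):
    """Mahalanobis Distance classifier: per-class means plus a pooled
    within-class covariance, as in classical remote-sensing packages."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    pooled = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1)
                 for c in classes) / (len(X) - len(classes))
    return means, np.linalg.inv(pooled), classes

def predict_md(X, means, inv_cov, classes):
    """Assign each sample to the class with the smallest squared
    Mahalanobis distance to the class mean."""
    d = np.array([[(x - means[c]) @ inv_cov @ (x - means[c]) for c in classes]
                  for x in X])
    return classes[np.argmin(d, axis=1)]

# Two well-separated synthetic classes in a 2-band feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([2, 2], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

means, inv_cov, classes = fit_md(X, y)
acc = (predict_md(X, means, inv_cov, classes) == y).mean()
```

ML (Maximum Likelihood) differs only in using per-class covariances and a log-determinant term, which is why both remain fast enough for whole-scene classification.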
PyMCT: A Very High Level Language Coupling Tool For Climate System Models
NASA Astrophysics Data System (ADS)
Tobis, M.; Pierrehumbert, R. T.; Steder, M.; Jacob, R. L.
2006-12-01
At the Climate Systems Center of the University of Chicago, we have been examining strategies for applying agile programming techniques to complex high-performance modeling experiments. While the "agile" development methodology differs from a conventional requirements process and its associated milestones, the process remains a formal one. It is distinguished by continuous improvement in functionality, large numbers of small releases, extensive and ongoing testing strategies, and a strong reliance on very high level languages (VHLLs). Here we report on PyMCT, which we intend as a core element in a model ensemble control superstructure. PyMCT is a set of Python bindings for MCT, the Fortran-90 based Model Coupling Toolkit, which forms the infrastructure for inter-component communication in the Community Climate System Model (CCSM). MCT provides a scalable model communication infrastructure. In order to take maximum advantage of agile software development methodologies, we exposed MCT functionality to Python, a prominent VHLL. We describe how the scalable architecture of MCT allows us to overcome the relatively weak runtime performance of Python, so that the performance of the combined system is not severely impacted. To demonstrate these advantages, we reimplemented the CCSM coupler in Python. While this alone offers no new functionality, it does provide a rigorous test of PyMCT functionality and performance. We reimplemented the CPL6 library, presenting an interesting case study of the comparison between conventional Fortran-90 programming and the higher abstraction level provided by a VHLL. The powerful abstractions provided by Python will allow much more complex experimental paradigms. In particular, we hope to build on the scriptability of our coupling strategy to enable systematic sensitivity tests.
Our most ambitious objective is to combine our efforts with Bayesian inverse modeling techniques toward objective tuning at the highest level, across model architectures.
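A toy illustration of the kind of field exchange a coupler scripts between components: here a conservative block average remaps a flux from a fine "atmosphere" grid to a coarse "ocean" grid. This is an invented stand-in for MCT-style attribute-vector exchange, not PyMCT's API; grid sizes and names are made up.

```python
import numpy as np

def block_average(field, factor):
    """Conservatively average a 2-D field over factor x factor blocks."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

atm_flux = np.arange(64, dtype=float).reshape(8, 8)   # "atmosphere" heat flux
ocn_flux = block_average(atm_flux, 2)                 # remapped to "ocean" grid

# Conservative remapping preserves the domain mean of the exchanged field.
assert np.isclose(atm_flux.mean(), ocn_flux.mean())
```

Expressing such exchanges at the scripting level, rather than inside Fortran components, is precisely what makes the systematic sensitivity tests mentioned above easy to automate.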
Connecting a cognitive architecture to robotic perception
NASA Astrophysics Data System (ADS)
Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial
2012-06-01
We present an integrated architecture in which perception and cognition interact and provide information to each other, leading to improved performance in real-world situations. Our system integrates the Felzenszwalb et al. object-detection algorithm with the ACT-R cognitive architecture. The targeted task is to predict and classify pedestrian behavior in a checkpoint scenario, specifically to discriminate between normal and checkpoint-avoiding behavior. The Felzenszwalb algorithm is a learning-based algorithm for detecting and localizing objects in images. ACT-R is a cognitive architecture that has been successfully used to model human cognition with a high degree of fidelity on tasks ranging from basic decision-making to the control of complex systems such as driving or air traffic control. The Felzenszwalb algorithm detects pedestrians in the image and provides ACT-R with a set of features based primarily on their locations. ACT-R uses its pattern-matching capabilities, specifically its partial-matching and blending mechanisms, to track objects across multiple images and classify their behavior based on the sequence of observed features. ACT-R also provides feedback to the Felzenszwalb algorithm in the form of expected object locations, which allows the algorithm to eliminate false positives and improve its overall performance. This capability is an instance of the benefits pursued in developing a richer interaction between bottom-up perceptual processes and top-down goal-directed cognition. We trained the system on individual behaviors (only one person in the scene) and evaluated its performance across single and multiple behavior sets.
Fu, Qiushi; Santello, Marco
2018-01-01
The concept of postural synergies of the human hand has been shown to potentially reduce complexity in the neuromuscular control of grasping. By merging this concept with soft robotics approaches, a multi-degree-of-freedom soft-synergy prosthetic hand [SoftHand-Pro (SHP)] was created. The mechanical innovation of the SHP enables adaptive and robust functional grasps with simple and intuitive myoelectric control from only two surface electromyogram (sEMG) channels. However, the current myoelectric controller has very limited capability for fine control of grasp forces. We addressed this challenge by designing a hybrid-gain myoelectric controller that switches control gains based on the sensorimotor state of the SHP. This controller was tested against a conventional single-gain (SG) controller, as well as against the native hand in able-bodied subjects. We used the following tasks to evaluate the performance of grasp force control: (1) picking and placing objects with different sizes, weights, and fragility levels using power or precision grasps and (2) squeezing objects with different stiffness. Sensory feedback of the grasp forces was provided to the user through a non-invasive, mechanotactile haptic feedback device mounted on the upper arm. We demonstrated that the novel hybrid controller enabled superior task completion speed and fine force control over the SG controller in object pick-and-place tasks. We also found that the performance of the hybrid controller qualitatively agrees with the performance of native human hands. PMID:29375360
Automatic target recognition and detection in infrared imagery under cluttered background
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Koç, Aykut; Alatan, A. Aydın.
2017-10-01
Visual object classification has long been studied in the visible spectrum using conventional cameras. As the number of labeled images has recently increased, it has become possible to train deep Convolutional Neural Networks (CNNs) with significant numbers of parameters. As infrared (IR) sensor technology has improved over the last two decades, labeled images from IR sensors have begun to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection using 15K real-field images from long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize the objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once training is complete, the test samples are propagated through the network, and the probability of each test sample belonging to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, in which classification is performed over a set of candidate object boxes and the maximum confidence score in a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step is not run at every frame; instead, an efficient correlation filter based tracker is used, and detection is performed only when the tracker confidence falls below a pre-defined threshold. The experiments conducted on the real-field images demonstrate that the proposed detection and tracking framework gives satisfactory results for detecting tanks against cluttered backgrounds.
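A minimal sketch of the denoising-autoencoder idea underlying the feature-learning step: inputs are corrupted with noise and a single tied-weight layer is trained to reconstruct the clean input. The data, sizes, and hyperparameters are all invented (this is one layer, not the authors' stacked network or their IR data).

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic 16-dimensional "patches" generated from a 4-dimensional latent
# space, so that an 8-unit code can represent them.
Z = rng.random((256, 4))
V = rng.normal(0.0, 1.0, (4, 16))
X = sigmoid(Z @ V)

W = rng.normal(0.0, 0.1, (16, 8))   # tied encoder/decoder weights
b_h = np.zeros(8)
b_o = np.zeros(16)

lr = 0.5
for _ in range(300):
    X_noisy = X + rng.normal(0.0, 0.2, X.shape)   # denoising corruption
    H = sigmoid(X_noisy @ W + b_h)                # encode
    R = sigmoid(H @ W.T + b_o)                    # decode with tied weights
    err = R - X                                   # target is the *clean* input
    d_o = err * R * (1.0 - R)                     # squared-error backprop
    d_h = (d_o @ W) * H * (1.0 - H)
    W -= lr * (X_noisy.T @ d_h + (H.T @ d_o).T) / len(X)
    b_o -= lr * d_o.mean(axis=0)
    b_h -= lr * d_h.mean(axis=0)

# Reconstruction error on clean inputs after training.
recon_mse = float(((sigmoid(sigmoid(X @ W + b_h) @ W.T + b_o) - X) ** 2).mean())
```

Stacking repeats this greedy step layer by layer on the previous layer's codes; fine-tuning then replaces the reconstruction loss with the classification loss, as in the paper.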
An object-oriented class library for medical software development.
O'Kane, K C; McColligan, E E
1996-12-01
The objective of this research is the development of a Medical Object Library (MOL) consisting of reusable, inheritable, portable, extendable C++ classes that facilitate rapid development of medical software at reduced cost and increased functionality. The result of this research is a library of class objects that range in function from string and hierarchical file handling entities to high level, procedural agents that perform increasingly complex, integrated tasks. A system built upon these classes is compatible with any other system similarly constructed with respect to data definitions, semantics, data organization and storage. As new objects are built, they can be added to the class library for subsequent use. The MOL is a toolkit of software objects intended to support a common file access methodology, a unified medical record structure, consistent message processing, standard graphical display facilities and uniform data collection procedures. This work emphasizes the relationship that potentially exists between the structure of a hierarchical medical record and procedural language components by means of a hierarchical class library and tree structured file access facility. In doing so, it attempts to establish interest in and demonstrate the practicality of the hierarchical medical record model in the modern context of object oriented programming.
Weis, Cleo-Aron; Grießmann, Benedict Walter; Scharff, Christoph; Detzner, Caecilia; Pfister, Eva; Marx, Alexander; Zoellner, Frank Gerrit
2015-09-02
Immunohistochemical analysis of cellular interactions in the bone marrow in situ is demanding, due to its heterogeneous cellular composition, the poor delineation and overlap of functional compartments, and the highly complex immunophenotypes of several cell populations (e.g. regulatory T-cells) that require immunohistochemical marker sets for unambiguous characterization. To overcome these difficulties, we herein present an approach to describe objects (e.g. cells, bone trabeculae) by a scalar field that can be propagated through registered images of serial histological sections. The transformation of objects within images (e.g. cells) to a scalar field was performed by convolution of the objects' centroids with differently formed radial basis functions (e.g. for direct or indirect spatial interaction). On the basis of such scalar fields, a summation field described distributed objects within an image. After image registration, (i) colocalization analysis could be performed on the basis of the scalar field, which is propagated through the registered images and, owing to the shape of the field, is barely prone to matching errors and morphological changes caused by different cutting levels; (ii) depending on the field shape, the colocalization measurements could also quantify spatial interaction (e.g. direct or paracrine cellular contact); and (iii) the field overlap of different objects (e.g. two cells), which represents their spatial distance, could be calculated by the histogram intersection. The description of objects (e.g. cells, cell clusters, bone trabeculae, etc.) as a field offers several possibilities. First, colocalization of different markers (e.g. by immunohistochemical staining) in serial sections can be performed in an automatic, objective and quantifiable way; in contrast to multicolour staining (e.g. 10-colour immunofluorescence), the financial and technical requirements are fairly minor. Second, the approach allows searching for different types of spatial interactions (e.g. direct and indirect cellular interaction) between objects by taking the field shape into account (e.g. thin vs. broad). Third, by describing spatially distributed groups of objects as a summation field, it gives a cluster definition that relies on the bare object distance rather than on the modelled spatial cellular interaction.
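A sketch of the field idea in two dimensions: each centroid contributes a Gaussian radial basis function, the per-marker summation fields are normalised, and their pointwise minimum (a histogram-intersection-style overlap) quantifies colocalization. Grid size, kernel width, and centroid positions are invented for the example.

```python
import numpy as np

def summation_field(centroids, grid, sigma):
    """Sum of Gaussian RBFs centred on the object centroids."""
    field = np.zeros(len(grid))
    for c in centroids:
        d2 = ((grid - c) ** 2).sum(axis=1)
        field += np.exp(-d2 / (2.0 * sigma ** 2))
    return field

# A 40 x 40 pixel grid, flattened to (1600, 2) coordinates.
yy, xx = np.mgrid[0:40, 0:40]
grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)

cells_a = np.array([[10.0, 10.0], [30.0, 30.0]])   # marker A centroids
cells_b = np.array([[11.0, 10.0], [5.0, 35.0]])    # marker B centroids

fa = summation_field(cells_a, grid, sigma=3.0)
fb = summation_field(cells_b, grid, sigma=3.0)

# Intersection of the two normalised fields: 1 = identical distributions,
# 0 = no spatial overlap at all.
overlap = float(np.minimum(fa / fa.sum(), fb / fb.sum()).sum())
```

Because one A/B centroid pair nearly coincides and the other pair is far apart, the overlap lands near 0.5; narrow kernels would restrict the score to direct contact, broad kernels to paracrine-range interaction, which is the field-shape choice described above.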
Fakhari, Ashraf; Jalilian, Amir R.; Yousefnia, Hassan; Shanehsazzadeh, Saeed; Samani, Ali Bahrami; Daha, Fariba Johari; Ardestani, Mehdi Shafiee; Khalaj, Ali
2015-01-01
Objective: Optimized production and quality control of ytterbium-175 (Yb-175) labeled pamidronate and alendronate complexes as efficient agents for bone pain palliation are presented. Methods: The Yb-175 labeled pamidronate and alendronate (175Yb-PAM and 175Yb-ALN) complexes were prepared successfully under optimized conditions with acceptable radiochemical purity, stability and significant hydroxyapatite absorption. The biodistribution of the complexes was evaluated up to 48 h, demonstrating significant bone uptake ratios for 175Yb-PAM at all time intervals. It was also found that 175Yb-PAM was mostly washed out and excreted through the kidneys. Results: The performance of 175Yb-PAM in an animal model was better than or comparable to that of other 175Yb bone-seeking complexes previously reported. Conclusion: Based on calculations, the total body dose for 175Yb-ALN is 40% higher than that for 175Yb-PAM (especially in the kidneys), indicating that 175Yb-PAM is probably a safer agent than 175Yb-ALN. PMID:27529886
VaST: A variability search toolkit
NASA Astrophysics Data System (ADS)
Sokolovsky, K. V.; Lebedev, A. A.
2018-01-01
Variability Search Toolkit (VaST) is a software package designed to find variable objects in a series of sky images. It can be run from a script or interactively using its graphical interface. VaST relies on source list matching as opposed to image subtraction. SExtractor is used to generate source lists and perform aperture or PSF-fitting photometry (with PSFEx). Variability indices that characterize scatter and smoothness of a lightcurve are computed for all objects. Candidate variables are identified as objects having high variability index values compared to other objects of similar brightness. The two distinguishing features of VaST are its ability to perform accurate aperture photometry of images obtained with non-linear detectors and handle complex image distortions. The software has been successfully applied to images obtained with telescopes ranging from 0.08 to 2.5 m in diameter equipped with a variety of detectors including CCD, CMOS, MIC and photographic plates. About 1800 variable stars have been discovered with VaST. It is used as a transient detection engine in the New Milky Way (NMW) nova patrol. The code is written in C and can be easily compiled on the majority of UNIX-like systems. VaST is free software available at http://scan.sai.msu.ru/vast/.
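A sketch of the core selection idea in such variability searches (not VaST's actual code): compute a scatter-based index for each lightcurve and flag objects whose index is unusually high compared to other objects of similar mean brightness, since photometric scatter itself grows toward fainter magnitudes. All lightcurves and noise parameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

n_obj, n_frames = 500, 60
mags = rng.uniform(12.0, 16.0, n_obj)        # mean instrumental magnitudes
mags[0] = 13.0                               # our one injected "true" variable
sigma = 0.01 * 10 ** (0.3 * (mags - 12.0))   # photometric noise grows when faint
lc = mags[:, None] + rng.normal(0.0, sigma[:, None], (n_obj, n_frames))
lc[0] += 0.3 * np.sin(np.linspace(0.0, 6.0, n_frames))   # sinusoidal variable

idx = lc.std(axis=1)                         # simple scatter-based index

# Flag objects whose index is far above the robust local level set by the
# ~50 objects closest to them in mean magnitude.
order = np.argsort(mags)
candidates = []
for rank, i in enumerate(order):
    lo = max(0, rank - 25)
    neigh = order[lo:lo + 51]
    neigh = neigh[neigh != i]
    med = np.median(idx[neigh])
    mad = np.median(np.abs(idx[neigh] - med)) + 1e-9
    if idx[i] > med + 7.0 * mad:
        candidates.append(i)
```

Real toolkits use a family of such indices (including smoothness measures that a plain standard deviation cannot capture), but the "compare against objects of similar brightness" step is the same.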
Integrating indigenous livelihood and lifestyle objectives in managing a natural resource
Plagányi, Éva Elizabeth; van Putten, Ingrid; Hutton, Trevor; Deng, Roy A.; Dennis, Darren; Pascoe, Sean; Skewes, Tim; Campbell, Robert A.
2013-01-01
Evaluating the success of natural resource management approaches requires methods to measure performance against biological, economic, social, and governance objectives. In fisheries, most research has focused on industrial sectors, with the contributions to global resource use by small-scale and indigenous hunters and fishers undervalued. Globally, the small-scale fisheries sector alone employs some 38 million people who share common challenges in balancing livelihood and lifestyle choices. We used as a case study a fishery with both traditional indigenous and commercial sectors to develop a framework to bridge the gap between quantitative bio-economic models and more qualitative social analyses. For many indigenous communities, communalism rather than capitalism underlies fishers’ perspectives and aspirations, and we find there are complicated and often unanticipated trade-offs between economic and social objectives. Our results highlight that market-based management options might score highly in a capitalistic society, but have negative repercussions on community coherence and equity in societies with a strong communal ethic. There are complex trade-offs between economic indicators, such as profit, and social indicators, such as lifestyle preferences. Our approach makes explicit the “triple bottom line” sustainability objectives involving trade-offs between economic, social, and biological performance, and is thus directly applicable to most natural resource management decision-making situations. PMID:23401546
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first estimated directly using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Evaluated on the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to that of their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal Visual Object Classes 2012 test dataset, whereas the mAP of the 32-bit full-precision baseline model is 64.0%.
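A sketch of plain uniform fixed-point weight quantization, the baseline on which distribution-adaptive methods like the one above improve: pick a step size from the weight distribution, round to signed 4-bit integer codes, and measure the quantization signal-to-noise ratio. The Gaussian weights and the 3-sigma clipping rule are illustrative choices, not the paper's piecewise-Gaussian procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(0.0, 0.05, 10000)          # weights of one synthetic conv layer

bits = 4
n_levels = 2 ** (bits - 1) - 1            # symmetric signed range: -7..7
step = 3.0 * w.std() / n_levels           # clip the range at ~3 sigma

q = np.clip(np.round(w / step), -n_levels, n_levels)   # integer codes
w_hat = q * step                                       # dequantized weights

# Quantization SNR in dB: signal power over quantization-error power.
quant_snr = 10.0 * np.log10((w ** 2).mean() / ((w - w_hat) ** 2).mean())
```

Only the integer codes `q` and the per-layer `step` need to be stored, which is where the memory savings and the fixed-point convolutions come from; adapting `step` and the intervals to the actual per-layer distribution is the paper's contribution.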
Towards a Next-Generation Catalogue Cross-Match Service
NASA Astrophysics Data System (ADS)
Pineau, F.; Boch, T.; Derriere, S.; Arches Consortium
2015-09-01
We have developed several catalogue cross-match tools in the past. On one hand, the CDS XMatch service (Pineau et al. 2011) is able to perform basic but very efficient cross-matches, scalable to the largest catalogues on a single regular server. On the other hand, as part of the European project ARCHES, we have been developing a generic and flexible tool which performs potentially complex multi-catalogue cross-matches and computes probabilities of association based on a novel statistical framework. Although the two approaches have so far been managed as separate tracks, the need for next-generation cross-match services addressing both efficiency and complexity is becoming pressing with forthcoming projects that will produce huge, high-quality catalogues. We are addressing this challenge, which is both theoretical and technical. In ARCHES we generalize to N catalogues the candidate selection criteria - based on the chi-square distribution - described in Pineau et al. (2011). We formulate and test a number of Bayesian hypotheses, a number which necessarily increases dramatically with the number of catalogues. To assign a probability to each hypothesis, we rely on estimated priors which account for local densities of sources. We validated our developments by comparing the theoretical curves we derived with the results of Monte-Carlo simulations. The current prototype is able to take into account heterogeneous positional errors, object extension and proper motion. The technical complexity is managed by OO programming design patterns and SQL-like functionalities. Large tasks are split into smaller independent pieces for scalability. Performance is achieved by resorting to multi-threading, sequential reads and several tree data structures. In addition to kd-trees, we account for heterogeneous positional errors and object extension using M-trees. Proper motions are supported using a modified M-tree we developed, inspired by Time-Parametrized R-trees (TPR-trees).
Quantitative tests in comparison with the basic cross-match will be presented.
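A sketch of the two-catalogue chi-square candidate selection that the N-catalogue criteria generalize (a generic version, not the ARCHES implementation): two sources are association candidates when their error-normalised squared angular separation falls below a chi-square quantile. For 2 degrees of freedom the chi-square CDF is 1 - exp(-x/2), so the threshold for a target completeness g is simply -2 ln(1 - g). Coordinates and errors below are invented.

```python
import numpy as np

def chi2_match(pos1, err1, pos2, err2, completeness=0.997):
    """Candidate test for one pair, with circular 1-sigma positional errors.

    pos1, pos2: small-field tangent-plane coordinates (arcsec).
    err1, err2: 1-sigma positional uncertainties (arcsec).
    """
    threshold = -2.0 * np.log(1.0 - completeness)      # chi2 quantile, 2 d.o.f.
    d2 = ((np.asarray(pos1) - np.asarray(pos2)) ** 2).sum()
    chi2 = d2 / (err1 ** 2 + err2 ** 2)                # variances of both catalogues add
    return chi2 <= threshold

# Same source seen by two catalogues (~0.36" apart) vs. an unrelated source 4" away:
match = chi2_match([10.0, 5.0], 0.5, [10.3, 5.2], 0.4)
no_match = chi2_match([10.0, 5.0], 0.5, [14.0, 5.0], 0.4)
```

Elliptical errors replace the scalar variance sum with a full covariance matrix, and the N-catalogue case sums such terms over all pairs, raising the degrees of freedom accordingly.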
Kawa, Rafał; Pisula, Ewa
2010-01-01
There have been ambiguous accounts of exploration in children with intellectual disabilities with respect to the course of that exploration, and in particular the relationship between the features of explored objects and exploratory behaviour. It is unclear whether reduced exploratory activity seen with object exploration but not with locomotor activity is autism-specific or if it is also present in children with other disabilities. The purpose of the present study was to compare preschool children with autism with their peers with Down syndrome and typical development in terms of locomotor activity and object exploration and to determine whether the complexity of explored objects affects the course of exploration activity in children with autism. In total there were 27 children in the study. The experimental room was divided into three zones equipped with experimental objects providing visual stimulation of varying levels of complexity. Our results indicate that children with autism and Down syndrome differ from children with typical development in terms of some measures of object exploration (i.e. looking at objects) and time spent in the zone with the most visually complex objects.
Impairments in part-whole representations of objects in two cases of integrative visual agnosia.
Behrmann, Marlene; Williams, Pepper
2007-10-01
How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.
Salient Object Detection via Structured Matrix Decomposition.
Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J
2016-05-04
Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
Wright, Regina S; Cole, Angela P; Ali, Mana K; Skinner, Jeannine; Whitfield, Keith E; Mwendwa, Denée T
2016-02-01
The objectives of the study were to examine whether measures of total obesity (body mass index [BMI]) and central obesity (waist circumference [WC] and waist-to-hip ratio [WHR]) are associated with cognitive function in African Americans, and whether sex moderates these associations. A sample of 194 African Americans, with a mean age of 58.97 years, completed a battery of cognitive tests and a self-reported health questionnaire. Height, weight, waist and hip circumference, and blood pressure were assessed. Linear regression analyses were run. Results suggested that lower performance on measures of verbal fluency and complex attention/cognitive flexibility was accounted for by higher levels of central adiposity. Among men, higher WHR was more strongly related to complex attention/cognitive flexibility performance, but for women, WC was the salient predictor. Higher BMI was associated with poorer verbal memory performance among men, but poorer nonverbal memory performance among women. Findings suggest a need for healthy lifestyle interventions for African Americans to maintain healthy weight and cognitive function.
Multicore Architecture-aware Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasa, Avinash
Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve the computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to the changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on the application performance, resulting in average speedups of as much as two to four times.
Recognition of surgical skills using hidden Markov models
NASA Astrophysics Data System (ADS)
Speidel, Stefanie; Zentek, Tom; Sudra, Gunther; Gehrig, Tobias; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger
2009-02-01
Minimally invasive surgery is a highly complex medical discipline and can be regarded as a major breakthrough in surgical technique. A minimally invasive intervention requires enhanced motor skills to deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To recognize and analyze the current situation for context-aware assistance, we need intraoperative sensor data and a model of the intervention. Characteristics of a situation are the performed activity, the used instruments, the surgical objects and the anatomical structures. Important information about the surgical activity can be acquired by recognizing the surgical gesture performed. Surgical gestures in minimally invasive surgery like cutting, knot-tying or suturing are here referred to as surgical skills. We use the motion data from the endoscopic instruments to classify and analyze the performed skill and even use it for skill evaluation in a training scenario. The system uses Hidden Markov Models (HMM) to model and recognize a specific surgical skill like knot-tying or suturing with an average recognition rate of 92%.
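A toy sketch of HMM-based skill recognition of the kind described above (invented parameters, not the paper's trained models): one discrete HMM per surgical skill, and a motion-symbol sequence is assigned to the skill whose model scores it highest under the scaled forward algorithm.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM with initial pi,
    transition matrix A, emission matrix B)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return float(log_p)

pi = np.array([0.5, 0.5])
B = np.array([[0.9, 0.1],     # emission: state 0 mostly emits symbol 0
              [0.1, 0.9]])    #           state 1 mostly emits symbol 1

models = {
    "knot_tying": (pi, np.array([[0.1, 0.9], [0.9, 0.1]]), B),  # alternating motion
    "suturing":   (pi, np.array([[0.9, 0.1], [0.1, 0.9]]), B),  # sustained motion
}

obs = [0, 1, 0, 1, 0, 1, 0, 1]    # an "alternating" motion-symbol sequence
scores = {name: forward_loglik(obs, *m) for name, m in models.items()}
best = max(scores, key=scores.get)
```

In practice the parameters are trained with Baum-Welch on quantized instrument-motion features, and the same per-model likelihoods can double as a skill-evaluation score in a training scenario.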
Lyu, Nengchao; Xie, Lian; Wu, Chaozhong; Fu, Qiang; Deng, Chao
2017-01-01
Complex traffic situations and high driving workload are leading contributing factors to traffic crashes. There is a strong correlation between driving performance and driving workload, such as the visual workload from traffic signs on highway off-ramps. This study aimed to evaluate traffic safety by analyzing drivers' behavior and performance under cognitive workload in complex environment areas. First, the driving workload of drivers was tested based on traffic signs with different quantities of information. Forty-four drivers were recruited to conduct a traffic sign cognition experiment under static controlled environment conditions. Different complex traffic signs were used for applying the cognitive workload. The static experiment results reveal that workload is highly related to the amount of information on traffic signs and that reaction time increases with the information grade, while the effects of driving experience and gender are not significant. This shows that the cognitive workload of the subsequent driving experiments can be controlled by the amount of information on traffic signs. Second, driving characteristics and driving performance were analyzed under different secondary-task driving workload levels using a driving simulator. Drivers were required to drive at the required speed on a designed highway off-ramp scene. The cognitive workload was controlled by reading traffic signs with different information, divided into four levels. Drivers had to make choices by pushing buttons after reading the traffic signs. Meanwhile, driving performance information was recorded. Questionnaires on workload were collected right after each driving task. The results show that speed maintenance and lane deviations are significantly different under different levels of cognitive workload, and the effects of driving experience and gender groups are significant. 
The research results can be used to analyze traffic safety in highway environments while taking drivers' cognition and driving performance into fuller consideration. PMID:28218696
Solovieva, Anna B; Kardumian, Valeria V; Aksenova, Nadezhda A; Belovolova, Lyudmila V; Glushkov, Mikhail V; Bezrukov, Evgeny A; Sukhanov, Roman B; Kotova, Svetlana L; Timashev, Peter S
2018-05-23
Using the model process of tryptophan photooxidation in aqueous medium in the presence of a three-component photosensitizing complex (porphyrin photosensitizer-polyvinylpyrrolidone-chitosan, PPS-PVP-CT) in the temperature range of 20-40 °C, we have demonstrated that such a process can be modified by selecting different molar ratios of the components in the reaction mixture. The objective of this selection is the formation of a PPS-PVP-CT composition in which PVP macromolecules coordinate with PPS molecules and at the same time practically block the complex binding of PPS molecules with chitosan macromolecules. Such blocking allows the bactericidal properties of chitosan to be utilized to a greater extent, since chitosan is known to depress the PPS photosensitizing activity in PPS-PVP-CT complexes when those are used in photodynamic therapy (PDT). The optimal composition of the photosensitizing complexes appears to depend on the temperature at which the PDT sessions are performed. We have analyzed the correlations of the effective rate constants of tryptophan photooxidation with the photophysical characteristics of the formed complexes.
Gerstmann-Sträussler-Scheinker disease
Jones, Matthew; Odunsi, Sola; du Plessis, Daniel; Vincent, Angela; Bishop, Matthew; Head, Mark W.; Ironside, James W.
2014-01-01
Objective: To describe a unique case of Gerstmann-Sträussler-Scheinker (GSS) disease caused by a novel prion protein (PRNP) gene mutation and associated with strongly positive voltage-gated potassium channel (VGKC)-complex antibodies (Abs). Methods: Clinical data were gathered from retrospective review of the case notes. Postmortem neuropathologic examination was performed, and DNA was extracted from frozen brain tissue for full sequence analysis of the PRNP gene. Results: The patient was diagnosed in life with VGKC-complex Ab–associated encephalitis based on strongly positive VGKC-complex Ab titers but no detectable LGI1 or CASPR2 Abs. He died despite 1 year of aggressive immunosuppressive treatment. The neuropathologic diagnosis was GSS disease, and a novel mutation, P84S, in the PRNP gene was found. Conclusion: VGKC-complex Abs are described in an increasingly broad range of clinical syndromes, including progressive encephalopathies, and may be amenable to treatment with immunosuppression. However, the failure to respond to aggressive immunotherapy cautions against assuming that VGKC-complex Abs are pathogenic, and their presence does not preclude the possibility of prion disease. PMID:24814844
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem when both the object and the point-spread function (PSF) have finite support. Our approach consists of addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (and similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The numbers of descents in the object- and PSF-spaces play the role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which yields an approximation with better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach yields much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, whose iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs.
The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, apparently due to inappropriate regularization properties.
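The +SOR scheduling details are not given in the abstract; the following 1-D sketch illustrates the alternating-approximation idea using truncated, non-negativity-projected steepest descents in the object and the PSF, where the inner iteration count plays the role of the regularization parameter (the step size and counts here are illustrative assumptions):

```python
import numpy as np

def conv(a, b):
    # Linear convolution cropped to the data length (1-D toy problem).
    return np.convolve(a, b, mode="same")

def nonblind_step(x, y, d, n_iter, step):
    """A few projected steepest-descent iterations on ||x*y - d||^2 in x,
    under a non-negativity constraint; n_iter acts as the regularizer."""
    for _ in range(n_iter):
        r = conv(x, y) - d
        # Gradient w.r.t. x is the correlation of the residual with y.
        x = np.maximum(x - step * np.convolve(r, y[::-1], mode="same"), 0.0)
    return x

def blind_deconv(d, x0, y0, n_outer=50, n_inner=5, step=0.1):
    """Alternate truncated descents in the object x and the PSF y."""
    x, y = x0.copy(), y0.copy()
    for _ in range(n_outer):
        x = nonblind_step(x, y, d, n_inner, step)
        y = nonblind_step(y, x, d, n_inner, step)  # convolution commutes
    return x, y
```

Running the outer loop to (near) stagnation approximates one fixed point; gradually raising `n_inner` and restarting from that point mimics the paper's relaxation of the regularization.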
GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.
Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin
2013-07-01
Owing to its intrinsic ability to handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. To address this, we propose a new level set algorithm that simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on NVIDIA graphics processing units to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enable detection of objects with and without edges, and the 2D histogram information ensures the effectiveness of the method in noisy environments. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.
NASA Technical Reports Server (NTRS)
Gagliano, Larry; McLeod, Todd; Hovater, Mary A.
2017-01-01
Marshall performs research, integrates information, matures technologies, and enhances science to bring together a diverse portfolio of products and services of interest for Space Situational Awareness (SSA) and Space Asset Management (SAM), all of which can be accessed through partnerships with Marshall. Integrated Space Situational Awareness and Asset Management (ISSAAM) is an initiative of NASA's Marshall Space Flight Center to improve space situational awareness and space asset management through technical innovation, collaboration, and cooperation with U.S. Government agencies and the global space community. Marshall Space Flight Center provides solutions for complex issues with in-depth capabilities, a broad range of experience, and expertise unique in the world, all available in one convenient location. NASA has longstanding guidelines that are used to assess space objects. Specifically, Marshall Space Flight Center has the capabilities, facilities, and expertise to address the challenges that space objects, such as near-Earth objects (NEOs) or orbital debris, pose. ISSAAM's three-pronged approach brings together vital information and in-depth tools working simultaneously toward examining the complex problems encountered in space situational awareness. Marshall's role in managing, understanding, and planning includes many projects grouped under each prong area: Database/Analyses/Visualization; Detection/Tracking; Mitigation/Removal.
AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F
2015-01-01
Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. 
This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Solar Power System Options for the Radiation and Technology Demonstration Spacecraft
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.; Haraburda, Francis M.; Riehl, John P.
2000-01-01
The Radiation and Technology Demonstration (RTD) Mission has the primary objective of demonstrating high-power (10 kilowatts) electric thruster technologies in Earth orbit. This paper discusses the conceptual design of the RTD spacecraft photovoltaic (PV) power system and mission performance analyses. These power system studies assessed multiple options for PV arrays, battery technologies and bus voltage levels. To quantify performance attributes of these power system options, a dedicated Fortran code was developed to predict power system performance and estimate system mass. The low-thrust mission trajectory was analyzed and important Earth orbital environments were modeled. Baseline power system design options are recommended on the basis of performance, mass and risk/complexity. Important findings from parametric studies are discussed and the resulting impacts to the spacecraft design and cost.
Faint Object Spectrograph (FOS) early performance
NASA Technical Reports Server (NTRS)
Harms, Richard; Fitch, John
1991-01-01
The on-orbit performance of the HST + FOS instrument is described and illustrated with examples of initial scientific results. The effects of the spherical aberration from the misfiguring of the HST primary mirror upon isolated point sources and in complex fields such as the nuclei of galaxies are analyzed. Possible means for eliminating the effects of spherical aberration are studied. Concepts include using image enhancement software to extract maximum spatial and spectral information from the existing data as well as several options to repair or compensate for the HST's optical performance. In particular, it may be possible to install corrective optics into the HST which will eliminate the spherical aberration for the FOS and some of the other instruments. The more promising ideas and calculations of the expected improvements in performance are briefly described.
Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Reddy, C. J.
2011-01-01
This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.
Does linear separability really matter? Complex visual search is explained by simple search
Vighneshvel, T.; Arun, S. P.
2013-01-01
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822
High altitude cognitive performance and COPD interaction
Kourtidou-Papadeli, C; Papadelis, C; Koutsonikolas, D; Boutzioukas, S; Styliadis, C; Guiba-Tziampiri, O
2008-01-01
Introduction: Thousands of people work and perform every day in high-altitude environments, whether as pilots, shift workers, or mountaineers. The problem is that most of the accidents in this environment have been attributed to human error. The objective of this study was to assess complex cognitive performance as it interacts with respiratory insufficiency at an altitude of 8000 feet and to identify the potential effect of hypoxia on safe performance. Methods: Twenty subjects participated in the study, divided into two groups: Group I with mild asymptomatic chronic obstructive pulmonary disease (COPD), and Group II with normal respiratory function. Altitude was simulated at 8000 ft using gas mixtures. Results: Individuals with mild COPD experienced notable hypoxemia with significant performance decrements and an increased number of errors at cabin altitude compared to normal subjects, while their blood pressure significantly increased. PMID:19048098
Boeing Smart Rotor Full-scale Wind Tunnel Test Data Report
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Hagerty, Brandon; Salazar, Denise
2016-01-01
A full-scale helicopter smart material actuated rotor technology (SMART) rotor test was conducted in the USAF National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames. The SMART rotor system is a five-bladed MD 902 bearingless rotor with active trailing-edge flaps. The flaps are actuated using piezoelectric actuators. Rotor performance, structural loads, and acoustic data were obtained over a wide range of rotor shaft angles of attack, thrust levels, and airspeeds. The primary test objective was to acquire unique validation data for the high-performance computing analyses developed under the Defense Advanced Research Projects Agency (DARPA) Helicopter Quieting Program (HQP). Other research objectives included quantifying the ability of the on-blade flaps to achieve vibration reduction, rotor smoothing, and performance improvements. This data set of rotor performance and structural loads can be used for analytical and experimental comparison studies with other full-scale rotor systems and for validation of computer simulation models. The purpose of this final data report is to document a comprehensive, high-quality data set that includes only data points where the flap was actively controlled and each of the five flaps behaved in a similar manner.
Designing automation for human use: empirical studies and quantitative models.
Parasuraman, R
2000-07-01
An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated, and to what extent, in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered: mental workload, situation awareness, complacency, and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences, and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast range of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists of computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the quality-prediction performance in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and potential of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
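The paper's exact perceptual encoding (e.g., the PU curve) is not reproduced in the abstract; this sketch uses a simple log-luminance proxy to show how PSNR would be computed on perceptually encoded values rather than on raw absolute luminance (the luminance bounds are assumptions):

```python
import numpy as np

def encode(lum, l_min=1e-3, l_max=1e4):
    """Map absolute luminance (cd/m^2) to an 8-bit-like code value using a
    log10 curve; a crude stand-in for the perceptually uniform encodings
    used in the HDR quality literature."""
    lum = np.clip(lum, l_min, l_max)
    span = np.log10(l_max) - np.log10(l_min)
    return 255.0 * (np.log10(lum) - np.log10(l_min)) / span

def psnr_encoded(ref, test):
    """PSNR computed on perceptually encoded luminance values."""
    mse = np.mean((encode(ref) - encode(test)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

The same substitution (encode first, then compute the metric) applies to SSIM or any other LDR fidelity metric.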
Schwarzkopf, Dietrich S.; Bahrami, Bahador; Fleming, Stephen M.; Jackson, Ben M.; Goch, Tristam J. C.; Saygin, Ayse P.; Miller, Luke E.; Pappa, Katerina; Pavisic, Ivanna; Schade, Rachel N.; Noyce, Alastair J.; Crutch, Sebastian J.; O'Keeffe, Aidan G.; Schrag, Anette E.; Morris, Huw R.
2018-01-01
Background: People with Parkinson's disease (PD) who develop visuo‐perceptual deficits are at higher risk of dementia, but we lack tests that detect subtle visuo‐perceptual deficits and can be performed by untrained personnel. Hallucinations are associated with cognitive impairment and typically involve perception of complex objects. Changes in object perception may therefore be a sensitive marker of visuo‐perceptual deficits in PD. Objective: We developed an online platform to test visuo‐perceptual function. We hypothesised that (1) visuo‐perceptual deficits in PD could be detected using online tests, (2) object perception would be preferentially affected, and (3) these deficits would be caused by changes in perception rather than response bias. Methods: We assessed 91 people with PD and 275 controls. Performance was compared using classical frequentist statistics. We then fitted a hierarchical Bayesian signal detection theory model to a subset of tasks. Results: People with PD were worse than controls at object recognition, showing no deficits in other visuo‐perceptual tests. Specifically, they were worse at identifying skewed images (P < .0001); at detecting hidden objects (P = .0039); at identifying objects in peripheral vision (P < .0001); and at detecting biological motion (P = .0065). In contrast, people with PD were not worse at mental rotation or subjective size perception. Using signal detection modelling, we found this effect was driven by a change in perceptual sensitivity rather than response bias. Conclusions: Online tests can detect visuo‐perceptual deficits in people with PD, with object recognition particularly affected. Ultimately, visuo‐perceptual tests may be developed to identify at‐risk patients for clinical trials to slow PD dementia. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473691
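The hierarchical Bayesian model itself is not specified in the abstract; a minimal equal-variance signal detection sketch shows the underlying idea of separating perceptual sensitivity (d') from response bias (criterion c):

```python
from statistics import NormalDist

def sdt(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection measures: sensitivity d' and
    criterion c, with the standard 1/(2N) correction for hit or
    false-alarm rates of exactly 0 or 1."""
    z = NormalDist().inv_cdf
    n_sig = hits + misses
    n_noise = false_alarms + correct_rejections
    h = min(max(hits / n_sig, 1 / (2 * n_sig)), 1 - 1 / (2 * n_sig))
    f = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    d_prime = z(h) - z(f)          # distance between signal and noise means
    criterion = -0.5 * (z(h) + z(f))  # response bias, independent of d'
    return d_prime, criterion
```

A group difference that appears in d' but not in c is what the abstract means by "change in perceptual sensitivity rather than response bias".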
The model of the optical-electronic control system of vehicles location at level crossing
NASA Astrophysics Data System (ADS)
Verezhinskaia, Ekaterina A.; Gorbachev, Aleksei A.; Maruev, Ivan A.; Shavrygina, Margarita A.
2016-04-01
A level crossing, where a railway line crosses a road at the same level, is one of the most dangerous sections of the road network. Collisions between trains and vehicles at level crossings are a serious type of road traffic accident. The purpose of this research is to develop a complex optical-electronic system for monitoring vehicle location in the danger zone of a level crossing. The system consists of registration blocks (each including a photodetector, a lens, and an infrared-emitting diode), determinant devices, and a camera installed within the boundaries of the level crossing. The system detects objects (vehicles) by analysing the duration of object movement opposite the registration block and the level of the signal reflected from the object. The paper presents a theoretical description and an experimental study of the main principles of the system's operation. Experimental research on the system model with the selected optical-electronic components has confirmed the possibility of detecting metal objects at the required distance (0.5-2 m) under different background illuminance values.
Hippocampus, perirhinal cortex, and complex visual discriminations in rats and humans
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with perirhinal lesions were impaired and did not exhibit the normal preference for exploring the odd object. Notably, rats with hippocampal lesions exhibited the same impairment. Thus, the deficit is unlikely to illuminate functions attributed specifically to perirhinal cortex. Both lesion groups were able to acquire visual discriminations involving the same objects used in the oddity task. Patients with hippocampal damage or larger medial temporal lobe lesions were intact in a similar oddity task that allowed participants to explore objects quickly using eye movements. We suggest that humans were able to rely on an intact working memory capacity to perform this task, whereas rats (who moved slowly among the objects) needed to rely on long-term memory. PMID:25593294
Ren, Jingzheng; Liang, Hanwei; Dong, Liang; Sun, Lu; Gao, Zhiqiu
2016-08-15
Industrial symbiosis provides a novel and practical pathway to design for sustainability. A decision support tool for its verification is necessary for practitioners and policy makers, but quantitative research to date is limited. The objective of this work is to present an innovative approach for supporting decision-making in design for sustainability with the implementation of industrial symbiosis in a chemical complex. By incorporating emergy theory, the model is formulated as a multi-objective approach that can optimize both the economic benefit and the sustainable performance of the integrated industrial system. A set of emergy-based evaluation indices is designed. A multi-objective particle swarm algorithm is proposed to solve the model, and decision-makers are allowed to choose suitable solutions from the Pareto set. An illustrative case has been studied with the proposed method; a number of compromises between high profitability and high sustainability can be obtained for the decision-makers/stakeholders. Copyright © 2016 Elsevier B.V. All rights reserved.
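The emergy indices and the particle swarm update rules are not detailed in the abstract; what any such method ultimately hands to decision-makers is the set of non-dominated (Pareto) solutions, sketched here assuming both objectives (e.g., economic benefit and a sustainability index) are to be maximized:

```python
def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    (maximization) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Non-dominated subset from which stakeholders pick a compromise."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

In a MOPSO this filter is applied repeatedly to maintain an archive of non-dominated particles; the final archive is the set of "compromises between high profitability and high sustainability" the abstract refers to.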
Adaptive particle filter for robust visual tracking
NASA Astrophysics Data System (ADS)
Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai
2009-10-01
Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In a standard particle filter, the state transition model for predicting the next location of the tracked object assumes that the object's motion is constant, which cannot approximate the varying dynamics of motion changes well. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both factors can greatly degrade tracking performance. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
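The paper's exact VTM recursion and ASEA are not given in the abstract; this 1-D toy sketch conveys the idea of a particle filter whose transition model carries a recursively updated velocity (the exponential smoother standing in for the VTM, and all noise levels, are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(particles, velocity, noise=1.0):
    # Propagate every particle by the current shared velocity estimate.
    return particles + velocity + rng.normal(0.0, noise, particles.shape)

def update_weights(particles, observation, sigma=2.0):
    # Gaussian likelihood of the observed position under each particle.
    w = np.exp(-0.5 * ((particles - observation) / sigma) ** 2)
    return w / w.sum()

def track(observations, n=500, alpha=0.5):
    particles = np.full(n, observations[0]) + rng.normal(0.0, 1.0, n)
    velocity, prev_est, estimates = 0.0, observations[0], []
    for z in observations[1:]:
        particles = predict(particles, velocity)
        w = update_weights(particles, z)
        est = float(np.sum(w * particles))  # weighted-mean state estimate
        # Recursive velocity update: a simple exponential smoother stands
        # in for the paper's (unspecified) VTM recursion.
        velocity = (1 - alpha) * velocity + alpha * (est - prev_est)
        prev_est = est
        estimates.append(est)
        # Resample to counter weight degeneracy.
        idx = np.minimum(np.searchsorted(np.cumsum(w), rng.random(n)), n - 1)
        particles = particles[idx]
    return estimates
```

Because the velocity is re-estimated every frame, the filter adapts when the object accelerates instead of assuming a fixed motion model.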
Learning what matters: A neural explanation for the sparsity bias.
Hassall, Cameron D; Connor, Patrick C; Trappenberg, Thomas P; McDonald, John J; Krigolson, Olave E
2018-05-01
The visual environment is filled with complex, multi-dimensional objects that vary in their value to an observer's current goals. When faced with multi-dimensional stimuli, humans may rely on biases to learn to select those objects that are most valuable to the task at hand. Here, we show that decision making in a complex task is guided by the sparsity bias: the focusing of attention on a subset of available features. Participants completed a gambling task in which they selected complex stimuli that varied randomly along three dimensions: shape, color, and texture. Each dimension comprised three features (e.g., color: red, green, yellow). Only one dimension was relevant in each block (e.g., color), and a randomly-chosen value ranking determined outcome probabilities (e.g., green > yellow > red). Participants were faster to respond to infrequent probe stimuli that appeared unexpectedly within stimuli that possessed a more valuable feature than to probes appearing within stimuli possessing a less valuable feature. Event-related brain potentials recorded during the task provided a neurophysiological explanation for sparsity as a learning-dependent increase in optimal attentional performance (as measured by the N2pc component of the human event-related potential) and a concomitant learning-dependent decrease in prediction errors (as measured by the feedback-elicited reward positivity). Together, our results suggest that the sparsity bias guides human reinforcement learning in complex environments. Copyright © 2018 Elsevier B.V. All rights reserved.
Sunwook, Kim; Nussbaum, Maury A.; Quandt, Sara A.; Laurienti, Paul J.; Arcury, Thomas A.
2015-01-01
Objective: To assess potential chronic effects of pesticide exposure on postural control by examining the postural balance of farmworkers and non-farmworkers with diverse self-reported lifetime exposures. Methods: Balance was assessed during quiet upright stance under four experimental conditions (2 visual × 2 cognitive difficulty). Results: Significant differences in baseline balance performance (eyes open without cognitive task) between the occupational groups were apparent in postural sway complexity. When adding a cognitive task to the eyes-open condition, the influence of lifetime exposure on complexity ratios appeared to differ between occupational groups. Removing visual information revealed a negative association of lifetime exposure with complexity ratios. Conclusions: Farmworkers and non-farmworkers may use different postural control strategies even when controlling for the level of lifetime pesticide exposure. Long-term exposure can affect somatosensory/vestibular sensory systems and the central processing of sensory information for postural control. PMID:26849257
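The abstract does not define its "complexity" measure; sway complexity is commonly quantified with (multiscale) sample entropy of the center-of-pressure signal, so the following sketch of sample entropy is offered only as an illustration of that family of measures:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy of a 1-D series: -log of the conditional probability
    that subsequences matching for m points (within tolerance r, Chebyshev
    distance) also match for m + 1 points. Lower values indicate a more
    regular, less complex signal."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        n = templ.shape[0]
        return (np.sum(d <= r) - n) / 2  # matching pairs, excluding self-matches
    return -np.log(matches(m + 1) / matches(m))
```

A "complexity ratio" could then be formed, for instance, as entropy in a dual-task condition divided by entropy at baseline, though the study's exact definition is not stated in the abstract.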
Phase reconstruction using compressive two-step parallel phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith
2018-04-01
The linear relationship between the sample complex object wave and its approximated complex Fresnel field obtained using single-shot parallel phase-shifting digital holography (PPSDH) is used in a compressive sensing (CS) framework, and accurate phase reconstruction is demonstrated. It is shown that the phase reconstruction accuracy of this method is better than that of the CS-adapted single-exposure on-line holography (SEOL) method. It is derived that the measurement model of the PPSDH method retains both the real and imaginary parts of the Fresnel field, albeit with an approximation noise, whereas the measurement model of SEOL retains only the real part of the complex Fresnel field, its imaginary part being entirely unavailable. Numerical simulations performed for CS-adapted PPSDH and CS-adapted SEOL demonstrate that the phase reconstruction is accurate for CS-adapted PPSDH, which can thus be used for single-shot digital holographic reconstruction.
Cavitation, Flow Structure and Turbulence in the Tip Region of a Rotor Blade
NASA Technical Reports Server (NTRS)
Wu, H.; Miorini, R.; Soranna, F.; Katz, J.; Michael, T.; Jessup, S.
2010-01-01
Objectives: Measure the flow structure and turbulence within a naval axial waterjet pump. Create a database for benchmarking and validation of parallel computational efforts. Address flow and turbulence modeling issues that are unique to this complex environment. Measure and model flow phenomena affecting cavitation within the pump and its effect on pump performance. This presentation focuses on cavitation phenomena and associated flow structure in the tip region of a rotor blade.
Mission planning for autonomous systems
NASA Technical Reports Server (NTRS)
Pearson, G.
1987-01-01
Planning is a necessary task for intelligent, adaptive systems operating independently of human controllers. A mission planning system that performs task planning by decomposing a high-level mission objective into subtasks and synthesizing a plan for those tasks at varying levels of abstraction is discussed. Researchers use a blackboard architecture to partition the search space and direct the focus of attention of the planner. Using advanced planning techniques, they can control plan synthesis for the complex planning tasks involved in mission planning.
McDaniel, Joshua; Bass, Lynn; Pate, Toni; DeValve, Michael; Miller, Susan
2017-09-01
Background: National professional organizations have recognized pharmacists as essential members of the intensive care unit (ICU) team. Critical care pharmacists' clinical activities have been categorized as fundamental, desirable, and optimal, providing a structure for gauging the ICU pharmacy services being provided. Objective: To determine the impact of adding a second ICU pharmacist, covering 30 adult ICU beds at a large regional medical center, on the complexity of pharmacists' interventions, the types of clinical activities performed by the pharmacists, and the ICU team members' satisfaction. Methods: A prospective mixed-method descriptive study was conducted. Pharmacists recorded the interventions and clinical activities they performed. A focus group composed of randomly selected ICU team members was held to qualitatively describe the impact of the additional pharmacist coverage on patient care, team dynamics, and pharmacy services provided. Results: The baseline period consisted of 33 days, and the intervention period consisted of 20 days. The average complexity of interventions was 1.72 during the baseline period (mode = 2) versus 1.69 (mode = 2) during the intervention period. Compared with baseline, the number of desirable and optimal clinical activities performed daily increased during the intervention from 8.4 (n = 279) to 16.4 (n = 328) and from 2.3 (n = 75) to 8.6 (n = 171), respectively. Focus group members qualitatively described the additional pharmacist coverage as beneficial. Conclusion: The additional critical care pharmacist did not increase pharmacy intervention complexity; however, more interventions were performed per day. Additional pharmacist coverage increased the daily number of desirable and optimal clinical activities performed and positively impacted ICU team members' satisfaction.
Stojanoski, Bobby Boge; Niemeier, Matthias
2015-10-01
It is well known that visual expectation and attention modulate object perception. Yet, the mechanisms underlying these top-down influences are not completely understood. Event-related potentials (ERPs) indicate late contributions of expectations to object processing around the P2 or N2. This is true independent of whether people expect objects (vs. no objects) or specific shapes, hence when expectations pertain to complex visual features. However, object perception can also benefit from expecting colour information, which can facilitate figure/ground segregation. Studies on attention to colour show attention-sensitive modulations of the P1, but are limited to simple transient detection paradigms. The aim of the current study was to examine whether expecting simple features (colour information) during challenging object perception tasks produces early or late ERP modulations. We told participants to expect an object defined by predominantly black or white lines that were embedded in random arrays of distractor lines and then asked them to report the object's shape. Performance was better when colour expectations were met. ERPs revealed early and late phases of modulation. An early modulation at the P1/N1 transition arguably reflected earlier stages of object processing. Later modulations, at the P3, could be consistent with decisional processes. These results provide novel insights into feature-specific contributions of visual expectations to object perception.
Visual working memory for global, object, and part-based information.
Patterson, Michael D; Bly, Benjamin Martin; Porcelli, Anthony J; Rypma, Bart
2007-06-01
We investigated visual working memory for novel objects and parts of novel objects. After a delay period, participants showed strikingly more accurate performance recognizing a single whole object than the parts of that object. This bias to remember whole objects, rather than parts, persisted even when the division between parts was clearly defined and the parts were disconnected from each other so that, in order to remember the single whole object, the participants needed to mentally combine the parts. In addition, the bias was confirmed when the parts were divided by color. These experiments indicated that holistic perceptual-grouping biases are automatically used to organize storage in visual working memory. In addition, our results suggested that the bias was impervious to top-down consciously directed control, because when task demands were manipulated through instruction and catch trials, the participants still recognized whole objects more quickly and more accurately than their parts. This bias persisted even when the whole objects were novel and the parts were familiar. We propose that visual working memory representations depend primarily on the global configural properties of whole objects, rather than part-based representations, even when the parts themselves can be clearly perceived as individual objects. This global configural bias beneficially reduces memory load on a capacity-limited system operating in a complex visual environment, because fewer distinct items must be remembered.
Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data
Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.
2013-01-01
Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the “big data” challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719
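The partition-merge strategy described above can be sketched in miniature as a single-process grid join: partition both object layers onto a grid (the "map" side), join bounding boxes within each cell, then merge and de-duplicate pairs discovered in more than one boundary cell (the "reduce" side). This is an illustrative sketch with invented names, not the paper's MapReduce framework:

```python
from collections import defaultdict

def grid_cells(box, cell):
    """Yield grid cells overlapped by an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    for gx in range(int(x1 // cell), int(x2 // cell) + 1):
        for gy in range(int(y1 // cell), int(y2 // cell) + 1):
            yield (gx, gy)

def overlaps(a, b):
    """Bounding-box intersection test (the filter step of a spatial join)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def partition_merge_join(layer_a, layer_b, cell=10.0):
    """Partition both layers onto a grid ('map'), join inside each cell,
    then merge, de-duplicating pairs found in several boundary cells."""
    parts = defaultdict(lambda: ([], []))
    for i, box in enumerate(layer_a):
        for c in grid_cells(box, cell):
            parts[c][0].append((i, box))
    for j, box in enumerate(layer_b):
        for c in grid_cells(box, cell):
            parts[c][1].append((j, box))
    pairs = set()                      # set() performs the de-duplicating merge
    for in_a, in_b in parts.values():
        for i, box_a in in_a:
            for j, box_b in in_b:
                if overlaps(box_a, box_b):
                    pairs.add((i, j))
    return sorted(pairs)
```

In a real MapReduce deployment each cell's join would run as an independent reduce task, which is why the approach "fits nicely with the computing model of MapReduce".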
Space Suit Performance: Methods for Changing the Quality of Quantitative Data
NASA Technical Reports Server (NTRS)
Cowley, Matthew; Benson, Elizabeth; Rajulu, Sudhakar
2014-01-01
NASA is currently designing a new space suit capable of working in deep space and on Mars. Designing a suit is very difficult and often requires trade-offs between performance, cost, mass, and system complexity. To verify that new suits will enable astronauts to perform to their maximum capacity, prototype suits must be built and tested with human subjects. However, engineers and flight surgeons often have difficulty understanding and applying traditional representations of human data without training. To overcome these challenges, NASA is developing modern simulation and analysis techniques that focus on 3D visualization. Understanding actual performance early in the design cycle is extremely advantageous for increasing performance capabilities, reducing the risk of injury, and reducing costs. The primary objective of this project was to test modern simulation and analysis techniques for evaluating the performance of a human operating in extra-vehicular space suits.
Characterization of Early Partial Seizure Onset: Frequency, Complexity and Entropy
Jouny, Christophe C.; Bergey, Gregory K.
2011-01-01
Objective A clear classification of partial seizure onset features is not yet established. Complexity and entropy have been very widely used to describe dynamical systems, but a systematic evaluation of these measures to characterize partial seizures has never been performed. Methods Eighteen different measures, including power in frequency bands up to 300 Hz, Gabor atom density (GAD), Higuchi fractal dimension (HFD), Lempel-Ziv complexity, Shannon entropy, sample entropy, and permutation entropy, were selected to test sensitivity to partial seizure onset. Intracranial recordings from forty-five patients with mesial temporal, neocortical temporal and neocortical extratemporal seizure foci were included (331 partial seizures). Results GAD, Lempel-Ziv complexity, HFD, high-frequency activity, and sample entropy were the most reliable measures to assess early seizure onset. Conclusions Increases in complexity and occurrence of high-frequency components appear to be commonly associated with early stages of partial seizure evolution from all regions. The type of measure (frequency-based, complexity or entropy) does not predict the efficiency of the method to detect seizure onset. Significance Differences between measures such as GAD and HFD highlight the multimodal nature of partial seizure onsets. Improved methods for early seizure detection may be achieved from a better understanding of these underlying dynamics. PMID:21872526
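Among the measures listed, Lempel-Ziv complexity is the most compact to sketch: binarize the recording (here around its median, one common convention, not necessarily the authors') and count the distinct phrases in the LZ76 parsing:

```python
def binarize(x):
    """Binarize a signal around its median (one common convention)."""
    med = sorted(x)[len(x) // 2]
    return "".join("1" if v > med else "0" for v in x)

def lempel_ziv_complexity(s):
    """Number of distinct phrases in the LZ76 parsing of a symbol string."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # grow the current phrase while it still occurs earlier in the sequence
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1          # phrase is new: close it and start the next one
        i += length
    return count
```

For the classic example sequence 0001101001000101 the parsing yields six phrases, while a constant sequence collapses to two, which is why the count rises with the irregular, high-frequency activity seen at seizure onset.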
NASA Astrophysics Data System (ADS)
Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi
2014-07-01
Hubs are centers for collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially mitigate the complexity of existing models owing to the smaller number of constraints and variables. Also, uncertain shipments are studied in the context of hub maximal covering problems. In many real-world applications, any link on the path from origin to destination may fail to work due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as the second objective together with the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the exact number of immigrants depends on the results of the other two common NSGA-II operators, i.e. mutation and crossover. Besides validating the proposed models, computational results confirm the better performance of the modified NSGA-II versus the traditional one.
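NSGA-II's core ranking step, peeling the population into successive Pareto fronts, can be sketched compactly; the paper's dynamic immigration operator is its own contribution and is not reproduced here. Maximization of both objectives (e.g., coverage and weakest-path safety) is assumed:

```python
def dominates(a, b):
    """a Pareto-dominates b (maximization): no worse everywhere, better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated_fronts(points):
    """Peel off successive Pareto fronts, as in NSGA-II's ranking step."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # a point belongs to the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

The full algorithm then fills the next generation front by front, breaking ties within a front by crowding distance.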
Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael
2011-01-01
This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation. PMID:22247677
Cowley, Benjamin; Lukander, Kristian
2016-01-01
Background: Recognition of objects and their context relies heavily on the integrated functioning of global and local visual processing. In a realistic setting such as work, this processing becomes a sustained activity, implying a consequent interaction with executive functions. Motivation: There have been many studies of either global-local attention or executive functions; however it is relatively novel to combine these processes to study a more ecological form of attention. We aim to explore the phenomenon of global-local processing during a task requiring sustained attention and working memory. Methods: We develop and test a novel protocol for global-local dissociation, with task structure including phases of divided (“rule search”) and selective (“rule found”) attention, based on the Wisconsin Card Sorting Task (WCST). We test it in a laboratory study with 25 participants, and report on behavior measures (physiological data was also gathered, but not reported here). We develop novel stimuli with more naturalistic levels of information and noise, based primarily on face photographs, with consequently more ecological validity. Results: We report behavioral results indicating that sustained difficulty when participants test their hypotheses impacts matching-task performance, and diminishes the global precedence effect. Results also show a dissociation between subjectively experienced difficulty and objective dimension of performance, and establish the internal validity of the protocol. Contribution: We contribute an advance in the state of the art for testing global-local attention processes in concert with complex cognition. With three results we establish a connection between global-local dissociation and aspects of complex cognition. Our protocol also improves ecological validity and opens options for testing additional interactions in future work. PMID:26941689
Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.
Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J
2013-01-01
Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.
High energy PIXE: A tool to characterize multi-layer thick samples
NASA Astrophysics Data System (ADS)
Subercaze, A.; Koumeir, C.; Métivier, V.; Servagent, N.; Guertin, A.; Haddad, F.
2018-02-01
High energy PIXE is a useful and non-destructive tool to characterize multi-layer thick samples such as cultural heritage objects. In a previous work, we demonstrated the possibility to perform quantitative analysis of simple multi-layer samples using high energy PIXE, without any assumption on their composition. In this work an in-depth study of the parameters involved in the method previously published is proposed. Its extension to more complex samples with a repeated layer is also presented. Experiments have been performed at the ARRONAX cyclotron using 68 MeV protons. The thicknesses and sequences of a multi-layer sample including two different layers of the same element have been determined. Performances and limits of this method are presented and discussed.
ERIC Educational Resources Information Center
Liesefeld, Heinrich René; Fu, Xiaolan; Zimmer, Hubert D.
2015-01-01
A major debate in the mental-rotation literature concerns the question of whether objects are represented holistically during rotation. Effects of object complexity on rotational speed are considered strong evidence against such holistic representations. In Experiment 1, such an effect of object complexity was markedly present. A closer look on…
Multi-scale seismic tomography of the Merapi-Merbabu volcanic complex, Indonesia
NASA Astrophysics Data System (ADS)
Mujid Abdullah, Nur; Valette, Bernard; Potin, Bertrand; Ramdhan, Mohamad
2017-04-01
The Merapi-Merbabu volcanic complex is the most active volcano on Java Island, Indonesia, where the Indian plate subducts beneath the Eurasian plate. We present a preliminary multi-scale seismic tomography study of the substructures of the volcanic complex. The main objective of our study is to image the feeding paths of the volcanic complex at an intermediate scale by using data from the dense network (about 5 km spacing) constituted by the 53 stations of the French-Indonesian DOMERAPI experiment, complemented by data from the German-Indonesian MERAMEX project (134 stations) and from the Indonesia Tsunami Early Warning System (InaTEWS) stations located in the vicinity of the complex. The inversion was performed using the INSIGHT algorithm, which follows a non-linear least-squares approach based on a stochastic description of data and model. In total, 1883 events and 41846 phases (26647 P and 15199 S) have been processed, and a two-scale approach was adopted. The model obtained at regional scale is consistent with previous studies. We selected the most reliable regional model as a prior model for the local tomography performed with a variant of the INSIGHT code. The algorithm of this code is based on the fact that inverting differences of data while transporting the errors in probability is equivalent to inverting the initial data while introducing specific correlation terms in the data covariance matrix. The local tomography provides images of the substructure of the volcanic complex with sufficiently good resolution to allow identification of a probable magma chamber at about 20 km.
Reduction of Subjective and Objective System Complexity
NASA Technical Reports Server (NTRS)
Watson, Michael D.
2015-01-01
Occam's razor is often used in science to define the minimum criteria to establish a physical or philosophical idea or relationship. Albert Einstein is attributed the saying "everything should be made as simple as possible, but not simpler". These heuristic ideas are based on a belief that there is a minimum state or set of states for a given system or phenomenon. In looking at system complexity, these heuristics point us to the idea that complexity can be reduced to a minimum. How, then, do we approach a reduction in complexity? Complexity has been described as both a subjective concept and an objective measure of a system. Subjective complexity is based on human cognitive comprehension of the functions and interrelationships of a system; it is defined by the ability to fully comprehend the system. Simplifying complexity, in a subjective sense, is thus gaining a deeper understanding of the system. As Apple's Jonathon Ive has stated, "It's not just minimalism or the absence of clutter. It involves digging through the depth of complexity. To be truly simple, you have to go really deep". Simplicity is not the absence of complexity but a deeper understanding of complexity. Subjective complexity, based on this human comprehension, cannot then be distinguished from the sociological concept of ignorance. The inability to comprehend a system can be either a lack of knowledge, an inability to understand the intricacies of a system, or both. Reduction in this sense is based purely on a cognitive ability to understand the system, and no system then may be truly complex. From this view, education and experience seem to be the keys to reducing or eliminating complexity. Objective complexity is the measure of the system's functions and interrelationships, which exist independent of human comprehension. Jonathon Ive's statement does not say that complexity is removed, only that the complexity is understood.
From this standpoint, reduction of complexity can be approached in finding the optimal or 'best balance' of the system functions and interrelationships. This is achievable following von Bertalanffy's approach of describing systems as a set of equations representing both the system functions and the system interrelationships. Reduction is found based on an objective function defining the system output given variations in the system inputs and the system operating environment. By minimizing the objective function with respect to these inputs and environments, a reduced system can be found. Thus, a reduction of the system complexity is feasible.
Dorożyński, Przemysław; Kulinowski, Piotr; Jamróz, Witold; Juszczyk, Ewelina
2014-12-30
The objectives of the work included: presentation of a magnetic resonance imaging (MRI) and fractal analysis based approach to the comparison of dosage forms of different composition and structure, and assessment of the influence of compositional factors, i.e., matrix type, excipients, etc., on the properties and performance of the dosage form during drug dissolution. The work presents the first attempt to compare MRI data obtained for tablet formulations of different composition and characterized by distinct differences in hydration and drug dissolution mechanisms. The main difficulty in such a case stems from differences in hydration behavior and tablet geometry, i.e., swelling, cracking, capping, etc. A novel approach to the characterization of matrix systems, i.e., quantification of changes in the geometrical complexity of the matrix shape during drug dissolution, has been developed. Using three commercial modified-release tablet formulations with diclofenac sodium, we present a method for parameterizing their geometrical complexity on the basis of fractal analysis. The main result of the study is the correlation between hydrating tablet behavior and drug dissolution: the increase in geometrical complexity, expressed as fractal dimension, relates to the increased variability of drug dissolution results. Copyright © 2014 Elsevier B.V. All rights reserved.
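The fractal dimension used above as a measure of geometrical complexity is commonly estimated by box counting; the paper does not spell out its estimator, so the following is a generic illustrative version for a binary 2D mask (e.g., one MRI slice of the hydrating tablet):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary 2D mask:
    the slope of log N(s) against log(1/s), where N(s) counts occupied boxes."""
    mask = np.asarray(mask, dtype=int)
    counts = []
    for s in sizes:
        rows = np.arange(0, mask.shape[0], s)
        cols = np.arange(0, mask.shape[1], s)
        # block-sum the mask into s-by-s boxes, then count non-empty boxes
        boxes = np.add.reduceat(np.add.reduceat(mask, rows, axis=0), cols, axis=1)
        counts.append(int((boxes > 0).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, dtype=float)),
                          np.log(counts), 1)
    return slope
```

A filled region scores close to 2 and a smooth line close to 1; an increasingly cracked or capped tablet boundary drifts upward between these, which is the sense in which the fractal dimension tracks geometrical complexity.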
Anderson, Jeffrey R; Barrett, Steven F
2009-01-01
Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2] requiring user input have been developed to quickly segment objects in serially sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double-threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray-scale range of the object of interest. The lower and upper bounds of this range are used in a histogram-stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray-scale values, while observing the corresponding changes to the image.
This interactive approach gives the user the power to make optimal choices in the contrast enhancement parameters.
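The two contrast manipulations described, histogram stretching over a user-selected gray range followed by gamma correction, reduce to a few lines. A minimal sketch with assumed parameter names and output normalized to [0, 1]:

```python
import numpy as np

def stretch_and_gamma(img, low, high, gamma=1.0):
    """Linearly map the user-selected gray range [low, high] to [0, 1]
    (histogram stretching), clip values outside it, then apply the
    non-linear gamma correction out = in ** gamma."""
    x = (np.asarray(img, dtype=float) - low) / float(high - low)
    x = np.clip(x, 0.0, 1.0)
    return x ** gamma
```

Gamma below 1 brightens mid-tones and above 1 darkens them; a GUI like the one described would simply re-run this mapping as the user drags the bounds or the gamma slider.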
SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems, due to the complex nature of the problems, the need for complex assessments, and complicated ...
Structure, thermodynamics, and solubility in tetromino fluids.
Barnes, Brian C; Siderius, Daniel W; Gelb, Lev D
2009-06-16
To better understand the self-assembly of small molecules and nanoparticles adsorbed at interfaces, we have performed extensive Monte Carlo simulations of a simple lattice model based on the seven hard "tetrominoes", connected shapes that occupy four lattice sites. The equations of state of the pure fluids and all of the binary mixtures are determined over a wide range of density, and a large selection of multicomponent mixtures are also studied at selected conditions. Calculations are performed in the grand canonical ensemble and are analogous to real systems in which molecules or nanoparticles reversibly adsorb to a surface or interface from a bulk reservoir. The model studied is athermal; objects in these simulations avoid overlap but otherwise do not interact. As a result, all of the behavior observed is entropically driven. The one-component fluids all exhibit marked self-ordering tendencies at higher densities, with quite complex structures formed in some cases. Significant clustering of objects with the same rotational state (orientation) is also observed in some of the pure fluids. In all of the binary mixtures, the two species are fully miscible at large scales, but exhibit strong species-specific clustering (segregation) at small scales. This behavior persists in multicomponent mixtures; even in seven-component mixtures of all the shapes there is significant association between objects of the same shape. To better understand these phenomena, we calculate the second virial coefficients of the tetrominoes and related quantities, extract thermodynamic volume of mixing data from the simulations of binary mixtures, and determine Henry's law solubilities for each shape in a variety of solvents. 
The overall picture obtained is one in which complementarity of both the shapes of individual objects and the characteristic structures of different fluids are important in determining the overall behavior of a fluid of a given composition, with sometimes counterintuitive results. Finally, we note that no sharp phase transitions are observed but that this appears to be due to the small size of the objects considered. It is likely that complex phase behavior may be found in systems of larger polyominoes.
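The athermal hard-core mechanic described above can be illustrated with a toy grand-canonical lattice sketch. Everything below is hypothetical: one fixed rotation per tetromino, periodic boundaries, and deliberately simplified insertion/deletion acceptance rules rather than the exact grand-canonical ratios. It shows only the core idea that all structure must be entropic, because the sole interaction is overlap rejection:

```python
import random

# The seven tetrominoes as (row, col) offsets; one fixed rotation each
# (a full study would enumerate all distinct rotations).
SHAPES = {
    "I": [(0, 0), (0, 1), (0, 2), (0, 3)],
    "O": [(0, 0), (0, 1), (1, 0), (1, 1)],
    "T": [(0, 0), (0, 1), (0, 2), (1, 1)],
    "S": [(0, 1), (0, 2), (1, 0), (1, 1)],
    "Z": [(0, 0), (0, 1), (1, 1), (1, 2)],
    "L": [(0, 0), (1, 0), (2, 0), (2, 1)],
    "J": [(0, 1), (1, 1), (2, 1), (2, 0)],
}

def cells(shape, r, c, size):
    """Lattice sites covered by a tetromino anchored at (r, c), periodic."""
    return [((r + dr) % size, (c + dc) % size) for dr, dc in SHAPES[shape]]

def gcmc(size=20, activity=0.5, steps=20000, seed=1):
    """Toy grand-canonical MC for an athermal (hard) tetromino fluid.
    Insertions succeed only if no site overlaps; deletions always succeed
    (simplified acceptance, for illustration only)."""
    rng = random.Random(seed)
    occupied = {}      # site -> particle id
    particles = {}     # particle id -> (shape, r, c)
    next_id = 0
    occ_sum = 0
    for step in range(steps):
        if rng.random() < 0.5:                       # insertion attempt
            shape = rng.choice(list(SHAPES))
            r, c = rng.randrange(size), rng.randrange(size)
            sites = cells(shape, r, c, size)
            if all(s not in occupied for s in sites) and rng.random() < activity:
                for s in sites:
                    occupied[s] = next_id
                particles[next_id] = (shape, r, c)
                next_id += 1
        elif particles:                              # deletion attempt
            pid = rng.choice(list(particles))
            shape, r, c = particles.pop(pid)
            for s in cells(shape, r, c, size):
                del occupied[s]
        if step >= steps // 2:                       # sample second half
            occ_sum += len(occupied)
    return occ_sum / (steps - steps // 2) / size**2  # mean packing fraction

density = gcmc(size=12, activity=0.5, steps=4000, seed=2)
```

The returned quantity is a time-averaged packing fraction; a real study would also enumerate rotations, use proper acceptance ratios, and sample far longer.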
RF tomography of metallic objects in free space: preliminary results
NASA Astrophysics Data System (ADS)
Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher
2015-05-01
RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Laboratory. To develop an RF tomographic imaging system for the facility, preliminary experiments were performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses were used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomographic reconstruction. The traditional matched-filter algorithm and the truncated singular value decomposition (SVD) algorithm were compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and the scattering points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of object shape, which is not achievable with the matched-filter and truncated SVD algorithms.
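Truncated SVD regularization of the kind compared above can be sketched generically (this is not the authors' code; the operator `A`, its conditioning, and the truncation rank `k` are invented for illustration):

```python
import numpy as np

def truncated_svd_solve(A, b, k):
    """Solve A x ~ b keeping only the k largest singular values,
    suppressing the noise amplified by the small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # s sorted descending
    s_inv = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Hypothetical ill-conditioned "imaging" operator and noisy data,
# standing in for the scattering matrix of the tomography problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30)) * np.logspace(0, -8, 30)  # column scaling
x_true = rng.standard_normal(30)
b = A @ x_true + 1e-6 * rng.standard_normal(40)
x_k = truncated_svd_solve(A, b, k=10)
```

Truncation trades resolution for stability: the discarded small singular values would otherwise amplify measurement noise by up to eight orders of magnitude in this toy operator.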
High temperature plasma in beta Lyrae, observed from Copernicus
NASA Technical Reports Server (NTRS)
Kondo, Y.; Hack, M.; Hutchings, J. B.; Mccluskey, G. E., Jr.; Plavec, M.; Polidan, R. S.
1975-01-01
High-resolution UV spectrophotometry of the complex close binary system beta Lyrae was performed with a telescope spectrometer on board Copernicus. Observations were made at phases 0.0, 0.25, 0.5, and 0.75 with resolutions of 0.2 A (far-UV) and 0.4 A (mid-UV). The far-UV spectrum is completely dominated by emission lines indicating the existence of a high-temperature plasma in this binary. The spectrum of this object is unlike that of any other object observed from Copernicus. It is believed that this high-temperature plasma results from dynamic mass transfer taking place in the binary. The current results are compared with OAO-2 observations and other observational results. The possibility that the secondary component is a collapsed object is also discussed; the Copernicus observations are consistent with the hypothesis that the spectroscopically invisible secondary component is a black hole.
Optimising, generalising and integrating educational practice using neuroscience
NASA Astrophysics Data System (ADS)
Colvin, Robert
2016-07-01
Practical collaboration at the intersection of education and neuroscience research is difficult because the combined discipline encompasses both the activity of microscopic neurons and the complex social interactions of teachers and students in a classroom. Taking a pragmatic view, this paper discusses three education objectives to which neuroscience can be effectively applied: optimising, generalising and integrating instructional techniques. These objectives are characterised by: (1) being of practical importance; (2) building on existing education and cognitive research; and (3) being infeasible to address based on behavioural experiments alone. The focus of the neuroscientific aspect of collaborative research should be on the activity of the brain before, during and after learning a task, as opposed to performance of a task. The objectives are informed by literature that highlights possible pitfalls with educational neuroscience research, and are described with respect to the static and dynamic aspects of brain physiology that can be measured by current technology.
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Multi-Objective Hybrid Optimal Control for Interplanetary Mission Planning
NASA Technical Reports Server (NTRS)
Englander, Jacob; Vavrina, Matthew; Ghosh, Alexander
2015-01-01
Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen which defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather in the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on a hypothetical mission to the main asteroid belt.
Halas, Nancy J.; Nordlander, Peter; Neumann, Oara
2017-01-17
A system including a steam generation system and a chamber. The steam generation system includes a complex and is configured to receive water, concentrate electromagnetic (EM) radiation received from an EM radiation source, apply the EM radiation to the complex, where the complex absorbs the EM radiation to generate heat, and transform, using the heat generated by the complex, the water to steam. The chamber is configured to receive the steam and an object, wherein the object is one of medical waste, medical equipment, fabric, and fecal matter.
Halas, Nancy J.; Nordlander, Peter; Neumann, Oara
2015-12-29
A system including a steam generation system and a chamber. The steam generation system includes a complex and is configured to receive water, concentrate electromagnetic (EM) radiation received from an EM radiation source, apply the EM radiation to the complex, where the complex absorbs the EM radiation to generate heat, and transform, using the heat generated by the complex, the water to steam. The chamber is configured to receive the steam and an object, wherein the object is one of medical waste, medical equipment, fabric, and fecal matter.
Systemic estimation of the effect of photodynamic therapy of cancer
NASA Astrophysics Data System (ADS)
Kogan, Eugenia A.; Meerovich, Gennadii A.; Torshina, Nadezgda L.; Loschenov, Victor B.; Volkova, Anna I.; Posypanova, Anna M.
1997-12-01
The effects of photodynamic therapy (PDT) of cancer need objective estimation, unified across experimental as well as clinical studies. Such estimation must include not only macroscopical changes but also the following complex of morphological criteria: (1) the level of direct tumor damage (direct necrosis and apoptosis); (2) the level of indirect tumor damage (ischemic necrosis); (3) the signs of vascular alterations; (4) the local and systemic antiblastoma resistance; (5) the proliferative activity and malignant potential of the surviving tumor tissue. We performed PDT under different regimes using phthalocyanine derivatives. A complex of morphological methods, including immunostaining for Ki-67, p53, c-myc, and bcl-2, was used. The results obtained showed the connection of the listed morphological criteria with tumor regression.
A statistical learning strategy for closed-loop control of fluid flows
NASA Astrophysics Data System (ADS)
Guéniat, Florimond; Mathelin, Lionel; Hussaini, M. Yousuff
2016-12-01
This work discusses a closed-loop control strategy for complex systems utilizing scarce and streaming data. A discrete embedding space is first built by applying hash functions to the sensor measurements, from which a Markov process model is derived, approximating the complex system's dynamics. A control strategy is then learned using reinforcement learning, once rewards relevant to the control objective are identified. This method is designed for experimental configurations, requiring neither computations nor prior knowledge of the system, and enjoys intrinsic robustness. It is illustrated on two systems: the control of the transitions of a Lorenz '63 dynamical system, and the control of the drag of a cylinder flow. The method is shown to perform well.
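A minimal stand-in for this pipeline, hashing sensor readings into discrete states and then learning with tabular Q-learning, might look as follows. The scalar plant `make_env`, the discretization, and all hyperparameters are invented; the paper's Markov-model construction and reward identification are not reproduced:

```python
import random
from collections import defaultdict

def discretize(obs, bins=10):
    """Hash continuous sensor readings into a discrete state
    (a stand-in for the paper's hash-function embedding)."""
    return hash(tuple(int(round(x * bins)) for x in obs))

def q_learning(env_reset, env_step, actions, episodes=200, horizon=50,
               alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Model-free tabular Q-learning over the hashed states."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        s = discretize(env_reset())
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.choice(actions)                     # explore
            else:
                a = max(actions, key=lambda u: Q[(s, u)])   # exploit
            obs, reward, done = env_step(a)
            s2 = discretize(obs)
            best = max(Q[(s2, u)] for u in actions)
            Q[(s, a)] += alpha * (reward + gamma * best - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

def make_env(seed=1):
    """Hypothetical scalar plant: actions nudge the state; the reward
    penalizes distance from zero (a cartoon of e.g. drag reduction)."""
    rng = random.Random(seed)
    state = [0.0]
    def reset():
        state[0] = rng.uniform(-1.0, 1.0)
        return (state[0],)
    def step(a):
        state[0] += 0.1 * a + 0.02 * rng.uniform(-1.0, 1.0)
        return (state[0],), -abs(state[0]), abs(state[0]) > 2.0
    return reset, step

reset, step = make_env()
Q = q_learning(reset, step, actions=[-1, 0, 1])
```

The learned table maps (hashed state, action) pairs to value estimates; the controller simply takes the arg-max action in each hashed state.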
NASA Astrophysics Data System (ADS)
Yan, Beichuan; Regueiro, Richard A.
2018-02-01
A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward in the design of the parallel algorithm, and theoretical functions for the scalability and memory usage of the 3D DEM are derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as: minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic objects of particles between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided, and they demonstrate high speedup and excellent scalability. It is also discovered that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value both in laboratory studies on the micromechanical properties of granular materials and in many realistic engineering applications involving granular materials.
Feature integration and object representations along the dorsal stream visual hierarchy
Perry, Carolyn Jeane; Fallah, Mazyar
2014-01-01
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147
Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim
2012-01-01
The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
Sample Dimensionality Effects on d' and Proportion of Correct Responses in Discrimination Testing.
Bloom, David J; Lee, Soo-Yeun
2016-09-01
Products in the food and beverage industry have varying levels of dimensionality ranging from pure water to multicomponent food products, which can modify sensory perception and possibly influence discrimination testing results. The objectives of the study were to determine the impact of (1) sample dimensionality and (2) complex formulation changes on the d' and proportion of correct response of the 3-AFC and triangle methods. Two experiments were conducted using 47 prescreened subjects who performed either triangle or 3-AFC test procedures. In Experiment I, subjects performed 3-AFC and triangle tests using model solutions with different levels of dimensionality. Samples increased in dimensionality from 1-dimensional sucrose in water solution to 3-dimensional sucrose, citric acid, and flavor in water solution. In Experiment II, subjects performed 3-AFC and triangle tests using 3-dimensional solutions. Sample pairs differed in all 3 dimensions simultaneously to represent complex formulation changes. Two forms of complexity were compared: dilution, where all dimensions decreased in the same ratio, and compensation, where a dimension was increased to compensate for a reduction in another. The proportion of correct responses decreased for both methods when the dimensionality was increased from 1- to 2-dimensional samples. No reduction in correct responses was observed from 2- to 3-dimensional samples. No significant differences in d' were demonstrated between the 2 methods when samples with complex formulation changes were tested. Results reveal an impact on proportion of correct responses due to sample dimensionality and should be explored further using a wide range of sample formulations. © 2016 Institute of Food Technologists®
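For the 3-AFC method, the Thurstonian relation between d' and the proportion of correct responses can be evaluated directly; a sketch using simple trapezoidal quadrature follows (the triangle method's psychometric function is more involved and is omitted here):

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pc_3afc(d, lo=-8.0, hi=8.0, n=2000):
    """P(correct) for 3-AFC under the Thurstonian model:
    pc = integral of phi(z - d) * Phi(z)**2 dz  (trapezoidal rule)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        z = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(z - d) * Phi(z) ** 2
    return total * h

def dprime_3afc(pc):
    """Invert pc_3afc by bisection on d' in [0, 6]."""
    lo, hi = 0.0, 6.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pc_3afc(mid) < pc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `pc_3afc(0.0)` recovers the chance level of 1/3, and inverting an observed proportion correct with `dprime_3afc` gives the corresponding d'.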
Statistical Field Estimation for Complex Coastal Regions and Archipelagos (PREPRINT)
2011-04-09
We extend a multiscale Objective Analysis (OA) approach to complex coastal regions and archipelagos, and study the computational properties of the resulting schemes. The multiscale free-surface code builds on the primitive-equation model of the Harvard Ocean Prediction System (HOPS, Haley et al. (2009)).
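At its core, Objective Analysis is Gauss-Markov (optimal) interpolation of scattered observations onto a grid. A single-scale sketch with a made-up Gaussian covariance, length scale, and noise level (the paper's multiscale coastal scheme is far richer) is:

```python
import numpy as np

def oa_estimate(obs_xy, obs_val, grid_xy, L=1.0, noise=0.1):
    """Gauss-Markov objective analysis: project scattered observations
    onto grid points using an assumed Gaussian covariance of scale L."""
    def cov(a, b):
        # Squared distances between every pair of points in a and b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * L ** 2))
    # Observation covariance, regularized by the assumed noise variance
    C = cov(obs_xy, obs_xy) + noise * np.eye(len(obs_xy))
    return cov(grid_xy, obs_xy) @ np.linalg.solve(C, obs_val)

# Three hypothetical observations interpolated onto one grid point
obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs_val = np.array([1.0, 2.0, 3.0])
grid_xy = np.array([[0.5, 0.5]])
field = oa_estimate(obs_xy, obs_val, grid_xy)
```

The estimate is a covariance-weighted blend of the observations, shrunk toward the zero background as the noise level grows.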
Unrewarded Object Combinations in Captive Parrots
Auersperg, Alice Marie Isabel; Oswald, Natalie; Domanegg, Markus; Gajdon, Gyula Koppany; Bugnyar, Thomas
2015-01-01
In primates, complex object combinations during play are often regarded as precursors of functional behavior. Here we investigate combinatory behaviors during unrewarded object manipulation in seven parrot species, including kea, African grey parrots and Goffin cockatoos, three species previously used as model species for technical problem solving. We further examine a habitually tool using species, the black palm cockatoo. Moreover, we incorporate three neotropical species, the yellow- and the black-billed Amazon and the burrowing parakeet. Paralleling previous studies on primates and corvids, free object-object combinations and complex object-substrate combinations such as inserting objects into tubes/holes or stacking rings onto poles prevailed in the species previously linked to advanced physical cognition and tool use. In addition, free object-object combinations were intrinsically structured in Goffin cockatoos and in kea. PMID:25984564
Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric
2010-08-01
When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object's dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dexterous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was due neither to task complexity nor to manual tracking itself. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
Internal kinematic and physical properties in a BCD galaxy: Haro 15 in detail
NASA Astrophysics Data System (ADS)
Firpo, V.; Bosch, G.; Hägele, G. F.; Díaz, A. I.; Morrell, N.
2011-11-01
We present a detailed study of the kinematic and physical properties of the ionized gas in multiple knots of the blue compact dwarf galaxy Haro 15. Using echelle and long-slit spectroscopy data obtained with different instruments at Las Campanas Observatory, we study the internal kinematics and physical conditions (electron density and temperature), ionic and total chemical abundances of several atoms, reddening, and ionization structure. Applying direct and empirical methods of abundance determination, we perform a comparative analysis between these regions and their different components. In addition, our echelle spectra show complex kinematics in several conspicuous knots within the galaxy. To perform an in-depth 2D spectroscopic study, we complement this work with high spatial and spectral resolution spectroscopy using the Integral Field Unit mode of the Gemini Multi-Object Spectrograph at the Gemini South telescope. With these data we are able to resolve the complex kinematic structure within the star-forming knots of the Haro 15 galaxy.
Trauma and emergency surgery: an evolutionary direction for trauma surgeons.
Scherer, Lynette A; Battistella, Felix D
2004-01-01
The success of nonoperative management of injuries has diminished the operative experience of trauma surgeons. To enhance operative experience, our trauma surgeons began caring for all general surgery emergencies. Our objective was to characterize and compare the experience of our trauma surgeons with that of our general surgeons. We reviewed records to determine case diversity, complexity, time of operation, need for intensive care unit care, and payor mix for patients treated by the trauma and emergency surgery (TES) surgeons and elective practice general surgery (ELEC) surgeons over a 1-year period. TES and ELEC surgeons performed 253 +/- 83 and 234 +/- 40 operations per surgeon, respectively (p = 0.59). TES surgeons admitted more patients and performed more after-hours operations than their ELEC colleagues. Both groups had a mix of cases that was diverse and complex. Combining the care of patients with trauma and general surgery emergencies resulted in a breadth and scope of practice for TES surgeons that compared well with that of ELEC surgeons.
Morganti, Pierfrancesco; Palombo, Paolo; Palombo, Marco; Fabrizi, Giuseppe; Cardillo, Antonio; Svolacchia, Fabiano; Guevara, Luis; Mezzana, Paolo
2012-01-01
Background: The reduction of mortality worldwide has led older individuals to seek intervention modalities to improve their appearance and reverse signs of aging. Objective: We formulated a medical device as innovative block-polymer nanoparticles based on phosphatidylcholine, hyaluronan, and chitin nanofibrils entrapping amino acids, vitamins, and melatonin. Methods: Viability and collagen synthesis were assessed in ex vivo fibroblast cultures, while adenosine triphosphate production was evaluated in keratinocyte cultures. Subjective and objective evaluations were performed in vivo on selected volunteer patients. Results: In accordance with our previous studies, both the in vitro and in vivo results demonstrate the efficacy of the injected block-polymer nanoparticles in reducing skin wrinkling and ameliorating the signs of aging. PMID:23293530
Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.
Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C
2018-01-01
We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.
Soil mercury levels in the area surrounding the Cerro Prieto geothermal complex, MEXICO.
Pastrana-Corral, M A; Wakida, F T; García-Flores, E; Rodriguez-Mendivil, D D; Quiñonez-Plaza, A; Piñon-Colin, T D J
2016-08-01
Even though geothermal energy is a renewable energy source that is seen as cost-effective and environmentally friendly, emissions from geothermal plants can impact air, soil, and water in the vicinity of geothermal power plants. The Cerro Prieto geothermal complex is located 30 km southeast of the city of Mexicali in the Mexican state of Baja California. Its installed electricity generation capacity is 720 MW, being the largest geothermal complex in Mexico. The objective of this study was to evaluate whether the emissions generated by the geothermal complex have increased the soil mercury concentration in the surrounding areas. Fifty-four surface soil samples were collected from the perimeter up to an approximate distance of 7660 m from the complex. Additionally, four soil depth profiles were performed in the vicinity of the complex. Mercury concentration in 69 % of the samples was higher than the mercury concentration found at the baseline sites. The mercury concentration ranged from 0.01 to 0.26 mg/kg. Our results show that the activities of the geothermal complex have led to an accumulation of mercury in the soil of the surrounding area. More studies are needed to determine the risk to human health and the ecosystems in the study area.
Interactive collision detection for deformable models using streaming AABBs.
Zhang, Xinyu; Kim, Young J
2007-01-01
We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding-volume-hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to obtain only the computed results (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining the overlapping AABBs, we perform primitive-level (e.g., triangle) intersection checking on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the nVIDIA GeForce 7800 GTX, for streaming computations, and Intel Dual Core 3.4 GHz processors for serial computations. We benchmarked our algorithm with different models of varying complexity, ranging from 15K up to 50K triangles, under various deformation motions, and the timings were obtained as 30 to 100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE [4], and observed about a three-times performance improvement over the earlier approach.
We also made comparisons with a software-based AABB culling algorithm [2] and observed about a two-times improvement.
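The per-pair test underlying the broad phase is interval overlap on each axis; a serial sketch (the GPU implementation streams these tests massively in parallel, which this toy version does not attempt) is:

```python
def overlaps(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    Two AABBs overlap iff their intervals overlap on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def broad_phase(boxes_a, boxes_b):
    """All-pairs broad-phase test between two objects' AABB streams;
    returns candidate index pairs for exact primitive-level checking."""
    return [(i, j) for i, a in enumerate(boxes_a)
            for j, b in enumerate(boxes_b) if overlaps(a, b)]
```

Only the surviving pairs proceed to the exact (e.g., triangle-triangle) narrow-phase test, which is what makes the cheap AABB test worthwhile.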
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
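The run-time autotuning component described above can be pictured as a timing loop over candidate configurations. The sketch below is a minimal illustration of that idea; the `autotune` helper, the toy kernel, and the candidate block sizes are all hypothetical and are not part of CUSH's actual API.

```python
import time

def autotune(kernel, configs, *args):
    """Time each candidate run-time configuration and return the fastest.

    `kernel` is any callable taking (config, *args); `configs` is a list of
    candidate configurations (e.g. GPU block sizes or tile widths).
    """
    best_cfg, best_t = None, float("inf")
    for cfg in configs:
        t0 = time.perf_counter()
        kernel(cfg, *args)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best_cfg, best_t = cfg, elapsed
    return best_cfg

# Toy "kernel": a blocked summation whose cost depends on the block size.
def toy_kernel(block_size, n):
    s = 0
    for i in range(0, n, block_size):
        s += sum(range(i, min(i + block_size, n)))
    return s

best = autotune(toy_kernel, [1, 64, 256], 10_000)
```

In a real autotuner the search space is usually too large for exhaustive timing, so heuristic or model-guided search replaces the simple loop; the structure, however, stays the same.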
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
NASA Astrophysics Data System (ADS)
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool (or tools) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. 
The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
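The idea of moving out of the inferior (dominated) decision space rests on Pareto dominance, which can be sketched directly. The (cost, shortage) portfolio numbers below are invented for illustration and do not come from the CSU study.

```python
def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    (minimization) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions (the approximate Pareto set)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Hypothetical trade-off: (cost, shortage) pairs for candidate water portfolios.
portfolios = [(4, 9), (5, 5), (7, 3), (8, 3), (9, 1)]
front = pareto_front(portfolios)  # (8, 3) is dominated by (7, 3) and drops out
```

Planners then deliberate only over the surviving trade-off set rather than the full candidate space.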
Reach on sound: a key to object permanence in visually impaired children.
Fazzi, Elisa; Signorini, Sabrina Giovanna; Bomba, Monica; Luparia, Antonella; Lanners, Josée; Balottin, Umberto
2011-04-01
The capacity to reach for an object presented through a sound cue indicates, in the blind child, the acquisition of object permanence and gives information about his or her cognitive development. To assess cognitive development in congenitally blind children with or without multiple disabilities. Cohort study. Thirty-seven congenitally blind subjects (17 with associated multiple disabilities, 20 mainly blind) were enrolled. We used Bigelow's protocol to evaluate "reach on sound" capacity over time (at 6, 12, 18, 24, and 36 months), and a battery of clinical, neurophysiological and cognitive instruments to assess clinical features. Tasks 1 to 5 were acquired by most of the mainly blind children by 12 months of age. Task 6 coincided with a drop in performance, and the acquisition of the subsequent tasks showed a less age-homogeneous pattern. In blind children with multiple disabilities, task acquisition rates were lower, with the curves dipping in relation to the more complex tasks. The mainly blind subjects managed to overcome Fraiberg's "conceptual problem"--i.e., they acquired the ability to attribute an external object with identity and substance even when it manifested its presence through sound only--and thus developed the ability to reach for an object presented through sound. Instead, most of the blind children with multiple disabilities showed poor performance on the "reach on sound" protocol and were unable, before 36 months of age, to develop the strategies needed to resolve Fraiberg's "conceptual problem". Copyright © 2011 Elsevier Ltd. All rights reserved.
Effect of Simvastatin on Cognitive Functioning in Children With Neurofibromatosis Type 1
Krab, Lianne C.; de Goede-Bolder, Arja; Aarsen, Femke K.; Pluijm, Saskia M. F.; Bouman, Marlies J.; van der Geest, Jos N.; Lequin, Maarten; Catsman, Coriene E.; Arts, Willem Frans M.; Kushner, Steven A.; Silva, Alcino J.; de Zeeuw, Chris I.; Moll, Henriëtte A.; Elgersma, Ype
2009-01-01
Context Neurofibromatosis type 1 (NF1) is among the most common genetic disorders that cause learning disabilities. Recently, it was shown that statin-mediated inhibition of 3-hydroxy-3-methylglutaryl coenzyme A reductase restores the cognitive deficits in an NF1 mouse model. Objective To determine the effect of simvastatin on neuropsychological, neurophysiological, and neuroradiological outcome measures in children with NF1. Design, Setting, and Participants Sixty-two of 114 eligible children (54%) with NF1 participated in a randomized, double-blind, placebo-controlled trial conducted between January 20, 2006, and February 8, 2007, at an NF1 referral center at a Dutch university hospital. Intervention Simvastatin or placebo treatment once daily for 12 weeks. Main Outcome Measures Primary outcomes were scores on a Rey complex figure test (delayed recall), cancellation test (speed), prism adaptation, and the mean brain apparent diffusion coefficient based on magnetic resonance imaging. Secondary outcome measures were scores on the cancellation test (standard deviation), Stroop color word test, block design, object assembly, Rey complex figure test (copy), Beery developmental test of visual-motor integration, and judgment of line orientation. Scores were corrected for baseline performance, age, and sex. Results No significant differences were observed between the simvastatin and placebo groups on any primary outcome measure: Rey complex figure test (β=0.10; 95% confidence interval [CI], −0.36 to 0.56); cancellation test (β=−0.19; 95% CI, −0.67 to 0.29); prism adaptation (odds ratio=2.0; 95% CI, 0.55 to 7.37); and mean brain apparent diffusion coefficient (β=0.06; 95% CI, −0.07 to 0.20). In the secondary outcome measures, we found a significant improvement in the simvastatin group in object assembly scores (β=0.54; 95% CI, 0.08 to 1.01), which was specifically observed in children with poor baseline performance (β =0.80; 95% CI, 0.29 to 1.30). 
Other secondary outcome measures revealed no significant effect of simvastatin treatment. Conclusion In this 12-week trial, simvastatin did not improve cognitive function in children with NF1. PMID:18632543
Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph
2012-06-22
Filamentous fungi are versatile cell factories widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. In fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive incoming-component inspection and quality control, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media, based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.
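Redundant mass balancing rests on closure checks such as the carbon balance: carbon entering with the substrates should be recovered in biomass, product, and CO2. The sketch below is a generic illustration of such a closure check with invented C-mol numbers, not the authors' actual reconciliation procedure.

```python
def carbon_recovery(substrate_cmol, product_cmols):
    """Fraction of substrate carbon recovered in the measured products.

    In redundant mass balancing, a recovery close to 1.0 indicates that the
    measured uptake and formation rates are mutually consistent; a large gap
    flags measurement error or an unmeasured carbon source, such as an
    uncharacterized complex-media component.
    """
    return sum(product_cmols) / substrate_cmol

# Invented example: 10 C-mol of glucose split into biomass, product, and CO2.
recovery = carbon_recovery(10.0, [5.2, 0.8, 3.5])  # 0.95, i.e. a 5% gap
```

With redundant measurements, the same rate can be computed from more than one balance, and disagreement between the estimates localizes the inconsistent measurement.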
Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure.
Mickelson, Robin S; Unertl, Kim M; Holden, Richard J
2016-10-12
Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. We identified 5 high-level macrocognitive processes affecting medication management (sensemaking, planning, coordination, monitoring, and decision making) and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation.
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and manipulations of a large amount of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than numeric complexity, has become the major software design problem. Object-oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object-oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The object-oriented features of the CLIPS Object Oriented Language (COOL) are more versatile than those of C++. A software design methodology appropriate for engineering software, based on object-oriented and procedural approaches and implemented in CLIPS, was outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.
A Survey of Complex Object Technologies for Digital Libraries
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Argue, Brad; Efron, Miles; Denn, Sheila; Pattuelli, Maria Cristina
2001-01-01
Many early web-based digital libraries (DLs) had implicit assumptions reflected in their architecture that the unit of focus in the DL (frequently "reports" or "e-prints") would only be manifested in a single, or at most a few, common file formats such as PDF or PostScript. DLs have now matured to the point where their contents are commonly no longer simple files. Complex objects in DLs have emerged in response to various requirements, including: simple aggregation of formats and supporting files, bundling additional information to aid digital preservation, creating opaque digital objects for e-commerce applications, and the incorporation of dynamic services with the traditional data files. We examine a representative (but not necessarily exhaustive) set of current and recent web-based complex object technologies and projects that are applicable to DLs: Aurora, Buckets, ComMentor, Cryptolopes, Digibox, Document Management Alliance, FEDORA, Kahn-Wilensky Framework Digital Objects, Metadata Encoding & Transmission Standard, Multivalent Documents, Open eBooks, VERS Encapsulated Objects, and the Warwick Framework.
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid particle swarm optimization (PSO) algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply-voltage levels by sacrificing clock frequency. Operating at multiple voltages involves a trade-off between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
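The DVFS trade-off exploited above follows from the dynamic-power relation P ≈ C·V²·f: lowering the supply voltage (and with it the feasible clock frequency) cuts energy roughly with the square of the voltage while stretching runtime. The sketch below uses invented processor levels and a unit capacitance; it is a textbook energy model, not the paper's scheduler.

```python
def dvfs_energy(cycles, voltage, freq, capacitance=1.0):
    """Energy (J) and runtime (s) of a task of `cycles` cycles at one DVFS level.

    Dynamic power P ~ C * V^2 * f, runtime T = cycles / f, energy E = P * T,
    so E ~ C * V^2 * cycles: energy falls quadratically with voltage while
    runtime grows as the frequency is reduced.
    """
    power = capacitance * voltage ** 2 * freq
    runtime = cycles / freq
    return power * runtime, runtime

# Same 1e9-cycle task at two hypothetical (voltage, frequency) levels.
e_hi, t_hi = dvfs_energy(1e9, voltage=1.2, freq=2e9)  # fast, energy-hungry
e_lo, t_lo = dvfs_energy(1e9, voltage=0.9, freq=1e9)  # slow, energy-saving
```

A multi-objective scheduler searches over such level assignments per task, trading schedule length against total energy.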
Visual memory performance for color depends on spatiotemporal context.
Olivers, Christian N L; Schreij, Daniel
2014-10-01
Performance on visual short-term memory for features has been known to depend on stimulus complexity, spatial layout, and feature context. However, with few exceptions, memory capacity has been measured for abruptly appearing, single-instance displays. In everyday life, objects often have a spatiotemporal history as they or the observer move around. In three experiments, we investigated the effect of spatiotemporal history on explicit memory for color. Observers saw a memory display emerge from behind a wall, after which it disappeared again. The test display then emerged from either the same side as the memory display or the opposite side. In the first two experiments, memory improved for intermediate set sizes when the test display emerged in the same way as the memory display. A third experiment then showed that the benefit was tied to the original motion trajectory and not to the display object per se. The results indicate that memory for color is embedded in a richer episodic context that includes the spatiotemporal history of the display.
Evaluation of seismic spatial interaction effects through an impact testing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, B.D.; Driesen, G.E.
The consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects are analytically difficult to assess. Analytical solutions to impact problems are conservative and only available for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto instrumented targets. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.
Soft shape-adaptive gripping device made from artificial muscle
NASA Astrophysics Data System (ADS)
Hamburg, E.; Vunder, V.; Johanson, U.; Kaasik, F.; Aabloo, A.
2016-04-01
We report on a multifunctional four-finger gripper for soft robotics, suitable for performing delicate manipulation tasks. The gripping device comprises separately driven gripping and lifting mechanisms, each made from a single piece of smart material, an ionic capacitive laminate (ICL), also known as artificial muscle. Compared to other similar devices, the relatively high force output of the ICL material allows one to construct a device able to grab and lift objects several times its own weight. Due to the flexible design of the ICL grips, the device is able to adapt to the complex shapes of different objects and allows grasping of single or multiple objects simultaneously without damage. The performance of the gripper is evaluated in two different configurations: a) the ultimate grasping strength of the gripping hand; and b) the maximum lifting force of the lifting actuator. The ICL is composed of three main layers: a porous membrane consisting of the non-ionic polymer poly(vinylidene fluoride-co-hexafluoropropene) (PVdF-HFP), the ionic liquid 1-ethyl-3-methylimidazolium trifluoromethanesulfonate (EMITFS), and a reinforcing layer of woven fiberglass cloth. Both sides of the membrane are coated with a carbonaceous electrode. The electrodes are additionally covered with thin gold layers, serving as current collectors. A device made of this material operates silently, requires a low driving voltage (<3 V), and is suitable for performing tasks in an open-air environment.
Novakovic-Agopian, Tatjana; Kornblith, Erica S; Abrams, Gary; Burciaga-Rosales, Joaquin; Loya, Fred; D'Esposito, Mark; Chen, Anthony J-W
2018-05-02
Deficits in executive control functions are some of the most common and disabling consequences of both military and civilian brain injury. However, effective interventions are scant. The goal of this study was to assess whether cognitive rehabilitation training that was successfully applied in chronic civilian brain injury would be effective for military Veterans with TBI. In a prior study, participants with chronic acquired brain injury significantly improved after training in goal-oriented attentional self-regulation (GOALS) on measures of attention/executive function, functional task performance, and goal-directed control over neural processing on fMRI. The objective of this study was to assess effects of GOALS training in Veterans with chronic TBI. Thirty-three Veterans with chronic TBI and executive difficulties in their daily life completed either five weeks of manualized Goal-Oriented Attentional Self-Regulation (GOALS) training or Brain-Health Education (BHE) matched in time and intensity. Evaluator-blinded assessments at baseline and post-training included neuropsychological and complex functional task performance and self-report measures of emotional regulation. After GOALS, but not BHE training, participants significantly improved from baseline on primary outcome measures of: overall Complex Attention/Executive Function composite neuropsychological performance score [F = 7.10, p = .01, partial η2 = .19], and overall complex functional task performance (Goal Processing Scale Overall Performance) [F = 6.92, p = .01, partial η2 = .20]. Additionally, post-GOALS participants indicated significant improvement on emotional regulation self-report measures [POMS Confusion score: F = 6.05, p = .02, partial η2 = .20]. Training in attentional self-regulation applied to participant-defined goals may improve cognitive functioning in Veterans with chronic TBI. 
Attention regulation training may not only impact executive control functioning in complex real-world tasks but may also improve emotional regulation and functioning. Implications for treatment of Veterans with TBI are discussed.
Computational State Space Models for Activity and Intention Recognition. A Feasibility Study
Krüger, Frank; Nyolt, Martin; Yordanova, Kristina; Hein, Albert; Kirste, Thomas
2014-01-01
Background Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i. e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods A typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results The symbolic domain model was found to have more than states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure. Conclusions Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. 
This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance. PMID:25372138
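The training-based HMM baseline referred to above amounts to Bayesian filtering with learned transition and emission tables; its core likelihood computation is the classic forward algorithm, sketched here in its simplest discrete form. The two-state parameters are invented for illustration and have nothing to do with the study's activity models.

```python
def hmm_forward(obs, pi, A, B):
    """Forward algorithm: probability of an observation sequence under an HMM.

    pi[i]: initial state probabilities; A[i][j]: state transition
    probabilities; B[i][o]: emission probabilities of symbol o in state i.
    """
    n = len(pi)
    # Initialization: joint probability of state i and first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Recursion: propagate through transitions, then weight by emission.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Hypothetical two-state HMM over two observation symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = hmm_forward([0, 1, 0], pi, A, B)
```

A CSSM replaces the learned tables with states and transitions generated on the fly by a symbolic model, which is where the combinatorial explosion discussed above comes from.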
Hardman, Kyle; Cowan, Nelson
2014-01-01
Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli which possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results, but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PMID:25089739
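Change-detection accuracy in such experiments is commonly converted to a capacity estimate via Cowan's K. The sketch below uses the standard single-probe formula as a generic illustration; it is not necessarily the exact analysis used in this study, and the rates are invented.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in visual working memory.

    For single-probe change detection with N items, the standard estimate is
    K = N * (hit rate - false-alarm rate): the correction subtracts guessing
    captured by the false-alarm rate.
    """
    return set_size * (hit_rate - false_alarm_rate)

# Invented example: 6-item display, 80% hits, 20% false alarms.
k = cowan_k(set_size=6, hit_rate=0.80, false_alarm_rate=0.20)  # 3.6 items
```

Comparing K across feature-load conditions at a fixed object count is one way to separate feature limits from object limits.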
Reducing the complexity of the software design process with object-oriented design
NASA Technical Reports Server (NTRS)
Schuler, M. P.
1991-01-01
Designing software is a complex process. How object-oriented design (OOD), coupled with formalized documentation and tailored object diagramming techniques, can reduce the complexity of the software design process is described and illustrated. The described OOD methodology uses a hierarchical decomposition approach in which parent objects are decomposed into layers of lower level child objects. A method of tracking the assignment of requirements to design components is also included. Increases in the reusability, portability, and maintainability of the resulting products are also discussed. This method was built on a combination of existing technology, teaching experience, consulting experience, and feedback from design method users. The discussed concepts are applicable to hierarchical OOD processes in general. Emphasis is placed on improving the design process by documenting the details of the procedures involved and incorporating improvements into those procedures as they are developed.
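The hierarchical decomposition with requirement tracking described above can be sketched as a small tree structure. The class, object, and requirement names below are invented, and Python stands in for the method's actual notation.

```python
class DesignObject:
    """A node in a hierarchical object-oriented decomposition.

    Parent objects are decomposed into layers of child objects; each node
    records the requirement IDs assigned to it so that coverage can be
    traced from any level of the hierarchy.
    """
    def __init__(self, name, requirements=()):
        self.name = name
        self.requirements = set(requirements)
        self.children = []

    def decompose(self, child):
        """Attach a lower-level child object and return it for chaining."""
        self.children.append(child)
        return child

    def covered_requirements(self):
        """All requirements satisfied by this object or its descendants."""
        reqs = set(self.requirements)
        for c in self.children:
            reqs |= c.covered_requirements()
        return reqs

# Hypothetical two-layer decomposition with tracked requirements.
system = DesignObject("SensorPlacement", ["R1"])
layout = system.decompose(DesignObject("LayoutSolver", ["R2"]))
layout.decompose(DesignObject("GeometryModel", ["R3"]))
```

Walking the tree from the root verifies that every requirement has landed on some design component.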
SSME Electrical Harness and Cable Development and Evolution
NASA Technical Reports Server (NTRS)
Abrams, Russ; Heflin, Johnny; Burns, Bob; Camper, Scott J.; Hill, Arthur J.
2010-01-01
The Space Shuttle Main Engine (SSME) electrical harness and cable system consists of the various interconnecting devices necessary for operation of complex rocket engine functions. Thirty-seven harnesses incorporate unique connectors, backshell adapters, conductors, insulation, shielding, and physical barriers for a long maintenance-free life while providing the means to satisfy performance requirements and to mitigate adverse environmental influences. The objective of this paper is to provide a description of the SSME electrical harness and cable designs as well as the development history and lessons learned.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.
Apollo-Soyuz US-USSR joint mission results
NASA Technical Reports Server (NTRS)
Bean, A. L.; Evans, R. E.
1975-01-01
The technical and nontechnical objectives of the Apollo-Soyuz mission are briefly considered. The mission demonstrated that Americans and Russians can work together to perform a very complex operation, including rendezvous in space, docking, and the conduct of joint experiments. Certain difficulties which had to be overcome were partly related to differences concerning the role of the astronaut in the basic alignment and docking procedures for space vehicles. Attention is also given to the experiments conducted during the mission and the approach used to overcome the language barrier.
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object oriented, easy-to-use, high performance, C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
1990-02-07
performance assessment, human intervention, or operator training. Algorithms on different levels are allowed to deal with the world with different degrees...have on the decisions made by the driver are a complex combination of human factors, driving experience, mission objectives, tactics, etc., and...motion. The distinction here is that the decision-making program may not necessarily make its decisions based on the same factors as the human
Cyber integrated MEMS microhand for biological applications
NASA Astrophysics Data System (ADS)
Weissman, Adam; Frazier, Athena; Pepen, Michael; Lu, Yen-Wen; Yang, Shanchieh Jay
2009-05-01
Anthropomorphous robotic hands at microscales have been developed to receive information and perform tasks for biological applications. To emulate a human hand's dexterity, the microhand requires a master-slave interface with a wearable controller, force sensors, and perception displays for tele-manipulation. Recognizing the constraints and complexity imposed on developing a feedback interface during miniaturization, this project addresses the need by creating an integrated cyber environment that incorporates sensors with a microhand, haptic/visual display, and object model to emulate the human hand's psychophysical perception at microscale.
1987-09-21
objectives of our program are to isolate and characterize a fully active DNA-dependent RNA polymerase from the extremely halophilic archaebacteria of the genus...operons in H. marismortui. The halobacteriaceae are extreme halophiles. They require 3.5 M NaCl for optimal growth and no growth is observed below 2...was difficult to perform due to the extreme genetic instability in this strain (6). In contrast, the genome of the extreme halophilic and prototrophic
2007-06-07
100 kW/m2 for 0.1 s. Along with the material change, an oil leak problem required a geometric change. Initially, we considered TIG welding or...shear and moment, is addressed through the design, development, and testing of the CF1 and CF2 gages. Chapter 3 presents the evolutionary process...a shock. Chapter 4 examines the performance of each gage under the nominal load conditions. Through this process, objective 2 is met. The best
Neurotoxic lesions of ventrolateral prefrontal cortex impair object-in-place scene memory
Wilson, Charles R E; Gaffan, David; Mitchell, Anna S; Baxter, Mark G
2007-01-01
Disconnection of the frontal lobe from the inferotemporal cortex produces deficits in a number of cognitive tasks that require the application of memory-dependent rules to visual stimuli. The specific regions of frontal cortex that interact with the temporal lobe in performance of these tasks remain undefined. One capacity that is impaired by frontal–temporal disconnection is rapid learning of new object-in-place scene problems, in which visual discriminations between two small typographic characters are learned in the context of different visually complex scenes. In the present study, we examined whether neurotoxic lesions of ventrolateral prefrontal cortex in one hemisphere, combined with ablation of inferior temporal cortex in the contralateral hemisphere, would impair learning of new object-in-place scene problems. Male macaque monkeys learned 10 or 20 new object-in-place problems in each daily test session. Unilateral neurotoxic lesions of ventrolateral prefrontal cortex produced by multiple injections of a mixture of ibotenate and N-methyl-d-aspartate did not affect performance. However, when disconnection from inferotemporal cortex was completed by ablating this region contralateral to the neurotoxic prefrontal lesion, new learning was substantially impaired. Sham disconnection (injecting saline instead of neurotoxin contralateral to the inferotemporal lesion) did not affect performance. These findings support two conclusions: first, that the ventrolateral prefrontal cortex is a critical area within the frontal lobe for scene memory; and second, the effects of ablations of prefrontal cortex can be confidently attributed to the loss of cell bodies within the prefrontal cortex rather than to interruption of fibres of passage through the lesioned area. PMID:17445247
NASA Astrophysics Data System (ADS)
Bayo, A.; Rodrigo, C.; Barrado, D.; Allard, F.
One of the very first steps astronomers working in stellar physics perform to advance in their studies is to determine the most common/relevant physical parameters of the objects of study (effective temperature, bolometric luminosity, surface gravity, etc.). Different methodologies exist depending on the nature of the data, intrinsic properties of the objects, etc. One common approach is to compare the observational data with theoretical models passed through some simulator that will leave in the synthetic data the same imprint that the observational data carry, and see what set of parameters reproduces the observations best. Even in this case, depending on the kind of data the astronomer has, the methodology changes slightly. After parameters are published, the community tends to quote, praise and criticize them, sometimes paying little attention to whether the possible discrepancies come from the theoretical models, the data themselves or just the methodology used in the analysis. In this work we perform the simple, yet interesting, exercise of comparing the effective temperatures obtained via SED and more detailed spectral fittings (to the same grid of models) for a sample of well-known and well-characterized young M-type objects, members of different star-forming regions, and show how differences in temperature of up to 350 K can be expected just from the difference in methodology/data used. On the other hand, we show how these differences are smaller for colder objects, even when the complexity of the fit increases, for example when introducing differential extinction. To perform this exercise we benefit greatly from the framework offered by the Virtual Observatory.
A dissipative particle dynamics method for arbitrarily complex geometries
NASA Astrophysics Data System (ADS)
Li, Zhen; Bian, Xin; Tang, Yu-Hang; Karniadakis, George Em
2018-02-01
Dissipative particle dynamics (DPD) is an effective Lagrangian method for modeling complex fluids in the mesoscale regime but so far it has been limited to relatively simple geometries. Here, we formulate a local detection method for DPD involving arbitrarily shaped geometric three-dimensional domains. By introducing an indicator variable of boundary volume fraction (BVF) for each fluid particle, the boundary of arbitrary-shape objects is detected on-the-fly for the moving fluid particles using only the local particle configuration. Therefore, this approach eliminates the need of an analytical description of the boundary and geometry of objects in DPD simulations and makes it possible to load the geometry of a system directly from experimental images or computer-aided designs/drawings. More specifically, the BVF of a fluid particle is defined by the weighted summation over its neighboring particles within a cutoff distance. Wall penetration is inferred from the value of the BVF and prevented by a predictor-corrector algorithm. The no-slip boundary condition is achieved by employing effective dissipative coefficients for liquid-solid interactions. Quantitative evaluations of the new method are performed for the plane Poiseuille flow, the plane Couette flow and the Wannier flow in a cylindrical domain and compared with their corresponding analytical solutions and (high-order) spectral element solution of the Navier-Stokes equations. We verify that the proposed method yields correct no-slip boundary conditions for velocity and generates negligible fluctuations of density and temperature in the vicinity of the wall surface. Moreover, we construct a very complex 3D geometry - the "Brown Pacman" microfluidic device - to explicitly demonstrate how to construct a DPD system with complex geometry directly from loading a graphical image. Subsequently, we simulate the flow of a surfactant solution through this complex microfluidic device using the new method. 
Its effectiveness is demonstrated by examining the rich dynamics of surfactant micelles, which are flowing around multiple small cylinders and stenotic regions in the microfluidic device without wall penetration. In addition to stationary arbitrary-shape objects, the new method is particularly useful for problems involving moving and deformable boundaries, because it only uses local information of neighboring particles and satisfies the desired boundary conditions on-the-fly.
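The boundary-volume-fraction (BVF) indicator described above can be sketched in a few lines. The weight kernel and the 0.5 penetration threshold below are illustrative assumptions, not the paper's exact coefficients:

```python
import numpy as np

def bvf(p, fluid_pts, wall_pts, r_c=1.0):
    """Boundary volume fraction of a fluid particle at position p:
    weighted sum over wall neighbors within the cutoff r_c, divided
    by the weighted sum over all neighbors (illustrative kernel)."""
    def weight_sum(pts):
        r = np.linalg.norm(pts - p, axis=1)
        return np.where(r < r_c, 1.0 - r / r_c, 0.0).sum()

    w_wall = weight_sum(wall_pts)
    w_all = w_wall + weight_sum(fluid_pts)
    return w_wall / w_all if w_all > 0 else 0.0

# A particle deep inside the fluid sees no wall neighbors (BVF = 0);
# imminent wall penetration would be flagged when BVF exceeds a
# threshold (e.g. 0.5) and undone by the predictor-corrector step.
```

Because only local neighbor positions enter the sum, the same test works unchanged for moving or deformable boundaries, which is the property the authors exploit.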
NASA Astrophysics Data System (ADS)
Rodriguez Gonzalez, Beatriz
2008-04-01
Much of the homotopical and homological structure of the categories of chain complexes and topological spaces can be deduced from the existence and properties of the 'simple' functors Tot : {double chain complexes} -> {chain complexes} and geometric realization : {sSets} -> {Top}, or similarly, Tot : {simplicial chain complexes} -> {chain complexes} and | | : {sTop} -> {Top}. The purpose of this thesis is to abstract this situation, and to this end we introduce the notion of '(co)simplicial descent category'. It is inspired by Guillén-Navarro's '(cubical) descent categories'. The key ingredients in a (co)simplicial descent category D are a class E of morphisms in D, called equivalences, and a 'simple' functor s : {(co)simplicial objects in D} -> D. They must satisfy axioms like 'Eilenberg-Zilber', 'exactness' and 'acyclicity'. This notion covers a wide class of examples, such as chain complexes, sSets, topological spaces, filtered cochain complexes (where E = filtered quasi-isomorphisms or E = E_2-isomorphisms), commutative differential graded algebras (with s = Navarro's Thom-Whitney simple), DG-modules over a DG-category and mixed Hodge complexes, where s = Deligne's simple. From the simplicial descent structure we obtain homotopical structure on D, such as cone and cylinder objects. We use them to i) explicitly describe the morphisms of HoD = D[E^{-1}] similarly to the case of calculus of fractions; ii) endow HoD with a non-additive pre-triangulated structure, that becomes triangulated in the stable additive case. These results use the properties of a 'total functor', which associates to any biaugmented bisimplicial object a simplicial object. It is the simplicial analogue of the total chain complex of a double complex, and it is left adjoint to Illusie's 'décalage' functor.
A probabilistic framework for identifying biosignatures using Pathway Complexity
NASA Astrophysics Data System (ADS)
Marshall, Stuart M.; Murray, Alastair R. G.; Cronin, Leroy
2017-11-01
One thing that discriminates living things from inanimate matter is their ability to generate similarly complex or non-random structures in a large abundance. From DNA sequences to folded protein structures, living cells, microbial communities and multicellular structures, the material configurations in biology can easily be distinguished from non-living material assemblies. Many complex artefacts, from ordinary bioproducts to human tools, though they are not living things, are ultimately produced by biological processes-whether those processes occur at the scale of cells or societies, they are the consequences of living systems. While these objects are not living, they cannot randomly form, as they are the product of a biological organism and hence are either technological or cultural biosignatures. A generalized approach that aims to evaluate complex objects as possible biosignatures could be useful to explore the cosmos for new life forms. However, it is not obvious how it might be possible to create such a self-contained approach. This would require us to prove rigorously that a given artefact is too complex to have formed by chance. In this paper, we present a new type of complexity measure, which we call `Pathway Complexity', that allows us not only to threshold the abiotic-biotic divide, but also to demonstrate a probabilistic approach based on object abundance and complexity which can be used to unambiguously assign complex objects as biosignatures. We hope that this approach will not only open up the search for biosignatures beyond the Earth, but also allow us to explore the Earth for new types of biology, and to determine when a complex chemical system discovered in the laboratory could be considered alive. This article is part of the themed issue 'Reconceptualizing the origins of life'.
Predictability, Force and (Anti-)Resonance in Complex Object Control.
Maurice, Pauline; Hogan, Neville; Sternad, Dagmar
2018-04-18
Manipulation of complex objects as in tool use is ubiquitous and has given humans an evolutionary advantage. This study examined the strategies humans choose when manipulating an object with underactuated internal dynamics, such as a cup of coffee. The object's dynamics renders the temporal evolution complex, possibly even chaotic, and difficult to predict. A cart-and-pendulum model, loosely mimicking coffee sloshing in a cup, was implemented in a virtual environment with a haptic interface. Participants rhythmically manipulated the virtual cup containing a rolling ball; they could choose the oscillation frequency, while the amplitude was prescribed. Three hypotheses were tested: 1) humans decrease interaction forces between hand and object; 2) humans increase the predictability of the object dynamics; 3) humans exploit the resonances of the coupled object-hand system. Analysis revealed that humans chose either a high-frequency strategy with anti-phase cup-and-ball movements or a low-frequency strategy with in-phase cup-and-ball movements. Counter to Hypothesis 1, they did not decrease interaction force; instead, they increased the predictability of the interaction dynamics, quantified by mutual information, supporting Hypothesis 2. To address Hypothesis 3, frequency analysis of the coupled hand-object system revealed two resonance frequencies separated by an anti-resonance frequency. The low-frequency strategy exploited one resonance, while the high-frequency strategy afforded more choice, consistent with the frequency response of the coupled system; both strategies avoided the anti-resonance. Hence, humans did not prioritize interaction force, but rather strategies that rendered interactions predictable. These findings highlight that physical interactions with complex objects pose control challenges not present in unconstrained movements.
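Predictability quantified by mutual information, as in the abstract above, can be estimated from paired hand/object time series with a plain histogram estimator; the bin count here is an assumption of this sketch, not the authors' analysis choice:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits between two time series;
    larger values mean one signal is more predictable from the other."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())
```

Identical signals approach the entropy of the binned variable (log2(bins) bits), while shuffled, independent signals give values near zero (up to the estimator's small positive bias).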
Langton, Julia M.; Wong, Sabrina T.; Johnston, Sharon; Abelson, Julia; Ammi, Mehdi; Burge, Fred; Campbell, John; Haggerty, Jeannie; Hogg, William; Wodchis, Walter P.
2016-01-01
Objective: Primary care services form the foundation of modern healthcare systems, yet the breadth and complexity of services and diversity of patient populations may present challenges for creating comprehensive primary care information systems. Our objective is to develop regional-level information on the performance of primary care in Canada. Methods: A scoping review was conducted to identify existing initiatives in primary care performance measurement and reporting across 11 countries. The results of this review were used by our international team of primary care researchers and clinicians to propose an approach for regional-level primary care reporting. Results: We found a gap between conceptual primary care performance measurement frameworks in the peer-reviewed literature and real-world primary care performance measurement and reporting activities. We did not find a conceptual framework or analytic approach that could readily form the foundation of a regional-level primary care information system. Therefore, we propose an approach to reporting comprehensive and actionable performance information according to widely accepted core domains of primary care as well as different patient population groups. Conclusions: An approach that bridges the gap between conceptual frameworks and real-world performance measurement and reporting initiatives could address some of the potential pitfalls of existing ways of presenting performance information (i.e., by single diseases or by age). This approach could produce meaningful and actionable information on the quality of primary care services. PMID:28032823
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of auditory model for evaluation of different candidate solutions. In this dissertation, a frequency pruning and a detector pruning algorithm is developed that efficiently implements the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90 % reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. 
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
Toward Microsatellite Based Space Situational Awareness
NASA Astrophysics Data System (ADS)
Scott, L.; Wallace, B.; Sale, M.; Thorsteinson, S.
2013-09-01
The NEOSSat microsatellite is a dual-mission space telescope which will perform asteroid detection and Space Situational Awareness (SSA) observation experiments on deep space, Earth-orbiting objects. NEOSSat was launched on 25 February 2013 into an 800 km dawn-dusk Sun-synchronous orbit and is currently undergoing satellite commissioning. The microsatellite consists of a small-aperture optical telescope, GPS receiver, high-performance attitude control system, and a stray light rejection baffle designed to reject stray light from the Sun while searching for asteroids at elongations of 45 degrees along the ecliptic. The SSA experimental mission, referred to as HEOSS (High Earth Orbit Space Surveillance), will focus on objects in deep space orbits. The HEOSS mission objective is to evaluate the utility of microsatellites to perform catalog maintenance observations of resident space objects in a manner consistent with the needs of the Canadian Forces. The advantages of placing a space surveillance sensor in low Earth orbit are that the observer can conduct observations without the day-night interruption cycle experienced by ground-based telescopes, the telescope is insensitive to adverse weather, and the system has visibility to deep space resident space objects which are not normally visible from ground-based sensors. Also, from a photometric standpoint, the microsatellite is able to conduct observations on objects with a rapidly changing observer position. Spin axis estimation for geostationary satellites may be possible, and an experiment to characterize the spin axes of distant resident space objects is being planned. Also, HEOSS offers the ability to conduct observations of satellites at high phase angles, which can potentially extend the trackable portion of space in which deep space objects' orbits can be monitored. In this paper we describe the HEOSS SSA experimental data processing system and the preliminary findings of the catalog maintenance experiments.
The placement of a space-based space surveillance sensor in low Earth orbit introduces tasking and image processing complexities such as cosmic ray rejection, scattered light from Earth's limb, and unique scheduling limitations due to the observer's rapid positional change. We describe first-look microsatellite space surveillance lessons from this unique orbital vantage point.
Deterministic object tracking using Gaussian ringlet and directional edge features
NASA Astrophysics Data System (ADS)
Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.
2017-10-01
Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in the sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability to handle additional challenges in long, complex sequences including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
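Kirsch kernel filtering, the edge-feature stage named above, takes the per-pixel maximum response over eight compass-rotated 3x3 masks; the winning mask index gives the edge direction. A minimal numpy sketch (the DRIFT-specific Gaussian ringlet weighting is not reproduced here):

```python
import numpy as np

def kirsch_kernels():
    """Eight 3x3 Kirsch compass masks, built by rotating the perimeter
    weights [5, 5, 5, -3, -3, -3, -3, -3] around the center cell."""
    ring = np.array([5, 5, 5, -3, -3, -3, -3, -3])
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    kernels = []
    for s in range(8):
        k = np.zeros((3, 3))
        for v, (r, c) in zip(np.roll(ring, s), pos):
            k[r, c] = v
        kernels.append(k)
    return kernels

def kirsch_edges(img):
    """Maximum correlation response over the 8 masks (valid region
    only); argmax over masks indexes the edge direction."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    responses = []
    for k in kirsch_kernels():
        out = np.zeros((H - 2, W - 2))
        for dr in range(3):          # unrolled 3x3 correlation
            for dc in range(3):
                out += k[dr, dc] * img[dr:dr + H - 2, dc:dc + W - 2]
        responses.append(out)
    resp = np.stack(responses)
    return resp.max(axis=0), resp.argmax(axis=0)
```

Each mask's weights sum to zero, so flat regions respond with exactly zero while step edges produce a large response in the mask aligned with them.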
Perception of 3D spatial relations for 3D displays
NASA Astrophysics Data System (ADS)
Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.
2004-05-01
We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
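The discriminability d' used as the dependent variable above is the distance between the z-transformed hit and false-alarm rates. A stdlib-only sketch; the log-linear correction for extreme counts is our assumption, not necessarily the authors' procedure:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection discriminability: z(hit rate) - z(false-alarm
    rate). Adding 0.5 to each count (log-linear correction) keeps the
    rates away from 0 and 1 so the inverse normal CDF stays finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

Chance performance (equal hit and false-alarm rates) gives d' = 0; better discrimination of distorted from symmetric objects pushes d' upward.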
NASA Astrophysics Data System (ADS)
Kudryavtsev, Andrey V.; Laurent, Guillaume J.; Clévy, Cédric; Tamadazte, Brahim; Lutz, Philippe
2015-10-01
Microassembly is an innovative alternative to the microfabrication process of MOEMS, which is quite complex. It usually implies the use of microrobots controlled by an operator. The reliability of this approach has already been confirmed for micro-optical technologies. However, the characterization of assemblies has shown that the operator is the main source of inaccuracies in teleoperated microassembly. Therefore, there is great interest in automating the microassembly process. One of the constraints of automation at microscale is the lack of high-precision sensors capable of providing full information about the object position. Thus, the use of vision-based feedback represents a very promising approach to automating the microassembly process. The purpose of this article is to characterize techniques of object position estimation based on visual data, i.e., visual tracking techniques from the ViSP library. These algorithms estimate the 3-D object pose using a single view of the scene and the CAD model of the object. The performance of three main types of model-based trackers is analyzed and quantified: edge-based, texture-based and hybrid trackers. The problems of visual tracking at microscale are discussed. The control of the micromanipulation station used in the framework of our project is performed using a new Simulink block set. Experimental results are shown and demonstrate the possibility of obtaining repeatability below 1 µm.
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
Medicaid's Complex Goals: Challenges for Managed Care and Behavioral Health
Gold, Marsha; Mittler, Jessica
2000-01-01
The Medicaid program has become increasingly complex as policymakers use it to address various policy objectives, leading to structural tensions that surface with Medicaid managed care. In this article, we illustrate this complexity by focusing on the experience of three States with behavioral health carveouts—Maryland, Oregon, and Tennessee. Converting to Medicaid managed care forces policymakers to confront Medicaid's competing policy objectives, multiplicity of stakeholders, and diverse patients, many with complex needs. Emerging Medicaid managed care systems typically represent compromises in which existing inequities and fragmentation are reconfigured rather than eliminated. PMID:12500322
Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration
NASA Astrophysics Data System (ADS)
Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut
2017-04-01
Numerical modelling requires calibration to predict future states. A standard method for calibration is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm which combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in southern Abbotsford, British Columbia, Canada in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for more deeply located soil sensors. However, the VGM parameters were similar to those reported in previous studies. Both methods are equally computationally efficient. We claim that a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and can be a potential tool for calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
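The two goodness-of-fit scores quoted above (RMSE and NSE) are standard definitions; for reference, a minimal implementation:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```

In a multi-objective calibration such as PA-DDS, each soil layer's fit can be scored separately and the optimizer trades the objectives off along the Pareto front rather than collapsing them into one number.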
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology, largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm evaluated on 26 complex micrographs at single-pixel resolution had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/ .
This improved computing technology opens new opportunities for imaging applications where discriminating colors matter most, thereby strengthening quantitative microscopy-based approaches to advancing microbial ecology in situ at single-cell resolution.
Lakatos, Gabriella; Gácsi, Márta; Topál, József; Miklósi, Adám
2012-03-01
The aim of the present investigation was to study visual communication between humans and dogs in relatively complex situations. In contrast to previous studies, which often relied on only two potential hiding locations and a direct association between the communicative signal and the signalled object, we modelled more lifelike situations. In Study 1, we provided the dogs with four potential hiding locations, two on each side of the experimenter, to see whether dogs are able to choose the correct location based on the pointing gesture. In Study 2, dogs had to rely on a sequence of pointing gestures displayed by two different experimenters; we investigated whether dogs are able to recognise an 'indirect signal', that is, a pointing toward a pointer. In Study 3, we examined whether dogs can understand indirect information about a hidden object and direct the owner to the particular location. Study 1 revealed that dogs are unlikely to extrapolate precise linear vectors along the pointing arm; instead, they rely on a simple rule of following the side on which the human gestured. If there was more than one target on the same side of the human, they preferred the target closer to the human. Study 2 showed that dogs are able to rely on indirect pointing gestures, but the individual performances suggest that this skill may be restricted to a certain level of complexity. In Study 3, we found that dogs are able to localise the hidden object by utilising indirect human signals and to convey this information to their owner.
Engineering the object-relation database model in O-Raid
NASA Technical Reports Server (NTRS)
Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat
1989-01-01
Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system that supports complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems together with those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a given column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.
Preparation and Biological Study of 68Ga-DOTA-alendronate
Fakhari, Ashraf; Jalilian, Amir R.; Johari-Daha, Fariba; Shafiee-Ardestani, Mehdi; Khalaj, Ali
2016-01-01
Objective(s): In line with previous research on the development of conjugated bisphosphonate ligands as new bone-avid agents, in this study, DOTA-conjugated alendronate (DOTA-ALN) was synthesized and evaluated after labeling with gallium-68 (68Ga). Methods: DOTA-ALN was synthesized and characterized, followed by 68Ga-DOTA-ALN preparation using DOTA-ALN and 68GaCl3 (pH 4-5) at 92-95°C for 10 min. Stability tests, a hydroxyapatite assay, partition coefficient calculation, biodistribution studies, and imaging were performed on the developed agent in normal rats. Results: The complex was prepared with high radiochemical purity (>99%, as determined by radio thin-layer chromatography; specific activity: 310-320 GBq/mmol) after solid-phase purification and remained stable for up to 90 min, with a log P value of -2.91. Maximum ligand binding (65%) was observed in the presence of 50 mg of hydroxyapatite; a major portion of the activity was excreted through the kidneys. Apart from the excretory organs, gastrointestinal tract organs, including the liver, intestine, and colon, showed significant uptake; however, bone uptake was low (<1%) at 30 min after the injection. The data were also confirmed by sequential imaging at 30-90 min following the intravenous injection. Conclusion: The high solubility and anionic properties of the complex led to predominantly renal excretion and low hydroxyapatite uptake; therefore, the complex failed to demonstrate bone imaging behavior. PMID:27408898
Rand, Miya Kato; Lemay, Martin; Squire, Linda M; Shimansky, Yury P; Stelmach, George E
2010-03-01
The present project was aimed at investigating how two distinct and important difficulties (coordination difficulty and pronounced dependency on visual feedback) in Parkinson's disease (PD) affect each other for the coordination between hand transport toward an object and the initiation of finger closure during reach-to-grasp movement. Subjects with PD and age-matched healthy subjects made reach-to-grasp movements to a dowel under conditions in which the target object and/or the hand were either visible or not visible. The involvement of the trunk in task performance was manipulated by positioning the target object within or beyond the participant's outstretched arm to evaluate the effects of increasing the complexity of intersegmental coordination under different conditions related to the availability of visual feedback in subjects with PD. General kinematic characteristics of the reach-to-grasp movements of the subjects with PD were altered substantially by the removal of target object visibility. Compared with the controls, the subjects with PD considerably lengthened transport time, especially during the aperture closure period, and decreased peak velocity of wrist and trunk movement without target object visibility. Most of these differences were accentuated when the trunk was involved. In contrast, these kinematic parameters did not change depending on the visibility of the hand for both groups. The transport-aperture coordination was assessed in terms of the control law according to which the initiation of aperture closure during the reach occurred when the hand distance-to-target crossed a hand-target distance threshold for grasp initiation that is a function of peak aperture, hand velocity and acceleration, trunk velocity and acceleration, and trunk-target distance at the time of aperture closure initiation. 
When the hand or the target object was not visible, both groups increased the hand-target distance threshold for grasp initiation compared to its value under full visibility, implying an increase in the hand-target distance-related safety margin for grasping. The increase in the safety margin due to the absence of target object vision or the absence of hand vision was accentuated in the subjects with PD compared to that in the controls. The pronounced increase in the safety margin due to absence of target object vision for the subjects with PD was further accentuated when the trunk was involved compared to when it was not involved. The results imply that individuals with PD have significant limitations regarding neural computations required for efficient utilization of internal representations of target object location and hand motion as well as proprioceptive information about the hand to compensate for the lack of visual information during the performance of complex multisegment movements.
Ultra Rapid Object Categorization: Effects of Level, Animacy and Context
Praß, Maren; Grimsen, Cathleen; König, Martina; Fahle, Manfred
2013-01-01
It is widely agreed that bottom-up and top-down influences interact in object categorization. How top-down processes affect categorization has primarily been investigated in isolation, with only one higher-level process manipulated at a time. Here, we investigate the combination of different top-down influences (by varying the level of category, the animacy and the background of the object) and their effect on rapid object categorization. Subjects participated in a two-alternative forced-choice rapid categorization task while we measured accuracy and reaction times. Subjects had to categorize objects at the superordinate, basic or subordinate level. Objects belonged to the category animal or vehicle, and each object was presented on a gray, congruent (upright) or incongruent (inverted) background. The results show that each top-down manipulation impacts object categorization and that they interact strongly. The best categorization was achieved at the superordinate level, providing no advantage for the basic level in rapid categorization. Categorization between vehicles was faster than between animals at the basic level, and vice versa at the subordinate level. Objects on a homogeneous gray background yielded better overall performance than objects embedded in complex scenes, an effect most prominent at the subordinate level. An inverted background had no negative effect on object categorization compared to upright scenes. These results show how different top-down manipulations, such as category level, category type and background information, are related. We discuss the implications of top-down interactions for the interpretation of categorization results. PMID:23840810
Functional brain imaging of a complex navigation task following one night of total sleep deprivation
NASA Technical Reports Server (NTRS)
Strangman, Gary; Thompson, John H.; Strauss, Monica M.; Marshburn, Thomas H.; Sutton, Jeffrey P.
2006-01-01
Study Objectives: To assess the cerebral effects associated with sleep deprivation in a simulation of a complex, real-world, high-risk task. Design and Interventions: A two-week, repeated measures, cross-over experimental protocol, with counterbalanced orders of normal sleep (NS) and total sleep deprivation (TSD). Setting: Each subject underwent functional magnetic resonance imaging (fMRI) while performing a dual-joystick, 3D sensorimotor navigation task (simulated orbital docking). Scanning was performed twice per subject, once following a night of normal sleep (NS), and once following a single night of total sleep deprivation (TSD). Five runs (eight 24s docking trials each) were performed during each scanning session. Participants: Six healthy, young, right-handed volunteers (2 women; mean age 20) participated. Measurements and Results: Behavioral performance on multiple measures was comparable in the two sleep conditions. Neuroimaging results within sleep conditions revealed similar locations of peak activity for NS and TSD, including left sensorimotor cortex, left precuneus (BA 7), and right visual areas (BA 18/19). However, cerebral activation following TSD was substantially larger and exhibited higher amplitude modulations from baseline. When directly comparing NS and TSD, most regions exhibited TSD>NS activity, including multiple prefrontal cortical areas (BA 8/9,44/45,47), lateral parieto-occipital areas (BA 19/39, 40), superior temporal cortex (BA 22), and bilateral thalamus and amygdala. Only left parietal cortex (BA 7) demonstrated NS>TSD activity. Conclusions: The large network of cerebral differences between the two conditions, even with comparable behavioral performance, suggests the possibility of detecting TSD-induced stress via functional brain imaging techniques on complex tasks before stress-induced failures.
Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D
2016-01-27
Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary for phases of the task that did not require PRh activity when rats did not have preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. 
We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience.
Material-specific difficulties in episodic memory tasks in mild traumatic brain injury.
Tsirka, Vassiliki; Simos, Panagiotis; Vakis, Antonios; Vourkas, Michael; Arzoglou, Vasileios; Syrmos, Nikolaos; Stavropoulos, Stavros; Micheloyannis, Sifis
2010-03-01
The study examines acute, material-specific secondary memory performance in 26 patients with mild traumatic brain injury (MTBI) and 26 healthy controls, matched on demographic variables and indexes of crystallized intelligence. Neuropsychological tests were used to evaluate primary and secondary memory, executive functions, and verbal fluency. Participants were also tested on episodic memory tasks involving words, pseudowords, pictures of common objects, and abstract kaleidoscopic images. Patients showed reduced performance on episodic memory measures, and on tasks associated with visuospatial processing and executive function (Trail Making Test part B, semantic fluency). Significant differences between groups were also noted for correct rejections and response bias on the kaleidoscope task. MTBI patients' reduced performance on memory tasks for complex, abstract stimuli can be attributed to a dysfunction in the strategic component of memory process.
Can an Inquiry Approach Improve College Student Learning in a Teaching Laboratory?
Cogan, John G.
2009-01-01
We present an inquiry-based, hands-on laboratory exercise on enzyme activity for an introductory college biology course for science majors. We measure student performance on a series of objective and subjective questions before and after completion of this exercise; we also measure performance of a similar cohort of students before and after completion of an existing, standard, “direct” exercise over the same topics. Although student performance on these questions increased significantly after completion of the inquiry exercise, it did not increase after completion of the control, standard exercise. Pressure to “cover” many complex topics as preparation for high-stakes examinations such as the Medical College Admissions Test may account for persistence of highly efficient, yet dubiously effective “cookbook” laboratory exercises in many science classes. PMID:19255136
NASA Technical Reports Server (NTRS)
Bishu, Ram R.
1992-01-01
Human capabilities such as dexterity, manipulability, and tactile perception are unique and render the hand a very versatile, effective, multipurpose tool. This is especially true for unknown environments such as the EVA environment. In the microgravity environment, interfaces, procedures, and activities are complex and diverse, and defy advance definition. Under these conditions the hand becomes the primary means of locomotion, restraint, and material handling. Facilitating these activities while simultaneously protecting the hand from the harsh EVA environment are the two, often conflicting, objectives of glove design. The objectives of this study were (1) to assess the effects of EVA gloves at different pressures on human hand capabilities, (2) to devise a protocol for evaluating EVA gloves, (3) to develop force-time relations for a number of EVA glove-pressure combinations, and (4) to evaluate two types of launch and entry suit gloves. The objectives were achieved through three experiments. The experiments addressing objectives 1, 2, and 3 were performed in the glove box in building 34. In experiment 1, three types of EVA gloves were tested at five pressure differentials, and a number of performance measures were recorded. In experiment 2, the same gloves were evaluated under a reduced number of pressure conditions, with endurance time as the performance measure. Six subjects participated in both experiments. In experiment 3, two types of launch and entry suit gloves were evaluated using a paradigm similar to experiment 1. The data are currently being analyzed; for this report, some summary analyses have been performed.
The results indicate that (a) with EVA gloves, strength is reduced by nearly 50 percent; (b) performance decrements increase with increasing pressure differential; (c) TMG effects are not consistent across the three gloves tested; (d) some interesting gender-glove interactions were observed, some of which may have been due to the extent (or lack) of fit of the glove to the hand; and (e) differences in performance exist between the partial-pressure suit glove and the full-pressure suit glove, especially in the unpressurized condition.
Design and Implementation of a Metadata-rich File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ames, S; Gokhale, M B; Maltzahn, C
2010-01-19
Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first-class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations, and superior performance on normal file metadata operations.
Natural image statistics and low-complexity feature selection.
Vasconcelos, Manuela; Vasconcelos, Nuno
2009-02-01
Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.
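The order-1 decomposability finding above amounts to scoring each feature by its marginal discriminant information alone, ignoring all inter-feature dependencies. A minimal Python sketch of that idea, scoring discrete features by their individual mutual information with the class label (the feature names and data are invented for illustration; this is not the authors' algorithm):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p / ((px[x] / n) * (py[y] / n)))
    return mi

def select_order1(features, labels, k):
    """Decomposability order 1: score every feature by its *marginal*
    mutual information with the class label, ignoring all inter-feature
    dependencies, and keep the k best."""
    scores = {name: mutual_information(col, labels) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

labels = [0, 0, 0, 1, 1, 1]
features = {
    "informative": [0, 0, 0, 1, 1, 1],  # perfectly predicts the label
    "noisy":       [0, 1, 0, 1, 0, 1],  # nearly independent of the label
}
print(select_order1(features, labels, 1))  # → ['informative']
```

Under the abstract's hypothesis, such independent per-feature scoring loses little for bandpass features of natural images, because modeling higher-order feature dependencies yields strongly diminishing returns.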
Complexity quantification of dense array EEG using sample entropy analysis.
Ramanand, Pravitha; Nampoori, V P N; Sreenivasan, R
2004-09-01
In this paper, a time series complexity analysis of dense array electroencephalogram signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity in signals recorded from systems that can vary from the purely deterministic to the purely stochastic realm. The present analysis is conducted with the objective of gaining insight into complexity variations related to changing brain dynamics for EEG recorded under three conditions: a passive, eyes-closed state; a mental arithmetic task; and the same mental task carried out after a physical exertion task. It is observed that the statistic is a robust quantifier of complexity suited for short physiological signals such as the EEG, and it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought in by the intermediate fatigue-inducing exercise period. This enhances its utility in detecting subtle changes in brain state and widens its scope for applications in EEG-based brain studies.
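The SampEn statistic itself has a compact definition: count the pairs of length-m templates that match within a tolerance of r times the signal's standard deviation, count how many of those still match at length m+1, and take the negative log of the ratio. The following is a simplified textbook-style Python sketch, not the authors' implementation; the test signals are illustrative:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that two
    subsequences matching for m points (within r * std) still match at
    m + 1 points. Self-matches are excluded. Lower values = more regular."""
    n = len(x)
    mean = sum(x) / n
    tol = r * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def match_count(length):
        # number of template pairs whose maximum coordinate distance <= tol
        templates = [x[i:i + length] for i in range(n - length + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol:
                    c += 1
        return c

    b, a = match_count(m), match_count(m + 1)
    return -math.log(a / b)

regular = [math.sin(0.5 * i) for i in range(200)]   # low-complexity signal
random.seed(1)
noisy = [random.random() for _ in range(200)]       # high-complexity signal
print(sample_entropy(regular) < sample_entropy(noisy))  # → True
```

A regular signal yields a conditional-match probability near 1 and hence a SampEn near 0, while a stochastic signal yields a much larger value, which is the contrast the abstract exploits between relaxed and task states.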
Expert systems for superalloy studies
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Kaukler, William F.
1990-01-01
There are many areas in science and engineering which require knowledge of an extremely complex foundation of experimental results in order to design methodologies for developing new materials or products. Superalloys fit well into this discussion in the sense that they are complex combinations of elements which exhibit certain characteristics. Obviously, the use of superalloys in high-performance, high-temperature systems such as the Space Shuttle Main Engine is of interest to NASA. The superalloy manufacturing process is complex, and the implementation of an expert system within the design process requires some thought as to how and where it should be implemented. A major motivation is to develop a methodology to assist metallurgists in the design of superalloy materials using current expert systems technology. Hydrogen embrittlement is disastrous to rocket engines, and the heuristics involved can be very complex. Attacking this problem as one module in the overall design process represents a significant step forward. To describe the objectives of the first-phase implementation, the expert system was designated the Hydrogen Environment Embrittlement Expert System (HEEES).
An applet for the Gabor similarity scaling of the differences between complex stimuli.
Margalit, Eshed; Biederman, Irving; Herald, Sarah B; Yue, Xiaomin; von der Malsburg, Christoph
2016-11-01
It is widely accepted that after the first cortical visual area, V1, a series of stages achieves a representation of complex shapes, such as faces and objects, so that they can be understood and recognized. A major challenge for the study of complex shape perception has been the lack of a principled basis for scaling the physical differences between stimuli so that their similarity can be specified, unconfounded by early-stage differences. Without the specification of such similarities, it is difficult to make sound inferences about the contributions of later stages to neural activity or psychophysical performance. A Web-based app is described that is based on the Malsburg Gabor-jet model (Lades et al., 1993), which allows easy specification of the V1 similarity of pairs of stimuli, no matter how intricate. The model predicts the psychophysical discriminability of metrically varying faces and complex blobs almost perfectly (Yue, Biederman, Mangini, von der Malsburg, & Amir, 2012), and serves as the input stage of a large family of contemporary neurocomputational models of vision.
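The Gabor-jet idea can be illustrated by filtering two images with a small bank of Gabor kernels at several orientations and scales over a grid of locations, and correlating the resulting response vectors. The Python/NumPy sketch below is a heavy simplification (real-valued kernels only, arbitrary bank parameters) and is not the Lades et al. implementation:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor filter (real part): a sinusoid windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_jet(image, grid_step=8, size=9):
    """Concatenate filter responses over a grid of locations, several
    orientations and wavelengths, into one 'jet' vector."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):      # 4 orientations
        for wavelength in (4.0, 8.0):                  # 2 scales
            k = gabor_kernel(size, wavelength, theta, sigma=wavelength / 2)
            for i in range(size // 2, image.shape[0] - size // 2, grid_step):
                for j in range(size // 2, image.shape[1] - size // 2, grid_step):
                    patch = image[i - size // 2:i + size // 2 + 1,
                                  j - size // 2:j + size // 2 + 1]
                    responses.append(np.sum(patch * k))
    return np.array(responses)

def jet_similarity(img_a, img_b):
    """Cosine similarity of the two jet vectors (1.0 = identical)."""
    a, b = gabor_jet(img_a), gabor_jet(img_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(round(jet_similarity(img, img), 3))  # → 1.0
```

Scaling stimulus pairs by such a V1-like similarity, rather than by raw pixel differences, is what lets later-stage contributions be assessed without early-stage confounds.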
Evers-Casey, Sarah; Graden, Sarah; Schnoll, Robert; Mallya, Giridhar
2015-01-01
Rationale: Tobacco use disproportionately affects the poor, who are, in turn, least likely to receive cessation treatment from providers. Providers caring for low-income populations perform simple components of tobacco use treatment (e.g., assessing tobacco use) with reasonable frequency. However, performance of complex treatment behaviors, such as pharmacologic prescription and follow-up arrangement, remains suboptimal. Objectives: Evaluate the influence of academic detailing (AD), a university-based, noncommercial, educational outreach intervention, on primary care physicians’ complex treatment practice behaviors within an urban care setting. Methods: Trained academic detailers made in-person visits to targeted primary care practices, delivering verbal and written instruction emphasizing three key messages related to tobacco treatment. Physicians’ self-reported frequency of simple and complex treatment behaviors were assessed using a seven-item questionnaire, before and 2 months after AD. Results: Between May 2011 and March 2012, baseline AD visits were made to 217 physicians, 109 (50%) of whom also received follow-up AD. Mean frequency scores for complex behaviors increased significantly, from 2.63 to 2.92, corresponding to a clinically significant 30% increase in the number of respondents who endorsed “almost always” or “always” (P < 0.001). Improvement in mean simple behavior frequency scores was also noted (3.98 vs. 4.13; P = 0.035). Sex and practice type appear to influence reported complex behavior frequency at baseline, whereas only practice type influenced improvement in complex behavior scores at follow up. Conclusions: This study demonstrates the feasibility and potential effectiveness of a low-cost and highly disseminable intervention to improve clinician behavior in the context of treating nicotine dependence in underserved communities. PMID:25867533
[Controlling systems for operating room managers].
Schüpfer, G; Bauer, M; Scherzinger, B; Schleppers, A
2005-08-01
Management means developing, shaping and controlling of complex, productive and social systems. Therefore, operating room managers also need to develop basic skills in financial and managerial accounting as a basis for operative and strategic controlling which is an essential part of their work. A good measurement system should include financial and strategic concepts for market position, innovation performance, productivity, attractiveness, liquidity/cash flow and profitability. Since hospitals need to implement a strategy to reach their business objectives, the performance measurement system has to be individually adapted to the strategy of the hospital. In this respect the navigation system developed by Gälweiler is compared to the "balanced score card" system of Kaplan and Norton.
Meta-T: TetrisⓇ as an experimental paradigm for cognitive skills research.
Lindstedt, John K; Gray, Wayne D
2015-12-01
Studies of human performance in complex tasks using video games are an attractive prospect, but many existing games lack a comprehensive way to modify the game and track performance beyond basic levels of analysis. Meta-T provides experimenters a tool to study behavior in a dynamic task environment with time-stressed decision-making and strong perceptual-motor elements, offering a host of experimental manipulations with a robust and detailed logging system for all user events, system events, and screen objects. Its experimenter-friendly interface provides control over detailed parameters of the task environment without need for programming expertise. Support for eye-tracking and computational cognitive modeling extend the paradigm's scope.
Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.
2016-01-01
The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of an arbitrary number of maneuvers. To address this difficulty a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex, interplanetary problems, achieving notable performance without a user-supplied initial guess.
Motion adaptive Kalman filter for super-resolution
NASA Astrophysics Data System (ADS)
Richter, Martin; Nasse, Fabian; Schröder, Hartmut
2011-01-01
Super-resolution is a sophisticated strategy for enhancing the image quality of both low- and high-resolution video, performing tasks such as artifact reduction, scaling, and sharpness enhancement in a single algorithm, all of which reconstruct high-frequency components (above the Nyquist frequency) in some way. Recursive super-resolution algorithms in particular can meet high quality demands because they control the video output through a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporally recursive methods are very hardware efficient and therefore attractive even for real-time video processing. A very promising approach is the use of Kalman filters, as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of super-resolution; therefore, robust global motion models are mostly used, but this also limits the applicability of the algorithm. Handling sequences with complex object motion is thus essential for a wider field of application. Hence, this paper proposes improvements that extend the Kalman filter approach with motion-adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion and compare its performance to state-of-the-art methods such as trainable filters.
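The motion-adaptive variance idea can be sketched with a per-pixel scalar Kalman step. This is an illustrative simplification, not the paper's algorithm: the state model, the noise parameters `q`, `r0`, `alpha`, and the linear motion-to-variance mapping are all assumptions. The key point it shows is that where local motion is large, the measurement variance R grows, the gain shrinks, and the filter leans on its prediction instead of blending in unreliable registered pixels.

```python
import numpy as np

def kalman_update(x_prev, P_prev, z, motion, q=1e-4, r0=1e-2, alpha=5.0):
    """One per-pixel temporal Kalman step with motion-adaptive variance.
    x_prev, P_prev: previous estimate and variance maps; z: newly
    registered frame; motion: per-pixel motion magnitude estimate."""
    P_pred = P_prev + q                    # predict (identity state model)
    R = r0 * (1.0 + alpha * motion)        # motion-adaptive measurement noise
    K = P_pred / (P_pred + R)              # elementwise Kalman gain
    x = x_prev + K * (z - x_prev)          # correct toward the measurement
    P = (1.0 - K) * P_pred                 # updated estimate variance
    return x, P

# Same measurement, two motion estimates: low motion trusts the new
# frame, high motion stays closer to the previous estimate.
x0, P0, z = np.array([0.5]), np.array([1.0]), np.array([0.8])
x_lo, _ = kalman_update(x0, P0, z, motion=np.array([0.0]))
x_hi, _ = kalman_update(x0, P0, z, motion=np.array([10.0]))
```

In a full super-resolution pipeline the same update runs on high-resolution pixel grids after motion-compensated registration, with segmentation refining where the motion estimate is trusted.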
PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan
PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation, and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
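The basic structure of a reactive-transport step can be illustrated with a deliberately tiny operator-split sketch: upwind advection followed by an exact first-order reaction step. This is not PFLOTRAN's scheme (PFLOTRAN couples multiphase flow and multicomponent chemistry with implicit parallel solvers); the 1D grid, velocity, decay rate, and splitting here are all assumptions chosen only to show the transport-then-react pattern.

```python
import numpy as np

def step(c, v, dx, dt, k):
    """One operator-split step of a toy 1D reactive-transport model:
    first-order upwind advection of concentration c at velocity v,
    followed by first-order decay at rate k."""
    assert v * dt / dx <= 1.0, "CFL condition violated"
    c = c.copy()
    c[1:] -= v * dt / dx * (c[1:] - c[:-1])   # upwind advection sweep
    c *= np.exp(-k * dt)                       # exact reaction sub-step
    return c

# A decaying plume released from the left boundary of a 100-cell column.
c = np.zeros(100)
c[0] = 1.0                                     # initial boundary pulse
for _ in range(50):
    c = step(c, v=1.0, dx=1.0, dt=0.5, k=0.02)
```

Production codes replace each piece: parallel domain decomposition splits the grid across processors, implicit solvers remove the CFL restriction, and the scalar decay term becomes a coupled geochemical reaction network.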
Dynamic modeling of spacecraft in a collisionless plasma
NASA Technical Reports Server (NTRS)
Katz, I.; Parks, D. E.; Wang, S. S.; Wilson, A.
1977-01-01
A new computational model is described which can simulate the charging of complex geometrical objects in three dimensions. Two sample calculations are presented. In the first problem, the capacitance to infinity of a complex object similar to a satellite with solar array paddles is calculated. The second problem concerns the dynamical charging of a conducting cube partially covered with a thin dielectric film. In this calculation, the photoemission results in differential charging of the object.
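The first sample problem, the capacitance of a complex conductor, is the kind of boundary problem a simple method-of-moments panel model can illustrate. The sketch below is not the paper's model: it uses a sphere (where the analytic answer is known) rather than a satellite geometry, works in units where 4·pi·eps0 = 1 so that C = R exactly, and uses point-charge panel interactions with a flat-disc self-term, all of which are assumptions.

```python
import numpy as np

def sphere_capacitance(R=1.0, n_theta=16, n_phi=32):
    """Estimate the capacitance of a conducting sphere by a panel
    (method-of-moments) model: hold the surface at unit potential,
    solve for the panel charges, and sum them. In units with
    4*pi*eps0 = 1 the analytic capacitance is C = R."""
    centers, areas = [], []
    for i in range(n_theta):
        th = np.pi * (i + 0.5) / n_theta
        for j in range(n_phi):
            ph = 2 * np.pi * (j + 0.5) / n_phi
            centers.append(R * np.array([np.sin(th) * np.cos(ph),
                                         np.sin(th) * np.sin(ph),
                                         np.cos(th)]))
            # spherical patch area: R^2 sin(th) * dth * dph
            areas.append(R ** 2 * np.sin(th)
                         * (np.pi / n_theta) * (2 * np.pi / n_phi))
    centers, areas = np.array(centers), np.array(areas)
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, 1.0)                   # placeholder, overwritten next
    P = 1.0 / d                                 # point-charge potential coefficients
    np.fill_diagonal(P, 2.0 * np.sqrt(np.pi / areas))  # flat-disc self-potential
    q = np.linalg.solve(P, np.ones(n))          # unit potential on every panel
    return q.sum()                              # C = Q / V with V = 1

# Should land close to the analytic value C = R = 1.
C = sphere_capacitance()
```

For an arbitrary satellite shape only the panel geometry changes; the same potential-coefficient matrix and solve give the capacitance, which is why such models extend naturally to complex objects.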