Sample records for complex object processing

  1. A SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES

    EPA Science Inventory

    Evaluation and analysis of multiple objectives are very important in designing environmentally benign processes. They require a systematic procedure for solving multi-objective decision-making problems due to the complex nature of the problems and the need for complex assessment....

  2. Direct-to-digital holography reduction of reference hologram noise and Fourier space smearing

    DOEpatents

    Voelkl, Edgar

    2006-06-27

    Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
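
    The two procedures read as a recipe: average many reference image waves into one reduced-noise wave, then divide the object's complex image wave by it. A minimal NumPy sketch of that step (synthetic fields and noise levels are invented for illustration; the patent covers the full off-axis reconstruction):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# Synthetic "true" reference image wave (a unit-amplitude complex field).
true_ref = np.exp(1j * rng.uniform(0, 2 * np.pi, shape))

# Record a plurality of noisy reference holograms, each processed into an image wave.
refs = [true_ref + 0.1 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
        for _ in range(32)]

# Transform the plurality into a single reduced-noise reference image wave.
reduced_ref = np.mean(refs, axis=0)

# The object's complex image wave shares the same reference field.
obj_modulation = rng.uniform(0.5, 1.0, shape) * np.exp(1j * rng.uniform(0.0, 1.0, shape))
obj_wave = obj_modulation * true_ref

# Divide the object wave by the reduced-noise reference wave: the shared
# reference structure cancels, which is what suppresses the smearing.
recovered = obj_wave / reduced_ref

err = np.abs(recovered - obj_modulation).mean()
assert err < 0.05  # 32 averaged references leave little residual noise
```

    Averaging N independent reference exposures reduces the noise amplitude roughly by 1/sqrt(N) before the division is performed.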

  3. Processing of spatial and non-spatial information in rats with lesions of the medial and lateral entorhinal cortex: Environmental complexity matters.

    PubMed

    Rodo, Christophe; Sargolini, Francesca; Save, Etienne

    2017-03-01

    The entorhinal-hippocampal circuitry has been suggested to play an important role in episodic memory, but the contribution of the entorhinal cortex remains elusive. Predominant theories propose that the medial entorhinal cortex (MEC) processes spatial information whereas the lateral entorhinal cortex (LEC) processes non-spatial information. A recent study using an object exploration task has suggested that the involvement of the MEC and LEC in spatial and non-spatial information processing could be modulated by the amount of information to be processed, i.e. environmental complexity. To address this hypothesis we used an object exploration task in which rats with excitotoxic lesions of the MEC and LEC had to detect spatial and non-spatial novelty among a set of objects, and we varied environmental complexity by decreasing the number of objects or the amount of object diversity. Reducing diversity restored the ability to process spatial and non-spatial information in the MEC and LEC groups, respectively. Reducing the number of objects restored the ability to process non-spatial information in the LEC group but not the ability to process spatial information in the MEC group. The findings indicate that the MEC and LEC are not strictly necessary for spatial and non-spatial processing but that their involvement depends on the complexity of the information to be processed. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Reducing the complexity of the software design process with object-oriented design

    NASA Technical Reports Server (NTRS)

    Schuler, M. P.

    1991-01-01

    Designing software is a complex process. How object-oriented design (OOD), coupled with formalized documentation and tailored object diagramming techniques, can reduce the complexity of the software design process is described and illustrated. The described OOD methodology uses a hierarchical decomposition approach in which parent objects are decomposed into layers of lower level child objects. A method of tracking the assignment of requirements to design components is also included. Increases in the reusability, portability, and maintainability of the resulting products are also discussed. This method was built on a combination of existing technology, teaching experience, consulting experience, and feedback from design method users. The discussed concepts are applicable to hierarchical OOD processes in general. Emphasis is placed on improving the design process by documenting the details of the procedures involved and incorporating improvements into those procedures as they are developed.
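
    The hierarchical parent/child decomposition and requirement tracking described above can be sketched as plain classes (the class and requirement names are hypothetical, not taken from the NASA report):

```python
class DesignObject:
    """A node in a hierarchical object-oriented decomposition."""
    def __init__(self, name, requirements=None):
        self.name = name
        self.requirements = set(requirements or [])  # requirement IDs assigned here
        self.children = []

    def decompose(self, child):
        """Attach a lower-level child object and return it for chaining."""
        self.children.append(child)
        return child

    def covered_requirements(self):
        """All requirements satisfied by this object or its descendants."""
        covered = set(self.requirements)
        for c in self.children:
            covered |= c.covered_requirements()
        return covered

# Trace requirements from a parent object down through layers of child objects.
system = DesignObject("FlightSoftware")
nav = system.decompose(DesignObject("Navigation", ["REQ-1"]))
nav.decompose(DesignObject("KalmanFilter", ["REQ-2"]))
system.decompose(DesignObject("Telemetry", ["REQ-3"]))

assert system.covered_requirements() == {"REQ-1", "REQ-2", "REQ-3"}
```

    Walking the tree in this way is one simple form of the requirement-to-component tracking the abstract mentions.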

  5. SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES

    EPA Science Inventory

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems, due to the complex nature of the problems, the need for complex assessments, and complicated ...
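
    A central step in any such multi-objective evaluation is separating dominated from non-dominated process designs. A minimal Pareto filter (illustrative only, not the EPA procedure itself) can be written as:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Each candidate process design scored on (cost, emissions), both minimized.
designs = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
front = pareto_front(designs)
assert (9, 9) not in front   # dominated by (8, 7) and (8, 6)
assert (8, 7) not in front   # dominated by (8, 6)
assert set(front) == {(10, 5), (12, 4), (8, 6)}
```

    The surviving designs are the trade-off set a decision maker would then rank with whatever systematic procedure the record describes.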

  6. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.

  7. Cultural differences in the lateral occipital complex while viewing incongruent scenes

    PubMed Central

    Yang, Yung-Jui; Goh, Joshua; Hong, Ying-Yi; Park, Denise C.

    2010-01-01

    Converging behavioral and neuroimaging evidence indicates that culture influences the processing of complex visual scenes. Whereas Westerners focus on central objects and tend to ignore context, East Asians process scenes more holistically, attending to the context in which objects are embedded. We investigated cultural differences in contextual processing by manipulating the congruence of visual scenes presented in an fMR-adaptation paradigm. We hypothesized that East Asians would show greater adaptation to incongruent scenes, consistent with their tendency to process contextual relationships more extensively than Westerners. Sixteen Americans and 16 native Chinese were scanned while viewing sets of pictures consisting of a focal object superimposed upon a background scene. In half of the pictures objects were paired with congruent backgrounds, and in the other half objects were paired with incongruent backgrounds. We found that within both the right and left lateral occipital complexes, Chinese participants showed significantly greater adaptation to incongruent scenes than to congruent scenes relative to American participants. These results suggest that Chinese were more sensitive to contextual incongruity than were Americans and that they reacted to incongruent object/background pairings by focusing greater attention on the object. PMID:20083532

  8. A Visual Short-Term Memory Advantage for Objects of Expertise

    ERIC Educational Resources Information Center

    Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel

    2009-01-01

    Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects; this advantage may stem from the holistic nature of face processing. If holistic processing explains this advantage, object expertise--which also relies on holistic processing--should endow experts…

  9. Object-processing neural efficiency differentiates object from spatial visualizers.

    PubMed

    Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria

    2008-11-19

    The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned, while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.

  10. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Visual Short-Term Memory Capacity for Simple and Complex Objects

    ERIC Educational Resources Information Center

    Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto

    2010-01-01

    Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) and are not…

  12. Segregation and persistence of form in the lateral occipital complex.

    PubMed

    Ferber, Susanne; Humphrey, G Keith; Vilis, Tutis

    2005-01-01

    While the lateral occipital complex (LOC) has been shown to be implicated in object recognition, it is unclear whether this brain area is responsive to low-level stimulus-driven features or high-level representational processes. We used scrambled shape-from-motion displays to disambiguate the presence of contours from figure-ground segregation and to measure the strength of the binding process for shapes without contours. We found persisting brain activation in the LOC for scrambled displays after the motion stopped, indicating that this brain area subserves and maintains figure-ground segregation processes, a low-level function in the object-processing hierarchy. In our second experiment, we found that the figure-ground segregation process has some form of spatial constancy, indicating top-down influences. The persisting activation after the motion stops suggests an intermediate role for this brain area in object recognition processes and might provide further evidence for the idea that the lateral occipital complex subserves mnemonic functions mediating between iconic and short-term memory.

  13. Visual short-term memory capacity for simple and complex objects.

    PubMed

    Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto

    2010-03-01

    Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) and are not related to storage limitations of VSTM per se. We used ERPs to track neuronal activity specifically related to retention in VSTM by measuring the sustained posterior contralateral negativity during a change detection task (which required detecting if an item was changed between a memory and a test array). The sustained posterior contralateral negativity, during the retention interval, was larger for complex objects than for simple objects, suggesting that neurons mediating VSTM needed to work harder to maintain more complex objects. This, in turn, is consistent with the view that VSTM capacity depends on complexity.

  14. An assembly process model based on object-oriented hierarchical time Petri Nets

    NASA Astrophysics Data System (ADS)

    Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui

    2017-04-01

    In order to improve the versatility, accuracy and integrity of the assembly process model of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, through the local, down to the details, and subnet models of object-oriented Petri Nets are established at each level. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.
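
    The place/transition semantics underlying such a Petri-net model can be sketched in a few lines of Python (a flat, untimed net with invented assembly places; the paper's model adds hierarchy, timing and object orientation on top of this):

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, transition):
    """Consume tokens from input places and produce tokens in output places."""
    inputs, outputs = transition
    marking = dict(marking)
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] = marking.get(p, 0) + n
    return marking

# Two-step assembly: fetch a part, then mount it (transition = (inputs, outputs)).
fetch = ({"parts_bin": 1}, {"staged": 1})
mount = ({"staged": 1, "fixture_free": 1}, {"assembled": 1, "fixture_free": 1})

m = {"parts_bin": 2, "fixture_free": 1}
assert enabled(m, fetch) and not enabled(m, mount)
m = fire(m, fetch)
assert enabled(m, mount)
m = fire(m, mount)
assert m["assembled"] == 1 and m["parts_bin"] == 1
```

    In the hierarchical version, each subnet would expose its interface places and exchange tokens through the message database the abstract mentions.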

  15. Fast and Careless or Careful and Slow? Apparent Holistic Processing in Mental Rotation Is Explained by Speed-Accuracy Trade-Offs

    ERIC Educational Resources Information Center

    Liesefeld, Heinrich René; Fu, Xiaolan; Zimmer, Hubert D.

    2015-01-01

    A major debate in the mental-rotation literature concerns the question of whether objects are represented holistically during rotation. Effects of object complexity on rotational speed are considered strong evidence against such holistic representations. In Experiment 1, such an effect of object complexity was markedly present. A closer look on…

  16. Numerical simulation of deformation and failure processes of a complex technical object under impact loading

    NASA Astrophysics Data System (ADS)

    Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.

    2018-04-01

    The main points of the development of numerical tools for simulation of deformation and failure of complex technical objects under nonstationary conditions of extreme loading are presented. The possibility of extending the dynamic method for construction of difference grids to the 3D case is shown. A 3D realization of the discrete-continuum approach to the deformation and failure of complex technical objects is carried out. The efficiency of the existing software package for 3D modelling is shown.

  17. The study of cognitive processes in the brain EEG during the perception of bistable images using wavelet skeleton

    NASA Astrophysics Data System (ADS)

    Runnova, Anastasiya E.; Zhuravlev, Maksim O.; Pysarchik, Alexander N.; Khramova, Marina V.; Grubov, Vadim V.

    2017-03-01

    In this paper we study the appearance of complex patterns in human EEG data during a psychophysiological experiment that stimulates cognitive activity through the perception of ambiguous objects. A new method based on the calculation of the maximum energy components of the continuous wavelet transform (skeletons) is proposed. Skeleton analysis allows us to identify specific patterns in the EEG data set that appear during the perception of ambiguous objects. Thus, it becomes possible to diagnose some cognitive processes associated with the concentration of attention and the recognition of complex visual objects. The article presents the processing results of experimental data for 6 male volunteers.
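
    The skeleton idea (per time sample, keep the scale where the continuous wavelet transform carries maximum energy) can be sketched with a Morlet wavelet in NumPy. A toy two-tone signal stands in for EEG here, and all parameters are illustrative:

```python
import numpy as np

def cwt_skeleton(signal, scales, fs):
    """Morlet continuous wavelet transform; the skeleton is, per time
    sample, the scale carrying maximum energy."""
    n = len(signal)
    power = np.empty((len(scales), n))
    t = np.arange(-n // 2, n // 2) / fs
    for i, s in enumerate(scales):
        # Complex Morlet with centre frequency 6/s Hz and Gaussian envelope.
        wavelet = np.exp(2j * np.pi * 6.0 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)
        coef = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
        power[i] = np.abs(coef) ** 2
    return scales[np.argmax(power, axis=0)]  # dominant scale per sample

fs = 200.0
t = np.arange(0, 2, 1 / fs)
# 5 Hz during the first second, 20 Hz during the second: the skeleton switches.
sig = np.where(t < 1, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))

scales = np.array([6.0 / f for f in (2, 5, 10, 20, 40)])  # centre freq 6/s Hz
skel = cwt_skeleton(sig, scales, fs)
assert skel[100] == 6.0 / 5    # mid first segment: the 5 Hz scale dominates
assert skel[300] == 6.0 / 20   # mid second segment: the 20 Hz scale dominates
```

    On real EEG the skeleton trace over time is what would be inspected for stimulus-locked pattern changes.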

  18. Process Mining-Based Method of Designing and Optimizing the Layouts of Emergency Departments in Hospitals.

    PubMed

    Rismanchian, Farhood; Lee, Young Hoon

    2017-07-01

    This article proposes an approach to help designers analyze complex care processes and identify the optimal layout of an emergency department (ED) considering several objectives simultaneously. These objectives include minimizing the distances traveled by patients, maximizing design preferences, and minimizing the relocation costs. Rising demand for healthcare services leads to increasing demand for new hospital buildings as well as renovating existing ones. Operations management techniques have been successfully applied in both manufacturing and service industries to design more efficient layouts. However, high complexity of healthcare processes makes it challenging to apply these techniques in healthcare environments. Process mining techniques were applied to address the problem of complexity and to enhance healthcare process analysis. Process-related information, such as information about the clinical pathways, was extracted from the information system of an ED. A goal programming approach was then employed to find a single layout that would simultaneously satisfy several objectives. The layout identified using the proposed method improved the distances traveled by noncritical and critical patients by 42.2% and 47.6%, respectively, and minimized the relocation costs. This study has shown that an efficient placement of the clinical units yields remarkable improvements in the distances traveled by patients.
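
    The travel-distance objective at the heart of such a layout problem can be illustrated with a brute-force toy (flow frequencies and room distances are invented; the paper itself combines process mining with goal programming over several objectives):

```python
from itertools import permutations

# Patient transition frequencies between units, as mined from event logs (toy numbers).
flow = {("triage", "xray"): 30, ("triage", "lab"): 10, ("xray", "lab"): 5}
units = ["triage", "xray", "lab"]

# Pairwise walking distances between three candidate rooms.
dist = [[0, 10, 40], [10, 0, 20], [40, 20, 0]]

def travel_cost(assignment):
    """Total distance walked, weighted by how often each transition occurs."""
    pos = {u: i for i, u in enumerate(assignment)}
    return sum(w * dist[pos[a]][pos[b]] for (a, b), w in flow.items())

best = min(permutations(units), key=travel_cost)
# The heaviest flow (triage -> xray) should land on the shortest edge (rooms 0 and 1).
assert {best.index("triage"), best.index("xray")} == {0, 1}
```

    Goal programming replaces this single objective with deviation variables for each goal (distance, preferences, relocation cost) and minimizes their weighted sum.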

  19. Heuristics in Managing Complex Clinical Decision Tasks in Experts’ Decision Making

    PubMed Central

    Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme

    2016-01-01

    Background Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. Objective The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. Method After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts from these interviews were analyzed. Results We found five broad categories of strategies used by experts to manage complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty and generating rules of thumb. Conclusion Complexity is created by decision conflicts, mental projection, limited options and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes and focusing on only the most relevant information. Application Understanding complex decision-making processes can help tailor clinical decision support design to the complexity of the task. PMID:27275019

  20. Representing and Learning Complex Object Interactions

    PubMed Central

    Zhou, Yilun; Konidaris, George

    2017-01-01

    We present a framework for representing scenarios with complex object interactions, in which a robot cannot directly interact with the object it wishes to control, but must instead do so via intermediate objects. For example, a robot learning to drive a car can only indirectly change its pose, by rotating the steering wheel. We formalize such complex interactions as chains of Markov decision processes and show how they can be learned and used for control. We describe two systems in which a robot uses learning from demonstration to achieve indirect control: playing a computer game, and using a hot water dispenser to heat a cup of water. PMID:28593181
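
    The chained structure (actions change an intermediate object, and only the intermediate object changes the target) can be sketched with two toy deterministic MDP-style transition functions. The paper learns these chains from demonstration; here they are hand-coded for illustration:

```python
# Chain of two deterministic MDPs: actions turn the wheel; the wheel turns the car.
def wheel_step(wheel, action):
    """First MDP: the robot directly changes only the wheel angle (-1, 0, 1)."""
    return max(-1, min(1, wheel + action))

def car_step(heading, wheel):
    """Second MDP: the car's heading changes only via the wheel state."""
    return heading + wheel

def drive_to(target_heading, steps=10):
    """Greedy controller acting through the chain: pick the action whose
    resulting wheel state moves the heading toward the target."""
    wheel, heading = 0, 0
    for _ in range(steps):
        want = (target_heading > heading) - (target_heading < heading)  # sign
        action = max(-1, min(1, want - wheel))  # rotate wheel toward desire
        wheel = wheel_step(wheel, action)
        heading = car_step(heading, wheel)
    return heading

assert drive_to(3) == 3  # indirect control reaches the target heading
```

    The controller never sets the heading directly; every change flows through the intermediate wheel state, which is exactly the kind of chain the framework formalizes.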

  1. Emergence Processes up to Consciousness Using the Multiplicity Principle and Quantum Physics

    NASA Astrophysics Data System (ADS)

    Ehresmann, Andrée C.; Vanbremeersch, Jean-Paul

    2002-09-01

    Evolution is marked by the emergence of new objects and interactions. Pursuing our preceding work on Memory Evolutive Systems (MES; cf. our Internet site), we propose a general mathematical model for this process, based on Category Theory. Its main characteristic is the Multiplicity Principle (MP), which asserts the existence of complex objects with several possible configurations. The MP entails the emergence of increasingly complex, non-reducible objects (emergentist reductionism). From the laws of Quantum Physics, it follows that the MP is valid for the category of particles and atoms, hence, by complexification, for any natural autonomous anticipatory complex system, such as biological systems up to neural systems, or social systems. Applying the model to the MES of neurons, we describe the emergence of higher and higher cognitive processes and of a semantic memory. Consciousness is characterized by the development of a permanent `personal' memory, the archetypal core, which allows the formation of extended landscapes with an integration of the temporal dimensions.

  2. Studies on combined model based on functional objectives of large scale complex engineering

    NASA Astrophysics Data System (ADS)

    Yuting, Wang; Jingchun, Feng; Jiabao, Sun

    2018-03-01

    Large-scale complex engineering includes various functions, each of which is realized through the completion of one or more projects, so the combined projects affecting each function must be identified. Based on the types of project portfolio, the relationships between projects and their functional objectives were analyzed. On that premise, portfolio-projects techniques based on functional objectives were introduced, and the principles of these techniques were studied and proposed. In addition, the processes of combined projects were also constructed. With the help of portfolio-projects techniques based on the functional objectives of projects, our research findings lay a good foundation for the portfolio management of large-scale complex engineering.

  3. TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY

    PubMed Central

    Somogyi, Endre; Hagar, Amit; Glazier, James A.

    2017-01-01

    Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379

  4. TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.

    PubMed

    Somogyi, Endre; Hagar, Amit; Glazier, James A

    2016-12-01

    Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.

  5. Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?

    PubMed

    Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni

    2015-09-01

    The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. 20 adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on 3 high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.

  6. Striatal and Hippocampal Entropy and Recognition Signals in Category Learning: Simultaneous Processes Revealed by Model-Based fMRI

    ERIC Educational Resources Information Center

    Davis, Tyler; Love, Bradley C.; Preston, Alison R.

    2012-01-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and…

  7. Energy Center Structure Optimization by using Smart Technologies in Process Control System

    NASA Astrophysics Data System (ADS)

    Shilkina, Svetlana V.

    2018-03-01

    The article deals with the practical application of fuzzy logic methods in process control systems. A control object is considered: an agroindustrial greenhouse complex that includes its own energy center. The paper analyzes the object's power supply options, taking into account connection to external power grids and/or installation of its own power generating equipment in various layouts. The main problem of a greenhouse facility's basic process is extremely uneven power consumption, which forces the purchase of redundant generating equipment that idles most of the time and significantly reduces project profitability. Energy center structure optimization is largely based on solving the object's process control system construction issue. To cut the investor's costs, it was proposed to optimize power consumption by building an energy-saving production control system based on a fuzzy logic controller. The developed algorithm for automated process control system functioning ensured more even electric and thermal energy consumption and allowed the object's energy center to be built with a smaller number of units due to their more even utilization. As a result, it is shown how the practical use of a fuzzy control system for microclimate parameters leads to optimization of the agroindustrial complex's energy facility structure, which contributes to a significant reduction in construction and operation costs.
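
    The fuzzy-logic control idea can be sketched with triangular membership functions and weighted-average defuzzification. The heating rule base below is hypothetical; the abstract does not describe the greenhouse controller at this level of detail:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp_error):
    """Tiny rule base with weighted-average defuzzification:
    cold -> high power, near setpoint -> medium, warm -> low."""
    rules = [
        (tri(temp_error, -10, -5, 0), 90.0),  # too cold: strong heating
        (tri(temp_error, -5, 0, 5), 40.0),    # near setpoint: moderate
        (tri(temp_error, 0, 5, 10), 5.0),     # too warm: nearly off
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

assert heater_power(-5) == 90.0        # fully inside the "cold" set
assert heater_power(5) == 5.0          # fully inside the "warm" set
assert 5.0 < heater_power(2.5) < 40.0  # a blend of two rules
```

    Smooth blending between rules is what lets such a controller even out consumption instead of switching equipment hard on and off.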

  8. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
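
    For the simpler fully-observed case, the entropy rate has a closed form, h = -Σ_i π_i Σ_j T_ij log₂ T_ij, with π the stationary distribution of transition matrix T. A NumPy sketch of that analogue (the paper's hidden semi-Markov case additionally requires constructing the ε-machine):

```python
import numpy as np

def entropy_rate(T):
    """Shannon entropy rate (bits/symbol) of a stationary Markov chain
    with row-stochastic transition matrix T."""
    # Stationary distribution: left eigenvector of T for eigenvalue 1.
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi /= pi.sum()
    # Guard against log(0) for impossible transitions.
    logs = np.where(T > 0, np.log2(np.where(T > 0, T, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * T * logs))

# A fair coin (i.i.d.) has entropy rate exactly 1 bit per symbol.
fair = np.array([[0.5, 0.5], [0.5, 0.5]])
assert abs(entropy_rate(fair) - 1.0) < 1e-9

# A nearly deterministic chain is far less random.
sticky = np.array([[0.99, 0.01], [0.01, 0.99]])
assert entropy_rate(sticky) < 0.1
```

    The statistical complexity in the paper is the Shannon entropy of π itself, taken over the causal states of the ε-machine rather than the raw states used here.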

  9. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    ERIC Educational Resources Information Center

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  10. Superstructure-based Design and Optimization of Batch Biodiesel Production Using Heterogeneous Catalysts

    NASA Astrophysics Data System (ADS)

    Nuh, M. Z.; Nasir, N. F.

    2017-08-01

    Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks, such as vegetable oils and animal fats. Biodiesel production is a complex process which needs systematic design and optimization. However, no case study has applied the process system engineering (PSE) element of superstructure optimization to the batch process, which involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for components of the plant for cases A, B and C using published kinetic data; secondly, to determine an economic analysis for biodiesel production, focusing on heterogeneous catalysts; and finally, to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer nonlinear model is solved and the economic analysis estimated using MATLAB software. The result of the optimization process, with the objective function of minimizing the annual production cost of the batch process, is 23.2587 million USD for case C. Overall, this process system engineering study has optimized the modelling, design and cost estimation; optimizing the process addresses the complexity of batch biodiesel production and processing.
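
    The superstructure idea (enumerate discrete design choices and optimize the remaining variables under constraints) can be illustrated with a deliberately tiny brute-force version. All kinetic and cost numbers below are invented; the study itself solves a full MINLP in MATLAB:

```python
from itertools import product
from math import exp

# Toy superstructure: pick one catalyst (a binary choice) and a batch time
# (a discretized continuous variable), subject to a conversion requirement.
catalysts = {"CaO": {"rate": 0.30, "cost_per_batch": 120.0},
             "zeolite": {"rate": 0.55, "cost_per_batch": 300.0}}
batch_times = [2, 4, 6, 8]          # hours
required_conversion = 0.90

def conversion(rate, hours):
    """First-order batch kinetics: X = 1 - exp(-k t)."""
    return 1 - exp(-rate * hours)

def annual_cost(cat, hours):
    batches_per_year = 8000 / hours  # plant operates 8000 h/yr
    return batches_per_year * (catalysts[cat]["cost_per_batch"] + 15.0 * hours)

feasible = [(annual_cost(c, t), c, t)
            for c, t in product(catalysts, batch_times)
            if conversion(catalysts[c]["rate"], t) >= required_conversion]
cost, cat, t = min(feasible)
assert cat == "CaO" and t == 8   # cheap slow catalyst wins at long batch time
assert conversion(catalysts[cat]["rate"], t) >= required_conversion
```

    A real MINLP solver does this search implicitly, with binary variables for the structural choices and continuous variables for times, flows and temperatures.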

  11. On the use of multi-agent systems for the monitoring of industrial systems

    NASA Astrophysics Data System (ADS)

    Rezki, Nafissa; Kazar, Okba; Mouss, Leila Hayet; Kahloul, Laid; Rezki, Djamil

    2016-03-01

    The objective of the current paper is to present an intelligent system for complex process monitoring, based on artificial intelligence technologies. This system aims to carry out successfully all of the complex process monitoring tasks: detection, diagnosis, identification and reconfiguration. For this purpose, the development of a multi-agent system that combines multiple intelligences, such as multivariate control charts, neural networks, Bayesian networks and expert systems, has become a necessity. The proposed system is evaluated on the monitoring of a complex process, the Tennessee Eastman process.

  12. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can be color, shape, pattern, texture and spatial information that represent objects in the digital image. Recently developed methods for image feature extraction analyze the characteristics of simple curves and search for object features using chain codes. This study develops an algorithm for the analysis and recognition of curve types as the basis for object recognition in images, proposing the addition of complex-curve characteristics, with a maximum of four branches, to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
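    The chain-code feature search mentioned above can be sketched as follows. This is a generic Freeman 8-direction chain code, not the authors' algorithm; the contour format (an ordered list of 8-connected pixel coordinates) is an assumption.

```python
# Illustrative sketch: Freeman 8-direction chain code of a contour given
# as an ordered list of 8-connected pixel coordinates.

# Map each (dx, dy) step between consecutive contour pixels to one of the
# eight Freeman directions (0 = east, counted counter-clockwise).
DIRECTIONS = {
    (1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7,
}

def chain_code(contour):
    """Return the Freeman chain code of an ordered pixel contour."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(contour, contour[1:])]

# A 2x2 square traced counter-clockwise from its bottom-left corner.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(chain_code(square))  # [0, 0, 2, 2, 4, 4, 6, 6]
```

    Branch points of a complex curve would show up as contour pixels with more than two 8-connected neighbours; the chain code above only covers the simple-curve segments between them.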

  13. A probabilistic framework for identifying biosignatures using Pathway Complexity

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart M.; Murray, Alastair R. G.; Cronin, Leroy

    2017-11-01

    One thing that discriminates living things from inanimate matter is their ability to generate similarly complex or non-random structures in a large abundance. From DNA sequences to folded protein structures, living cells, microbial communities and multicellular structures, the material configurations in biology can easily be distinguished from non-living material assemblies. Many complex artefacts, from ordinary bioproducts to human tools, though they are not living things, are ultimately produced by biological processes-whether those processes occur at the scale of cells or societies, they are the consequences of living systems. While these objects are not living, they cannot randomly form, as they are the product of a biological organism and hence are either technological or cultural biosignatures. A generalized approach that aims to evaluate complex objects as possible biosignatures could be useful to explore the cosmos for new life forms. However, it is not obvious how it might be possible to create such a self-contained approach. This would require us to prove rigorously that a given artefact is too complex to have formed by chance. In this paper, we present a new type of complexity measure, which we call `Pathway Complexity', that allows us not only to threshold the abiotic-biotic divide, but also to demonstrate a probabilistic approach based on object abundance and complexity which can be used to unambiguously assign complex objects as biosignatures. We hope that this approach will not only open up the search for biosignatures beyond the Earth, but also allow us to explore the Earth for new types of biology, and to determine when a complex chemical system discovered in the laboratory could be considered alive. This article is part of the themed issue 'Reconceptualizing the origins of life'.
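    The idea that an object can be "too complex to have formed by chance" can be made concrete with a toy string analogue of such a pathway-style measure: the fewest joining operations needed to build an object when already-built fragments may be reused. This brute force is our construction for illustration only, not the authors' Pathway Complexity.

```python
# Toy brute-force illustration: minimum number of pairwise joins needed to
# assemble `target` from its single characters, reusing built fragments.
# Objects needing many joins, found in high abundance, are unlikely to have
# arisen by chance -- the intuition behind pathway-style biosignatures.

def assembly_index(target):
    substrings = {target[i:j]
                  for i in range(len(target))
                  for j in range(i + 1, len(target) + 1)}
    basis = frozenset(target)  # single characters are free building blocks

    def reachable(available, joins_left):
        if target in available:
            return True
        if joins_left == 0:
            return False
        for x in available:
            for y in available:
                z = x + y
                # Only substrings of the target can appear in a minimal build.
                if z in substrings and z not in available:
                    if reachable(available | {z}, joins_left - 1):
                        return True
        return False

    joins = 0  # iterative deepening over the number of joins
    while not reachable(basis, joins):
        joins += 1
    return joins

print(assembly_index("abab"))       # 2: a+b -> "ab", then "ab"+"ab" -> "abab"
print(assembly_index("abcabcabc"))  # 4: build "abc", double it, append "abc"
```

    Note how reuse of substructure (the hallmark of biological construction) drives the count below the naive length-minus-one bound.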

  14. Information Network Model Query Processing

    NASA Astrophysics Data System (ADS)

    Song, Xiaopu

    Information Networking Model (INM) [31] is a novel database model for managing real-world objects and relationships. It naturally and directly supports various kinds of static and dynamic relationships between objects; in INM, objects are networked through various natural and complex relationships. The INM Query Language (INM-QL) [30] is designed to explore such information networks; retrieve information about schemas, instances, their attributes, relationships, and context-dependent information; and process query results in a user-specified form. An INM database management system supporting INM-QL has been implemented using Berkeley DB. This thesis focuses on the implementation of the subsystem that processes INM-QL effectively and efficiently. The subsystem provides a lexical and syntactic analyzer for INM-QL, and it chooses appropriate evaluation strategies and index mechanisms to process INM-QL queries without user intervention. It also uses intermediate result structures to hold intermediate query results, and other helper structures to reduce the complexity of query processing.

  15. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  16. Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.

    PubMed

    Madan, Christopher R; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B; Sommer, Tobias

    2017-01-01

    Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity can be described as, "a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components." Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.

  18. Developing authentic clinical simulations for effective listening and communication in pediatric rehabilitation service delivery.

    PubMed

    King, Gillian; Shepherd, Tracy A; Servais, Michelle; Willoughby, Colleen; Bolack, Linda; Strachan, Deborah; Moodie, Sheila; Baldwin, Patricia; Knickle, Kerry; Parker, Kathryn; Savage, Diane; McNaughton, Nancy

    2016-10-01

    To describe the creation and validation of six simulations concerned with effective listening and interpersonal communication in pediatric rehabilitation. The simulations involved clinicians from various disciplines, were based on clinical scenarios related to client issues, and reflected core aspects of listening/communication. Each simulation had a key learning objective, thus focusing clinicians on specific listening skills. The article outlines the process used to turn written scenarios into digital video simulations, including steps taken to establish content validity and authenticity, and to establish a series of videos based on the complexity of their learning objectives, given contextual factors and associated macrocognitive processes that influence the ability to listen. A complexity rating scale was developed and used to establish a gradient of easy/simple, intermediate, and hard/complex simulations. The development process exemplifies an evidence-based, integrated knowledge translation approach to the teaching and learning of listening and communication skills.

  19. THE ROLE OF AN IMMIGRANT MOTHER IN HER ADOLESCENT'S IDENTITY FORMATION: "WHO AM I?".

    PubMed

    Mann, Mali

    2016-06-01

    Immigration is a complex bio-psycho-social process, and the immigrant mother has a truly complex task in lending her ego strength to her adolescent offspring. The normal adolescent decathexis of the love object, and the consequent search for a new object, may not happen smoothly for adolescents whose mothers are immigrants. The immigration experience may cause the immigrant mother, who has lost her motherland, a deeper disturbance in self-identity as well as a disequilibrium in her psychic structure, which in turn adversely impacts her adolescent's development. The adolescent's inadequate early experience with an immigrant mother may result in a deeper disturbance in his separation-individuation process as well as in his identification process. An immigrant mother who has not mourned adequately, and who comes from a different sociocultural background, has to go through a far more complex development of motherhood. The case of an adolescent boy, Jason, demonstrates the impact of immigrant motherhood on his ego development.

  20. Atypical Brain Activation during Simple & Complex Levels of Processing in Adult ADHD: An fMRI Study

    ERIC Educational Resources Information Center

    Hale, T. Sigi; Bookheimer, Susan; McGough, James J.; Phillips, Joseph M.; McCracken, James T.

    2007-01-01

    Objective: Executive dysfunction in ADHD is well supported. However, recent studies suggest that more fundamental impairments may be contributing. We assessed brain function in adults with ADHD during simple and complex forms of processing. Method: We used functional magnetic resonance imaging with forward and backward digit spans to investigate…

  1. Secure Design Patterns

    DTIC Science & Technology

    2009-10-01

    Unprivileged Client Process The unprivileged client is responsible for handling the authentication of the user’s request. Because it is not yet known if...define USER_UID 1000 // The location of the empty directory to use as the root directory // for the untrusted child process . #define EMPTY_ROOT_DIR...complex object should be independent of the parts that make up the object and how they are assembled. − The construction process must allow for

  2. Artificial Satellites Observations Using the Complex of Telescopes of RI "MAO"

    NASA Astrophysics Data System (ADS)

    Sybiryakova, Ye. S.; Shulga, O. V.; Vovk, V. S.; Kaliuzny, M. P.; Bushuev, F. I.; Kulichenko, M. O.; Haloley, M. I.; Chernozub, V. M.

    2017-02-01

    Special methods, instruments and software for the observation of space objects and the processing of the obtained results were developed. The combined method, which consists in the separate accumulation of images of reference stars and of artificial objects, is the main method used in observations of artificial space objects at all types of orbits.

  3. The Object Metaphor and Synecdoche in Mathematics Classroom Discourse

    ERIC Educational Resources Information Center

    Font, Vicenc; Godino, Juan D.; Planas, Nuria; Acevedo, Jorge I.

    2010-01-01

    This article describes aspects of classroom discourse, illustrated through vignettes, that reveal the complex relationship between the forms in which mathematical objects exist and their ostensive representations. We illustrate various aspects of the process through which students come to consider the reality of mathematical objects that are…

  4. Object processing in the infant: lessons from neuroscience.

    PubMed

    Wilcox, Teresa; Biondi, Marisa

    2015-07-01

    Object identification is a fundamental cognitive capacity that forms the basis for complex thought and behavior. The adult cortex is organized into functionally distinct visual object-processing pathways that mediate this ability. Insights into the origin of these pathways have begun to emerge through the use of neuroimaging techniques with infant populations. The outcome of this work supports the view that, from the early days of life, object-processing pathways are organized in a way that resembles that of the adult. At the same time, theoretically important changes in patterns of cortical activation are observed during the first year. These findings lead to a new understanding of the cognitive and neural architecture in infants that supports their emerging object-processing capacities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Remembering Complex Objects in Visual Working Memory: Do Capacity Limits Restrict Objects or Features?

    PubMed Central

    Hardman, Kyle; Cowan, Nelson

    2014-01-01

    Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli which possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results, but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PMID:25089739

  6. Systematic procedure for designing processes with multiple environmental objectives.

    PubMed

    Kim, Ki-Joo; Smith, Raymond L

    2005-04-01

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems, due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order-of-magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step, a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
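    The Pareto optimal solutions that such a multiobjective genetic algorithm maintains are defined by non-domination. A minimal sketch of that criterion, with hypothetical toy data rather than the paper's case study, is:

```python
# Minimal sketch: extracting the Pareto front from candidate designs under
# pure minimization of all objectives.

def dominates(a, b):
    """True if design a is at least as good as b on every objective
    (minimization) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy designs scored on (environmental impact, -profit): lower is better.
designs = [(3.0, 5.0), (1.0, 9.0), (2.0, 6.0), (4.0, 4.0), (2.0, 8.0)]
print(pareto_front(designs))  # [(3.0, 5.0), (1.0, 9.0), (2.0, 6.0), (4.0, 4.0)]
```

    A genetic algorithm of the kind the abstract describes uses this ranking inside its selection step so that the population drifts toward a uniform covering of the front.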

  7. The perception of odor objects in everyday life: a review on the processing of odor mixtures

    PubMed Central

    Thomas-Danguin, Thierry; Sinding, Charlotte; Romagny, Sébastien; El Mountassir, Fouzia; Atanasova, Boriana; Le Berre, Elodie; Le Bon, Anne-Marie; Coureaud, Gérard

    2014-01-01

    Smelling monomolecular odors hardly ever occurs in everyday life, and the daily functioning of the sense of smell relies primarily on the processing of complex mixtures of volatiles that are present in the environment (e.g., emanating from food or conspecifics). Such processing allows for the instantaneous recognition and categorization of smells and also for the discrimination of odors among others to extract relevant information and to adapt efficiently in different contexts. The neurophysiological mechanisms underpinning this highly efficient analysis of complex mixtures of odorants is beginning to be unraveled and support the idea that olfaction, as vision and audition, relies on odor-objects encoding. This configural processing of odor mixtures, which is empirically subject to important applications in our societies (e.g., the art of perfumers, flavorists, and wine makers), has been scientifically studied only during the last decades. This processing depends on many individual factors, among which are the developmental stage, lifestyle, physiological and mood state, and cognitive skills; this processing also presents striking similarities between species. The present review gathers the recent findings, as observed in animals, healthy subjects, and/or individuals with affective disorders, supporting the perception of complex odor stimuli as odor objects. It also discusses peripheral to central processing, and cognitive and behavioral significance. Finally, this review highlights that the study of odor mixtures is an original window allowing for the investigation of daily olfaction and emphasizes the need for knowledge about the underlying biological processes, which appear to be crucial for our representation and adaptation to the chemical environment. PMID:24917831

  8. Object individuation is invariant to attentional diffusion: Changes in the size of the attended region do not interact with object-substitution masking.

    PubMed

    Goodhew, Stephanie C; Edwards, Mark

    2016-12-01

    When the human brain is confronted with complex and dynamic visual scenes, two pivotal processes are at play: visual attention (the process of selecting certain aspects of the scene for privileged processing) and object individuation (determining what information belongs to a continuing object over time versus what represents two or more distinct objects). Here we examined whether these processes are independent or whether they interact. Object-substitution masking (OSM) has been used as a tool to examine such questions; however, there is controversy surrounding whether OSM reflects object individuation versus substitution processes. The object-individuation account is agnostic regarding the role of attention, whereas object-substitution theory stipulates a pivotal role for attention. There have been attempts to investigate the role of attention in OSM, but they have been subject to alternative explanations. Here, therefore, we manipulated the size of the attended region, a pure and uncontaminated attentional manipulation, and examined the impact on OSM. Across three experiments, there was no interaction. This refutes the object-substitution theory of OSM and, in turn, tells us that object individuation is invariant to the distribution of attention. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a complex algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames containing weak, possibly discrete, tracks of space objects is recorded. The algorithm includes the pre-processing that is classical in astronomy, matched filtering of each frame and its thresholding, a shear transformation, median filtering of the transformed series of frames, repeated thresholding, and the detection decision. Weak tracks of space objects were simulated on real frames of the night sky obtained in the stationary-telescope regime. It is shown that the limiting magnitude of the optoelectronic device is thereby improved by almost 2 magnitudes.
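    The shear-and-median core of such a pipeline can be sketched on synthetic data as follows (per-frame matched filtering is omitted). The shear is realized here as a per-frame shift matched to an assumed track velocity; all names and parameter values are ours, not the authors'.

```python
import numpy as np

# Sketch: a faint object drifting one pixel per frame is recovered by
# shifting each frame to undo the motion, then taking the median through
# the stack, so the object's signal aligns while noise is suppressed.

rng = np.random.default_rng(0)
n_frames, h, w = 8, 32, 32
frames = rng.normal(0.0, 1.0, (n_frames, h, w))
for t in range(n_frames):
    frames[t, 16, 8 + t] += 4.0  # inject the drifting object

def detect(frames, vx=1, k=5.0):
    # Undo the assumed per-frame motion (shear along x), then co-add.
    shifted = np.stack([np.roll(f, -vx * t, axis=1)
                        for t, f in enumerate(frames)])
    stacked = np.median(shifted, axis=0)
    thresh = stacked.mean() + k * stacked.std()
    return np.argwhere(stacked > thresh)  # (row, col) detections

hits = detect(frames)
print(hits)
```

    In practice the track velocity is unknown, so the shear would be swept over a grid of candidate velocities, which is what makes median co-addition attractive: it is robust to the frames where the assumed shift is wrong.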

  10. [Influence of mental rotation of objects on psychophysiological functions of women].

    PubMed

    Chikina, L V; Fedorchuk, S V; Trushina, V A; Ianchuk, P I; Makarchuk, M Iu

    2012-01-01

    Work with computer systems is an integral part of modern human activity and, in turn, produces nervous-emotional tension. Hence, monitoring the psychophysiological state of workers, with the aim of preserving their health and the success of their activity, and the application of rehabilitation measures are relevant problems. It is known that the efficiency of rehabilitation procedures rises when a complex of restorative programs is applied. Our previous investigation showed that mental rotation can compensate for the consequences of nervous-emotional tension. Therefore, in the present work we investigated how the complex of spatial tasks we developed influences the psychophysiological performance of the tested women, for whom psycho-emotional tension from the use of computer technologies is more pronounced and the procedure of mental rotation is a more complex task than for men. The complex of spatial tasks included: mental rotation of simple objects (letters and digits), mental rotation of complex objects (geometrical figures), and mental rotation of complex objects involving short-term memory. Execution of the complex of spatial tasks reduced the time of simple and complex sensorimotor responses, raised short-term memory indices and brain working capacity, and improved nervous processes. Collectively, mental rotation of objects can be recommended as a rehabilitation resource to compensate for the consequences of psycho-emotional strain, for both men and women.

  11. Intermediate Traces and Intermediate Learners: Evidence for the Use of Intermediate Structure during Sentence Processing in Second Language French

    ERIC Educational Resources Information Center

    Miller, A. Kate

    2015-01-01

    This study reports on a sentence processing experiment in second language (L2) French that looks for evidence of trace reactivation at clause edge and in the canonical object position in indirect object cleft sentences with complex embedding and cyclic movement. Reaction time (RT) asymmetries were examined among low (n = 20) and high (n = 20)…

  12. Image space subdivision for fast ray tracing

    NASA Astrophysics Data System (ADS)

    Yu, Billy T.; Yu, William W.

    1999-09-01

    Ray tracing is notorious for its computational requirements, and a number of techniques have been developed to speed up the process. A well-known statistic indicates that ray-object intersections occupy over 95% of the total image generation time, so it is most beneficial to attack this bottleneck. Ray-object intersection reduction techniques fall into three major categories: bounding volume hierarchies, space subdivision, and directional subdivision. This paper introduces a technique in the third category. To further speed up the process, it takes advantage of hierarchy by adopting an MX-CIF quadtree in the image space; this special kind of quadtree provides simple object allocation and ease of implementation. The text also includes a theoretical proof of the expected performance: for ray-polygon comparison, the technique reduces the order of complexity from linear to square root, O(n) -> O(sqrt(n)). Experiments with various shapes, sizes and complexities were conducted to verify the expectation. The results showed that the computational improvement grew with the complexity of the scenery; the experimental improvement was more than 90% and agreed with the theoretical value when the number of polygons exceeded 3000. The more complex the scene, the more efficient the acceleration. The algorithm described was implemented at the polygon level; however, it could easily be enhanced and extended to the object or higher levels.
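    The MX-CIF allocation rule the abstract relies on can be sketched as follows: each axis-aligned object rectangle is stored at the smallest quadtree cell that fully contains it, so a point (or ray-sample) query only tests the objects stored along one root-to-leaf path instead of every object in the scene. This is a minimal 2D illustration, not the paper's implementation.

```python
# MX-CIF-style quadtree sketch: objects live at the smallest fully
# containing cell; a query collects objects along one descent path.

class Node:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner and side
        self.objects = []                        # rectangles anchored here
        self.children = None                     # four sub-quadrants or None

    def _quadrant(self, rect):
        """Child index fully containing rect, or None if it straddles."""
        half = self.size / 2
        mx, my = self.x + half, self.y + half
        x0, y0, x1, y1 = rect
        if x1 <= mx and y1 <= my: return 0       # lower-left
        if x0 >= mx and y1 <= my: return 1       # lower-right
        if x1 <= mx and y0 >= my: return 2       # upper-left
        if x0 >= mx and y0 >= my: return 3       # upper-right
        return None

    def insert(self, rect, depth=8):
        q = self._quadrant(rect) if depth > 0 else None
        if q is None:
            self.objects.append(rect)            # straddles: anchor it here
            return
        if self.children is None:
            half = self.size / 2
            self.children = [Node(self.x, self.y, half),
                             Node(self.x + half, self.y, half),
                             Node(self.x, self.y + half, half),
                             Node(self.x + half, self.y + half, half)]
        self.children[q].insert(rect, depth - 1)

    def candidates(self, px, py):
        """Objects stored along the path from the root to (px, py)."""
        found = list(self.objects)
        if self.children is not None:
            half = self.size / 2
            q = (1 if px >= self.x + half else 0) + (2 if py >= self.y + half else 0)
            found += self.children[q].candidates(px, py)
        return found

root = Node(0, 0, 16)
root.insert((1, 1, 2, 2))      # small box in the lower-left quadrant
root.insert((13, 13, 15, 15))  # small box in the upper-right quadrant
root.insert((7, 7, 9, 9))      # straddles the centre: stays at the root
print(root.candidates(1.5, 1.5))  # [(7, 7, 9, 9), (1, 1, 2, 2)]
```

    Only objects whose cells lie on the descent path are candidates for intersection, which is where the claimed sub-linear behaviour comes from.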

  13. A systems-based approach for integrated design of materials, products and design process chains

    NASA Astrophysics Data System (ADS)

    Panchal, Jitesh H.; Choi, Hae-Jin; Allen, Janet K.; McDowell, David L.; Mistree, Farrokh

    2007-12-01

    The concurrent design of materials and products provides designers with flexibility to achieve design objectives that were not previously accessible. However, the improved flexibility comes at a cost of increased complexity of the design process chains and the materials simulation models used for executing the design chains. Efforts to reduce the complexity generally result in increased uncertainty. We contend that a systems based approach is essential for managing both the complexity and the uncertainty in design process chains and simulation models in concurrent material and product design. Our approach is based on simplifying the design process chains systematically such that the resulting uncertainty does not significantly affect the overall system performance. Similarly, instead of striving for accurate models for multiscale systems (that are inherently complex), we rely on making design decisions that are robust to uncertainties in the models. Accordingly, we pursue hierarchical modeling in the context of design of multiscale systems. In this paper our focus is on design process chains. We present a systems based approach, premised on the assumption that complex systems can be designed efficiently by managing the complexity of design process chains. The approach relies on (a) the use of reusable interaction patterns to model design process chains, and (b) consideration of design process decisions using value-of-information based metrics. The approach is illustrated using a Multifunctional Energetic Structural Material (MESM) design example. Energetic materials store considerable energy which can be released through shock-induced detonation; conventionally, they are not engineered for strength properties. The design objectives for the MESM in this paper include both sufficient strength and energy release characteristics. The design is carried out by using models at different length and time scales that simulate different aspects of the system. Finally, by applying the method to the MESM design problem, we show that the integrated design of materials and products can be carried out more efficiently by explicitly accounting for design process decisions with the hierarchy of models.

  14. Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Briggs, Jeffery L.

    2008-01-01

    The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself, freeing the modeler to develop a library of standard modeling processes, such as design of experiments, optimizers, parameter studies, and sensitivity studies, which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well-defined API and object structure. Both the API and the object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
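    The separation the abstract describes can be illustrated with a short sketch. The interface and names below are ours for illustration, not ROSE's actual API: a generic process such as a parameter study can drive any model that exposes the same minimal interface.

```python
# Sketch of execution/model separation: a reusable process (parameter
# study) drives any model implementing a minimal set/execute/get API.

class Model:
    """Minimal model interface: set inputs, run, read outputs."""
    def set_input(self, name, value): raise NotImplementedError
    def execute(self): raise NotImplementedError
    def get_output(self, name): raise NotImplementedError

class ParabolaModel(Model):
    """Toy model: y = (x - 2)^2."""
    def __init__(self):
        self.x = 0.0
        self.y = None
    def set_input(self, name, value):
        setattr(self, name, value)
    def execute(self):
        self.y = (self.x - 2.0) ** 2
    def get_output(self, name):
        return getattr(self, name)

def parameter_study(model, name, values, output):
    """A reusable execution process: sweep one input, collect one output."""
    results = []
    for v in values:
        model.set_input(name, v)
        model.execute()
        results.append(model.get_output(output))
    return results

print(parameter_study(ParabolaModel(), "x", [0.0, 2.0, 4.0], "y"))  # [4.0, 0.0, 4.0]
```

    An optimizer or sensitivity study would be written once against the same interface and applied unchanged to any conforming model, which is the point of the divorce between process and model.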

  15. Feature binding, attention and object perception.

    PubMed Central

    Treisman, A

    1998-01-01

    The seemingly effortless ability to perceive meaningful objects in an integrated scene actually depends on complex visual processes. The 'binding problem' concerns the way in which we select and integrate the separate features of objects in the correct combinations. Experiments suggest that attention plays a central role in solving this problem. Some neurological patients show a dramatic breakdown in the ability to see several objects; their deficits suggest a role for the parietal cortex in the binding process. However, indirect measures of priming and interference suggest that more information may be implicitly available than we can consciously access. PMID:9770223

  16. Reliability Standards of Complex Engineering Systems

    NASA Astrophysics Data System (ADS)

    Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.

    2017-11-01

    Production and manufacturing play an important role in modern society. Industrial production is now characterized by increasingly complex interconnections between its parts, so the problem of preventing accidents at large industrial enterprises is especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance: the potential damage caused by an accident at such an enterprise may lead to substantial material losses and, in some cases, even to the loss of human lives. In terms of their reliability, industrial facilities (objects) are divided into simple and complex. A simple object is characterized by only two conditions: operable and non-operable. A complex object exists in more than two conditions, and its main characteristic is the stability of its operation. This paper develops a reliability indicator combining the methodology of set theory with a state-space method, both of which are widely used to analyze dynamically evolving probabilistic processes. The research also introduces a set of reliability indicators for complex technical systems.
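    The simple/complex distinction can be made concrete with a small state-space sketch: a three-state discrete-time Markov model whose transition probabilities are invented for illustration, not taken from the paper.

```python
# Steady-state behaviour of a "complex object" with more than two
# conditions, modeled as a discrete-time Markov chain. States:
# 0 = fully operable, 1 = degraded, 2 = failed. The transition
# probabilities are invented for illustration.

def steady_state(P, steps=10000):
    """Power-iterate a row-stochastic matrix to its stationary distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [
    [0.95, 0.04, 0.01],  # operable -> operable / degraded / failed
    [0.30, 0.60, 0.10],  # degraded may be repaired or fail completely
    [0.50, 0.00, 0.50],  # failed is eventually restored to operable
]

pi = steady_state(P)
availability = pi[0] + pi[1]  # states that still deliver some output
print(round(availability, 3))
```

    A simple object would collapse this to a 2x2 matrix; the extra states are what make stability of operation, rather than a binary up/down flag, the natural reliability characteristic.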

  17. Remembering complex objects in visual working memory: do capacity limits restrict objects or features?

    PubMed

    Hardman, Kyle O; Cowan, Nelson

    2015-03-01

    Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli that possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  18. Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure

    PubMed Central

    2016-01-01

    Background Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. Objective The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. Methods We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. Results We identified 5 high-level macrocognitive processes affecting medication management—sensemaking, planning, coordination, monitoring, and decision making—and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Conclusions Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation. PMID:27733331

  19. Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Ross, Brian; Imada, Janine

    Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.
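    The "undominated solutions" retained after a run form the Pareto front over the statistical feature-test objectives; a minimal dominance filter (assuming all objectives are minimized) looks like this:

```python
# Extract the non-dominated (Pareto) set from scored candidates.
# Each candidate is a tuple of objective values, all to be minimized,
# e.g. one error score per statistical feature test.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

scores = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
print(pareto_front(scores))  # [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
```

    Keeping the whole front, rather than a single weighted sum, is what lets heterogeneous feature tests reside as independent objectives for post-run inspection.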

  20. EPR Characterization of Dinitrosyl Iron Complexes with Thiol-Containing Ligands as an Approach to Their Identification in Biological Objects: An Overview.

    PubMed

    Vanin, Anatoly F

    2018-06-01

    The overview demonstrates how the use of only one physico-chemical approach, viz., the electron paramagnetic resonance (EPR) method, allowed detection and identification of dinitrosyl iron complexes with thiol-containing ligands in various animal and bacterial cells. These complexes are formed in biological objects in the paramagnetic (EPR-active) mononuclear and diamagnetic (EPR-silent) binuclear forms and control the activity of nitrogen monoxide, one of the most universal regulators of metabolic processes in the organism. The analysis of the electronic and spatial structures of dinitrosyl iron complexes sheds additional light on the mechanism whereby dinitrosyl iron complexes with thiol-containing ligands function in human and animal cells as donors of nitrogen monoxide and of its ionized form, viz., the nitrosonium ion (NO+).

  1. Conjunctive Coding of Complex Object Features

    PubMed Central

    Erez, Jonathan; Cusack, Rhodri; Kendall, William; Barense, Morgan D.

    2016-01-01

    Critical to perceiving an object is the ability to bind its constituent features into a cohesive representation, yet the manner by which the visual system integrates object features to yield a unified percept remains unknown. Here, we present a novel application of multivoxel pattern analysis of neuroimaging data that allows a direct investigation of whether neural representations integrate object features into a whole that is different from the sum of its parts. We found that patterns of activity throughout the ventral visual stream (VVS), extending anteriorly into the perirhinal cortex (PRC), discriminated between the same features combined into different objects. Despite this sensitivity to the unique conjunctions of features comprising objects, activity in regions of the VVS, again extending into the PRC, was invariant to the viewpoints from which the conjunctions were presented. These results suggest that the manner in which our visual system processes complex objects depends on the explicit coding of the conjunctions of features comprising them. PMID:25921583

  2. Selective visual attention in object detection processes

    NASA Astrophysics Data System (ADS)

    Paletta, Lucas; Goyal, Anurag; Greindl, Christian

    2003-03-01

    Object detection is an enabling technology that plays a key role in many application areas, such as content-based media retrieval. Attentive cognitive vision systems are proposed here in which the focus of attention is directed towards the most relevant target. The most promising information is interpreted in a sequential process that dynamically makes use of knowledge and that enables spatial reasoning on the local object information. The presented work proposes an innovative application of attention mechanisms to object detection that is most general in its understanding of information and action selection. The attentive detection system uses a cascade of increasingly complex classifiers for the stepwise identification of regions of interest (ROIs) and recursively refined object hypotheses. While the coarsest classifiers are used to determine first approximations of a region of interest in the input image, more complex classifiers are applied to the more refined ROIs to give more confident estimates. Objects are modeled by local appearance-based representations and in terms of posterior distributions of the object samples in eigenspace. The discrimination function used to discern between objects is modeled by a radial basis function (RBF) network that has been compared with alternative networks and proved consistent and superior to other artificial neural networks for appearance-based object recognition. The experiments were conducted on the automatic detection of brand objects in Formula One broadcasts within the European Commission's cognitive vision project DETECT.
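    The discrimination step the abstract describes can be illustrated with a toy radial basis function classifier; the prototype locations and width below are invented for illustration, not the representations used in DETECT:

```python
import math

# Toy RBF discriminator: one Gaussian basis per class prototype in a
# pretend eigenspace; classification picks the strongest activation.
# The prototype locations and the width are invented for illustration.

prototypes = [(0.0, 0.0),   # prototype for class 0
              (3.0, 3.0)]   # prototype for class 1
width = 1.0

def activation(proto, x):
    """Gaussian RBF response of one prototype to input x."""
    d2 = sum((p - v) ** 2 for p, v in zip(proto, x))
    return math.exp(-d2 / (2 * width ** 2))

def classify(x):
    acts = [activation(p, x) for p in prototypes]
    return acts.index(max(acts))

print(classify((0.2, -0.1)), classify((2.8, 3.3)))  # 0 1
```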

  3. Reengineering the JPL Spacecraft Design Process

    NASA Technical Reports Server (NTRS)

    Briggs, C.

    1995-01-01

    This presentation describes the factors that have emerged in the evolved process of reengineering the unmanned spacecraft design process at the Jet Propulsion Laboratory in Pasadena, California. Topics discussed include: New facilities, new design factors, new system-level tools, complex performance objectives, changing behaviors, design integration, leadership styles, and optimization.

  4. Multi-Scale and Object-Oriented Analysis for Mountain Terrain Segmentation and Geomorphological Assessment

    NASA Astrophysics Data System (ADS)

    Marston, B. K.; Bishop, M. P.; Shroder, J. F.

    2009-12-01

    Digital terrain analysis of mountain topography is widely utilized for mapping landforms, assessing the role of surface processes in landscape evolution, and estimating the spatial variation of erosion. Numerous geomorphometry techniques exist to characterize terrain surface parameters, although their utility for characterizing the spatial hierarchical structure of the topography, and for permitting an assessment of the erosion/tectonic impact on the landscape, is very limited due to scale and data-integration issues. To address this problem, we apply scale-dependent geomorphometric and object-oriented analyses to characterize the hierarchical spatial structure of mountain topography. Specifically, we utilized a high-resolution digital elevation model to characterize complex topography in the Shimshal Valley in the Western Himalaya of Pakistan. To accomplish this, we generate terrain objects (geomorphological features and landforms) including valley floors and walls, drainage basins, the drainage network, the ridge network, slope facets, and elemental forms based upon curvature. Object-oriented analysis was used to characterize object properties, accounting for object size, shape, and morphometry. The spatial overlay and integration of terrain objects at various scales defines the nature of the hierarchical organization. Our results indicate that variations in the spatial complexity of the terrain hierarchical organization are related to the spatio-temporal influence of surface processes and landscape evolution dynamics. Terrain segmentation and the integration of multi-scale terrain information permit further assessment of process domains and erosion, tectonic impact potential, and natural hazard potential. We demonstrate this with landform mapping and geomorphological assessment examples.

  5. Fault-tolerant wait-free shared objects

    NASA Technical Reports Server (NTRS)

    Jayanti, Prasad; Chandra, Tushar D.; Toueg, Sam

    1992-01-01

    A concurrent system consists of processes communicating via shared objects, such as shared variables, queues, etc. The concept of wait-freedom was introduced to cope with process failures: each process that accesses a wait-free object is guaranteed to get a response even if all the other processes crash. However, if a wait-free object 'crashes,' all the processes that access that object are prevented from making progress. In this paper, we introduce the concept of fault-tolerant wait-free objects, and study the problem of implementing them. We give a universal method to construct fault-tolerant wait-free objects, for all types of 'responsive' failures (including one in which faulty objects may 'lie'). In sharp contrast, we prove that many common and interesting types (such as queues, sets, and test&set) have no fault-tolerant wait-free implementations even under the most benign of the 'non-responsive' types of failure. We also introduce several concepts and techniques that are central to the design of fault-tolerant concurrent systems: the concepts of self-implementation and graceful degradation, and techniques to automatically increase the fault-tolerance of implementations. We prove matching lower bounds on the resource complexity of most of our algorithms.

  6. Application of Intervention Mapping to the Development of a Complex Physical Therapist Intervention.

    PubMed

    Jones, Taryn M; Dear, Blake F; Hush, Julia M; Titov, Nickolai; Dean, Catherine M

    2016-12-01

    Physical therapist interventions, such as those designed to change physical activity behavior, are often complex and multifaceted. In order to facilitate rigorous evaluation and implementation of these complex interventions into clinical practice, the development process must be comprehensive, systematic, and transparent, with a sound theoretical basis. Intervention Mapping is designed to guide an iterative and problem-focused approach to the development of complex interventions. The purpose of this case report is to demonstrate the application of an Intervention Mapping approach to the development of a complex physical therapist intervention, a remote self-management program aimed at increasing physical activity after acquired brain injury. Intervention Mapping consists of 6 steps to guide the development of complex interventions: (1) needs assessment; (2) identification of outcomes, performance objectives, and change objectives; (3) selection of theory-based intervention methods and practical applications; (4) organization of methods and applications into an intervention program; (5) creation of an implementation plan; and (6) generation of an evaluation plan. The rationale and detailed description of this process are presented using an example of the development of a novel and complex physical therapist intervention, myMoves-a program designed to help individuals with an acquired brain injury to change their physical activity behavior. The Intervention Mapping framework may be useful in the development of complex physical therapist interventions, ensuring the development is comprehensive, systematic, and thorough, with a sound theoretical basis. This process facilitates translation into clinical practice and allows for greater confidence and transparency when the program efficacy is investigated. © 2016 American Physical Therapy Association.

  7. Representing Energy. II. Energy Tracking Representations

    ERIC Educational Resources Information Center

    Scherr, Rachel E.; Close, Hunter G.; Close, Eleanor W.; Vokos, Stamatis

    2012-01-01

    The Energy Project at Seattle Pacific University has developed representations that embody the substance metaphor and support learners in conserving and tracking energy as it flows from object to object and changes form. Such representations enable detailed modeling of energy dynamics in complex physical processes. We assess student learning by…

  8. On Design Mining: Coevolution and Surrogate Models.

    PubMed

    Preen, Richard J; Bull, Larry

    2017-01-01

    Design mining is the use of computational intelligence techniques to iteratively search and model the attribute space of physical objects evaluated directly through rapid prototyping to meet given objectives. It enables the exploitation of novel materials and processes without formal models or complex simulation. In this article, we focus upon the coevolutionary nature of the design process when it is decomposed into concurrent sub-design-threads due to the overall complexity of the task. Using an abstract, tunable model of coevolution, we consider strategies to sample subthread designs for whole-system testing and how best to construct and use surrogate models within the coevolutionary scenario. Drawing on our findings, we then describe the effective design of an array of six heterogeneous vertical-axis wind turbines.

  9. Games and Simulation.

    ERIC Educational Resources Information Center

    Abt, Clark C.

    Educational games present the complex realities of simultaneous interactive processes more accurately and effectively than serial processes such as lecturing and reading. Objectives of educational gaming are to motivate students by presenting relevant and realistic problems and to induce more efficient and active understanding of information.…

  10. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Owing to the inherent heterogeneity of environmental datasets, their large volume, the complexity of the data models used, and the syntactic and semantic differences that complicate the creation and use of a unified terminology, developing environmental geodata access, processing, and visualization services, as well as client applications, is a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and the display of map legends and corresponding metadata. Modern web mapping systems, as integrated geoportal applications, are developed on a service-oriented architecture (SOA) and may be considered complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system is presented, comprising a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive. The GIS web client consists of three basic tiers:
    1. A metadata tier: geospatial metadata retrieved from a central MySQL repository and represented in JSON format.
    2. A middleware tier: JavaScript objects implementing methods for handling NetCDF metadata, the task XML object (which configures user calculations and input and output formats), and OGC WMS/WFS cartographical services.
    3. A graphical user interface (GUI) tier: JavaScript objects realizing the web application's business logic.
    The metadata tier consists of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, and valid processing methods). The middleware tier interconnects the metadata and GUI tiers; its methods include downloading and updating the JSON metadata, launching and tracking calculation tasks running on remote servers, and working with the WMS/WFS cartographical services: obtaining the list of available layers, visualizing layers on the map, and exporting layers in graphical (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and digital (NetCDF) formats. The GUI tier is based on a bundle of JavaScript libraries (OpenLayers, GeoExt and ExtJS) and represents a set of software components implementing the web mapping application's business logic (complex menus, toolbars, wizards, event handlers, etc.). The GUI provides two basic capabilities for the end user: configuring the task XML object and visualizing cartographical information. The web interface developed is similar to the interfaces of such popular desktop GIS applications as uDig and QuantumGIS. The web mapping system has shown its effectiveness in solving real climate change research problems and in disseminating investigation results in cartographical form. The work is supported by SB RAS Basic Program Projects VIII.80.2.1 and IV.38.1.7.
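    A record in the metadata tier described above might look like the following sketch; every field name is a hypothetical illustration, not the system's actual schema:

```python
import json

# Hypothetical metadata-tier record for one dataset; every field name
# here is illustrative, not the repository's actual schema.
record = {
    "dataset": "reanalysis-example",
    "format": "NetCDF",
    "spatial_resolution_deg": 2.5,
    "temporal_resolution": "6h",
    "parameters": ["air_temperature", "precipitation"],
    "processing_methods": ["mean", "anomaly", "trend"],
}

# A client would download such records as JSON and use them to build
# menus of valid parameter/method combinations.
encoded = json.dumps(record)
decoded = json.loads(encoded)
print(decoded["dataset"])  # reanalysis-example
```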

  11. The Goddard Profiling Algorithm (GPROF): Description and Current Applications

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea

    2004-01-01

    Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
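    The general flavor of such a Bayesian estimation method is a database average weighted by the likelihood of the observed radiance. The numbers below are invented for illustration and are not GPROF's actual database:

```python
import math

# Toy Bayesian retrieval: each database entry pairs a candidate rain rate
# with the brightness temperature (Tb) it would produce under a forward
# model. The retrieval is the database mean weighted by the likelihood of
# the observed Tb. All numbers are invented for illustration.

database = [  # (rain rate in mm/h, simulated Tb in K)
    (0.0, 280.0), (1.0, 270.0), (5.0, 250.0), (10.0, 230.0),
]
sigma = 5.0  # assumed observation-error standard deviation (K)

def retrieve(observed_tb):
    weights = [math.exp(-((tb - observed_tb) ** 2) / (2 * sigma ** 2))
               for _, tb in database]
    total = sum(weights)
    return sum(w * r for w, (r, _) in zip(weights, database)) / total

estimate = retrieve(252.0)  # closest to the 250 K entry, so near 5 mm/h
print(round(estimate, 2))
```

    This is why the approach is computationally efficient: no radiative transfer model is inverted at retrieval time, only precomputed profiles are reweighted.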

  12. Pupillary dynamics reveal computational cost in sentence planning.

    PubMed

    Sevilla, Yamila; Maldonado, Mora; Shalóm, Diego E

    2014-01-01

    This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed higher increase in pupil size for the production of passive and object dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. Differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production.

  13. The Implementation of Team-Based Discovery Learning to Improve Students' Ability in Writing Research Proposal

    ERIC Educational Resources Information Center

    Arifani, Yudhi

    2016-01-01

    Writing research proposal in educational setting is a very complex process involving variety of elements. Consequently, analyzing the complex elements from introduction to data analysis sections in order to yield convinced research proposal writing through reviewing reputable journal articles is worth-contributing. The objectives of this research…

  14. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation

    PubMed Central

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228

  15. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    PubMed

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  16. Make It Short and Easy: Username Complexity Determines Trustworthiness Above and Beyond Objective Reputation

    PubMed Central

    Silva, Rita R.; Chrobot, Nina; Newman, Eryn; Schwarz, Norbert; Topolinski, Sascha

    2017-01-01

    Can the mere name of a seller determine his trustworthiness in the eye of the consumer? In 10 studies (total N = 608) we explored username complexity and trustworthiness of eBay seller profiles. Name complexity was manipulated through variations in username pronounceability and length. These dimensions had strong, independent effects on trustworthiness, with sellers with easy-to-pronounce or short usernames being rated as more trustworthy than sellers with difficult-to-pronounce or long usernames, respectively. Both effects were repeatedly found even when objective information about seller reputation was available. We hypothesized the effect of name complexity on trustworthiness to be based on the experience of high vs. low processing fluency, with little awareness of the underlying process. Supporting this, participants could not correct for the impact of username complexity when explicitly asked to do so. Three alternative explanations based on attributions of the variations in name complexity to seller origin (ingroup vs. outgroup), username generation method (seller personal choice vs. computer algorithm) and age of the eBay profiles (10 years vs. 1 year) were tested and ruled out. Finally, we show that manipulating the ease of reading product descriptions instead of the sellers’ names also impacts the trust ascribed to the sellers. PMID:29312062

  17. Sculplexity: Sculptures of Complexity using 3D printing

    NASA Astrophysics Data System (ADS)

    Reiss, D. S.; Price, J. J.; Evans, T. S.

    2013-11-01

    We show how to convert models of complex systems such as 2D cellular automata into a 3D printed object. Our method takes into account the limitations inherent to 3D printing processes and materials. Our approach automates the greater part of this task, bypassing the use of CAD software and the need for manual design. As a proof of concept, a physical object representing a modified forest fire model was successfully printed. Automated conversion methods similar to the ones developed here can be used to create objects for research, for demonstration and teaching, for outreach, or simply for aesthetic pleasure. As our outputs can be touched, they may be particularly useful for those with visual disabilities.
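    The kind of conversion described, from an automaton's evolution to solid geometry, can be sketched by stacking the time steps of an elementary cellular automaton into voxel layers. The rule and grid size below are arbitrary choices for illustration, not those of the printed forest fire model:

```python
# Sketch: stack the time evolution of an elementary cellular automaton
# (rule 90) into voxel layers, the first step toward a printable mesh.
# The rule and grid size are arbitrary choices for illustration.

def step(cells, rule=90):
    """One update of an elementary CA with periodic boundaries."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def voxels(width=9, steps=5):
    """Return (x, z) voxel coordinates: each time step becomes a layer."""
    cells = [0] * width
    cells[width // 2] = 1            # single seed cell
    out = []
    for z in range(steps):
        out += [(x, z) for x, c in enumerate(cells) if c]
        cells = step(cells)
    return out

v = voxels()
print(len(v))  # 11 live voxels across 5 layers of the Sierpinski pattern
```

    A real pipeline would then thicken each voxel into a cube, merge the cubes into a watertight mesh, and emit STL, while respecting the printer's minimum feature size that the authors identify as a key constraint.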

  18. Object-oriented Bayesian networks for paternity cases with allelic dependencies

    PubMed Central

    Hepler, Amanda B.; Weir, Bruce S.

    2008-01-01

    This study extends the current use of Bayesian networks by incorporating the effects of allelic dependencies in paternity calculations. The use of object-oriented networks greatly simplifies the process of building and interpreting forensic identification models, allowing researchers to solve new, more complex problems. We explore two paternity examples: the most common scenario, where DNA evidence is available from the alleged father, the mother, and the child; and a more complex case, where DNA is not available from the alleged father but is available from the alleged father's brother. Object-oriented networks are built, using HUGIN, for each example, incorporating the effects of allelic dependence caused by evolutionary relatedness. PMID:19079769
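    For context, the baseline single-locus calculation that such networks extend (with alleles treated as independent, i.e. without the allelic dependencies the study models) can be written directly; the allele frequencies below are invented:

```python
# Baseline single-locus paternity index (PI) for a mother/child/alleged-
# father trio, with alleles treated as independent; modeling departures
# from this independence is what the study's networks add.
# The allele frequencies are invented for illustration.

def paternity_index(paternal_allele, af_genotype, freq):
    """PI = P(alleged father transmits allele) / P(random man transmits it)."""
    transmit = af_genotype.count(paternal_allele) / 2.0
    return transmit / freq[paternal_allele]

freq = {"A": 0.1, "B": 0.05}          # hypothetical population frequencies
# Mother is (A, A) and child is (A, B), so the paternal allele must be B.
pi = paternity_index("B", ("B", "C"), freq)
print(pi)  # 10.0: a heterozygous father transmits B with probability 0.5
```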

  19. Changing Times, Complex Decisions: Presidential Values and Decision Making

    ERIC Educational Resources Information Center

    Hornak, Anne M.; Garza Mitchell, Regina L.

    2016-01-01

    Objective: The objective of this article is to delve more deeply into the thought processes of the key decision makers at community colleges and understand how they make decisions. Specifically, this article focuses on the role of the community college president's personal values in decision making. Method: We conducted interviews with 13…

  20. Challenges to the development of complex virtual reality surgical simulations.

    PubMed

    Seymour, N E; Røtnes, J S

    2006-11-01

    Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.

  1. Feedforward object-vision models only tolerate small image variations compared to human

    PubMed Central

    Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2014-01-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has been under constant, intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is of little help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986

  2. Energy absorption capabilities of complex thin walled structures

    NASA Astrophysics Data System (ADS)

    Tarlochan, F.; AlKhatib, Sami

    2017-10-01

    Thin-walled structures have long been used for energy absorption in the event of a crash, and a great deal of work has been done on tubular structures. Due to the limitations of conventional manufacturing processes, complex geometries were previously dismissed as potential solutions. With the advancement of metal additive manufacturing, complex geometries can now be realized. Motivated by this, the objective of this study is to investigate computationally the crash performance of complex tubular structures. Five designs were considered. It was found that complex geometries have better crashworthiness performance than the standard tubular structures currently in use.

  3. Feature integration and object representations along the dorsal stream visual hierarchy

    PubMed Central

    Perry, Carolyn Jeane; Fallah, Mazyar

    2014-01-01

    The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147

  4. The IRMIS object model and services API.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, C.; Dohan, D. A.; Arnold, N. D.

    2005-01-01

    The relational model developed for the Integrated Relational Model of Installed Systems (IRMIS) toolkit has been successfully used to capture the Advanced Photon Source (APS) control system software (EPICS process variables and their definitions). The relational tables are populated by a crawler script that parses each Input/Output Controller (IOC) start-up file when an IOC reboot is detected. User interaction is provided by a Java Swing application that acts as a desktop for viewing the process variable information. Mapping between the display objects and the relational tables was carried out with the Hibernate Object Relational Modeling (ORM) framework. Work is well underway at the APS to extend the relational modeling to include control system hardware. For this work, due in part to the complex user interaction required, the primary application development environment has shifted from the relational database view to the object oriented (Java) perspective. With this approach, the business logic is executed in Java rather than in SQL stored procedures. This paper describes the object model used to represent control system software, hardware, and interconnects in IRMIS. We also describe the services API used to encapsulate the required behaviors for creating and maintaining the complex data. In addition to the core schema and object model, many important concepts in IRMIS are captured by the services API. IRMIS is an ambitious collaborative effort for defining and developing a relational database and associated applications to comprehensively document the large and complex EPICS-based control systems of today's accelerators. The documentation effort includes process variables, control system hardware, and interconnections. The approach could also be used to document all components of the accelerator, including mechanical, vacuum, power supplies, etc. One key aspect of IRMIS is that it is a documentation framework, not a design and development tool. We do not generate EPICS control system configurations from IRMIS, and hence do not impose any additional requirements on EPICS developers.
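
    The crawler step described above maps process-variable definitions onto objects; a loose sketch of that idea, covering only the simple record(type, "name") { field(NAME, "value") } form of an EPICS database file (sample text and class names invented), might look like this:

```python
# Illustrative crawler: parse EPICS-style record definitions into plain objects.
import re
from dataclasses import dataclass, field

@dataclass
class ProcessVariable:
    name: str
    rtype: str
    fields: dict = field(default_factory=dict)

RECORD_RE = re.compile(r'record\((\w+),\s*"([^"]+)"\)\s*\{([^}]*)\}')
FIELD_RE = re.compile(r'field\((\w+),\s*"([^"]*)"\)')

def crawl(db_text):
    """Return one ProcessVariable per record, with its fields as a dict."""
    pvs = []
    for rtype, name, body in RECORD_RE.findall(db_text):
        pvs.append(ProcessVariable(name, rtype, dict(FIELD_RE.findall(body))))
    return pvs

sample = '''
record(ai, "S1:temperature") {
    field(DESC, "Sector 1 temp")
    field(EGU, "C")
}
record(bo, "S1:valve") {
    field(DESC, "Sector 1 valve")
}
'''
pvs = crawl(sample)
print([(pv.name, pv.rtype) for pv in pvs])
```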

  5. ERPs Differentially Reflect Automatic and Deliberate Processing of the Functional Manipulability of Objects

    PubMed Central

    Madan, Christopher R.; Chen, Yvonne Y.; Singhal, Anthony

    2016-01-01

    It is known that the functional properties of an object can interact with perceptual, cognitive, and motor processes. Previously we have found that a between-subjects manipulation of judgment instructions resulted in different manipulability-related memory biases in an incidental memory test. To better understand this effect we recorded electroencephalography (EEG) while participants made judgments about images of objects that were either high or low in functional manipulability (e.g., hammer vs. ladder). Using a between-subjects design, participants judged whether they had seen the object recently (Personal Experience), or could manipulate the object using their hand (Functionality). We focused on the P300 and slow-wave event-related potentials (ERPs) as reflections of attentional allocation. In both groups, we observed higher P300 and slow wave amplitudes for high-manipulability objects at electrodes Pz and C3. As P300 is thought to reflect bottom-up attentional processes, this may suggest that the processing of high-manipulability objects recruited more attentional resources. Additionally, the P300 effect was greater in the Functionality group. A more complex pattern was observed at electrode C3 during slow wave: processing the high-manipulability objects in the Functionality instruction evoked a more positive slow wave than in the other three conditions, likely related to motor simulation processes. These data provide neural evidence that effects of manipulability on stimulus processing are further mediated by automatic vs. deliberate motor-related processing. PMID:27536224

  6. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum.

    PubMed

    Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph

    2012-06-22

    Filamentous fungi are versatile cell factories widely used for the large-scale production of antibiotics, organic acids, enzymes and other industrially relevant compounds. In fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase in the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.

  7. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum

    PubMed Central

    2012-01-01

    Background Filamentous fungi are versatile cell factories widely used for the large-scale production of antibiotics, organic acids, enzymes and other industrially relevant compounds. In fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. Results This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase in the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. Conclusions The newly developed methodology enabled fast characterization of two industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding. PMID:22727013

  8. Temperature and heat flux datasets of a complex object in a fire plume for the validation of fire and thermal response codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jernigan, Dann A.; Blanchat, Thomas K.

    To validate the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes, it is necessary to improve understanding and to develop temporally and spatially resolved, integral-scale validation data of the heat flux incident to a complex object, in addition to measuring the thermal response of that object within the fire plume. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.

  9. Complexity of line-seru conversion for different scheduling rules and two improved exact algorithms for the multi-objective optimization.

    PubMed

    Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei

    2016-01-01

    Productivity can be greatly improved by converting a traditional assembly line to a seru system, especially in business environments with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule for seru load. We select ten scheduling rules commonly used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for the ten scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting for solving the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
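
    The enumeration baseline the improved algorithms are compared against rests on non-dominated sorting; a minimal sketch of that filtering step for two minimized objectives follows. Objective names such as makespan and imbalance are illustrative, not taken from the paper.

```python
# Filter a candidate set down to its Pareto front (both objectives minimized).
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Each tuple: (makespan, imbalance) for one candidate seru configuration.
candidates = [(10, 4), (8, 5), (9, 3), (12, 2), (8, 5), (11, 6)]
front = pareto_front(candidates)
print(sorted(set(front)))  # [(8, 5), (9, 3), (12, 2)]
```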

  10. The Defense Industrial Base: Prescription for a Psychosomatic Ailment

    DTIC Science & Technology

    1983-08-01

    The Decision-Making Process... the strategy and tactics process to make certain that we can attain our national security objectives. (IFP is also known as mobilization planning or...) A decision-making model that could improve the capacity and capability of the military-industrial complex, thereby increasing the probability of success.

  11. The Sentence-Composition Effect: Processing of Complex Sentences Depends on the Configuration of Common Noun Phrases versus Unusual Noun Phrases

    ERIC Educational Resources Information Center

    Johnson, Marcus L.; Lowder, Matthew W.; Gordon, Peter C.

    2011-01-01

    In 2 experiments, the authors used an eye tracking while reading methodology to examine how different configurations of common noun phrases versus unusual noun phrases (NPs) influenced the difference in processing difficulty between sentences containing object- and subject-extracted relative clauses. Results showed that processing difficulty was…

  12. Much Needed Structure [Structured Decision-Making with DMRCS. Define-Measure-Reduce-Combine-Select

    DOE PAGES

    Anderson-Cook, Christine M.; Lu, Lu

    2015-10-01

    We have described a new DMRCS process for structured decision making, which mirrors the DMAIC approach that has become so popular within Lean Six Sigma. By dividing a complex, often unstructured process into distinct steps, we hope to have made the task of balancing multiple competing objectives less daunting.

  13. Exploring Use of Climate Information in Wildland Fire Management: A Decision Calendar Study

    Treesearch

    Thomas W. Corringham; Anthony L. Westerling; Barbara J. Morehouse

    2006-01-01

    Wildfire management is an institutionally complex process involving a complex budget and appropriations cycle, a variety of objectives, and a set of internal and external political constraints. Significant potential exists for enhancing the use of climate information and long-range climate forecasts in wildland fire management in the Western U.S. Written surveys and...

  14. Network-Oriented Approach to Distributed Generation Planning

    NASA Astrophysics Data System (ADS)

    Kochukov, O.; Mutule, A.

    2017-06-01

    The main objective of the paper is to present an innovative, complex approach to distributed generation planning and to show its advantages over existing methods. The approach is most suitable for DNOs and authorities and has specific calculation targets to support the decision-making process. The method can be used for complex distribution networks with different arrangements and legal bases.

  15. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival, and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
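
    A much-simplified, single-band stand-in for the unity relation idea: adjacent pixels are merged into one object when their feature difference stays below a threshold. AMICA's actual test operates on multispectral pixel-feature gradient vectors; this sketch only illustrates the adjacency-plus-similarity mechanism.

```python
# Region growing under a simple "unity relation" between 4-neighbours.
def segment(image, threshold):
    """Label 4-connected regions whose neighbouring pixels differ by
    less than `threshold`. Returns a parallel grid of region labels."""
    rows, cols = len(image), len(image[0])
    labels = [[None] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            stack = [(r, c)]
            labels[r][c] = current
            while stack:  # flood fill under the unity relation
                i, j = stack.pop()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and labels[ni][nj] is None
                            and abs(image[ni][nj] - image[i][j]) < threshold):
                        labels[ni][nj] = current
                        stack.append((ni, nj))
            current += 1
    return labels

img = [[10, 11, 50],
       [10, 12, 52],
       [11, 11, 51]]
labels = segment(img, 5)
print(labels)  # two regions: the low-valued block and the bright column
```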

  16. How to build a course in mathematical-biological modeling: content and processes for knowledge and skill.

    PubMed

    Hoskinson, Anne-Marie

    2010-01-01

    Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical-biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments.

  17. How to Build a Course in Mathematical–Biological Modeling: Content and Processes for Knowledge and Skill

    PubMed Central

    2010-01-01

    Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical–biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments. PMID:20810966

  18. Cone beam volume tomography: an imaging option for diagnosis of complex mandibular third molar anatomical relationships.

    PubMed

    Danforth, Robert A; Peck, Jerry; Hall, Paul

    2003-11-01

    Complex impacted third molars present potential treatment complications and possible patient morbidity. The objectives of diagnostic imaging are to facilitate diagnosis and decision making and to enhance treatment outcomes. As cases become more complex, advanced multiplane imaging methods allowing for a 3-D view are more likely to meet these objectives than traditional 2-D radiography. Until recently, advanced imaging options were largely limited to standard film tomography or medical CT, but the development of cone beam volume tomography (CBVT) multiplane 3-D imaging systems specifically for dental use now provides an alternative imaging option. Two cases are used to compare the role of CBVT with these other imaging options and to illustrate how multiplane visualization can assist the pretreatment evaluation and decision-making process for complex impacted mandibular third molar cases.

  19. Heuristics in Managing Complex Clinical Decision Tasks in Experts' Decision Making.

    PubMed

    Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme

    2014-09-01

    Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts from these interviews were analyzed. We found five broad categories of strategies used by experts for managing complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty and generating rules of thumb. Complexity is created by decision conflicts, mental projection, limited options and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes and focusing on only the most relevant information. Understanding complex decision-making processes can help tailor clinical decision support design to the complexity of the task.

  20. Modelling and Simulation as a Recognizing Method in Education

    ERIC Educational Resources Information Center

    Stoffa, Veronika

    2004-01-01

    Computer animation-simulation models of complex processes and events, which are the method of instruction, can be an effective didactic device. Gaining deeper knowledge about objects modelled helps to plan simulation experiments oriented on processes and events researched. Animation experiments realized on multimedia computers can aid easier…

  1. Development of a Support Tool for Complex Decision-Making in the Provision of Rural Maternity Care

    PubMed Central

    Hearns, Glen; Klein, Michael C.; Trousdale, William; Ulrich, Catherine; Butcher, David; Miewald, Christiana; Lindstrom, Ronald; Eftekhary, Sahba; Rosinski, Jessica; Gómez-Ramírez, Oralia; Procyk, Andrea

    2010-01-01

    Context: Decisions in the organization of safe and effective rural maternity care are complex, difficult, value laden and fraught with uncertainty, and must often be based on imperfect information. Decision analysis offers tools for addressing these complexities in order to help decision-makers determine the best use of resources and to appreciate the downstream effects of their decisions. Objective: To develop a maternity care decision-making tool for the British Columbia Northern Health Authority (NH) for use in low birth volume settings. Design: Based on interviews with community members, providers, recipients and decision-makers, and employing a formal decision analysis approach, we sought to clarify the influences affecting rural maternity care and develop a process to generate a set of value-focused objectives for use in designing and evaluating rural maternity care alternatives. Setting: Four low-volume communities with variable resources (with and without on-site births, with or without caesarean section capability) were chosen. Participants: Physicians (20), nurses (18), midwives and maternity support service providers (4), local business leaders, economic development officials and elected officials (12), First Nations (women [pregnant and non-pregnant], chiefs and band members) (40), social workers (3), pregnant women (2) and NH decision-makers/administrators (17). Results: We developed a Decision Support Manual to assist with assessing community needs and values, context for decision-making, capacity of the health authority or healthcare providers, identification of key objectives for decision-making, developing alternatives for care, and a process for making trade-offs and balancing multiple objectives. The manual was deemed an effective tool for the purpose by the client, NH. Conclusions: Beyond assisting the decision-making process itself, the methodology provides a transparent communication tool to assist in making difficult decisions. 
While the manual was specifically intended to deal with rural maternity issues, the NH decision-makers feel the method can be easily adapted to assist decision-making in other contexts in medicine where there are conflicting objectives, values and opinions. Decisions on the location of new facilities or infrastructure, or enhancing or altering services such as surgical or palliative care, would be examples of complex decisions that might benefit from this methodology. PMID:21286270
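
    One way to picture the trade-off step such a manual structures is a weighted scoring of care alternatives against value-focused objectives. The objectives, weights, and ratings below are invented for illustration; real decision analysis also treats uncertainty and non-linear value functions.

```python
# Weighted-sum scoring of alternatives against value-focused objectives.
OBJECTIVES = {"safety": 0.5, "access": 0.3, "cost": 0.2}  # weights sum to 1

def score(alternative):
    """Weighted sum of 0-10 ratings over all objectives."""
    return sum(alternative[name] * w for name, w in OBJECTIVES.items())

alternatives = {
    "on-site births": {"safety": 6, "access": 9, "cost": 3},
    "refer out":      {"safety": 8, "access": 3, "cost": 6},
}
ranked = sorted(alternatives, key=lambda a: score(alternatives[a]), reverse=True)
print(ranked[0], round(score(alternatives[ranked[0]]), 2))
```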

  2. Coordinating complex problem-solving among distributed intelligent agents

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1992-01-01

    A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high-level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.
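
    A loose sketch of the coordination pattern described: a manager agent executes a high-level script by routing each named task to whichever registered server application provides that capability. All names are invented for illustration; SOCIAL's actual API is not reproduced here.

```python
# Toy manager agent: route each scripted task to a registered server handler.
class ManagerAgent:
    def __init__(self):
        self.servers = {}   # capability -> handler function

    def register(self, capability, handler):
        self.servers[capability] = handler

    def run_script(self, script, payload):
        """Execute tasks in order, threading each result into the next."""
        for task in script:
            payload = self.servers[task](payload)
        return payload

agent = ManagerAgent()
agent.register("validate", lambda d: {**d, "valid": True})
agent.register("schedule", lambda d: {**d, "slot": 3})
agent.register("report", lambda d: f"job {d['id']} in slot {d['slot']}")

result = agent.run_script(["validate", "schedule", "report"], {"id": 7})
print(result)  # job 7 in slot 3
```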

  3. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
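
    To make "hydrological signature" concrete, here is one common example: the slope of the flow duration curve between the 33rd and 66th exceedance percentiles, a low-dimensional summary of streamflow variability. The synthetic series and the percentile thresholds follow common practice and are not taken from this study.

```python
# Flow duration curve slope as an example hydrological signature.
import math

def fdc_slope(flows):
    """Slope of the log flow duration curve between 33% and 66% exceedance."""
    ranked = sorted(flows, reverse=True)        # exceedance ordering
    q33 = ranked[int(0.33 * len(ranked))]
    q66 = ranked[int(0.66 * len(ranked))]
    return (math.log(q33) - math.log(q66)) / (0.66 - 0.33)

flashy = [100, 80, 40, 10, 5, 2, 1, 1, 1, 1]     # steep recession
damped = [12, 11, 10, 10, 9, 9, 8, 8, 7, 7]      # groundwater-dominated
print(fdc_slope(flashy) > fdc_slope(damped))  # True: flashier catchment
```

    A model that matches the hydrograph but misses such signatures is the kind of inconsistency the study diagnoses.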

  4. Developing a framework for qualitative engineering: Research in design and analysis of complex structural systems

    NASA Technical Reports Server (NTRS)

    Franck, Bruno M.

    1990-01-01

    The research is focused on automating the evaluation of complex structural systems, whether for the design of a new system or the analysis of an existing one, by developing new structural analysis techniques based on qualitative reasoning. The problem is to identify and better understand: (1) the requirements for the automation of design, and (2) the qualitative reasoning associated with the conceptual development of a complex system. The long-term objective is to develop an integrated design-risk assessment environment for the evaluation of complex structural systems. The scope of this short presentation is to describe the design and cognition components of the research. Design has received special attention in cognitive science because it is now identified as a problem solving activity that is different from other information processing tasks (1). Before an attempt can be made to automate design, a thorough understanding of the underlying design theory and methodology is needed, since the design process is, in many cases, multi-disciplinary, complex in size and motivation, and uses various reasoning processes involving different kinds of knowledge in ways which vary from one context to another. The objective is to unify all the various types of knowledge under one framework of cognition. This presentation focuses on the cognitive science framework that we are using to represent the knowledge aspects associated with the human mind's abstraction abilities and how we apply it to the engineering knowledge and engineering reasoning in design.

  5. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
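
    The grey-level object location described in this record can be illustrated with an intensity-weighted centroid. This is a minimal sketch of the general technique, not the workstation's actual software; the image array and threshold are hypothetical.

```python
def grey_level_centroid(image, threshold=0):
    """Intensity-weighted centroid of pixels above a grey-level threshold.

    `image` is a 2D list of grey values; returns a (row, col) centroid.
    """
    total = wsum_r = wsum_c = 0.0
    for r, row in enumerate(image):
        for c, g in enumerate(row):
            if g > threshold:
                total += g
                wsum_r += g * r
                wsum_c += g * c
    if total == 0:
        raise ValueError("no pixels above threshold")
    return wsum_r / total, wsum_c / total

# A bright 2x2 blob on a dark background, centred at (1.5, 1.5):
frame = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
print(grey_level_centroid(frame))  # (1.5, 1.5)
```

    Tracking then amounts to recomputing the centroid frame by frame; the spatial/temporal adaptation mentioned above would adjust `threshold` and the search window over time.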

  6. Camouflage and visual perception

    PubMed Central

    Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt

    2008-01-01

    How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671

  7. A visual short-term memory advantage for objects of expertise

    PubMed Central

    Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel

    2014-01-01

    Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects, an advantage that may stem from the holistic nature of face processing. If holistic processing explains this advantage, then object expertise—which also relies on holistic processing—should endow experts with a VSTM advantage. We compared VSTM for cars among car experts to that among car novices. Car experts, but not car novices, demonstrated a VSTM advantage similar to that for faces; this advantage was orientation-specific and was correlated with an individual's level of car expertise. Control experiments ruled out accounts based solely on verbal- or long-term memory representations. These findings suggest that the processing advantages afforded by visual expertise result in domain-specific increases in VSTM capacity, perhaps by allowing experts to maximize the use of an inherently limited VSTM system. PMID:19170473

  8. Strength and coherence of binocular rivalry depends on shared stimulus complexity.

    PubMed

    Alais, David; Melcher, David

    2007-01-01

    Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses, etc.). Two rivalry characteristics were measured: depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', therefore accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.

  9. Fast generation of computer-generated holograms using wavelet shrinkage.

    PubMed

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
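
    The shrinkage idea (keep only the largest-magnitude wavelet coefficients and zero the rest, so each object point is represented by a few coefficients) can be illustrated with a one-level 1D Haar transform. This is a generic sketch of wavelet shrinkage on a real-valued signal, not the authors' CGH algorithm, which operates on complex amplitudes.

```python
def haar_forward(x):
    """One level of the 1D Haar transform (input length must be even)."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    diff = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + diff

def haar_inverse(c):
    """Invert one level of the 1D Haar transform."""
    half = len(c) // 2
    out = []
    for a, d in zip(c[:half], c[half:]):
        out += [a + d, a - d]
    return out

def shrink(coeffs, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    kept = set(sorted(range(len(coeffs)),
                      key=lambda i: abs(coeffs[i]), reverse=True)[:keep])
    return [c if i in kept else 0.0 for i, c in enumerate(coeffs)]

# A piecewise-constant signal is captured well by a few coefficients:
signal = [4, 4, 4, 4, 2, 2, 0, 0]
approx = haar_inverse(shrink(haar_forward(signal), 3))
print(approx)  # [4.0, 4.0, 4.0, 4.0, 2.0, 2.0, 0.0, 0.0]
```

    The speed-up in the paper comes from superimposing only the retained coefficients in the wavelet domain instead of full point-spread patterns in the hologram plane.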

  10. Dopamine D1 receptor activation leads to object recognition memory in a coral reef fish.

    PubMed

    Hamilton, Trevor J; Tresguerres, Martin; Kline, David I

    2017-07-01

    Object recognition memory is the ability to identify previously seen objects and is an adaptive mechanism that increases survival for many species throughout the animal kingdom. Previously believed to be possessed by only the highest order mammals, it is now becoming clear that fish are also capable of this type of memory formation. Similar to the mammalian hippocampus, the dorsolateral pallium regulates distinct memory processes and is modulated by neurotransmitters such as dopamine. Caribbean bicolour damselfish (Stegastes partitus) live in complex environments dominated by coral reef structures and thus likely possess many types of complex memory abilities including object recognition. This study used a novel object recognition test in which fish were first presented two identical objects, then after a retention interval of 10 min with no objects, the fish were presented with a novel object and one of the objects they had previously encountered in the first trial. We demonstrate that the dopamine D1-receptor agonist (SKF 38393) induces the formation of object recognition memories in these fish. Thus, our results suggest that dopamine-receptor mediated enhancement of spatial memory formation in fish represents an evolutionarily conserved mechanism in vertebrates. © 2017 The Author(s).

  11. 3D Printing: Downstream Production Transforming the Supply Chain

    DTIC Science & Technology

    2017-01-01

    generative designs, and tailorable material properties will transform the way both military and civilian products are manufactured—from simple objects... design. Traditional and established subtractive manufacturing (SM) creates objects by removing material (e.g., through drilling or lathing) from solid... manufacturers to build products with highly complex geometry in a single process rather than by combining multiple components manufactured by

  12. 3D Printing of Shape Memory Polymers for Flexible Electronic Devices.

    PubMed

    Zarek, Matt; Layani, Michael; Cooperstein, Ido; Sachyani, Ela; Cohn, Daniel; Magdassi, Shlomo

    2016-06-01

    The formation of 3D objects composed of shape memory polymers for flexible electronics is described. Layer-by-layer photopolymerization of methacrylated semicrystalline molten macromonomers by a 3D digital light processing printer enables rapid fabrication of complex objects and imparts shape memory functionality for electrical circuits. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A novel knowledge-based system for interpreting complex engineering drawings: theory, representation, and implementation.

    PubMed

    Lu, Tong; Tai, Chiew-Lan; Yang, Huafei; Cai, Shijie

    2009-08-01

    We present a novel knowledge-based system to automatically convert real-life engineering drawings to content-oriented high-level descriptions. The proposed method essentially divides the complex interpretation process into two parts: knowledge representation and knowledge-based interpretation. We propose a new hierarchical descriptor-based knowledge representation method to organize the various types of engineering objects and their complex high-level relations. The descriptors are defined using an Extended Backus-Naur Form (EBNF), facilitating modification and maintenance. When interpreting a set of related engineering drawings, the knowledge-based interpretation system first constructs an EBNF-tree from the knowledge representation file, then searches for potential engineering objects guided by a depth-first order of the nodes in the EBNF-tree. Experimental results and comparisons with other interpretation systems demonstrate that our knowledge-based system is accurate and robust for high-level interpretation of complex real-life engineering projects.
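
    The depth-first traversal of a descriptor tree mentioned in this record can be sketched as follows. The node structure and descriptor names here are hypothetical illustrations, not the paper's actual EBNF-tree format.

```python
class Node:
    """A descriptor node in an EBNF-like knowledge tree (illustrative)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def depth_first(node):
    """Yield descriptor names in the depth-first order used to guide the search."""
    yield node.name
    for child in node.children:
        yield from depth_first(child)

# A toy descriptor hierarchy for a drawing:
tree = Node("drawing", [
    Node("title_block", [Node("text_line")]),
    Node("part", [Node("dimension"), Node("symbol")]),
])
print(list(depth_first(tree)))
# ['drawing', 'title_block', 'text_line', 'part', 'dimension', 'symbol']
```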

  14. Using simple manipulatives to improve student comprehension of a complex biological process: protein synthesis.

    PubMed

    Guzman, Karen; Bartlett, John

    2012-01-01

    Biological systems and living processes involve a complex interplay of biochemicals and macromolecular structures that can be challenging for undergraduate students to comprehend and, thus, misconceptions abound. Protein synthesis, or translation, is an example of a biological process for which students often hold many misconceptions. This article describes an exercise that was developed to illustrate the process of translation using simple objects to represent complex molecules. Animations, 3D physical models, computer simulations, laboratory experiments and classroom lectures are also used to reinforce the students' understanding of translation, but by focusing on the simple manipulatives in this exercise, students are better able to visualize concepts that can elude them when using the other methods. The translation exercise is described along with suggestions for background material, questions used to evaluate student comprehension and tips for using the manipulatives to identify common misconceptions. Copyright © 2012 Wiley Periodicals, Inc.

  15. Auditory memory can be object based.

    PubMed

    Dyson, Benjamin J; Ishfaq, Feraz

    2008-04-01

    Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

  16. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
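
    The core of multiobjective selection in NSGA-II is Pareto dominance over the two costs (LPC, CC). The sketch below shows only that dominance relation and the resulting non-dominated front; the plan costs are hypothetical, and this is not the paper's full NSGA-II implementation.

```python
def dominates(p, q):
    """True if plan p is at least as good as q on both objectives (LPC, CC)
    and strictly better on at least one: the Pareto dominance relation."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(plans):
    """Return the non-dominated plans among (LPC, CC) cost pairs."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

# Hypothetical query-plan costs as (local processing cost, communication cost):
plans = [(10, 50), (20, 20), (30, 10), (25, 40)]
print(pareto_front(plans))  # [(10, 50), (20, 20), (30, 10)]
```

    Plan (25, 40) is dropped because (20, 20) is cheaper on both objectives; NSGA-II layers this dominance sorting with crowding distance to keep the front well spread.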

  17. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  18. Preserved Haptic Shape Processing after Bilateral LOC Lesions.

    PubMed

    Snow, Jacqueline C; Goodale, Melvyn A; Culham, Jody C

    2015-10-07

    The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. The lateral occipital complex (LOC) is a brain region regarded as critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects, in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch. Copyright © 2015 the authors 0270-6474/15/3513745-16$15.00/0.

  19. A high-level object-oriented model for representing relationships in an electronic medical record.

    PubMed Central

    Dolin, R. H.

    1994-01-01

    The importance of electronic medical records to improve the quality and cost-effectiveness of medical care continues to be realized. This growing importance has spawned efforts at defining the structure and content of medical data, which is heterogeneous, highly inter-related, and complex. Computer-assisted data modeling tools have greatly facilitated the process of representing medical data, however the complex inter-relationships of medical information can result in data models that are large and cumbersome to manipulate and view. This report presents a high-level object-oriented model for representing the relationships between objects or entities that might exist in an electronic medical record. By defining the relationship between objects at a high level and providing for inheritance, this model enables relating any medical entity to any other medical entity, even though the relationships were not directly specified or known during data model design. PMID:7949981
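
    The high-level idea of the record's model (a generic relationship defined on a common base class, so any medical entity can be related to any other through inheritance) can be sketched as below. The class and relation names are hypothetical, not the paper's actual schema.

```python
class MedicalEntity:
    """Base class for anything that can appear in the record (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.relations = []   # (relation_type, target_entity) pairs

    def relate(self, relation_type, target):
        """Relate this entity to any other entity, even if the pairing
        was not anticipated when the data model was designed."""
        self.relations.append((relation_type, target))

class Medication(MedicalEntity): pass
class Problem(MedicalEntity): pass

warfarin = Medication("warfarin")
afib = Problem("atrial fibrillation")
warfarin.relate("treats", afib)
print([(rel, t.name) for rel, t in warfarin.relations])
# [('treats', 'atrial fibrillation')]
```

    Because `relate` lives on the base class, new entity subclasses inherit the ability to participate in relationships without any change to the model itself.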

  20. The Time-Course of Ultrarapid Categorization: The Influence of Scene Congruency and Top-Down Processing.

    PubMed

    Vanmarcke, Steven; Calders, Filip; Wagemans, Johan

    2016-01-01

    Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictive findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contradiction with previous ultrarapid categorization research, focusing on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur.

  1. The Time-Course of Ultrarapid Categorization: The Influence of Scene Congruency and Top-Down Processing

    PubMed Central

    Vanmarcke, Steven; Calders, Filip; Wagemans, Johan

    2016-01-01

    Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictive findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contradiction with previous ultrarapid categorization research, focusing on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur. PMID:27803794

  2. Automatic Adviser on Mobile Objects Status Identification and Classification

    NASA Astrophysics Data System (ADS)

    Shabelnikov, A. N.; Liabakh, N. N.; Gibner, Ya M.; Saryan, A. S.

    2018-05-01

    A mobile object status identification task is defined within the image discrimination theory. It is proposed to classify objects into three classes: the object is in operation; the object requires maintenance; and the object should be removed from the production process. Two methods were developed to construct the separating boundaries between the designated classes: a) using statistical information on the executed movement of the research objects, b) based on regulatory documents and expert commentary. A simulation of Automatic Adviser operation and a complex for analyzing the operation results were synthesized. Research results are discussed using a specific example of cuts rolling from the hump yard. The work was supported by the Russian Fundamental Research Fund, project No. 17-20-01040.
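
    A minimal sketch of assigning an object to one of the three classes from a scalar condition score. The scoring function and thresholds are purely illustrative assumptions; the paper's separating boundaries are built from movement statistics and expert rules, not fixed thresholds.

```python
def classify(condition_score, ok_threshold=0.7, maintain_threshold=0.4):
    """Map a hypothetical condition score in [0, 1] to one of three classes."""
    if condition_score >= ok_threshold:
        return "in operation"
    if condition_score >= maintain_threshold:
        return "maintenance required"
    return "remove from production process"

print([classify(s) for s in (0.9, 0.5, 0.1)])
# ['in operation', 'maintenance required', 'remove from production process']
```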

  3. Fuzzy Adaptive Control for Intelligent Autonomous Space Exploration Problems

    NASA Technical Reports Server (NTRS)

    Esogbue, Augustine O.

    1998-01-01

    The principal objective of the research reported here is the re-design, analysis and optimization of our newly developed neural network fuzzy adaptive controller model for complex processes capable of learning fuzzy control rules using process data and improving its control through on-line adaption. The learned improvement is according to a performance objective function that provides evaluative feedback; this performance objective is broadly defined to meet long-range goals over time. Although fuzzy control had proven effective for complex, nonlinear, imprecisely-defined processes for which standard models and controls are either inefficient, impractical or cannot be derived, the state of the art prior to our work showed that procedures for deriving fuzzy control, however, were mostly ad hoc heuristics. The learning ability of neural networks was exploited to systematically derive fuzzy control and permit on-line adaption and in the process optimize control. The operation of neural networks integrates very naturally with fuzzy logic. The neural networks which were designed and tested using simulation software and simulated data, followed by realistic industrial data were reconfigured for application on several platforms as well as for the employment of improved algorithms. The statistical procedures of the learning process were investigated and evaluated with standard statistical procedures (such as ANOVA, graphical analysis of residuals, etc.). The computational advantage of dynamic programming-like methods of optimal control was used to permit on-line fuzzy adaptive control. Tests for the consistency, completeness and interaction of the control rules were applied. Comparisons to other methods and controllers were made so as to identify the major advantages of the resulting controller model. Several specific modifications and extensions were made to the original controller. Additional modifications and explorations have been proposed for further study. 
Some of these are in progress in our laboratory while others await additional support. All of these enhancements will improve the attractiveness of the controller as an effective tool for the on line control of an array of complex process environments.

  4. Influence of the preparation method on the physicochemical properties of indomethacin and methyl-β-cyclodextrin complexes.

    PubMed

    Rudrangi, Shashi Ravi Suman; Bhomia, Ruchir; Trivedi, Vivek; Vine, George J; Mitchell, John C; Alexander, Bruce David; Wicks, Stephen Richard

    2015-02-20

    The main objective of this study was to investigate different manufacturing processes claimed to promote inclusion complexation between indomethacin and cyclodextrins in order to enhance the apparent solubility and dissolution properties of indomethacin. Especially, the effectiveness of supercritical carbon dioxide processing for preparing solid drug-cyclodextrin inclusion complexes was investigated and compared to other preparation methods. The complexes were prepared by physical mixing, co-evaporation, freeze drying from aqueous solution, spray drying and supercritical carbon dioxide processing methods. The prepared complexes were then evaluated by scanning electron microscopy, differential scanning calorimetry, X-ray powder diffraction, solubility and dissolution studies. The method of preparation of the inclusion complexes was shown to influence the physicochemical properties of the formed complexes. Indomethacin exists in a highly crystalline solid form. Physical mixing of indomethacin and methyl-β-cyclodextrin appeared not to reduce the degree of crystallinity of the drug. The co-evaporated and freeze dried complexes had a lower degree of crystallinity than the physical mix; however the lowest degree of crystallinity was achieved in complexes prepared by spray drying and supercritical carbon dioxide processing methods. All systems based on methyl-β-cyclodextrin exhibited better dissolution properties than the drug alone. The greatest improvement in drug dissolution properties was obtained from complexes prepared using supercritical carbon dioxide processing, thereafter by spray drying, freeze drying, co-evaporation and finally by physical mixing. Supercritical carbon dioxide processing is well known as an energy efficient alternative to other pharmaceutical processes and may have application for the preparation of solid-state drug-cyclodextrin inclusion complexes. 
It is an effective and economic method that allows the formation of solid complexes with a high yield, without the use of organic solvents and problems associated with their residues. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Optical apparatus for laser scattering by objects having complex shapes

    DOEpatents

    Ellingson, William A.; Visher, Robert J.

    2006-11-14

    Apparatus for observing and measuring in realtime surface and subsurface characteristics of objects having complex shapes includes an optical fiber bundle having first and second opposed ends. The first end includes a linear array of fibers, where the ends of adjacent fibers are in contact and are aligned perpendicular to the surface of the object being studied. The second ends of some of the fibers are in the form of a polished ferrule forming a multi-fiber optical waveguide for receiving laser light. The second ends of the remaining fibers are formed into a linear array suitable for direct connection to a detector, such as a linear CMOS-based optical detector. The output data is analyzed using digital signal processing for the detection of anomalies such as cracks, voids, inclusions and other defects.

  6. Additive Manufacturing of Transparent Silica Glass from Solutions.

    PubMed

    Cooperstein, Ido; Shukrun, Efrat; Press, Ofir; Kamyshny, Alexander; Magdassi, Shlomo

    2018-06-06

    An aqueous, sol-based ink is presented for the fabrication of 3D transparent silica glass objects with complex geometries, by a simple 3D printing process conducted at room temperature. The ink combines a hybrid ceramic precursor that can undergo both a photopolymerization reaction and a sol-gel process, both in solution form, without any particles. The printing is conducted by localized photopolymerization with the use of a low-cost 3D printer. Following printing, upon aging and densification, the resulting objects convert from a gel to a xerogel and then to fused silica. The printed objects, which are composed of fused silica, are transparent and have tunable density and refractive indices.

  7. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

    Intensity‐modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time‐consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust the beam geometry, select inverse planning objectives and associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually with a repeated trial-and-error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script was designed, implemented, and validated. In particular, regional optimization has been implemented as an iterative procedure that begins with the definition and automatic segmentation of hot and cold spots, introduces new objectives and their relative weights into inverse planning, and repeats these steps until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393
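
    The iterative loop described in this record can be sketched in outline. The following is an illustrative toy model, not the authors' planning scripts: the one-dimensional dose array, the margins, and the relaxation-style "reoptimization" step are stand-ins for what a real treatment planning system's scripting interface would provide.

```python
# Illustrative skeleton of the iterative regional-optimization loop:
# segment hot/cold spots, add corrective objectives, reoptimize, repeat
# until termination criteria are met. Dose model and optimizer are toys.

def segment_spots(dose, target, hot_margin, cold_margin):
    """Return index lists of hot and cold spots relative to the target dose."""
    hot = [i for i, d in enumerate(dose) if d > target + hot_margin]
    cold = [i for i, d in enumerate(dose) if d < target - cold_margin]
    return hot, cold

def reoptimize(dose, target, hot, cold, weight):
    """Toy 'inverse planning' step: pull flagged voxels toward the target."""
    new = list(dose)
    for i in hot + cold:
        new[i] += weight * (target - new[i])
    return new

def iterative_plan(dose, target=60.0, hot_margin=3.0, cold_margin=3.0,
                   weight=0.5, max_iters=20):
    """Repeat segmentation + reoptimization until no spots remain."""
    for it in range(max_iters):
        hot, cold = segment_spots(dose, target, hot_margin, cold_margin)
        if not hot and not cold:          # termination criterion
            return dose, it
        dose = reoptimize(dose, target, hot, cold, weight)
    return dose, max_iters

plan, iters = iterative_plan([52.0, 60.0, 67.0, 61.0, 55.0])
```

    The point of the structure is that the trial-and-error adjustments a planner would make by hand become a scripted loop with an explicit stopping rule.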

  8. Towards a voxel-based geographic automata for the simulation of geospatial processes

    NASA Astrophysics Data System (ADS)

    Jjumba, Anthony; Dragićević, Suzana

    2016-07-01

    Many geographic processes evolve in a three-dimensional space and time continuum. However, when they are represented with the aid of geographic information systems (GIS) or geosimulation models, they are modelled in a framework of two-dimensional space with an added temporal component. The objective of this study is to propose the design and implementation of voxel-based automata as a methodological approach for representing spatial processes evolving in the four-dimensional (4D) space-time domain. Similar to geographic automata models, which are developed to capture and forecast geospatial processes that change in a two-dimensional spatial framework using cells (raster geospatial data), voxel automata rely on automata theory and use three-dimensional volumetric units (voxels). Transition rules have been developed to represent various spatial processes, ranging from the movement of an object in 3D to the diffusion of airborne particles and landslide simulation. In addition, the proposed 4D models demonstrate that complex processes can be readily reproduced from simple transition functions without complex methodological approaches. The voxel-based automata approach provides a unique basis for modelling geospatial processes in 4D for the purpose of improving the representation, analysis and understanding of their spatiotemporal dynamics. This study contributes to the advancement of the concepts and framework of 4D GIS.
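
    The kind of transition rule voxel automata rely on can be illustrated with a minimal diffusion-like update, in which each voxel passes a fraction of its content to its six face neighbours. The grid size and rate below are illustrative assumptions, not values from the study.

```python
# Minimal voxel-automaton sketch: each voxel holds a concentration and a
# synchronous transition rule redistributes a fraction of it to its six
# face neighbours, a simple stand-in for the diffusion processes described.

def step(grid, rate=0.1):
    """One synchronous update of a 3D voxel grid given as nested lists."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    new = [[[grid[x][y][z] for z in range(nz)] for y in range(ny)]
           for x in range(nx)]
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                out = grid[x][y][z] * rate
                nbrs = [(x+dx, y+dy, z+dz)
                        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0),
                                           (0,-1,0), (0,0,1), (0,0,-1)]
                        if 0 <= x+dx < nx and 0 <= y+dy < ny and 0 <= z+dz < nz]
                new[x][y][z] -= out
                for i, j, k in nbrs:
                    new[i][j][k] += out / len(nbrs)  # split among neighbours
    return new

grid = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
grid[1][1][1] = 1.0          # a point source at the centre of a 3x3x3 grid
for _ in range(5):
    grid = step(grid)
total = sum(grid[x][y][z] for x in range(3) for y in range(3) for z in range(3))
```

    Mass is conserved by construction, a convenient sanity check when extending the rule set toward movement- or landslide-style transitions.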

  9. Short temporal asynchrony disrupts visual object recognition

    PubMed Central

    Singer, Jedediah M.; Kreiman, Gabriel

    2014-01-01

    Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738

  10. Cyber-physical approach to the network-centric robotics control task

    NASA Astrophysics Data System (ADS)

    Muliukha, Vladimir; Ilyashenko, Alexander; Zaborovsky, Vladimir; Lukashin, Alexey

    2016-10-01

    Complex engineering tasks concerning the control of groups of mobile robots remain poorly developed. To formalize them, we use a cyber-physical approach, which extends the range of engineering and physical methods for the design of complex technical objects by studying the informational aspects of communication and interaction between objects and with an external environment [1]. The paper analyzes network-centric methods for the control of cyber-physical objects. Robots, or cyber-physical objects, interact with each other by transmitting information via computer networks using a preemptive queueing system and a randomized push-out mechanism [2],[3]. The main field of application for the results of our work is space robotics. The selection of cyber-physical systems as a special class of designed objects is due to the necessity of integrating various components responsible for computing, communication and control processes. Network-centric solutions make it possible to use universal means for the organization of information exchange to integrate different technologies into the control system.
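
    A randomized push-out mechanism of the kind referenced in [2],[3] can be sketched roughly as follows. This is a simplified illustration, not the authors' model: the buffer capacity, the push-out probability alpha, and the two traffic classes are assumptions.

```python
import random

# Sketch of a finite buffer shared by two traffic classes with randomized
# push-out: when the buffer is full, an arriving high-priority packet
# displaces a low-priority packet with probability alpha; service is
# preemptive-priority (high-priority packets are served first).

class PushOutBuffer:
    def __init__(self, capacity, alpha, rng=None):
        self.capacity = capacity
        self.alpha = alpha              # push-out probability
        self.rng = rng or random.Random(0)
        self.packets = []               # class labels: 'hi' or 'lo'
        self.dropped = {'hi': 0, 'lo': 0}

    def arrive(self, cls):
        if len(self.packets) < self.capacity:
            self.packets.append(cls)
        elif cls == 'hi' and 'lo' in self.packets and self.rng.random() < self.alpha:
            self.packets.remove('lo')   # push out one low-priority packet
            self.dropped['lo'] += 1
            self.packets.append(cls)
        else:
            self.dropped[cls] += 1      # arriving packet is lost

    def serve(self):
        """Priority service: high-priority packets go first."""
        if 'hi' in self.packets:
            self.packets.remove('hi')
            return 'hi'
        if self.packets:
            return self.packets.pop(0)
        return None

buf = PushOutBuffer(capacity=2, alpha=1.0)
for cls in ('lo', 'lo', 'hi'):
    buf.arrive(cls)
```

    Tuning alpha between 0 and 1 trades loss probability between the two classes, which is the control knob such schemes expose.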

  11. The evolution of meaning: spatio-temporal dynamics of visual object recognition.

    PubMed

    Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K

    2011-08-01

    Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands, as defined by the complexity of the semantic information required by the task and the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.

  12. Emotional Picture and Word Processing: An fMRI Study on Effects of Stimulus Complexity

    PubMed Central

    Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.

    2013-01-01

    Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009

  13. A Chemical Engineer's Perspective on Health and Disease

    PubMed Central

    Androulakis, Ioannis P.

    2014-01-01

    Chemical process systems engineering considers complex supply chains which are coupled networks of dynamically interacting systems. The quest to optimize the supply chain while meeting robustness and flexibility constraints in the face of ever-changing environments necessitated the development of theoretical and computational tools for the analysis, synthesis and design of such complex engineered architectures. However, it was realized early on that optimality is a complex characteristic required to achieve proper balance between multiple, often competing, objectives. As we begin to unravel life's intricate complexities, we realize that living systems share similar structural and dynamic characteristics; hence much can be learned about biological complexity from engineered systems. In this article, we draw analogies between concepts in process systems engineering and conceptual models of health and disease; establish connections between these concepts and physiologic modeling; and describe how these mirror onto the physiological counterparts of engineered systems. PMID:25506103

  14. Broad attention to multiple individual objects may facilitate change detection with complex auditory scenes.

    PubMed

    Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S

    2016-11-01

    Attention and other processing constraints limit the perception of objects in complex scenes, a phenomenon that has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants made more errors on invalid compared to valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Stereo vision tracking of multiple objects in complex indoor environments.

    PubMed

    Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro

    2010-01-01

    This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; it then achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for those applications where similar multimodal data structures are found.
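
    The deterministic clustering step can be illustrated with a simple greedy, distance-threshold grouping of 3D detections. The Bayesian estimation stage is not reproduced here, and the threshold is an assumed parameter, not one from the paper.

```python
# Greedy single-pass clustering sketch: a 3D detection joins the nearest
# existing cluster whose centroid is within `threshold`, otherwise it
# seeds a new cluster. Centroids are updated incrementally.

def cluster_points(points, threshold=1.0):
    """Cluster (x, y, z) detections by distance to cluster centroids."""
    clusters = []   # each cluster: {'points': [...], 'centroid': (x, y, z)}
    for p in points:
        best = None
        for c in clusters:
            cx, cy, cz = c['centroid']
            d = ((p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2) ** 0.5
            if d < threshold and (best is None or d < best[0]):
                best = (d, c)
        if best is None:
            clusters.append({'points': [p], 'centroid': p})
        else:
            c = best[1]
            c['points'].append(p)
            n = len(c['points'])
            c['centroid'] = tuple(sum(q[i] for q in c['points']) / n
                                  for i in range(3))
    return clusters

# Two nearby detections at the origin, two more near (5, 5, 1):
obstacles = cluster_points([(0, 0, 0), (0.2, 0, 0), (5, 5, 1), (5.1, 5, 1)])
```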

  16. Identification and its vicissitudes.

    PubMed

    Etchegoyen, R H

    1985-01-01

    This paper attempts to understand the vicissitudes of identification within the co-ordinates of narcissism and the object relation. First, the dialectic pair of primary and secondary identification is studied, and primary narcissism is suggested as the hypothesis which best explains them. The complex identification processes in the primary scene are considered next, and the importance of the introjection of the oedipal parents for the formation of the superego is underlined. The importance of the structuring function of the introjection and projection mechanisms becomes embodied in the concept of projective identification, which comes to question the postulate of primary narcissism. The theory of projective-introjective identification is an extremely powerful instrument for explaining phenomena; however, it obliges one to accept that the first introjections are radically different from the others. They have nothing to do with mourning but rather with primitive mechanisms which question the subject/object polarity and, so this author believes, spring basically from envy. Lastly, it is maintained that envy and libido are factors of a dialectic from which the object relation and the earliest processes of identification, previous to the Oedipus complex, proceed at one and the same time.

  17. Thai Norms for Name, Image, and Category Agreement, Object Familiarity, Visual Complexity, Manipulability, and Age of Acquisition for 480 Color Photographic Objects

    ERIC Educational Resources Information Center

    Clarke, A. J. Benjamin; Ludington, Jason D.

    2018-01-01

    Normative databases containing psycholinguistic variables are commonly used to aid stimulus selection for investigations into language and other cognitive processes. Norms exist for many languages, but not for Thai. The aim of the present research, therefore, was to obtain Thai normative data for the BOSS, a set of 480 high resolution color…

  18. Haptically Guided Grasping. fMRI Shows Right-Hemisphere Parietal Stimulus Encoding, and Bilateral Dorso-Ventral Parietal Gradients of Object- and Action-Related Processing during Grasp Execution

    PubMed Central

    Marangon, Mattia; Kubiak, Agnieszka; Króliczak, Gregory

    2016-01-01

    The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation, and length of the to-be-grasped targets, was associated with the fronto-parietal, temporo-occipital, and insular cortex activity. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality. PMID:26779002

  19. Haptically Guided Grasping. fMRI Shows Right-Hemisphere Parietal Stimulus Encoding, and Bilateral Dorso-Ventral Parietal Gradients of Object- and Action-Related Processing during Grasp Execution.

    PubMed

    Marangon, Mattia; Kubiak, Agnieszka; Króliczak, Gregory

    2015-01-01

    The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation, and length of the to-be-grasped targets, was associated with the fronto-parietal, temporo-occipital, and insular cortex activity. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality.

  20. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of a standard single-objective deterministic Genetic Algorithm.
The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
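
    The discrete encoding described above lends itself to a compact sketch: a chromosome is a list of subsystem option indices, evolved by standard tournament selection, one-point crossover, and mutation. The option counts, the stand-in fitness function, and the GA parameters below are illustrative assumptions, not the dissertation's actual models.

```python
import random

# Toy discrete GA over subsystem technology options and redundancy levels.
# Each gene is an index into the options available for one subsystem.

OPTIONS_PER_SUBSYSTEM = [3, 4, 2, 3]   # e.g. power, comms, ADCS, redundancy

def fitness(chromosome):
    # Stand-in objective: prefer higher-index (nominally more capable) options.
    return sum(chromosome)

def evolve(pop_size=20, generations=40, mut_rate=0.1, rng=None):
    rng = rng or random.Random(1)
    pop = [[rng.randrange(n) for n in OPTIONS_PER_SUBSYSTEM]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: winner of each random pair survives.
        parents = [max(rng.sample(pop, 2), key=fitness)
                   for _ in range(pop_size)]
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, len(a))          # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                child = [rng.randrange(n) if rng.random() < mut_rate else g
                         for g, n in zip(child, OPTIONS_PER_SUBSYSTEM)]
                nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

    A multi-objective variant would replace the scalar fitness with Pareto ranking over several objectives, and an uncertainty-aware variant would evaluate each chromosome over sampled parameter scenarios.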

  1. Resolving Conflicts Between Syntax and Plausibility in Sentence Comprehension

    PubMed Central

    Andrews, Glenda; Ogden, Jessica E.; Halford, Graeme S.

    2017-01-01

    Comprehension of plausible and implausible object- and subject-relative clause sentences with and without prepositional phrases was examined. Undergraduates read each sentence then evaluated a statement as consistent or inconsistent with the sentence. Higher acceptance of consistent than inconsistent statements indicated reliance on syntactic analysis. Higher acceptance of plausible than implausible statements reflected reliance on semantic plausibility. There was greater reliance on semantic plausibility and lesser reliance on syntactic analysis for more complex object-relatives and sentences with prepositional phrases than for less complex subject-relatives and sentences without prepositional phrases. Comprehension accuracy and confidence were lower when syntactic analysis and semantic plausibility yielded conflicting interpretations. The conflict effect on comprehension was significant for complex sentences but not for less complex sentences. Working memory capacity predicted resolution of the syntax-plausibility conflict in more and less complex items only when sentences and statements were presented sequentially. Fluid intelligence predicted resolution of the conflict in more and less complex items under sequential and simultaneous presentation. Domain-general processes appear to be involved in resolving syntax-plausibility conflicts in sentence comprehension. PMID:28458748

  2. Multi-Objective Mission Route Planning Using Particle Swarm Optimization

    DTIC Science & Technology

    2002-03-01

    solutions to complex problems using particles that interact with each other. Both Particle Swarm Optimization (PSO) and the Ant System (AS) have been...
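
    The canonical PSO velocity and position update underlying such work can be sketched on a generic continuous objective. The mission-route-planning formulation of the report is not reproduced; the sphere objective and coefficient values are illustrative.

```python
import random

# Standard PSO: each particle keeps a velocity, its personal best, and is
# attracted toward both its personal best and the global best position.

def pso(objective, dim=2, n_particles=15, iters=60,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), rng=None):
    rng = rng or random.Random(0)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)   # toy objective, minimum at origin
best = pso(sphere)
```

    A multi-objective route planner would replace the single objective and the scalar global best with an archive of non-dominated solutions.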

  3. Method for Evaluating Information to Solve Problems of Control, Monitoring and Diagnostics

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. A.; Dobrynina, N. V.

    2017-06-01

    The article describes a method for evaluating information to solve problems of control, monitoring and diagnostics. It is needed to reduce the dimensionality of informational indicators of situations, bring them to relative units, calculate generalized information indicators on their basis, rank them by characteristic levels, and calculate the efficiency criterion of a system functioning in real time. On this basis, an information evaluation system has been designed that allows analyzing, processing and assessing information about an object; such an object can be a complex technical, economic or social system. The method and the system based on it can find wide application in the analysis, processing and evaluation of information on the functioning of systems, regardless of their purpose, goals, tasks and complexity. For example, they can be used to assess the innovation capacities of industrial enterprises and management decisions.

  4. Development and Application of Learning Materials to Help Students Understand Ten Statements Describing the Nature of Scientific Observation

    ERIC Educational Resources Information Center

    Kim, Sangsoo; Park, Jongwon

    2018-01-01

    Observing scientific events or objects is a complex process that occurs through the interaction between the observer's knowledge or expectations, the surrounding context, physiological features of the human senses, scientific inquiry processes, and the use of observational instruments. Scientific observation has various features specific to this…

  5. [Japanese learners' processing time for reading English relative clauses analyzed in relation to their English listening proficiency].

    PubMed

    Oyama, Yoshinori

    2011-06-01

    The present study examined Japanese university students' processing time for English subject and object relative clauses in relation to their English listening proficiency. In Analysis 1, the relation between English listening proficiency and reading span test scores was analyzed. The results showed that the high and low listening comprehension groups' reading span test scores did not differ. Analysis 2 investigated English listening proficiency and processing time for sentences with subject and object relative clauses. The results showed that reading the relative clause ending and the main verb section of a sentence with an object relative clause (such as "attacked" and "admitted" in the sentence "The reporter that the senator attacked admitted the error") takes less time for learners with high English listening scores than for learners with low English listening scores. In Analysis 3, English listening proficiency and comprehension accuracy for sentences with subject and object relative clauses were examined. The results showed no significant difference in comprehension accuracy between the high and low listening-comprehension groups. These results indicate that processing time for English relative clauses is related to the cognitive processes involved in listening comprehension, which requires immediate processing of syntactically complex audio information.

  6. Some single-machine scheduling problems with learning effects and two competing agents.

    PubMed

    Li, Hongjie; Li, Zeyuan; Yin, Yunqiang

    2014-01-01

    This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and has an actual processing time defined as a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The objective is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the objectives of the two agents. The computational complexity of the problems is discussed, and solution algorithms are presented where possible.
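
    The time-dependent processing times can be made concrete with a small sketch: a job starting at time t takes max(p - b*t, p_min) units, so jobs started later finish faster. The learning rate b and the floor p_min are illustrative assumptions, and the two-agent objectives are not modelled here.

```python
# Completion times under a learning effect: the actual processing time of
# each job is a decreasing linear function of its starting time.

def completion_times(base_times, b=0.1, p_min=0.5):
    """Completion times of jobs processed in the given order on one machine."""
    t, out = 0.0, []
    for p in base_times:
        actual = max(p - b * t, p_min)   # later start => shorter actual time
        t += actual
        out.append(t)
    return out

c = completion_times([4.0, 4.0, 4.0])    # three identical jobs
```

    The shrinking gaps between successive completion times (4.0, then 3.6, then 3.24 units) show why job order matters even for identical jobs under such models.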

  7. Object as a model of intelligent robot in the virtual workspace

    NASA Astrophysics Data System (ADS)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    The contemporary industry requires that every element of a production line fit into a global schema, which is connected with the global structure of the business. There is a need to find practical and effective ways of designing and managing the production process. The term “effective” should be understood to mean that there exists a method for building a system of nodes and relations that describes the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks, involving handling, transporting or orienting objects in a work space, and even performing simple machining processes, such as deburring, grinding, painting, applying adhesives and sealants etc. The robot also performs some activities connected with automatic tool changing and operating the equipment mounted on its wrist. Owing to its programmable control system, the robot also performs additional activities connected with sensors, vision systems, operating the storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason the description of the robot as a part of a production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of "employment" in a sufficient way. One of the possible approaches to this problem is to treat the robot as an object, in the sense often used in computer science. This makes it possible both to describe operations performed on the object and to describe operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with other possible descriptions.
The results can be further used when designing the complete manufacturing system, which takes all the involved machines into account and has the form of an object-oriented model.
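    As a rough illustration of the paper's idea of treating a robot as an object in the computer-science sense, the following minimal Python sketch (all class, method and task names are hypothetical, not taken from the paper) distinguishes operations performed on the object from operations performed by the object:

```python
# Hypothetical sketch: a robot modeled as an object. It is both acted
# upon (tool mounting) and acts itself (handling, machining). Names
# are illustrative only, not from the paper.

class Robot:
    """An industrial robot as an object with state and behavior."""

    def __init__(self, name):
        self.name = name
        self.tool = None   # currently mounted tool
        self.log = []      # record of performed tasks

    # operation performed ON the object
    def mount_tool(self, tool):
        self.tool = tool

    # operations performed BY the object
    def handle(self, workpiece, target):
        self.log.append(("handle", workpiece, target))

    def machine(self, workpiece, process):
        if self.tool is None:
            raise RuntimeError("no tool mounted")
        self.log.append((process, workpiece, self.tool))

r = Robot("R1")
r.mount_tool("grinder")
r.handle("casting", "fixture")
r.machine("casting", "deburring")
```

A full model along these lines would also represent sensors, storages and measuring stands as cooperating objects, as the abstract suggests.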

  8. Integration of the Gene Ontology into an object-oriented architecture.

    PubMed

    Shegogue, Daniel; Zheng, W Jim

    2005-05-10

    To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes.
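    A minimal, hypothetical sketch of the mapping the authors propose: the GO term supplies static structure (attributes), while the dynamic events of the process become object functions (methods). Class names, method names and subunit identifiers are illustrative only:

```python
# Illustrative sketch (names hypothetical): a GO biological process,
# "TGF-beta receptor complex assembly" (GO:0007181), mapped onto an
# object-oriented model with static attributes and dynamic methods.

class GOTerm:
    def __init__(self, go_id, name, category):
        self.go_id = go_id
        self.name = name
        self.category = category  # process / function / component

class ReceptorComplex:
    """Object whose attributes and functions derive from a GO process."""

    def __init__(self):
        self.term = GOTerm("GO:0007181",
                           "TGF-beta receptor complex assembly",
                           "biological_process")
        self.subunits = []        # static structure (attributes)

    def bind(self, subunit):      # dynamic event (object function)
        self.subunits.append(subunit)

    def assembled(self):
        return len(self.subunits) >= 2

c = ReceptorComplex()
c.bind("TGFBR1")
c.bind("TGFBR2")
```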

  9. Integration of the Gene Ontology into an object-oriented architecture

    PubMed Central

    Shegogue, Daniel; Zheng, W Jim

    2005-01-01

    Background To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Results Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). Conclusion We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes. PMID:15885145

  10. Ventral-stream-like shape representation: from pixel intensity values to trainable object-selective COSFIRE models

    PubMed Central

    Azzopardi, George; Petkov, Nicolai

    2014-01-01

    The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses) and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts, and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective in recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms. PMID:25126068
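    The abstract states that the filter response is the weighted geometric mean of the blurred and shifted detector responses. A minimal sketch of that aggregation step (the zero-response handling and the example values are assumptions, not from the paper):

```python
# Minimal sketch, not the authors' implementation: the S-COSFIRE
# response as the weighted geometric mean of the (blurred, shifted)
# responses s_i of the selected vertex detectors, with weights w_i:
#   r = (prod s_i^w_i)^(1 / sum w_i)
import math

def cosfire_response(responses, weights):
    """Weighted geometric mean; 0 if any response is 0 (AND-like)."""
    if any(s == 0 for s in responses):
        return 0.0
    total = sum(weights)
    log_sum = sum(w * math.log(s) for s, w in zip(responses, weights))
    return math.exp(log_sum / total)

# Equal responses give back the same value regardless of weights:
print(cosfire_response([0.5, 0.5, 0.5], [1.0, 2.0, 1.0]))  # → 0.5
```

The geometric mean gives the filter its AND-like character: every selected vertex feature must respond for the filter as a whole to respond.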

  11. Context predicts word order processing in Broca's region.

    PubMed

    Kristensen, Line Burholt; Engberg-Pedersen, Elisabeth; Wallentin, Mikkel

    2014-12-01

    The function of the left inferior frontal gyrus (L-IFG) is highly disputed. A number of language processing studies have linked the region to the processing of syntactical structure. Still, there is little agreement when it comes to defining why linguistic structures differ in their effects on the L-IFG. In a number of languages, the processing of object-initial sentences affects the L-IFG more than the processing of subject-initial ones, but frequency and distribution differences may act as confounding variables. Syntactically complex structures (like the object-initial construction in Danish) are often less frequent and only viable in certain contexts. With this confound in mind, the L-IFG activation may be sensitive to other variables than a syntax manipulation on its own. The present fMRI study investigates the effect of a pragmatically appropriate context on the processing of subject-initial and object-initial clauses with the IFG as our ROI. We find that Danish object-initial clauses yield a higher BOLD response in L-IFG, but we also find an interaction between appropriateness of context and word order. This interaction overlaps with traditional syntax areas in the IFG. For object-initial clauses, the effect of an appropriate context is bigger than for subject-initial clauses. This result is supported by an acceptability study that shows that, given appropriate contexts, object-initial clauses are considered more appropriate than subject-initial clauses. The increased L-IFG activation for processing object-initial clauses without a supportive context may be interpreted as reflecting either reinterpretation or the recipients' failure to correctly predict word order from contextual cues.

  12. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure has been developed which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes). The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation. This enhanced procedure provides the designer with the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
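    The K-S aggregation the abstract refers to has a standard closed form: a smooth envelope over the weighted objectives. A hedged sketch of that standard formulation (the draw-down parameter rho and the example values are assumptions; the paper's enhanced variant may differ):

```python
# Sketch of the Kreisselmeier-Steinhauser (K-S) envelope, which folds
# weighted objectives f_k into one unconstrained scalar function:
#   KS(f) = (1/rho) * ln( sum_k exp(rho * w_k * f_k) )
# As rho grows, KS approaches max_k(w_k * f_k) from above.
import math

def ks_function(objectives, weights, rho=50.0):
    terms = [w * f for f, w in zip(objectives, weights)]
    m = max(terms)  # shift for numerical stability
    return m + math.log(sum(math.exp(rho * (t - m)) for t in terms)) / rho

# KS is a smooth upper bound on the weighted maximum:
vals = [0.2, 0.9, 0.5]
print(ks_function(vals, [1.0, 1.0, 1.0]) >= max(vals))  # → True
```

Because the result is a single smooth function, a gradient method such as BFGS can then be applied to the composite objective.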

  13. Multicriteria decision analysis: Overview and implications for environmental decision making

    USGS Publications Warehouse

    Hermans, Caroline M.; Erickson, Jon D.; Erickson, Jon D.; Messner, Frank; Ring, Irene

    2007-01-01

    Environmental decision making involving multiple stakeholders can benefit from the use of a formal process to structure stakeholder interactions, leading to more successful outcomes than traditional discursive decision processes. There are many tools available to handle complex decision making. Here we illustrate the use of a multicriteria decision analysis (MCDA) outranking tool (PROMETHEE) to facilitate decision making at the watershed scale, involving multiple stakeholders, multiple criteria, and multiple objectives. We compare various MCDA methods and their theoretical underpinnings, examining methods that most realistically model complex decision problems in ways that are understandable and transparent to stakeholders.
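    To make the outranking idea concrete, here is a minimal sketch of PROMETHEE II net flows under the simple "usual" (strict) preference function; the alternatives, criteria and weights are hypothetical, and real PROMETHEE applications typically use richer preference functions:

```python
# Rough sketch of PROMETHEE II net outranking flows with the "usual"
# criterion: alternative a is preferred to b on criterion j whenever
# its score is strictly higher. Higher scores are assumed better.

def net_flows(scores, weights):
    n = len(scores)

    def pref(a, b):  # weighted share of criteria where a beats b
        return sum(w for sa, sb, w in zip(a, b, weights) if sa > sb)

    phi = []
    for i in range(n):
        plus = sum(pref(scores[i], scores[j]) for j in range(n) if j != i)
        minus = sum(pref(scores[j], scores[i]) for j in range(n) if j != i)
        phi.append((plus - minus) / (n - 1))
    return phi

# Three hypothetical watershed-management alternatives, two criteria
# (weights sum to 1); the alternative with the highest net flow wins:
flows = net_flows([[3, 1], [2, 2], [1, 3]], [0.6, 0.4])
print(max(range(3), key=lambda i: flows[i]))  # → 0
```

Stakeholders can inspect the pairwise preference table directly, which is one reason outranking methods are considered transparent in group settings.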

  14. Computer simulation of functioning of elements of security systems

    NASA Astrophysics Data System (ADS)

    Godovykh, A. V.; Stepanov, B. P.; Sheveleva, A. A.

    2017-01-01

    The article addresses the development of an information complex for simulating the functioning of security system elements. The complex is described in terms of its main objectives, its design concept, and the interrelation of its main elements. The proposed computer simulation concept makes it possible to simulate the operation of a security system for training security staff under normal and emergency conditions.

  15. Climate Change Impacts and Vulnerability Assessment in Industrial Complexes

    NASA Astrophysics Data System (ADS)

    Lee, H. J.; Lee, D. K.

    2016-12-01

    Climate change has recently caused frequent natural disasters, such as floods, droughts, and heat waves. Such disasters have also increased industrial damage. We must establish climate change adaptation policies to reduce this industrial damage, and accurate vulnerability assessment is essential to establishing such policies. Thus, this study aims at establishing a new index to assess the vulnerability level of industrial complexes. Most vulnerability indices have been developed with subjective approaches, such as the Delphi survey and the Analytic Hierarchy Process (AHP). Subjective approaches rely on the knowledge of a few experts, which undermines the reliability of the indices. To alleviate this problem, we have designed a vulnerability index incorporating objective approaches. We investigated 42 industrial complex sites in the Republic of Korea (ROK). To calculate the weights of the variables, we used the entropy method as an objective method, integrating it with the Delphi survey as a subjective method. We found that the method integrating both subjective and objective approaches could generate reliable results. The integration of the entropy method enables us to assess vulnerability objectively. Our method will be useful for establishing climate change adaptation policies by reducing the uncertainties of methods based purely on subjective approaches.
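    The entropy weighting step the authors mention has a standard formulation: indicators whose values vary more across sites carry more information and receive larger weights. A sketch of that standard method (the site data are hypothetical; the paper's exact normalization may differ):

```python
# Sketch of the standard entropy weight method: normalize each
# indicator column, compute its Shannon entropy e_j, and weight by the
# divergence d_j = 1 - e_j, renormalized so the weights sum to 1.
import math

def entropy_weights(matrix):
    n = len(matrix)       # sites (rows)
    m = len(matrix[0])    # vulnerability indicators (columns)
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1.0 - e)  # degree of divergence
    s = sum(weights)
    return [w / s for w in weights]

# An indicator that is identical across all sites (column 0) carries
# no information and receives (near-)zero weight:
w = entropy_weights([[1, 9], [1, 1], [1, 5]])
```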

  16. Colour expectations during object perception are associated with early and late modulations of electrophysiological activity.

    PubMed

    Stojanoski, Bobby Boge; Niemeier, Matthias

    2015-10-01

    It is well known that visual expectation and attention modulate object perception. Yet, the mechanisms underlying these top-down influences are not completely understood. Event-related potentials (ERPs) indicate late contributions of expectations to object processing around the P2 or N2. This is true independent of whether people expect objects (vs. no objects) or specific shapes, hence when expectations pertain to complex visual features. However, object perception can also benefit from expecting colour information, which can facilitate figure/ground segregation. Studies on attention to colour show attention-sensitive modulations of the P1, but are limited to simple transient detection paradigms. The aim of the current study was to examine whether expecting simple features (colour information) during challenging object perception tasks produce early or late ERP modulations. We told participants to expect an object defined by predominantly black or white lines that were embedded in random arrays of distractor lines and then asked them to report the object's shape. Performance was better when colour expectations were met. ERPs revealed early and late phases of modulation. An early modulation at the P1/N1 transition arguably reflected earlier stages of object processing. Later modulations, at the P3, could be consistent with decisional processes. These results provide novel insights into feature-specific contributions of visual expectations to object perception.

  17. Hot melt extrusion of ion-exchange resin for taste masking.

    PubMed

    Tan, David Cheng Thiam; Ong, Jeremy Jianming; Gokhale, Rajeev; Heng, Paul Wan Sia

    2018-05-30

    Taste masking is important for some unpleasant-tasting bioactives in oral dosage forms. Among the many methods available for taste masking, the use of ion-exchange resin (IER) holds promise. IER combined with hot melt extrusion (HME) may offer additional advantages over solvent methods. IER provides taste masking by complexing with the drug ions and preventing drug dissolution in the mouth. Drug-IER complexation approaches described in the literature are mainly based either on batch processing or column eluting. These methods of drug-IER complexation have obvious limitations, such as high solvent volume requirements, multiple processing steps and extended processing time. Thus, the objective of this study was to develop a single-step, solvent-free, continuous HME process for drug-IER complexation. The screening study evaluated drug-to-IER ratio, types of IER and drug complexation methods. In the screening study, a potassium salt of a weakly acidic carboxylate-based cationic IER was found suitable for the HME method. Thereafter, an optimization study was conducted by varying HME process parameters such as screw speed, extrusion temperature and drug-to-IER ratio. It was observed that extrusion temperature and drug-to-IER ratio are imperative in drug-IER complexation through HME. In summary, this study has established the feasibility of a continuous drug-IER complexation method using HME for taste masking. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Development of a support tool for complex decision-making in the provision of rural maternity care.

    PubMed

    Hearns, Glen; Klein, Michael C; Trousdale, William; Ulrich, Catherine; Butcher, David; Miewald, Christiana; Lindstrom, Ronald; Eftekhary, Sahba; Rosinski, Jessica; Gómez-Ramírez, Oralia; Procyk, Andrea

    2010-02-01

    Decisions in the organization of safe and effective rural maternity care are complex, difficult, value-laden and fraught with uncertainty, and must often be based on imperfect information. Decision analysis offers tools for addressing these complexities in order to help decision-makers determine the best use of resources and to appreciate the downstream effects of their decisions. To develop a maternity care decision-making tool for the British Columbia Northern Health Authority (NH) for use in low birth volume settings. Based on interviews with community members, providers, recipients and decision-makers, and employing a formal decision analysis approach, we sought to clarify the influences affecting rural maternity care and develop a process to generate a set of value-focused objectives for use in designing and evaluating rural maternity care alternatives. Four low-volume communities with variable resources (with and without on-site births, with or without caesarean section capability) were chosen. Physicians (20), nurses (18), midwives and maternity support service providers (4), local business leaders, economic development officials and elected officials (12), First Nations (women [pregnant and non-pregnant], chiefs and band members) (40), social workers (3), pregnant women (2) and NH decision-makers/administrators (17). We developed a Decision Support Manual to assist with assessing community needs and values, context for decision-making, capacity of the health authority or healthcare providers, identification of key objectives for decision-making, developing alternatives for care, and a process for making trade-offs and balancing multiple objectives. The manual was deemed an effective tool for the purpose by the client, NH. Beyond assisting the decision-making process itself, the methodology provides a transparent communication tool to assist in making difficult decisions. 
While the manual was specifically intended to deal with rural maternity issues, the NH decision-makers feel the method can be easily adapted to assist decision-making in other contexts in medicine where there are conflicting objectives, values and opinions. Decisions on the location of new facilities or infrastructure, or enhancing or altering services such as surgical or palliative care, would be examples of complex decisions that might benefit from this methodology.

  19. Minimization of dependency length in written English.

    PubMed

    Temperley, David

    2007-11-01

    Gibson's Dependency Locality Theory (DLT) [Gibson, E. 1998. Linguistic complexity: locality of syntactic dependencies. Cognition, 68, 1-76; Gibson, E. 2000. The dependency locality theory: A distance-based theory of linguistic complexity. In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, Language, Brain (pp. 95-126). Cambridge, MA: MIT Press.] proposes that the processing complexity of a sentence is related to the length of its syntactic dependencies: longer dependencies are more difficult to process. The DLT is supported by a variety of phenomena in language comprehension. This raises the question: Does language production reflect a preference for shorter dependencies as well? I examine this question in a corpus study of written English, using the Wall Street Journal portion of the Penn Treebank. The DLT makes a number of predictions regarding the length of constituents in different contexts; these predictions were tested in a series of statistical tests. A number of findings support the theory: the greater length of subject noun phrases in inverted versus uninverted quotation constructions, the greater length of direct-object versus subject NPs, the greater length of postmodifying versus premodifying adverbial clauses, the greater length of relative-clause subjects within direct-object NPs versus subject NPs, the tendency towards "short-long" ordering of postmodifying adjuncts and coordinated conjuncts, and the shorter length of subject NPs (but not direct-object NPs) in clauses with premodifying adjuncts versus those without.
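    The corpus predictions above all reduce to comparisons of a simple quantity: the summed distance between each word and its syntactic head. A toy sketch of that metric (the example parse is illustrative; the DLT's full cost function is more nuanced than raw distance):

```python
# Toy sketch of a DLT-style dependency length metric: the total
# dependency length of a sentence is the sum of distances between each
# word and its head. Words are 1-based; head index 0 marks the root.

def total_dependency_length(heads):
    """heads[i] is the 1-based index of word i+1's head (0 for root)."""
    return sum(abs((i + 1) - h) for i, h in enumerate(heads) if h != 0)

# "The reporter disliked the editor" with a toy parse:
# The->reporter, reporter->disliked (root), the->editor, editor->disliked
print(total_dependency_length([2, 3, 0, 5, 3]))  # → 5
```

Orderings that place long constituents late ("short-long" ordering) lower this total, which is the preference the corpus tests detect.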

  20. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  1. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionalities. PMID:22485060

  2. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionalities.
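    A simplified sketch of the partition rule described in the abstract: each segment is assigned to a sub-domain by comparing its mean elevation and its elevation standard deviation against thresholds given by the respective global means. The elevation values and class labels are hypothetical:

```python
# Simplified sketch (data and labels illustrative): partition segments
# into four sub-domains by comparing each segment's mean elevation and
# elevation standard deviation to the global mean of each statistic.
import statistics

def classify_segments(segments):
    """segments: list of lists of elevation values (one per object)."""
    means = [statistics.mean(s) for s in segments]
    stds = [statistics.pstdev(s) for s in segments]
    t_mean, t_std = statistics.mean(means), statistics.mean(stds)
    labels = []
    for m, s in zip(means, stds):
        high = "high" if m > t_mean else "low"
        rough = "rough" if s > t_std else "smooth"
        labels.append(f"{high}-{rough}")
    return labels

labels = classify_segments([[100, 110, 90], [900, 905, 895], [500, 700, 300]])
print(labels)
```

Because the thresholds are derived from the data themselves, the partition is self-adaptive in the sense the abstract describes: no externally fixed elevation cutoffs are required.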

  3. Object-oriented Persistent Homology

    PubMed Central

    Wang, Bao; Wei, Guo-Wei

    2015-01-01

    Persistent homology provides a new approach for the topological simplification of big data by measuring the lifetime of intrinsic topological features in a filtration process, and has found success in scientific and engineering applications. However, this success is essentially limited to qualitative data classification and analysis. Indeed, persistent homology has rarely been employed for quantitative modeling and prediction. Additionally, present persistent homology is a passive tool, rather than a proactive technique, for classification and analysis. In this work, we outline a general protocol to construct object-oriented persistent homology methods. By means of the differential geometry theory of surfaces, we construct an objective functional, namely, a surface free energy defined on the data of interest. The minimization of the objective functional leads to a Laplace-Beltrami operator which generates a multiscale representation of the initial data and offers an objective-oriented filtration process. The resulting differential geometry based object-oriented persistent homology is able to preserve desirable geometric features in the evolutionary filtration and enhances the corresponding topological persistence. The cubical complex based homology algorithm is employed in the present work to be compatible with the Cartesian representation of the Laplace-Beltrami flow. The proposed Laplace-Beltrami flow based persistent homology method is extensively validated. The consistency between Laplace-Beltrami flow based filtration and Euclidean distance based filtration is confirmed on the Vietoris-Rips complex in a large number of numerical tests. The convergence and reliability of the present Laplace-Beltrami flow based cubical complex filtration approach are analyzed over various spatial and temporal mesh sizes. The Laplace-Beltrami flow based persistent homology approach is utilized to study the intrinsic topology of proteins and fullerene molecules. 
Based on a quantitative model which correlates the topological persistence of fullerene central cavity with the total curvature energy of the fullerene structure, the proposed method is used for the prediction of fullerene isomer stability. The efficiency and robustness of the present method are verified by more than 500 fullerene molecules. It is shown that the proposed persistent homology based quantitative model offers good predictions of total curvature energies for ten types of fullerene isomers. The present work offers the first example to design object-oriented persistent homology to enhance or preserve desirable features in the original data during the filtration process and then automatically detect or extract the corresponding topological traits from the data. PMID:26705370

  4. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives such as makespan, total machine load and total tardiness. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
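    The crowding distance used to prune a fixed-size archive has a standard NSGA-II-style formulation; a hedged sketch follows (the objective vectors are hypothetical, and the paper's exact variant may differ):

```python
# Hedged sketch of the standard crowding-distance measure used to keep
# a fixed-size non-dominated archive diverse: boundary solutions get
# infinite distance; interior solutions sum the normalized gaps between
# their neighbors along each objective. The most crowded (smallest
# distance) solution is pruned first when the archive overflows.

def crowding_distance(front):
    """front: list of objective vectors; returns one distance each."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k]
                        - front[order[pos - 1]][k]) / (hi - lo)
    return dist

# Boundary solutions are kept; the most crowded interior one (index 1,
# squeezed between its neighbors) would be dropped first:
d = crowding_distance([[1, 5], [2, 3], [2.1, 2.9], [4, 1]])
```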

  5. Frame sequences analysis technique of linear objects movement

    NASA Astrophysics Data System (ADS)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. In doing so, quantitative analysis of the motion of the objects being studied becomes an important component of the research. This work discusses analysis of the motion of linear objects on the two-dimensional plane. The complexity of this problem increases when the frame contains numerous objects whose images may overlap. This study uses a sequence containing 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. It was required to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro with subsequent approximation of the obtained data using the Hill equation.
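    An illustrative sketch of the two computations the abstract describes: averaging frame-to-frame displacements at the study's 2 Hz frame rate, and the Hill equation used to approximate velocity as a function of a control parameter. All parameter values below are hypothetical:

```python
# Illustrative sketch (parameter values hypothetical): estimate an
# object's average velocity from per-frame positions at 2 Hz, and
# evaluate the Hill equation v(c) = v_max * c^h / (K^h + c^h), the
# curve used to approximate velocity vs. a control parameter.

FRAME_RATE_HZ = 2.0  # as in the study: 2 frames per second

def average_velocity(positions):
    """positions: per-frame (x, y) of one object; returns px/s."""
    dt = 1.0 / FRAME_RATE_HZ
    steps = zip(positions, positions[1:])
    dists = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in steps]
    return sum(dists) / (dt * len(dists))

def hill(c, v_max, K, h):
    return v_max * c ** h / (K ** h + c ** h)

print(average_velocity([(0, 0), (3, 4), (6, 8)]))  # → 10.0 px/s
print(hill(1.0, v_max=8.0, K=1.0, h=2.0))          # → 4.0
```

In a study like this, the velocity estimates for 8-12 objects per condition would be averaged, and the Hill parameters fitted to the resulting velocity-vs-parameter points.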

  6. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array, specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.

  7. The syntactic complexity of Russian relative clauses

    PubMed Central

    Fedorenko, Evelina; Gibson, Edward

    2012-01-01

    Although syntactic complexity has been investigated across dozens of studies, the available data still greatly underdetermine relevant theories of processing difficulty. Memory-based and expectation-based theories make opposite predictions regarding the fine-grained time course of processing difficulty in syntactically constrained contexts, and each class of theory receives support from results on some constructions in some languages. Here we report four self-paced reading experiments on the online comprehension of Russian relative clauses together with related corpus studies, taking advantage of Russian’s flexible word order to disentangle the predictions of competing theories. We find support for key predictions of memory-based theories in reading times at RC verbs, and for key predictions of expectation-based theories in processing difficulty at RC-initial accusative noun phrase (NP) objects, which corpus data suggest should be highly unexpected. These results suggest that a complete theory of syntactic complexity must integrate insights from both expectation-based and memory-based theories. PMID:24711687

  8. Effects of in-sewer processes: a stochastic model approach.

    PubMed

    Vollertsen, J; Nielsen, A H; Yang, W; Hvitved-Jacobsen, T

    2005-01-01

    Transformations of organic matter, nitrogen and sulfur in sewers can be simulated taking into account the relevant transformation and transport processes. One objective of such simulation is the assessment and management of hydrogen sulfide formation and corrosion. Sulfide is formed in the biofilms and sediments of the water phase, but corrosion occurs on the moist surfaces of the sewer gas phase. Consequently, both phases and the transport of volatile substances between these phases must be included. Furthermore, wastewater composition and transformations in sewers are complex and subject to high, natural variability. This paper presents the latest developments of the WATS model concept, allowing integrated aerobic, anoxic and anaerobic simulation of the water phase and of gas phase processes. The resulting model is complex and has high parameter variability. An example applying stochastic modeling shows how this complexity and variability can be taken into account.

  9. Binding of small molecules at interface of protein-protein complex - A newer approach to rational drug design.

    PubMed

    Gurung, A B; Bhattacharjee, A; Ajmal Ali, M; Al-Hemaid, F; Lee, Joongku

    2017-02-01

    Protein-protein interaction is a vital process that drives many important physiological processes in the cell and has also been implicated in several diseases. Though the protein-protein interaction network is quite complex, understanding its interacting partners using both in silico and molecular biology techniques can provide better insights for targeting such interactions. Targeting protein-protein interactions with small molecules is a challenging task because of druggability issues. Nevertheless, several studies on the kinetic as well as thermodynamic properties of protein-protein interactions have immensely contributed toward a better understanding of the affinity of these complexes. More recent studies on hot spots and interface residues, however, have opened up new avenues in the drug discovery process. This approach has been used in the design of hot-spot-based modulators targeting protein-protein interactions with the objective of normalizing such interactions.

  10. Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process

    NASA Astrophysics Data System (ADS)

    Migawa, Klaudiusz

    2012-12-01

    The issues presented in this research paper concern the control of the exploitation (operation and maintenance) process implemented in complex systems of exploitation of technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process built with semi-Markov decision processes. The method consists of preparing a semi-Markov decision model of the exploitation process for the technical objects and then selecting the best (optimal) control strategy from among the possible decision variants, in accordance with the adopted criterion (or criteria) for evaluating the operation of the exploitation system. In this method, determining the optimal strategy for availability control means choosing the sequence of control decisions, made in the individual states of the modelled exploitation process, for which the evaluation criterion function reaches its extreme value. A genetic algorithm was chosen to search for the optimal control strategy. The method is illustrated with the exploitation process of means of transport in a real municipal bus transport system. The model of the exploitation process was built on the basis of results collected in that real transport system, under the assumption that the process is a homogeneous semi-Markov process.
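    The strategy search described above can be sketched as a small genetic algorithm over per-state decision sequences. The state/decision reward table, the separable fitness criterion, and all parameter values below are hypothetical stand-ins for the paper's semi-Markov evaluation criterion, intended only to show the shape of the optimization.

```python
import random

# Hypothetical exploitation model: 4 process states, 3 candidate control
# decisions per state; reward[s][d] is an illustrative payoff of taking
# decision d in state s (stand-in for the semi-Markov criterion).
reward = [
    [2.0, 1.0, 0.5],
    [0.5, 3.0, 1.0],
    [1.0, 0.5, 2.5],
    [2.0, 2.0, 0.1],
]

def fitness(strategy):
    # Total reward of the chosen decision sequence across states.
    return sum(reward[s][d] for s, d in enumerate(strategy))

def genetic_search(pop_size=30, generations=60, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    n_states, n_dec = len(reward), len(reward[0])
    # Random initial population of decision sequences.
    pop = [[rng.randrange(n_dec) for _ in range(n_states)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_states)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:       # point mutation
                child[rng.randrange(n_states)] = rng.randrange(n_dec)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_search()
```

Because the toy fitness is separable by state, the GA converges quickly here; the real criterion in the paper couples decisions through the semi-Markov dynamics.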

  11. Partners in Learning: A Child-Centered Approach to Teaching the Social Studies.

    ERIC Educational Resources Information Center

    Hopkins, Lee Bennett; Arenstein, Misha

    The underlying objective of this book is to review past and present curriculum patterns to emphasize the changes being carried out today so that preservice, beginning, and experienced teachers may glean some new ideas about involving the child in the process of learning. All of the social disciplines help explain the complex process of man's…

  12. A case of complex regional pain syndrome with agnosia for object orientation.

    PubMed

    Robinson, Gail; Cohen, Helen; Goebel, Andreas

    2011-07-01

    This systematic investigation of the neurocognitive correlates of complex regional pain syndrome (CRPS) in a single case also reports agnosia for object orientation in the context of persistent CRPS. We report a patient (JW) with severe long-standing CRPS who had no difficulty identifying and naming line drawings of objects presented in 1 of 4 cardinal orientations. In contrast, he was extremely poor at reorienting these objects into the correct upright orientation and in judging whether an object was upright or not. Moreover, JW made orientation errors when copying drawings of objects, and he also showed features of mirror reversal in writing single words and reading single letters. The findings are discussed in relation to accounts of visual processing. Agnosia for object orientation is the term for impaired knowledge of an object's orientation despite good recognition and naming of the same misoriented object. This defect has previously only been reported in patients with major structural brain lesions. The neuroanatomical correlates are discussed. The patient had no structural brain lesion, raising the possibility that nonstructural reorganisation of cortical networks may be responsible for his deficits. Other patients with CRPS may have related neurocognitive defects. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.

  13. 3D printed optical phantoms and deep tissue imaging for in vivo applications including oral surgery

    NASA Astrophysics Data System (ADS)

    Bentz, Brian Z.; Costas, Alfonso; Gaind, Vaibhav; Garcia, Jose M.; Webb, Kevin J.

    2017-03-01

    Progress in developing optical imaging for biomedical applications requires customizable and often complex objects known as "phantoms" for testing, evaluation, and calibration. This work demonstrates that 3D printing is an ideal method for fabricating such objects, allowing intricate inhomogeneities to be placed at exact locations in complex or anatomically realistic geometries, a process that is difficult or impossible using molds. We show printed mouse phantoms we have fabricated for developing deep tissue fluorescence imaging methods, and measurements of both their optical and mechanical properties. Additionally, we present a printed phantom of the human mouth that we use to develop an artery localization method to assist in oral surgery.

  14. NASCAP user's manual, 1978

    NASA Technical Reports Server (NTRS)

    Cassidy, J. J., III

    1978-01-01

    NASCAP simulates the charging process for a complex object in either tenuous plasma (geosynchronous orbit) or ground test (electron gun source) environment. Program control words, the structure of user input files, and various user options available are described in this computer programmer's user manual.

  15. Creating an Overall Environmental Quality Index: Assessing Available Data

    EPA Science Inventory

    Background and Objectives: The interaction between environmental insults and human health is a complex process. Environmental exposures tend to cluster and disamenities such as landfills or industrial plants are often located in neighborhoods with a high percentage of minority a...

  16. Management Information Systems.

    ERIC Educational Resources Information Center

    Finlayson, Jean, Ed.

    1989-01-01

    This collection of papers addresses key questions facing college managers and others choosing, introducing, and living with big, complex computer-based systems. "What Use the User Requirement?" (Tony Coles) stresses the importance of an information strategy driven by corporate objectives, not technology. "Process of Selecting a…

  17. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling systems, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes. We used the new standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN. We were able to expand the system with more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM. The combination makes it possible to depict complex processes with complex decisions, and offers a significant advantage for modeling perioperative processes.

  18. A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning

    NASA Astrophysics Data System (ADS)

    Basdekas, L.; Stewart, N.; Triana, E.

    2013-12-01

    Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
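    The multi-objective filtering step underlying such frameworks can be illustrated with a minimal Pareto-dominance filter (for minimization). The candidate (cost, shortage-risk) pairs below are invented; a production framework like the one described would wrap a system simulation model rather than a static list.

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b on every
    objective (minimization) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Illustrative (cost, shortage-risk) trade-offs for candidate portfolios.
candidates = [(4, 9), (5, 5), (7, 3), (8, 4), (6, 6), (9, 1)]
front = pareto_front(candidates)
```

Here (8, 4) is dominated by (7, 3) and (6, 6) by (5, 5), so the front keeps the four remaining trade-offs; a planner then chooses among them using judgment, not the algorithm.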

  19. A negotiation methodology and its application to cogeneration planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S.M.; Liu, C.C.; Luu, S.

    Power system planning has become a complex process in utilities today. This paper presents a methodology for integrated planning with multiple objectives. The methodology uses a graphical representation (Goal-Decision Network) to capture the planning knowledge. The planning process is viewed as a negotiation process that applies three negotiation operators to search for beneficial decisions in a GDN. Also, the negotiation framework is applied to the problem of planning for cogeneration interconnection. The simulation results are presented to illustrate the cogeneration planning process.

  20. Remediation management of complex sites using an adaptive site management approach.

    PubMed

    Price, John; Spreng, Carl; Hawley, Elisabeth L; Deeb, Rula

    2017-12-15

    Complex sites require a disproportionate amount of resources for environmental remediation and long timeframes to achieve remediation objectives, due to their complex geologic, hydrogeologic, geochemical, and contaminant-related conditions, large scale of contamination, and/or non-technical challenges. A recent team of state and federal environmental regulators, federal agency representatives, industry experts, community stakeholders, and academia worked together as an Interstate Technology & Regulatory Council (ITRC) team to compile resources and create new guidance on the remediation management of complex sites. This article summarizes the ITRC team's recommended process for addressing complex sites through an adaptive site management approach. The team provided guidance for site managers and other stakeholders to evaluate site complexities and determine site remediation potential, i.e., whether an adaptive site management approach is warranted. Adaptive site management was described as a comprehensive, flexible approach to iteratively evaluate and adjust the remedial strategy in response to remedy performance. Key aspects of adaptive site management were described, including tools for revising and updating the conceptual site model (CSM), the importance of setting interim objectives to define short-term milestones on the journey to achieving site objectives, establishing a performance model and metrics to evaluate progress towards meeting interim objectives, comparing actual with predicted progress during scheduled periodic evaluations, and establishing decision criteria for when and how to adapt/modify/revise the remedial strategy in response to remedy performance. Key findings will be published in an ITRC Technical and Regulatory guidance document in 2017 and free training webinars will be conducted. More information is available at www.itrc-web.org. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Conceptual Modeling in Systems Biology Fosters Empirical Findings: The mRNA Lifecycle

    PubMed Central

    Dori, Dov; Choder, Mordechai

    2007-01-01

    One of the main obstacles to understanding complex biological systems is the extent and rapid evolution of information, far beyond the capacity of individuals to manage and comprehend. Current modeling approaches and tools lack adequate capacity to concurrently model the structure and behavior of biological systems. Here we propose Object-Process Methodology (OPM), a holistic conceptual modeling paradigm, as a means to model biological systems, both diagrammatically and textually, formally and intuitively, at any desired number of levels of detail. OPM combines objects, e.g., proteins, and processes, e.g., transcription, in a way that is simple and easily comprehensible to researchers and scholars. As a case in point, we modeled the yeast mRNA lifecycle. The mRNA lifecycle involves mRNA synthesis in the nucleus, mRNA transport to the cytoplasm, and its subsequent translation and degradation therein. Recent studies have identified specific cytoplasmic foci, termed processing bodies, which contain large complexes of mRNAs and decay factors. Our OPM model of this cellular subsystem, presented here, led to the discovery of a new constituent of these complexes, the translation termination factor eRF3. Association of eRF3 with processing bodies is observed after a long-term starvation period. We suggest that OPM can eventually serve as a comprehensive evolvable model of the entire living cell system. The model would serve as a research and communication platform, highlighting unknown and uncertain aspects that can be addressed empirically and updated consequently while maintaining consistency. PMID:17849002

  2. North Carolina Biomolecular Engineering and Materials Applications Center (NC-BEMAC).

    DTIC Science & Technology

    1987-12-29

    enzyme has been replaced with cobalt(II). A further objective was to investigate CO2 activation by low molecular weight transition metal complexes as...Characterization of Low Molecular Weight Metal Complexes as Potential Models for Bio-Catalytic Processes. A number of transition metal complexes have...binding, the enzyme suffered loss of activity during radiation polymerization. When covalent binding was used it was necessary to introduce suitably

  3. Temporal and Location Based RFID Event Data Management and Processing

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Liu, Peiya

    Advances in sensor and RFID technology provide significant new power for humans to sense, understand and manage the world. RFID provides fast data collection with precise identification of objects with unique IDs without line of sight, and can thus be used for identifying, locating, tracking and monitoring physical objects. Despite these benefits, RFID poses many challenges for data processing and management. RFID data are temporal and history oriented, multi-dimensional, and carry implicit semantics. Moreover, RFID applications are heterogeneous. RFID data management or data warehouse systems need to support generic and expressive data modeling for tracking and monitoring physical objects, and provide automated data interpretation and processing. We develop a powerful temporal and location oriented data model for modeling and querying RFID data, and a declarative event and rule based framework for automated complex RFID event processing. The approach is general and can be easily adapted for different RFID-enabled applications, thus significantly reducing the cost of RFID data integration.
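    A minimal sketch of the temporal, history-oriented view of RFID data described above: raw tag readings are folded into per-object location intervals. The `RFIDReading` record and the interval-closing rule are assumptions for illustration, not the authors' actual data model.

```python
from dataclasses import dataclass

@dataclass
class RFIDReading:
    tag_id: str       # unique object identifier
    location: str     # reader location
    timestamp: float  # seconds since some epoch

def track_locations(readings):
    """Fold raw readings into per-object location histories of
    (location, enter_time, leave_time) tuples. An interval is closed
    when the object is next seen at a different reader; leave_time
    None means the object is still there."""
    history = {}
    for r in sorted(readings, key=lambda r: r.timestamp):
        spans = history.setdefault(r.tag_id, [])
        if spans and spans[-1][0] == r.location:
            continue  # repeated read at the same location
        if spans:
            loc, enter, _ = spans[-1]
            spans[-1] = (loc, enter, r.timestamp)  # close open interval
        spans.append((r.location, r.timestamp, None))
    return history

readings = [
    RFIDReading("tag1", "dock", 0.0),
    RFIDReading("tag1", "dock", 5.0),
    RFIDReading("tag1", "shelf", 12.0),
]
hist = track_locations(readings)
```

Complex-event rules (e.g. "raise an alert if an object leaves the dock without passing the scanner") would then be expressed as declarative queries over these intervals.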

  4. Fluent, fast, and frugal? A formal model evaluation of the interplay between memory, fluency, and comparative judgments.

    PubMed

    Hilbig, Benjamin E; Erdfelder, Edgar; Pohl, Rüdiger F

    2011-07-01

    A new process model of the interplay between memory and judgment processes was recently suggested, assuming that retrieval fluency (that is, the speed with which objects are recognized) will determine inferences concerning such objects in a single-cue fashion. This aspect of the fluency heuristic, an extension of the recognition heuristic, has remained largely untested due to methodological difficulties. To overcome the latter, we propose a measurement model from the class of multinomial processing tree models that can estimate true single-cue reliance on recognition and retrieval fluency. We applied this model to aggregate and individual data from a probabilistic inference experiment and considered both goodness of fit and model complexity to evaluate different hypotheses. The results were relatively clear-cut, revealing that the fluency heuristic is an unlikely candidate for describing comparative judgments concerning recognized objects. These findings are discussed in light of a broader theoretical view on the interplay of memory and judgment processes.

  5. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources, using mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has a multiplicative complexity equal to zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.

  6. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2014-01-01

    Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have occurred because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  7. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes which are elicited by an electrode array, with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete some simple visual tasks, but more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest under simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
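    As a rough sketch of foreground-enhancing pixelization under simulated prosthetic vision, the toy function below averages each grid cell of an image but, where a segmentation mask marks foreground pixels, uses only those pixels, so the object of interest dominates the low-resolution percept. This is an invented stand-in loosely inspired by the strategies named above, not the authors' implementation.

```python
import numpy as np

def pixelize(image, mask, grid=8):
    """Phosphene-style downsampling: average each grid x grid cell,
    restricting the average to foreground (mask == 1) pixels whenever
    the cell contains any, so the segmented object is emphasized."""
    h, w = image.shape
    out = np.zeros((h // grid, w // grid))
    for i in range(0, h, grid):
        for j in range(0, w, grid):
            cell = image[i:i + grid, j:j + grid]
            m = mask[i:i + grid, j:j + grid].astype(bool)
            out[i // grid, j // grid] = cell[m].mean() if m.any() else cell.mean()
    return out

# A bright 16 x 16 square object on a dark background, with its mask.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1.0

low_res = pixelize(img, mask)  # 4 x 4 "phosphene" grid
```

Cells fully inside the object map to bright phosphenes while pure-background cells stay dark; partially covered cells take the foreground average rather than being diluted by background.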

  8. Technical devices of powered roof support for the top coal caving as automation objects

    NASA Astrophysics Data System (ADS)

    Nikitenko, M. S.; Kizilov, S. A.; Nikolaev, P. I.; Kuznetsov, I. S.

    2018-05-01

    The paper considers the technical devices for top coal caving as automation objects within the longwall top coal caving (LTCC) mining complex. The proposed concept for automating the top coal caving process ensures caving efficiency, prevents coal dilution and conveyor overloading, reduces the workload of the shearer service personnel, and reduces the influence of the “human factor”.

  9. Forest, Trees, Dynamics: Results from a Novel Wisconsin Card Sorting Test Variant Protocol for Studying Global-Local Attention and Complex Cognitive Processes

    PubMed Central

    Cowley, Benjamin; Lukander, Kristian

    2016-01-01

    Background: Recognition of objects and their context relies heavily on the integrated functioning of global and local visual processing. In a realistic setting such as work, this processing becomes a sustained activity, implying a consequent interaction with executive functions. Motivation: There have been many studies of either global-local attention or executive functions; however it is relatively novel to combine these processes to study a more ecological form of attention. We aim to explore the phenomenon of global-local processing during a task requiring sustained attention and working memory. Methods: We develop and test a novel protocol for global-local dissociation, with a task structure including phases of divided (“rule search”) and selective (“rule found”) attention, based on the Wisconsin Card Sorting Task (WCST). We test it in a laboratory study with 25 participants, and report on behavioral measures (physiological data was also gathered, but is not reported here). We develop novel stimuli with more naturalistic levels of information and noise, based primarily on face photographs, with consequently more ecological validity. Results: We report behavioral results indicating that sustained difficulty when participants test their hypotheses impacts matching-task performance, and diminishes the global precedence effect. Results also show a dissociation between subjectively experienced difficulty and the objective dimension of performance, and establish the internal validity of the protocol. Contribution: We contribute an advance in the state of the art for testing global-local attention processes in concert with complex cognition. With three results we establish a connection between global-local dissociation and aspects of complex cognition. Our protocol also improves ecological validity and opens options for testing additional interactions in future work. PMID:26941689

  10. Conceptual Model-Based Systems Biology: Mapping Knowledge and Discovering Gaps in the mRNA Transcription Cycle

    PubMed Central

    Somekh, Judith; Choder, Mordechai; Dori, Dov

    2012-01-01

    We propose a Conceptual Model-based Systems Biology framework for qualitative modeling, executing, and eliciting knowledge gaps in molecular biology systems. The framework is an adaptation of Object-Process Methodology (OPM), a graphical and textual executable modeling language. OPM enables concurrent representation of the system's structure—the objects that comprise the system, and behavior—how processes transform objects over time. Applying a top-down approach of recursively zooming into processes, we model a case in point—the mRNA transcription cycle. Starting with this high level cell function, we model increasingly detailed processes along with participating objects. Our modeling approach is capable of modeling molecular processes such as complex formation, localization and trafficking, molecular binding, enzymatic stimulation, and environmental intervention. At the lowest level, similar to the Gene Ontology, all biological processes boil down to three basic molecular functions: catalysis, binding/dissociation, and transporting. During modeling and execution of the mRNA transcription model, we discovered knowledge gaps, which we present and classify into various types. We also show how model execution enhances a coherent model construction. Identification and pinpointing knowledge gaps is an important feature of the framework, as it suggests where research should focus and whether conjectures about uncertain mechanisms fit into the already verified model. PMID:23308089

  11. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos

    2013-09-05

    The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorr, Kent A.; Ostrom, Michael J.; Freeman-Pollard, Jhivaun R.

    CH2M Hill Plateau Remediation Company (CHPRC) designed, constructed, commissioned, and began operation of the largest groundwater pump and treatment facility in the U.S. Department of Energy’s (DOE) nationwide complex. This one-of-a-kind groundwater pump and treatment facility, located at the Hanford Nuclear Reservation Site (Hanford Site) in Washington State, was built to an accelerated schedule with American Recovery and Reinvestment Act (ARRA) funds. There were many contractual, technical, configuration management, quality, safety, and Leadership in Energy and Environmental Design (LEED) challenges associated with the design, procurement, construction, and commissioning of this $95 million, 52,000 ft groundwater pump and treatment facility to meet DOE’s mission objective of treating contaminated groundwater at the Hanford Site with a new facility by June 28, 2012. The project team’s successful integration of the project’s core values and green energy technology throughout design, procurement, construction, and start-up of this complex, first-of-its-kind Bio Process facility resulted in successful achievement of DOE’s mission objective, as well as attainment of LEED GOLD certification, which makes this Bio Process facility the first non-administrative building in the DOE Office of Environmental Management complex to earn such an award.

  13. Structural model of control system for hydraulic stepper motor complex

    NASA Astrophysics Data System (ADS)

    Obukhov, A. D.; Dedov, D. L.; Kolodin, A. N.

    2018-03-01

    The article considers the problem of developing a structural model of the control system for a hydraulic stepper drive complex. A comparative analysis of stepper drives and an assessment of the applicability of HSMs for solving problems requiring accurate displacement in space with subsequent positioning of the object are carried out. The presented structural model of the automated control system of the multi-spindle complex of hydraulic stepper drives reflects the main components of the system, as well as the process of its control based on the transfer of control signals to the solenoid valves by the controller. The models and methods described in the article can be used to formalize the control process in technical systems based on the application of hydraulic stepper drives and allow switching from mechanical control to automated control.

  14. How children aged 2;6 tailor verbal expressions to interlocutor informational needs.

    PubMed

    Abbot-Smith, Kirsten; Nurmsoo, Erika; Croll, Rebecca; Ferguson, Heather; Forrester, Michael

    2016-11-01

    Although preschoolers are pervasively underinformative in their actual usage of verbal reference, a number of studies have shown that they nonetheless demonstrate sensitivity to listener informational needs, at least when environmental cues to this are obvious. We investigated two issues. The first concerned the types of visual cues to interlocutor informational needs which children aged 2;6 can process whilst producing complex referring expressions. The second was whether performance in experimental tasks related to naturalistic conversational proficiency. We found that 2;6-year-olds used fewer complex expressions when the objects were dissimilar compared to highly similar objects, indicating that they tailor their verbal expressions to the informational needs of another person, even when the cue to the informational need is relatively opaque. We also found a correlation between conversational skills as rated by the parents and the degree to which 2;6-year-olds could learn from feedback to produce complex referring expressions.

  15. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol that allows one to obtain high resolution images of long monochromatic objects with periodic structure. This protocol can be used for long documents or for human-made objects in satellite images of uninhabited regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a good enough image of the whole object for further processing, e.g. use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to use image stitching. We expect the scanned object to have straight boundaries and periodic structure, which allows us to introduce regularization into the stitching problem and adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
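    The regularized inter-frame registration can be reduced drastically under such constraints. As an illustrative sketch only (assuming a pure integer translation between frames, a much stronger simplification than the regularized homography the authors estimate), the shift can be recovered from FFT cross-correlation of 1-D intensity profiles:

```python
import numpy as np

def estimate_shift(prev_profile, next_profile):
    """Estimate the integer shift between two frames from 1-D intensity
    profiles (e.g. column sums) using circular FFT cross-correlation.

    Returns d such that next_profile[n] ~ prev_profile[(n + d) % N],
    i.e. the scene advanced by d pixels between the frames.
    """
    a = prev_profile - prev_profile.mean()
    b = next_profile - next_profile.mean()
    # Cross-correlation theorem: c[m] = sum_n a[(n+m) % N] * b[n]
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return int(np.argmax(corr))
```

    With the shift known, stitching is just concatenating the non-overlapping strip of each new frame; periodic structure makes the correlation peak sharp, which is what the regularization exploits.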

  16. Rapid Target Detection in High Resolution Remote Sensing Images Using Yolo Model

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Chen, X.; Gao, Y.; Li, Y.

    2018-04-01

    Object detection in high resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military applications, due to the complex neighboring environments, which can cause recognition algorithms to mistake irrelevant ground objects for target objects. Deep Convolutional Neural Networks (DCNNs) are the hotspot in object detection for their powerful feature extraction ability and have achieved state-of-the-art results in computer vision. The common pipeline of DCNN-based object detection consists of region proposal, CNN feature extraction, region classification, and post-processing. The YOLO model instead frames object detection as a regression problem: a single CNN predicts bounding boxes and class probabilities in an end-to-end way, making prediction faster. In this paper, a YOLO-based model is used for object detection in high resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and our airport/airplane dataset gathered from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process while maintaining good accuracy.
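    The post-processing stage shared by such detection pipelines typically prunes overlapping box predictions with non-maximum suppression. A generic NumPy sketch of IoU plus greedy NMS (not tied to the authors' implementation):

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one (x1, y1, x2, y2) box vs. an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box overlapping it by more than `thresh`."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= thresh]
    return keep
```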

  17. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    PubMed Central

    Schendan, Haline E.; Ganis, Giorgio

    2015-01-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. PMID:26441701

  18. A new application for food customization with additive manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Serenó, L.; Vallicrosa, G.; Delgado, J.; Ciurana, J.

    2012-04-01

    Additive Manufacturing (AM) technologies have emerged as a freeform approach capable of producing almost any three-dimensional (3D) object from computer-aided design (CAD) data by successively adding material layer by layer. Despite the broad range of possibilities, commercial AM technologies remain complex and expensive, making them suitable only for niche applications. The development of the Fab@Home system as an open AM technology opened up a new range of possibilities for processing different materials, such as edible products. The main objective of this work is to analyze and optimize the manufacturing capacity of this system when producing 3D edible objects. A new heated syringe deposition tool was developed and several process parameters were optimized to adapt this technology to consumers' needs. The results of this study show the potential of this system to produce customized edible objects without requiring qualified personnel, thereby saving manufacturing costs compared to traditional technologies.

  19. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    Blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure, with structural scales ranging from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is just making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern day 3D imagers, it is almost impossible to manually track the complex multiscale parameters from those large image data sets. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automation of an adaptive, nonsupervised system for tracking tubular objects based on a multiscale framework and the use of a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
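    The Hessian-based shape detector rests on the eigenvalues of the local Hessian: along a bright tube one eigenvalue is near zero and the others are strongly negative. A single-scale 2-D sketch of that idea (the actual system uses ITK's multiscale 3-D filters, not this toy):

```python
import numpy as np

def line_response(img):
    """Minimal 2-D Hessian 'tubularity' response (single scale, no smoothing).

    Bright line-like structures have one strongly negative Hessian
    eigenvalue across the line and one near zero along it, so we return
    the (clipped) negation of the smaller eigenvalue at each pixel.
    """
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Closed-form eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tr = hxx + hyy
    det = hxx * hyy - hxy * hyx
    disc = np.sqrt(np.clip(tr * tr / 4 - det, 0, None))
    l_small = tr / 2 - disc          # the more negative eigenvalue
    return np.clip(-l_small, 0, None)
```

    A real vesselness filter adds Gaussian smoothing at several scales and eigenvalue-ratio terms, but the eigen-analysis above is the core of the detector.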

  20. OPTICAL INFORMATION PROCESSING: Synthesis of an object recognition system based on the profile of the envelope of a laser pulse in pulsed lidars

    NASA Astrophysics Data System (ADS)

    Buryi, E. V.

    1998-05-01

    The main problems in the synthesis of an object recognition system based on the principles of operation of neural networks are considered. The advantages of a hierarchical structure of the recognition algorithm are demonstrated. The use of readings of the amplitude spectrum of signals as information tags is justified, and a method is developed for determining the dimensionality of the tag space. Methods are suggested for ensuring the stability of object recognition in the optical range. It is concluded that it should be possible to recognise perspective views of complex objects.

  1. The 'F-complex' and MMN tap different aspects of deviance.

    PubMed

    Laufer, Ilan; Pratt, Hillel

    2005-02-01

    To compare the 'F(fusion)-complex' with the Mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front-fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.

  2. EEG signatures accompanying auditory figure-ground segregation

    PubMed Central

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István

    2017-01-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185

  3. Renewable Energy Opportunity Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hancock, Ed; Mas, Carl

    1998-11-13

    Presently, the US EPA is constructing a new complex at Research Triangle Park, North Carolina to consolidate its research operations in the Raleigh-Durham area. The National Computer Center (NCC) is currently in the design process and is planned for construction as part of this complex. Implementation of the new technologies can be planned as part of the normal construction process, and full credit for elimination of the conventional technologies can be taken. Several renewable technologies are specified in the current plans for the buildings. The objective of this study is to identify measures that are likely to be both technically and economically feasible.

  4. Developing Illustrative Descriptors of Aspects of Mediation for the Common European Framework of Reference (CEFR): A Council of Europe Project

    ERIC Educational Resources Information Center

    North, Brian; Piccardo, Enrica

    2016-01-01

    The notion of mediation has been the object of growing interest in second language education in recent years. The increasing awareness of the complex nature of the process of learning--and teaching--stretches our collective reflection towards less explored areas. In mediation, the immediate focus is on the role of language in processes like…

  5. Action and object word writing in a case of bilingual aphasia.

    PubMed

    Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil

    2012-01-01

    We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency and script: Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e. difficulties retrieving action and object names in both spoken and written naming. Our findings support the hypothesis that the cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact on both languages. We conclude that grammatical category is an organizing principle in bilingual dysgraphia.

  6. Using Risk Assessment Methodologies to Meet Management Objectives

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2015-01-01

    Corporate and program objectives focus on desired performance and results. Management decisions that affect how to meet these objectives now involve a complex mix of: technology, safety issues, operations, process considerations, employee considerations, regulatory requirements, financial concerns and legal issues. Risk Assessments are a tool for decision makers to understand potential consequences and be in a position to reduce, mitigate or eliminate costly mistakes or catastrophic failures. Using a risk assessment methodology is only a starting point. A risk assessment program provides management with important input in the decision making process. A pro-active organization looks to the future to avoid problems; a reactive organization can be blindsided by risks that could have been avoided. You get out what you put in: how useful your program is will be up to the individual organization.

  7. Fabry-Perot confocal resonator optical associative memory

    NASA Astrophysics Data System (ADS)

    Burns, Thomas J.; Rogers, Steven K.; Vogel, George A.

    1993-03-01

    A unique optical associative memory architecture is presented that combines the optical processing environment of a Fabry-Perot confocal resonator with the dynamic storage and recall properties of volume holograms. The confocal resonator reduces the size and complexity of previous associative memory architectures by folding a large number of discrete optical components into an integrated, compact optical processing environment. Experimental results demonstrate the system is capable of recalling a complete object from memory when presented with partial information about the object. A Fourier optics model of the system's operation shows it implements a spatially continuous version of a discrete, binary Hopfield neural network associative memory.
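    The recall behavior that the resonator implements in spatially continuous form is that of a discrete Hopfield associative memory, which can be sketched as:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for a discrete Hopfield associative memory.
    Patterns are +/-1 vectors; the self-connection diagonal is zeroed."""
    p = np.asarray(patterns, dtype=float)
    w = p.T @ p / p.shape[1]
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=10):
    """Synchronously update the state until it settles, recovering the
    stored pattern nearest to the (possibly partial or noisy) probe."""
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        nxt = np.where(w @ s >= 0, 1.0, -1.0)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s
```

    Presenting a probe with a few corrupted elements converges back to the stored pattern, which is the "complete object from partial information" behavior the optical experiments demonstrate.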

  8. The premises and promises of trolls in Norwegian biodiversity preservation: on the boundaries between bureaucracy and science.

    PubMed

    Bay-Larsen, Ingrid

    2012-05-01

    This paper examines the perception and implementation of scientific knowledge among Norwegian environmental bureaucrats in the process of preserving biodiversity. Based on interviews with environmental officials and scientists, and document studies, the data reveal a mismatch between the ideal administrative world presented by environmental managers, and the empirical reality of biodiversity vulnerability and preservation. The environmental officials depict a process where their mandate is merely instrumental, where science provides objective descriptions of biodiversity value, and where the spheres of science, policy and administration are strictly separated. Instead of a transparent strategy for handling scientific ambiguities inherent in biodiversity value assessments (such as complexity and uncertainty), and administrative judgments, the paper argues that these boundary objects and areas are perceived as 'trolls' that are ignored and hidden by environmental officials. This strategy appears intuitive and guided by a linear decision making paradigm where boundary objects are considered illegitimate. As a solution to possible obstacles stemming from this institutional vacuum, the article finally discusses the potential of adapting or assimilating the trolls to better meet the challenges of biodiversity preservation. A viable first step might be cross-disciplinary characterisation of complexities and uncertainties of biodiversity assessments. This might help to articulate the binary ontology of value assessments and to better address the critical administrative, political and scientific intersections. These boundary areas must be re-institutionalised by environmental agencies, and cognizant strategies must be devised and implemented for making professional judgment and discretion. Finally, it may amount to a more honest stance on conservation, where the inherent complexities to biodiversity preservation may be managed as complexities, and not as trolls.

  9. The Premises and Promises of Trolls in Norwegian Biodiversity Preservation. On the Boundaries Between Bureaucracy and Science

    NASA Astrophysics Data System (ADS)

    Bay-Larsen, Ingrid

    2012-05-01

    This paper examines the perception and implementation of scientific knowledge among Norwegian environmental bureaucrats in the process of preserving biodiversity. Based on interviews with environmental officials and scientists, and document studies, the data reveal a mismatch between the ideal administrative world presented by environmental managers, and the empirical reality of biodiversity vulnerability and preservation. The environmental officials depict a process where their mandate is merely instrumental, where science provides objective descriptions of biodiversity value, and where the spheres of science, policy and administration are strictly separated. Instead of a transparent strategy for handling scientific ambiguities inherent in biodiversity value assessments (such as complexity and uncertainty), and administrative judgments, the paper argues that these boundary objects and areas are perceived as `trolls' that are ignored and hidden by environmental officials. This strategy appears intuitive and guided by a linear decision making paradigm where boundary objects are considered illegitimate. As a solution to possible obstacles stemming from this institutional vacuum, the article finally discusses the potential of adapting or assimilating the trolls to better meet the challenges of biodiversity preservation. A viable first step might be cross-disciplinary characterisation of complexities and uncertainties of biodiversity assessments. This might help to articulate the binary ontology of value assessments and to better address the critical administrative, political and scientific intersections. These boundary areas must be re-institutionalised by environmental agencies, and cognizant strategies must be devised and implemented for making professional judgment and discretion. Finally, it may amount to a more honest stance on conservation, where the inherent complexities to biodiversity preservation may be managed as complexities, and not as trolls.

  10. Multi-objective optimization in spatial planning: Improving the effectiveness of multi-objective evolutionary algorithms (non-dominated sorting genetic algorithm II)

    NASA Astrophysics Data System (ADS)

    Karakostas, Spiros

    2015-05-01

    The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
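    The ranking step at the heart of NSGA-II is non-dominated sorting of the population into Pareto fronts. A straightforward sketch for minimization problems (quadratic per front, unlike NSGA-II's faster bookkeeping, but the same result):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split objective vectors into Pareto fronts; front 0 holds the
    non-dominated solutions, front 1 those dominated only by front 0, etc."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

    The crossover and initialization heuristics the article evaluates act on how candidate solutions are generated; this sorting is what turns their objective values into the selection ranking.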

  11. Modeling energy expenditure in children and adolescents using quantile regression

    USDA-ARS?s Scientific Manuscript database

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obes...
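    Quantile regression fits by minimizing the asymmetric pinball loss rather than squared error, which is what makes the covariate effects quantile-dependent. A minimal sketch of that loss:

```python
def pinball_loss(y_true, y_pred, q):
    """Average pinball (quantile) loss: under-predictions are weighted by q
    and over-predictions by 1 - q, so minimizing it targets the q-th
    conditional quantile instead of the mean."""
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        diff = yt - yp
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)
```

    For q = 0.5 this reduces to half the absolute error (median regression); for q = 0.9 the loss penalizes predicting below the data far more than predicting above it.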

  12. A matter of tradeoffs: reintroduction as a multiple objective decision

    USGS Publications Warehouse

    Converse, Sarah J.; Moore, Clinton T.; Folk, Martin J.; Runge, Michael C.

    2013-01-01

    Decision making in guidance of reintroduction efforts is made challenging by the substantial scientific uncertainty typically involved. However, a less recognized challenge is that the management objectives are often numerous and complex. Decision makers managing reintroduction efforts are often concerned with more than just how to maximize the probability of reintroduction success from a population perspective. Decision makers are also weighing other concerns such as budget limitations, public support and/or opposition, impacts on the ecosystem, and the need to consider not just a single reintroduction effort, but conservation of the entire species. Multiple objective decision analysis is a powerful tool for formal analysis of such complex decisions. We demonstrate the use of multiple objective decision analysis in the case of the Florida non-migratory whooping crane reintroduction effort. In this case, the State of Florida was considering whether to resume releases of captive-reared crane chicks into the non-migratory whooping crane population in that state. Management objectives under consideration included maximizing the probability of successful population establishment, minimizing costs, maximizing public relations benefits, maximizing the number of birds available for alternative reintroduction efforts, and maximizing learning about the demographic patterns of reintroduced whooping cranes. The State of Florida engaged in a collaborative process with their management partners, first, to evaluate and characterize important uncertainties about system behavior, and next, to formally evaluate the tradeoffs between objectives using the Simple Multi-Attribute Rating Technique (SMART). The recommendation resulting from this process, to continue releases of cranes at a moderate intensity, was adopted by the State of Florida in late 2008. Although continued releases did not receive support from the International Whooping Crane Recovery Team, this approach does provide a template for the formal, transparent consideration of multiple, potentially competing, objectives in reintroduction decision making.
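    SMART reduces the tradeoff analysis to a weighted sum of single-attribute ratings. A minimal sketch; the objective names, weights, and ratings below are invented for illustration, not the values the Florida partners actually elicited:

```python
def smart_score(weights, ratings):
    """Simple Multi-Attribute Rating Technique score: the weighted sum of
    an alternative's 0-1 ratings, with objective weights normalized."""
    total_w = sum(weights.values())
    return sum(weights[k] * ratings[k] for k in weights) / total_w

def best_alternative(weights, alternatives):
    """Pick the alternative (name -> ratings dict) with the top SMART score."""
    return max(alternatives, key=lambda name: smart_score(weights, alternatives[name]))
```

    The substance of a SMART exercise lies in eliciting defensible weights and ratings from stakeholders; the arithmetic itself, as shown, is deliberately simple and transparent.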

  13. A flexible object-oriented software framework for developing complex multimedia simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sydelko, P. J.; Dolph, J. E.; Christiansen, J. H.

    Decision makers involved in brownfields redevelopment and long-term stewardship must consider environmental conditions, future-use potential, site ownership, area infrastructure, funding resources, cost recovery, regulations, risk and liability management, community relations, and expected return on investment in a comprehensive and integrated fashion to achieve desired results. Successful brownfields redevelopment requires the ability to assess the impacts of redevelopment options on multiple interrelated aspects of the ecosystem, both natural and societal. Computer-based tools, such as simulation models, databases, and geographical information systems (GISs) can be used to address brownfields planning and project execution. The transparent integration of these tools into a comprehensive and dynamic decision support system would greatly enhance the brownfields assessment process. Such a system needs to be able to adapt to shifting and expanding analytical requirements and contexts. The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-oriented framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application domains. The modeling domain of a specific DIAS-based simulation is determined by (1) software objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. Models and applications used to express dynamic behaviors can be either internal or external to DIAS, including existing legacy models written in various languages (FORTRAN, C, etc.). The flexible design framework of DIAS makes the objects adjustable to the context of the problem without a great deal of recoding. The DIAS Spatial Data Set facility allows parameters to vary spatially depending on the simulation context according to any of a number of 1-D, 2-D, or 3-D topologies. DIAS is also capable of interacting with other GIS packages and can import many standard spatial data formats. DIAS simulation capabilities can also be extended by including societal process models. Models that implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations allow for interaction and feedback among natural and societal processes. The ability to simulate the complex interplay of multimedia processes makes DIAS a promising tool for constructing applications for comprehensive community planning, including the assessment of multiple development and redevelopment scenarios.

  14. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.
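    The depth-based filtering applied at the detection level can be sketched as follows; this is a generic illustration with an assumed (x1, y1, x2, y2) box format and metric depths, not the authors' exact filter:

```python
import numpy as np

def filter_by_depth(boxes, depth_map, depth_range):
    """Depth-based false-detection filter: keep a detected rectangle only if
    the median depth inside it lies within the range plausible for the
    target class (e.g. a person standing 0.5-6 m from the sensor).

    boxes: list of (x1, y1, x2, y2); depth_map: 2-D array of depths in meters.
    """
    lo, hi = depth_range
    kept = []
    for x1, y1, x2, y2 in boxes:
        med = np.median(depth_map[y1:y2, x1:x2])
        if lo <= med <= hi:
            kept.append((x1, y1, x2, y2))
    return kept
```

    The median makes the check robust to depth noise and to background pixels inside the rectangle; detections projected onto walls or reflections fall outside the plausible range and are discarded.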

  15. Expert systems for superalloy studies

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.; Kaukler, William F.

    1990-01-01

    There are many areas in science and engineering which require knowledge of an extremely complex foundation of experimental results in order to design methodologies for developing new materials or products. Superalloys fit well into this discussion in the sense that they are complex combinations of elements which exhibit certain characteristics. Obviously the use of superalloys in high-performance, high-temperature systems such as the Space Shuttle Main Engine is of interest to NASA. The superalloy manufacturing process is complex, and the implementation of an expert system within the design process requires some thought as to how and where it should be implemented. A major motivation is to develop a methodology to assist metallurgists in the design of superalloy materials using current expert systems technology. Hydrogen embrittlement is disastrous to rocket engines, and the heuristics can be very complex. Attacking this problem as one module in the overall design process represents a significant step forward. To describe the objectives of the first-phase implementation, the expert system was designated the Hydrogen Environment Embrittlement Expert System (HEEES).

  16. A novel vehicle tracking algorithm based on mean shift and active contour model in complex environment

    NASA Astrophysics Data System (ADS)

    Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen

    2017-06-01

    Vehicle tracking technology is currently one of the most active research topics in machine vision and an important part of intelligent transportation systems. In both theory and practice, however, it still faces many challenges, including real-time operation and robustness. In video surveillance, targets must be detected in real time and their positions computed accurately in order to judge their motion. Because the contents of video sequence images and the target motion are complex, the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments when the target has easily identified characteristics. In more complex environments, however, it is easy to lose the target because of the mismatch between the target's appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional tracking algorithms typically represent the tracking result by a simple geometric primitive such as a rectangle or circle, and so cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object-tracking technique, the Mean-Shift algorithm, with an image segmentation method, the Active-Contour model, to obtain object outlines during tracking and to automatically handle topology changes. The outline information is in turn used to improve the tracking algorithm.
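The Mean-Shift step that this paper builds on can be shown with a minimal single-channel sketch (not the authors' code): a fixed-size window is shifted toward the centroid of a per-pixel weight image, typically a color-histogram back-projection, until it stops moving.

```python
def mean_shift(weight, box, n_iter=10):
    """Minimal mean-shift: shift a fixed-size window toward the
    centroid of the weights inside it until it stops moving.

    weight: 2-D list of per-pixel weights (e.g., back-projection values)
    box:    (x0, y0, w, h) initial window
    """
    x0, y0, w, h = box
    H, W = len(weight), len(weight[0])
    for _ in range(n_iter):
        # Zeroth and first moments of the weights inside the window.
        m00 = m10 = m01 = 0.0
        for y in range(y0, min(y0 + h, H)):
            for x in range(x0, min(x0 + w, W)):
                wv = weight[y][x]
                m00 += wv
                m10 += wv * x
                m01 += wv * y
        if m00 == 0:
            break                      # no support inside the window
        cx, cy = m10 / m00, m01 / m00  # weight centroid
        nx0 = int(round(cx - w / 2))
        ny0 = int(round(cy - h / 2))
        if (nx0, ny0) == (x0, y0):
            break                      # converged
        x0, y0 = nx0, ny0
    return x0, y0, w, h
```

The paper's contribution is to couple this box-level localization with an active contour, so the final output is an object outline rather than a rectangle.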

  17. Development of visuo-haptic transfer for object recognition in typical preschool and school-aged children.

    PubMed

    Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca

    2018-07-01

    Object recognition is a long and complex adaptive process, and its full maturation requires the combination of many different sensory experiences as well as the cognitive ability to manipulate previous experiences in order to develop new percepts and subsequently to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition capacity using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for visual, haptic, and visuo-haptic modalities. A significant effect of time on the development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects and, although not fully mature, are significant in adaptive behavior from the first years of life. The study of the typical development of visuo-haptic processes in childhood is a starting point for future studies of object recognition in impaired populations.

  18. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  19. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  20. Soft systems thinking and social learning for adaptive management.

    PubMed

    Cundill, G; Cumming, G S; Biggs, D; Fabricius, C

    2012-02-01

    The success of adaptive management in conservation has been questioned and the objective-based management paradigm on which it is based has been heavily criticized. Soft systems thinking and social-learning theory expose errors in the assumption that complex systems can be dispassionately managed by objective observers and highlight the fact that conservation is a social process in which objectives are contested and learning is context dependent. We used these insights to rethink adaptive management in a way that focuses on the social processes involved in management and decision making. Our approach to adaptive management is based on the following assumptions: action toward a common goal is an emergent property of complex social relationships; the introduction of new knowledge, alternative values, and new ways of understanding the world can become a stimulating force for learning, creativity, and change; learning is contextual and is fundamentally about practice; and defining the goal to be addressed is continuous and in principle never ends. We believe five key activities are crucial to defining the goal that is to be addressed in an adaptive-management context and to determining the objectives that are desirable and feasible to the participants: situate the problem in its social and ecological context; raise awareness about alternative views of a problem; encourage enquiry and deconstruction of frames of reference; undertake collaborative actions; and reflect on learning. ©2011 Society for Conservation Biology.

  1. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.
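The core mapping the abstract describes, a feedforward network taking objective features to a perceived-quality estimate, can be sketched as a plain one-hidden-layer forward pass. This is illustrative only: the circular back-propagation variant used in the paper and any trained weights are omitted, and the weight/feature values below are invented.

```python
import math

def forward(features, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer forward pass mapping a feature vector
    (e.g., blockiness, motion activity) to a scalar quality score.

    w_hidden: list of weight rows, one per hidden unit
    b_hidden: hidden biases; w_out/b_out: output weights and bias
    """
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out
```

In the described system such a network would be evaluated continuously on features extracted from the compressed stream, yielding a running quality estimate without decoding.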

  2. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
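The object representation of a fault tree can be sketched in a modern object-oriented language. The classes below are a hypothetical Python analogue of the Flavors-based design, not the original code, and they assume statistically independent basic events.

```python
class BasicEvent:
    """Leaf event carrying its own failure probability, as in the
    paper's idea of storing reliability data in the tree objects."""
    def __init__(self, name, p):
        self.name, self.p = name, p
    def probability(self):
        return self.p

class OrGate:
    """Fails if any input fails (independence assumed)."""
    def __init__(self, *inputs):
        self.inputs = inputs
    def probability(self):
        p_ok = 1.0
        for node in self.inputs:
            p_ok *= 1.0 - node.probability()
        return 1.0 - p_ok

class AndGate:
    """Fails only if all inputs fail (independence assumed)."""
    def __init__(self, *inputs):
        self.inputs = inputs
    def probability(self):
        p = 1.0
        for node in self.inputs:
            p *= node.probability()
        return p
```

For example, a top event that occurs when either both redundant pumps fail or a single valve fails is `OrGate(AndGate(pump_a, pump_b), valve)`, and evaluating `probability()` walks the tree recursively, much as the described program directly evaluates system fault trees.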

  3. Processing of visual semantic information to concrete words: temporal dynamics and neural mechanisms indicated by event-related brain potentials.

    PubMed

    van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A

    2005-05-01

    Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval subjects were presented with a lexical decision task to auditorily presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.

  4. Building Development Monitoring in Multitemporal Remotely Sensed Image Pairs with Stochastic Birth-Death Dynamics.

    PubMed

    Benedek, C; Descombes, X; Zerubia, J

    2012-01-01

    In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.

  5. Shape-Reprogrammable Polymers: Encoding, Erasing, and Re-Encoding (Postprint)

    DTIC Science & Technology

    2014-11-01

    Additive manufacturing, also called three-dimensional (3D) printing, is a layer-by-layer technology for producing 3D objects directly from a digital model. While 3D printing allows the fabrication of increasingly... one linear shape-translation processes often increase rapidly with shape complexity.

  6. Neurophysiology and Neuroanatomy of Smooth Pursuit in Humans

    ERIC Educational Resources Information Center

    Lencer, Rebekka; Trillenberg, Peter

    2008-01-01

    Smooth pursuit eye movements enable us to focus our eyes on moving objects by utilizing well-established mechanisms of visual motion processing, sensorimotor transformation and cognition. Novel smooth pursuit tasks and quantitative measurement techniques can help unravel the different smooth pursuit components and complex neural systems involved…

  7. Considerations in Change Management Related to Technology

    ERIC Educational Resources Information Center

    Luo, John S.; Hilty, Donald M.; Worley, Linda L.; Yager, Joel

    2006-01-01

    Objective: The authors describe the complexity of social processes for implementing technological change. Once a new technology is available, information about its availability and benefits must be made available to the community of users, with opportunities to try the innovations and find them worthwhile, despite organizational resistances.…

  8. From path models to commands during additive printing of large-scale architectural designs

    NASA Astrophysics Data System (ADS)

    Chepchurov, M. S.; Zhukov, E. M.; Yakovlev, E. A.; Matveykin, V. G.

    2018-05-01

    The article considers the problem of automating the formation of large complex parts, products, and structures, especially for unique or small-batch objects produced by additive technology [1]. Research aimed at finding the optimal design of a robotic complex, its modes of operation, and the structure of its control system helped establish the technical requirements for the manufacturing process and for the design and installation of the robotic complex. Studies on virtual models of the robotic complexes made it possible to define the main directions for design improvement and the main purpose of testing the manufactured prototype: checking the positioning accuracy of the working part.

  9. Development of structural model of adaptive training complex in ergatic systems for professional use

    NASA Astrophysics Data System (ADS)

    Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.

    2018-03-01

    The article considers the structural model of the adaptive training complex (ATC), which reflects the interrelations between the hardware, software, and mathematical model of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction, and their functioning within the common system is given. The article also briefly describes the mathematical models of personnel activity, the technical system, and external influences, whose interactions formalize the regularities of ATC functioning. Studying the main objects of training complexes and the connections between them will make practical implementation of the ATC in ergatic systems for professional use possible.

  10. COBRApy: COnstraints-Based Reconstruction and Analysis for Python.

    PubMed

    Ebrahim, Ali; Lerman, Joshua A; Palsson, Bernhard O; Hyduke, Daniel R

    2013-08-08

    COnstraint-Based Reconstruction and Analysis (COBRA) methods are widely used for genome-scale modeling of metabolic networks in both prokaryotes and eukaryotes. Due to the successes with metabolism, there is an increasing effort to apply COBRA methods to reconstruct and analyze integrated models of cellular processes. The COBRA Toolbox for MATLAB is a leading software package for genome-scale analysis of metabolism; however, it was not designed to elegantly capture the complexity inherent in integrated biological networks and lacks an integration framework for the multiomics data used in systems biology. The openCOBRA Project is a community effort to promote constraints-based research through the distribution of freely available software. Here, we describe COBRA for Python (COBRApy), a Python package that provides support for basic COBRA methods. COBRApy is designed in an object-oriented fashion that facilitates the representation of the complex biological processes of metabolism and gene expression. COBRApy does not require MATLAB to function; however, it includes an interface to the COBRA Toolbox for MATLAB to facilitate use of legacy codes. For improved performance, COBRApy includes parallel processing support for computationally intensive processes. COBRApy is an object-oriented framework designed to meet the computational challenges associated with the next generation of stoichiometric constraint-based models and high-density omics data sets. http://opencobra.sourceforge.net/
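The object-oriented representation that the abstract highlights can be suggested with a toy sketch: metabolites and reactions as objects from which a stoichiometric matrix (the S in S·v = 0 of constraint-based analysis) is assembled. These classes are deliberately simplified stand-ins for illustration, NOT COBRApy's actual API; see http://opencobra.sourceforge.net/ for the real package.

```python
# Toy, hypothetical illustration of object-oriented constraint-based
# modeling in the spirit of COBRApy (not its actual classes).

class Metabolite:
    def __init__(self, mid):
        self.id = mid

class Reaction:
    def __init__(self, rid, metabolites):
        # metabolites: {Metabolite: stoichiometric coefficient},
        # negative for consumption, positive for production
        self.id = rid
        self.metabolites = metabolites

class Model:
    def __init__(self):
        self.reactions = []
    def add_reaction(self, rxn):
        self.reactions.append(rxn)
    def stoichiometric_matrix(self):
        """Rows = metabolites (sorted by id), columns = reactions."""
        mets = sorted({m for r in self.reactions for m in r.metabolites},
                      key=lambda m: m.id)
        return [[r.metabolites.get(m, 0) for r in self.reactions]
                for m in mets]
```

The point of the object-oriented design, in COBRApy as in this sketch, is that network structure lives in linked objects and matrix views are derived from them when an analysis needs one.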

  11. The influence of presentation format on the "bigger is better" (BIB) effect.

    PubMed

    Bromgard, Gregg D; Trafimow, David; Silvera, David H

    2013-04-01

    Two experiments tested the "bigger is better" (BIB) effect, whereby bigger objects are perceived more favorably than smaller ones. In Experiment 1, participants directly compared pairs of objects and a strong BIB effect was obtained for both positively and negatively valenced stimuli. In Experiment 2, comparative and absolute evaluations were combined in a single experiment and the BIB effect was mediated for positively and negatively valenced stimuli. Taken in combination, the data support a complex hypothesis that pair-wise presentations induce a comparative process that causes a BIB effect. But when objects are evaluated separately, size and valence interact such that increased size evokes more positive ratings of positive objects and more negative ratings for negative objects.

  12. Distinct brain activity in processing negative pictures of animals and objects --- the role of human contexts

    PubMed Central

    Cao, Zhijun; Zhao, Yanbing; Tan, Tengteng; Chen, Gang; Ning, Xueling; Zhan, Lexia; Yang, Jiongjiong

    2013-01-01

    Previous studies have shown that the amygdala is important in processing not only animate entities but also social information. It remains to be determined to what extent the factors of category and social context interact to modulate the activities of the amygdala and cortical regions. In this study, pictures depicting animals and inanimate objects at negative and neutral valence levels were presented. The contexts of the pictures differed in whether they included humans/human parts. The factors of valence, arousal, familiarity and complexity of pictures were controlled across categories. The results showed that the amygdala activity was modulated by category and contextual information. Under the nonhuman context condition, the amygdala responded more to animals than objects for both negative and neutral pictures. In contrast, under the human context condition, the amygdala showed stronger activity for negative objects than animals. In addition to cortical regions related to object action, functional and effective connectivity analyses showed that the anterior prefrontal cortex interacted with the amygdala more for negative objects (vs. animals) in the human context condition, via top-down modulation from the anterior prefrontal cortex to the amygdala. These results highlighted the effects of category and human contexts on modulating brain activity in emotional processing. PMID:24099847

  13. A rodent model for the study of invariant visual object recognition

    PubMed Central

    Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.

    2009-01-01

    The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704

  14. Behavior analysis of video object in complicated background

    NASA Astrophysics Data System (ADS)

    Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang

    2016-10-01

    This paper aims to achieve robust behavior recognition of video objects against complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process the high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature-based behavior analysis, so as to obtain the motion trail. Effective behavior recognition of the video object is then obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, the military, and many other fields.

  15. Analysis of haptic information in the cerebral cortex

    PubMed Central

    2016-01-01

    Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level. PMID:27440247

  16. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    PubMed

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
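The inter-rater reliability statistic the study reports, Cohen's kappa, is straightforward to compute from two raters' code assignments; a minimal sketch (the rating data below are invented for illustration):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical codes:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed proportion of items on which the raters agree.
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance from each rater's marginal frequencies.
    p_exp = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for coding tasks like the complexity-attribute coding described here.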

  17. NASCAP user's manual

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Harvey, J. M.; Katz, I.

    1977-01-01

    The NASCAP (NASA Charging Analyzer Program) code simulates the charging process for a complex object in either tenuous plasma or ground test environment. Detailed specifications needed to run the code are presented. The object definition section, OBJDEF, allows the test object to be easily defined in the cubic mesh. The test object is composed of conducting sections which may be wholly or partially covered with thin dielectric coatings. The potential section, POTENT, obtains the electrostatic potential in the space surrounding the object. It uses the conjugate gradient method to solve the finite element formulation of Poisson's equation. The CHARGE section of NASCAP treats charge redistribution among the surface cells of the object as well as charging through radiation bombardment. NASCAP has facilities for extensive graphical output, including several types of object display plots, potential contour plots, space charge density contour plots, current density plots, and particle trajectory plots.
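The conjugate gradient method that the POTENT section uses can be illustrated with a generic dense-matrix sketch. This is illustrative only, in pure Python for a small symmetric positive-definite system; NASCAP's actual solver operates on the finite-element discretization of Poisson's equation around the object.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (dense lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual b - A x (x = 0 initially)
    p = r[:]                  # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break             # residual small enough
        # Update the search direction, A-conjugate to previous ones.
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic the method converges in at most n iterations, which is what makes it attractive for the large sparse systems a finite-element formulation produces.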

  18. [Purification of complicated industrial organic waste gas by complex absorption].

    PubMed

    Chen, Ding-Sheng; Cen, Chao-Ping; Tang, Zhi-Xiong; Fang, Ping; Chen, Zhi-Hang

    2011-12-01

    Complicated industrial organic waste gas, characterized by low concentration and high flow volume and containing inorganic dust and oil, was taken as the research object for purification by complex absorption. The complex absorption mechanism, process flow, purification equipment, and engineering applications were studied. Three different surfactants were used to prepare composite absorbents for purifying exhaust gas loaded with toluene and butyl acetate, respectively. Results show that the low surface tension of the composite absorbent can improve the removal efficiency of toluene and butyl acetate. Combining the advantages of water-film, swirl-plate, and packed absorption devices, efficient absorption equipment was developed for the treatment of complicated industrial organic waste gas. It has the advantages of simple structure, small size, resistance to clogging, and high mass transfer. Building on the absorption technology, the waste gas treatment process was integrated with heat stripping, combustion, anaerobic treatment, and other processes, so that emissions of waste gas and spent absorption solution could meet the discharge standards. The technology has been put into practice in, for example, manufacturing and spray-painting enterprises.

  19. The role of shape complexity in the detection of closed contours.

    PubMed

    Wilder, John; Feldman, Jacob; Singh, Manish

    2016-09-01

    The detection of contours in noise has been extensively studied, but the detection of closed contours, such as the boundaries of whole objects, has received relatively little attention. Closed contours pose substantial challenges not present in the simple (open) case, because they form the outlines of whole shapes and thus take on a range of potentially important configural properties. In this paper we consider the detection of closed contours in noise as a probabilistic decision problem. Previous work on open contours suggests that contour complexity, quantified as the negative log probability (Description Length, DL) of the contour under a suitably chosen statistical model, impairs contour detectability; more complex (statistically surprising) contours are harder to detect. In this study we extended this result to closed contours, developing a suitable probabilistic model of whole shapes that gives rise to several distinct though interrelated measures of shape complexity. We asked subjects to detect either natural shapes (Exp. 1) or experimentally manipulated shapes (Exp. 2) embedded in noise fields. We found systematic effects of global shape complexity on detection performance, demonstrating how aspects of global shape and form influence the basic process of object detection. Copyright © 2015 Elsevier Ltd. All rights reserved.
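    The complexity measure described above, negative log probability (Description Length) under a statistical shape model, can be sketched with a deliberately simple model. The zero-mean Gaussian turning-angle model and the σ value below are illustrative assumptions, not the probabilistic model of whole shapes developed in the paper.

```python
import math

def turning_angles(points):
    """Exterior (turning) angles at each vertex of a closed polygon."""
    n = len(points)
    angles = []
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        while d <= -math.pi:   # wrap to (-pi, pi]
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        angles.append(d)
    return angles

def description_length(points, sigma=0.3):
    """Negative log-likelihood (in bits) of the turning angles under a
    zero-mean Gaussian: straighter, smoother contours are cheaper to encode."""
    dl = 0.0
    for a in turning_angles(points):
        p = math.exp(-a * a / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
        dl += -math.log2(p)
    return dl

def polygon(n, radius=1.0, wobble=0.0):
    """Regular n-gon, optionally perturbed radially to raise complexity."""
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        r = radius * (1.0 + wobble * ((-1) ** i))
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

smooth = polygon(24)
jagged = polygon(24, wobble=0.35)
print(description_length(smooth) < description_length(jagged))  # True
```

    Under this toy model, a statistically surprising (jagged) contour has a higher DL than a smooth one, mirroring the paper's finding that higher-DL shapes are harder to detect.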

  20. Lifetimes of the Vibrational States of DNA Molecules in Functionalized Complexes of Semiconductor Quantum Dots

    NASA Astrophysics Data System (ADS)

    Bayramov, F. B.; Poloskin, E. D.; Chernev, A. L.; Toporov, V. V.; Dubina, M. V.; Sprung, C.; Lipsanen, H. K.; Bairamov, B. Kh.

    2018-01-01

    Results of studying nanocrystalline nc-Si/SiO2 quantum dots (QDs) functionalized by short oligonucleotides show that complexes of isolated crystalline semiconductor QDs are unique objects for detecting the manifestation of new quantum confinement phenomena. It is established that narrow lines observed in high-resolution spectra of inelastic light scattering can be used for determining the characteristic time scale of vibrational excitations of separate nucleotide molecules and for studying structural-dynamic properties of fast oscillatory processes in biomacromolecules.
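    One standard way a narrow inelastic-light-scattering line yields a characteristic vibrational time scale is the energy-time relation τ = 1/(2πcΓ) for a homogeneously broadened line of width Γ (FWHM, in cm⁻¹). The sketch below shows this generic estimate; it is not the paper's specific analysis.

```python
import math

C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

def lifetime_ps(linewidth_cm1):
    """Vibrational lifetime (ps) estimated from a homogeneous Raman
    linewidth (FWHM, cm^-1) via tau = 1/(2*pi*c*Gamma)."""
    tau_s = 1.0 / (2.0 * math.pi * C_CM_PER_S * linewidth_cm1)
    return tau_s * 1e12

# A 1 cm^-1 line corresponds to roughly a 5.3 ps lifetime
print(round(lifetime_ps(1.0), 1))  # 5.3
```

    The inverse relation makes the point in the abstract concrete: the narrower the observed line, the longer the lifetime of the vibrational excitation.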

  1. Technology-design-manufacturing co-optimization for advanced mobile SoCs

    NASA Astrophysics Data System (ADS)

    Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey

    2014-03-01

    How to maintain Moore's Law scaling beyond the 193 nm immersion lithography resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase for the 14 nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues that new process changes bring. In recent years smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system designs and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.

  2. Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex.

    PubMed

    Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R

    2014-07-01

    Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Simple and Complex Plants. Fourth Grade. Anchorage School District Elementary Science Program.

    ERIC Educational Resources Information Center

    Anchorage School District, AK.

    This unit contains 15 lessons on Alaskan plants for fourth graders. It describes materials, supplementary materials, use of process skill terminology, unit objectives, vocabulary, background information about five kingdoms of living things, and a webbing activity. Included are: (1) "Roots in Action"; (2) "Chlorophyll"; (3)…

  4. Development of a Rubric to Improve Critical Thinking

    ERIC Educational Resources Information Center

    Hildenbrand, Kasee J.; Schultz, Judy A.

    2012-01-01

    Context: Health care professionals, including athletic trainers, are confronted daily with multiple complex problems that require critical thinking. Objective: This research attempts to develop a reliable process to assess students' critical thinking in a variety of athletic training and kinesiology courses. Design: Our first step was to create a…

  5. Effects of Noun Phrase Type on Sentence Complexity

    ERIC Educational Resources Information Center

    Gordon, Peter C.; Hendrick, Randall; Johnson, Marcus

    2004-01-01

    A series of self-paced reading time experiments was performed to assess how characteristics of noun phrases (NPs) contribute to the difference in processing difficulty between object- and subject-extracted relative clauses. Structural semantic characteristics of the NP in the embedded clause (definite vs. indefinite and definite vs. generic) did…

  6. 75 FR 47606 - Strategic Plan for Consumer Education via Cooperative Agreement (U18)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... or quantitative research with stakeholders and meetings with stakeholder groups and consumer experts... and resulting from an extensive consumer research process. In 2007, PFSE joined with USDA to create... responsibilities of FDA. B. Research Objectives PFSE supports a large, complex, and multi-faceted consumer food...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, Amy B.; Stauffer, Philip H.; Reed, Donald T.

    The primary objective of the experimental effort described here is to aid in understanding the complex nature of liquid, vapor, and solid transport occurring around heated nuclear waste in bedded salt. In order to gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that (a) hydrological and physiochemical parameters and (b) processes are correctly simulated. The experiments proposed here are designed to study aspects of the system that have not been satisfactorily quantified in prior work. In addition to exploring the complex coupled physical processes in support of numerical model validation, lessons learned from these experiments will facilitate preparations for larger-scale experiments that may utilize similar instrumentation techniques.

  8. JPL Counterfeit Parts Avoidance

    NASA Technical Reports Server (NTRS)

    Risse, Lori

    2012-01-01

    SPACE ARCHITECTURE / ENGINEERING: It brings an extreme test bed for both technologies/concepts as well as procedures/processes. Design and construction (engineering) always go together, especially with complex systems. Requirements (objectives) are crucial. More important than the answers are the questions/requirements/tools-techniques/processes. Different environments force architects and engineers to think out of the box. For instance, there might not be gravity forces. Complex architectural problems have common roots, in Space and on Earth. Let us bring Space down to Earth so we can keep sending Mankind to the stars from a better world. Have fun being architects and engineers...!!! This time is amazing and historical. We are changing the way we inhabit the solar system!

  9. Multiple-Objective Stepwise Calibration Using Luca

    USGS Publications Warehouse

    Hay, Lauren E.; Umemoto, Makiko

    2007-01-01

    This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
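    The Shuffled Complex Evolution strategy mentioned above can be sketched in a few lines: a population of candidate parameter sets is dealt into complexes, each complex evolves independently, and the complexes are periodically pooled and re-dealt. The reflection/contraction step and the toy quadratic objective below are simplifying assumptions; the full SCE-UA algorithm used by Luca is considerably more elaborate.

```python
import random

def sce_minimize(f, bounds, n_complexes=4, pts_per_complex=8,
                 n_shuffles=60, seed=1):
    """Toy Shuffled Complex Evolution: complexes of candidate parameter
    sets evolve independently (reflect the worst point through the
    centroid of the rest, else contract toward it), then get pooled and
    re-dealt. A bare sketch of the strategy, not full SCE-UA."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def clip(p):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(p, bounds)]

    pop = sorted((rand_point() for _ in range(n_complexes * pts_per_complex)),
                 key=f)
    for _ in range(n_shuffles):
        # Deal the sorted population into complexes round-robin
        complexes = [pop[i::n_complexes] for i in range(n_complexes)]
        for cx in complexes:
            cx.sort(key=f)
            worst = cx[-1]
            centroid = [sum(p[d] for p in cx[:-1]) / (len(cx) - 1)
                        for d in range(dim)]
            trial = clip([2 * c - w for c, w in zip(centroid, worst)])
            if f(trial) >= f(worst):          # reflection failed: contract
                trial = [(c + w) / 2 for c, w in zip(centroid, worst)]
            if f(trial) >= f(worst):          # still no better: mutate
                trial = rand_point()
            cx[-1] = trial
        # Shuffle: pool all complexes and re-sort by objective value
        pop = sorted((p for cx in complexes for p in cx), key=f)
    return pop[0]

# Calibrate two hypothetical model parameters against a quadratic objective
best = sce_minimize(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                    bounds=[(-10.0, 10.0), (-10.0, 10.0)])
print(best)  # the toy objective's minimum is at [3.0, -1.0]
```

    In a stepwise calibration such as Luca's, a loop like this would be run once per step, with a different parameter subset and objective function each time.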

  10. Automatic Adviser on stationary devices status identification and anticipated change

    NASA Astrophysics Data System (ADS)

    Shabelnikov, A. N.; Liabakh, N. N.; Gibner, Ya M.; Pushkarev, E. A.

    2018-05-01

    A task is defined to synthesize an Automatic Adviser that identifies the status of the automation system's stationary devices using an autoregressive model of changes in their key parameters. The applied model type was justified and a monitoring-process algorithm for the research objects was developed. A suite for simulating mobile-object status operation and analyzing prediction results was proposed. Research results are illustrated with a specific example of a hump yard compressor station. The work was supported by the Russian Foundation for Basic Research, project No. 17-20-01040.

  11. An Elegant Low-cost Materials Solution for Achieving Low Insertion Loss, Affordable Tunable Filters for Next Generation Mobile Communications Platforms

    DTIC Science & Technology

    2009-04-01

    Keywords: material design, complex oxide, UV photon irradiation. (Responsible person: Melanie W. Cole.) ... 1. Objective: The objective of this effort was to develop a novel materials technology solution to achieve high-Q perovskite oxide thin... fiscal year 2008 (FY08) Director's Research Initiative (DRI), we developed a post-growth ultraviolet (UV)-oxidation process science protocol to improve the

  12. Visual Information Processing Based on Spatial Filters Constrained by Biological Data.

    DTIC Science & Technology

    1978-12-01

    was provided by Pantle and Sekuler (1968). They found that the detection of gratings was affected most by adapting (see Section 6.1.1) to square... evidence for certain eye scans being directed by spatial information in filtered images is given. Eye scan paths of a portrait of a young girl (Figure 08... multistable objects to more complex objects such as the man-girl figure of Fisher (1968), decision boundaries that are a natural concomitant to any pattern

  13. EEG signatures accompanying auditory figure-ground segregation.

    PubMed

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P; Szerafin, Ágnes; Shinn-Cunningham, Barbara G; Winkler, István

    2016-11-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. Copyright © 2016. Published by Elsevier Inc.

  14. Interactions between dorsal and ventral streams for controlling skilled grasp

    PubMed Central

    van Polanen, Vonne; Davare, Marco

    2015-01-01

    The two visual systems hypothesis suggests processing of visual information into two distinct routes in the brain: a dorsal stream for the control of actions and a ventral stream for the identification of objects. Recently, increasing evidence has shown that the dorsal and ventral streams are not strictly independent, but do interact with each other. In this paper, we argue that the interactions between dorsal and ventral streams are important for controlling complex object-oriented hand movements, especially skilled grasp. Anatomical studies have reported the existence of direct connections between dorsal and ventral stream areas. These physiological interconnections appear to be gradually more active as the precision demands of the grasp become higher. It is hypothesised that the dorsal stream needs to retrieve detailed information about object identity, stored in ventral stream areas, when the object properties require complex fine-tuning of the grasp. In turn, the ventral stream might receive up to date grasp-related information from dorsal stream areas to refine the object internal representation. Future research will provide direct evidence for which specific areas of the two streams interact, the timing of their interactions and in which behavioural context they occur. PMID:26169317

  15. Discrimination of complex synthetic echoes by an echolocating bottlenose dolphin

    NASA Astrophysics Data System (ADS)

    Helweg, David A.; Moore, Patrick W.; Dankiewicz, Lois A.; Zafran, Justine M.; Brill, Randall L.

    2003-02-01

    Bottlenose dolphins (Tursiops truncatus) detect and discriminate underwater objects by interrogating the environment with their native echolocation capabilities. Studies of dolphins' ability to detect complex (multihighlight) signals in noise suggest that echolocation object detection uses an approximately 265-μs energy integration time window sensitive to the echo region of highest energy or containing the highlight with the highest energy. Backscatter from many real objects contains multiple highlights, distributed over multiple integration windows and with varying amplitude relationships. This study used synthetic echoes with complex highlight structures to test whether high-amplitude initial highlights would interfere with discrimination of low-amplitude trailing highlights. A dolphin was trained to discriminate two-highlight synthetic echoes using differences in the center frequencies of the second highlights. The energy ratio (ΔdB) and the timing relationship (ΔT) between the first and second highlights were manipulated. An iso-sensitivity function was derived using a factorial design testing ΔdB at -10, -15, -20, and -25 dB and ΔT at 10, 20, 40, and 80 μs. The results suggest that the animal processed multiple echo highlights as separable analyzable features in the discrimination task, perhaps perceived through differences in spectral rippling across the duration of the echoes.

  16. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction is still the core task in video surveillance. In complex real-world scenes, however, false detections, missed detections and cavities inside the detected body still occur. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the moving object detection more complete and accurate, image repair and morphological processing techniques, which provide spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods (GMM, ViBe, frame difference and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
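    A minimal version of this combined strategy, frame differencing OR-ed with background subtraction followed by a morphological step to fill cavities, might look like the following. The running-average background model and single-pass dilation are simplifications of the paper's improved frame-difference and Gaussian-mixture pipeline.

```python
def detect_moving(frames, alpha=0.5, thresh=20):
    """Combine frame differencing with a running-average background
    model; OR the two masks, then dilate once as a crude cavity fill.
    A toy stand-in for frame-difference + background-subtraction
    pipelines (real systems use a Gaussian mixture background)."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [row[:] for row in frames[0]]   # initial background estimate
    prev = frames[0]
    masks = []
    for frame in frames[1:]:
        mask = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                diff_bg = abs(frame[y][x] - bg[y][x])
                diff_fr = abs(frame[y][x] - prev[y][x])
                if diff_bg > thresh or diff_fr > thresh:
                    mask[y][x] = 1
                else:
                    # selective update: adapt background only where static
                    bg[y][x] = alpha * frame[y][x] + (1 - alpha) * bg[y][x]
        # one-pass 3x3 dilation as a crude morphological compensation
        dil = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                if any(mask[yy][xx]
                       for yy in range(max(0, y - 1), min(h, y + 2))
                       for xx in range(max(0, x - 1), min(w, x + 2))):
                    dil[y][x] = 1
        masks.append(dil)
        prev = frame
    return masks

# A bright 1-pixel "object" moving across a dark 5x5 scene
frames = []
for t in range(4):
    f = [[0] * 5 for _ in range(5)]
    f[2][t] = 255
    frames.append(f)
masks = detect_moving(frames)
print(masks[-1][2][3])  # 1: the object's latest position is flagged
```

    The OR of the two masks is what lets the combination beat either method alone: the background model catches slow or stopped objects that frame differencing misses, while differencing catches motion before the background has adapted.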

  17. Attentional gating models of object substitution masking.

    PubMed

    Põder, Endel

    2013-11-01

    Di Lollo, Enns, and Rensink (2000) proposed the computational model of object substitution (CMOS) to explain their experimental results with sparse visual maskers. This model supposedly is based on reentrant hypotheses testing in the visual system, and the modeled experiments are believed to demonstrate these reentrant processes in human vision. In this study, I analyze the main assumptions of this model. I argue that CMOS is a version of the attentional gating model and that its relationship with reentrant processing is rather illusory. The fit of this model to the data indicates that reentrant hypotheses testing is not necessary for the explanation of object substitution masking (OSM). Further, the original CMOS cannot predict some important aspects of the experimental data. I test 2 new models incorporating an unselective processing (divided attention) stage; these models are more consistent with data from OSM experiments. My modeling shows that the apparent complexity of OSM can be reduced to a few simple and well-known mechanisms of perception and memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  18. Temporal texture of associative encoding modulates recall processes.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-02-01

    Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following pair associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over parietal scalp the effect was larger for the retrieval of sequentially encoded pairs. We suggest that anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct access recall processes, while more posterior aspects reflect recollective processes which are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
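    The role of the genetic algorithm can be sketched generically: evolve candidate (filtration time, relaxation time) pairs against a cost function. The cost function and all parameter values below are hypothetical; the actual study evaluated candidates against a calibrated GPS-X/ASM1 simulation model of the membrane bioreactor.

```python
import random

def ga_minimize(cost, bounds, pop_size=40, generations=60,
                mut_rate=0.2, seed=7):
    """Plain real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, elitism. A generic sketch, not the paper's setup."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        next_pop = pop[:2]                          # elitism: keep best two
        while len(next_pop) < pop_size:
            a = min(rng.sample(pop, 3), key=cost)   # tournament selection
            b = min(rng.sample(pop, 3), key=cost)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:         # Gaussian mutation
                    child[d] += rng.gauss(0, 0.1 * (hi - lo))
                child[d] = min(max(child[d], lo), hi)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=cost)

# Hypothetical trade-off (seconds): energy cost rises as filtration time
# drifts from a sweet spot; fouling penalty rises off the relaxation optimum.
def cost(p):
    filtration, relaxation = p
    return (filtration - 480) ** 2 / 1e4 + (relaxation - 60) ** 2 / 1e2

best = ga_minimize(cost, bounds=[(60, 900), (10, 180)])
print(best)  # the optimum of this toy cost function is at [480, 60]
```

    In the real application each cost evaluation is a full plant simulation, which is why population-based methods that parallelize well are attractive here.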

  20. Cochlear Implant: the complexity involved in the decision making process by the family

    PubMed Central

    Vieira, Sheila de Souza; Bevilacqua, Maria Cecília; Ferreira, Noeli Marchioro Liston Andrade; Dupas, Giselle

    2014-01-01

    Objective: to understand the meanings the family attributes to the phases of the decision-making process on a cochlear implant for their child. Method: qualitative research, using Symbolic Interactionism and Grounded Theory as the theoretical and methodological frameworks, respectively. Data collection instrument: semistructured interview. Nine families participated in the study (32 participants). Results: knowledge deficit, difficulties to contextualize benefits and risks, and fear are some factors that make this process difficult. Experiences deriving from interactions with health professionals, other cochlear implant users and their relatives strengthen decision making in favor of the implant. Conclusion: deciding on whether or not to have the implant involves a complex process, in which the family needs to weigh gains and losses, experience feelings of accountability and guilt, besides overcoming the risk aversion. Hence, this demands cautious preparation and knowledge from the professionals involved in this intervention. PMID:25029052

  1. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    PubMed

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  2. Optimization of Photosensitized Tryptophan Oxidation in the Presence of Dimegin-Polyvinylpyrrolidone-Chitosan Systems.

    PubMed

    Solovieva, Anna B; Kardumian, Valeria V; Aksenova, Nadezhda A; Belovolova, Lyudmila V; Glushkov, Mikhail V; Bezrukov, Evgeny A; Sukhanov, Roman B; Kotova, Svetlana L; Timashev, Peter S

    2018-05-23

    By the example of a model process of tryptophan photooxidation in the aqueous medium in the presence of a three-component photosensitizing complex (porphyrin photosensitizer-polyvinylpyrrolidone-chitosan, PPS-PVP-CT) in the temperature range of 20-40 °C, we have demonstrated a possibility of modification of such a process by selecting different molar ratios of the components in the reaction mixture. The actual objective of this selection is the formation of a certain PPS-PVP-CT composition in which PVP macromolecules would coordinate with PPS molecules and at the same time practically block the complex binding of PPS molecules with chitosan macromolecules. Such blocking allows utilization of the bactericidal properties of chitosan to a greater extent, since chitosan is known to depress the PPS photosensitizing activity in PPS-PVP-CT complexes when using those in photodynamic therapy (PDT). The optimal composition of photosensitizing complexes appears to be dependent on the temperature at which the PDT sessions are performed. We have analyzed the correlations of the effective rate constants of tryptophan photooxidation with the photophysical characteristics of the formed complexes.

  3. Attentional Capacity Limits Gap Detection during Concurrent Sound Segregation.

    PubMed

    Leung, Ada W S; Jolicoeur, Pierre; Alain, Claude

    2015-11-01

    Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one in a sound containing a mistuned harmonic in otherwise in-tune harmonics. This impairment in gap detection may reflect the interaction of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave that was thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than deficits in preattentional sensory encoding.

  4. Enhancing Perception in Ethical Decision Making: A Method to Address Ill-Defined Training Domains

    DTIC Science & Technology

    2010-08-01

    revolution in the ethics of warfare. Albany, NY: State University of New York Press. Craik, F.I., & Lockhart, R.S. (1972). Levels of processing ...trainees in meeting their shared training objectives. In this way, the Army can draw together the individual level interpretive processes with the...interpret their situation in a personally meaningful way (cf. Craik & Lockhart, 1972). There are many complexities present in a training situation

  5. Electrophysiological signatures of event words: Dissociating syntactic and semantic category effects in lexical processing.

    PubMed

    Lapinskaya, Natalia; Uzomah, Uchechukwu; Bedny, Marina; Lau, Ellen

    2016-12-01

    Numerous theories have been proposed regarding the brain's organization and retrieval of lexical information. Neurophysiological dissociations in processing different word classes, particularly nouns and verbs, have been extensively documented, supporting the contribution of grammatical class to lexical organization. However, the contribution of semantic properties to these processing differences is still unresolved. We aim to isolate this contribution by comparing ERPs to verbs (e.g. wade), object nouns (e.g. cookie), and event nouns (e.g. concert) in a paired similarity judgment task, as event nouns share grammatical category with object nouns but some semantic properties with verbs. We find that event nouns pattern with verbs in eliciting a more positive response than object nouns across left anterior electrodes 300-500ms after word presentation. This time-window has been strongly linked to lexical-semantic access by prior electrophysiological work. Thus, the similarity of the response to words referring to concepts with more complex participant structure and temporal continuity extends across grammatical class (event nouns and verbs), and contrasts with the words that refer to objects (object nouns). This contrast supports a semantic, as well as syntactic, contribution to the differential neural organization and processing of lexical items. We also observed a late (500-800ms post-stimulus) posterior positivity for object nouns relative to event nouns and verbs at the second word of each pair, which may reflect the impact of semantic properties on the similarity judgment task. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. HMI conventions for process control graphics.

    PubMed

    Pikaar, Ruud N

    2012-01-01

    Process operators supervise and control complex processes. To enable the operator to do an adequate job, instrumentation and process control engineers need to address several related topics, such as console design, information design, navigation, and alarm management. In process control upgrade projects, usually a 1:1 conversion of existing graphics is proposed. This paper suggests another approach, efficiently leading to a reduced number of new, powerful process graphics supported by permanent process overview displays. In addition, a road map for structuring content (process information) and conventions for the presentation of objects, symbols, and so on has been developed. The impact of the human factors engineering approach on process control upgrade projects is illustrated by several cases.

  7. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier combines the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, the dynamic observation vector in measuring space is processed by DTW to obtain an error vector containing the fault features of the system under test. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the HMM classifier. Introducing dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
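The alignment step at the core of this pipeline can be sketched as a plain dynamic time warping distance. This is a generic textbook DTW, not the paper's implementation, and the sequences are invented:

```python
# Minimal dynamic time warping (DTW) distance between two 1-D sequences.
# A generic sketch of the alignment step described above, not the paper's code.

def dtw_distance(a, b):
    """Return the DTW distance between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

if __name__ == "__main__":
    reference = [0.0, 1.0, 2.0, 1.0, 0.0]
    observed  = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, time-shifted
    print(dtw_distance(reference, observed))      # 0.0: DTW absorbs the shift
```

The warping makes the comparison robust to variations in process speed, which is why a plain Euclidean distance on raw dynamic vectors would not work here.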

  8. UML as a cell and biochemistry modeling language.

    PubMed

    Webb, Ken; White, Tony

    2005-06-01

The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and is beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODEs). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the Unified Modeling Language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and, by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.

  9. Potential of Laboratory Execution Systems (LESs) to Simplify the Application of Business Process Management Systems (BPMSs) in Laboratory Automation.

    PubMed

    Neubert, Sebastian; Göde, Bernd; Gu, Xiangyu; Stoll, Norbert; Thurow, Kerstin

    2017-04-01

Modern business process management (BPM) is increasingly interesting for laboratory automation. End-to-end workflow automation and improved top-level systems integration for information technology (IT) and automation systems are especially prominent objectives. With the ISO standard Business Process Model and Notation (BPMN) 2.X, a system-independent graphical process control notation accepted across disciplines is available that supports process analysis while also being executable. The transfer of BPM solutions to structured laboratory automation places novel demands on, for example, real-time-critical process and systems integration. The article discusses the potential of laboratory execution systems (LESs) for an easier implementation of a business process management system (BPMS) in hierarchical laboratory automation. In particular, complex application scenarios, including long process chains based on, for example, several distributed automation islands and mobile laboratory robots for material transport, are difficult to handle in BPMSs. The presented approach moves workflow control tasks into life-science-specialized LESs, reduces the numerous different interfaces between BPMSs and subsystems, and simplifies complex process models. Thus, the integration effort for complex laboratory workflows can be significantly reduced for strictly structured automation solutions. An example application, consisting of a mixture of manual and automated subprocesses, demonstrates the presented BPMS-LES approach.

  10. Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure.

    PubMed

    Mickelson, Robin S; Unertl, Kim M; Holden, Richard J

    2016-10-12

    Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. We identified 5 high-level macrocognitive processes affecting medication management-sensemaking, planning, coordination, monitoring, and decision making-and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation.

  11. Network modulation during complex syntactic processing

    PubMed Central

    den Ouden, Dirk-Bart; Saur, Dorothee; Mader, Wolfgang; Schelter, Björn; Lukic, Sladjana; Wali, Eisha; Timmer, Jens; Thompson, Cynthia K.

    2011-01-01

    Complex sentence processing is supported by a left-lateralized neural network including inferior frontal cortex and posterior superior temporal cortex. This study investigates the pattern of connectivity and information flow within this network. We used fMRI BOLD data derived from 12 healthy participants reported in an earlier study (Thompson, C. K., Den Ouden, D. B., Bonakdarpour, B., Garibaldi, K., & Parrish, T. B. (2010b). Neural plasticity and treatment-induced recovery of sentence processing in agrammatism. Neuropsychologia, 48(11), 3211-3227) to identify activation peaks associated with object-cleft over syntactically less complex subject-cleft processing. Directed Partial Correlation Analysis was conducted on time series extracted from participant-specific activation peaks and showed evidence of functional connectivity between four regions, linearly between premotor cortex, inferior frontal gyrus, posterior superior temporal sulcus and anterior middle temporal gyrus. This pattern served as the basis for Dynamic Causal Modeling of networks with a driving input to posterior superior temporal cortex, which likely supports thematic role assignment, and networks with a driving input to inferior frontal cortex, a core region associated with syntactic computation. The optimal model was determined through both frequentist and Bayesian model selection and turned out to reflect a network with a primary drive from inferior frontal cortex and modulation of the connection between inferior frontal and posterior superior temporal cortex by complex sentence processing. The winning model also showed a substantive role for a feedback mechanism from posterior superior temporal cortex back to inferior frontal cortex. We suggest that complex syntactic processing is driven by word-order analysis, supported by inferior frontal cortex, in an interactive relation with posterior superior temporal cortex, which supports verb argument structure processing. PMID:21820518

  12. System for decision analysis support on complex waste management issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shropshire, D.E.

    1997-10-01

A software system called Waste Flow Analysis has been developed and applied to complex environmental management processes for the United States Department of Energy (US DOE). The system can evaluate proposed methods of waste retrieval, treatment, storage, transportation, and disposal. Analysts can evaluate various scenarios to see the impacts on waste flows, schedules, costs, and health and safety risks. Decision analysis capabilities have been integrated into the system to help identify preferred alternatives based on specific objectives, which may be to maximize the waste moved to final disposition during a given time period, minimize health risks, minimize costs, or combinations of these. The decision analysis capabilities can support evaluation of large and complex problems rapidly and under conditions of variable uncertainty. The system is being used to evaluate environmental management strategies to safely disposition wastes in the next ten years and reduce the environmental legacy resulting from nuclear material production over the past forty years.

  13. Scientific Ground of a New Optical Device for Contactless Measurement of the Small Spatial Displacements of Control Object Surfaces

    NASA Astrophysics Data System (ADS)

    Miroshnichenko, I. P.; Parinov, I. A.

    2017-06-01

We propose a computational and experimental grounding of a newly developed optical device for contactless measurement of small spatial displacements of control object surfaces, based on new methods of laser interferometry. The proposed device registers the linear and angular components of small displacements of control object surfaces during diagnosis of the condition of structural materials in loaded elements of products examined by acoustic non-destructive testing methods. The described results are most suitable for high-precision measurement of small linear and angular displacements of control object surfaces during experimental research; for evaluation and diagnosis of the state of construction materials in loaded elements of products; for the study of fast wave propagation in layered constructions of complex shape manufactured from anisotropic composite materials; and for the study of damage processes in modern construction materials in mechanical engineering, shipbuilding, aviation, instrumentation, power engineering, etc.

  14. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on the OFR have demonstrated invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This provides our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images, invariant with respect to shift, rotation, and scale.

  15. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one which works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection, and most of those are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work evaluates these four methods using two different sets of cameras and two different scenes. The methods have been implemented in MatLab, and results are compared based on completeness of detected objects, noise, sensitivity to light change, processing time, etc. The comparison shows that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.

  16. TkPl_SU: An Open-source Perl Script Builder for Seismic Unix

    NASA Astrophysics Data System (ADS)

    Lorenzo, J. M.

    2017-12-01

TkPl_SU (beta) is a graphical user interface (GUI) for selecting parameters of Seismic Unix (SU) modules. Seismic Unix (Stockwell, 1999) is a widely distributed free software package for seismic reflection processing and signal processing. Perl/Tk is a mature, well-documented, and free object-oriented graphical user interface toolkit for Perl. In a classroom environment, shell scripting of SU modules engages students and helps focus on the theoretical limitations and strengths of signal processing. However, complex interactive processing stages, e.g., selection of optimal stacking velocities, killing bad data traces, or spectral analysis, require advanced flows beyond the scope of introductory classes. In a research setting, special functionality from other free seismic processing software such as SioSeis (UCSD-NSF) can be incorporated readily via an object-oriented style of programming. An object-oriented approach is a first step toward efficient extensible programming of multi-step processes, and a simple GUI simplifies parameter selection and decision making. Currently, in TkPl_SU, Perl 5 packages wrap 19 of the most common SU modules used in teaching undergraduate and first-year graduate classes (e.g., filtering, display, velocity analysis, and stacking). Perl packages (classes) can advantageously add new functionality around each module and clarify parameter names for easier usage. For example, through the use of methods, packages can isolate the user from repetitive control structures, as well as replace abbreviated parameter names with self-describing ones. Moose, an extension of the Perl 5 object system, greatly facilitates an object-oriented style. Perl wrappers are self-documenting via Perl's documentation markup (POD).

  17. Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.

    PubMed

    Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues

    2012-09-01

The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised neural network learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals, and formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and states along the trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.
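The error/complexity trade-off underlying the Pareto set can be illustrated with a short sketch; the (error, complexity) pairs below are invented, not taken from the paper:

```python
# Sketch of the Pareto-optimal trade-off described above: keep the candidate
# networks that no other candidate beats in BOTH data-set error and complexity.
# The (error, complexity) pairs are illustrative, not from the paper.

def pareto_front(points):
    """Return the points not dominated by any other point
    (lower is better in both objectives)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

if __name__ == "__main__":
    # (data-set error, network complexity) for four candidate networks
    candidates = [(0.30, 2), (0.10, 8), (0.20, 4), (0.25, 6)]
    print(sorted(pareto_front(candidates)))
    # (0.25, 6) drops out: (0.20, 4) has lower error AND lower complexity;
    # the survivors trade error for complexity along the front
```

Trajectory learning then moves the state (error, complexity) through this space toward a chosen point on the front rather than merely picking a final member of it.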

  18. Identification and characterization of low-mass stars and brown dwarfs using Virtual Observatory tools.

    NASA Astrophysics Data System (ADS)

    Aberasturi, M.; Solano, E.; Martín, E.

    2015-05-01

Low-mass stars and brown dwarfs (with spectral types M, L, T and Y) are the most common objects in the Milky Way. A complete census of these objects is necessary to test theories about their complex structure and formation processes. In order to increase the number of known objects in the Solar neighborhood (d<30 pc), we have made use of the Virtual Observatory, which allows efficient handling of the huge amount of information available in astronomical databases. We also used the WFC3 installed on the Hubble Space Telescope to look for T5+ dwarf binaries.

  19. Online fully automated three-dimensional surface reconstruction of unknown objects

    NASA Astrophysics Data System (ADS)

    Khalfaoui, Souhaiel; Aigueperse, Antoine; Fougerolle, Yohan; Seulin, Ralph; Fofi, David

    2015-04-01

This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. The advantage of our procedure is that it is generic: it is not tied to a specific scanning technology, nor does it depend on the methods used to perform the tasks associated with each elementary process. The comparison of results between manual and automatic scanning of complex objects shows that our digitization strategy is very efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.

  20. A lattice model for data display

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.

    1994-01-01

    In order to develop a foundation for visualization, we develop lattice models for data objects and displays that focus on the fact that data objects are approximations to mathematical objects and real displays are approximations to ideal displays. These lattice models give us a way to quantize the information content of data and displays and to define conditions on the visualization mappings from data to displays. Mappings satisfy these conditions if and only if they are lattice isomorphisms. We show how to apply this result to scientific data and display models, and discuss how it might be applied to recursively defined data types appropriate for complex information processing.
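The approximation ordering behind such a lattice can be illustrated with intervals as data objects: an interval approximates a real value, and a wider interval carries less information than a narrower one it contains. This is a minimal sketch of the general idea, not the authors' formal model:

```python
# Sketch of an information ordering on approximations: interval a is below
# interval b (less informative) when a contains b. Meet and join are the
# lattice operations under this order. Illustrative only.

def leq(a, b):
    """a is below b in the information order iff interval a contains b."""
    return a[0] <= b[0] and b[1] <= a[1]

def meet(a, b):
    """Greatest lower bound: the smallest interval containing both."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def join(a, b):
    """Least upper bound (when the intervals overlap): their intersection."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("inconsistent approximations have no join")
    return (lo, hi)

if __name__ == "__main__":
    coarse, fine = (0.0, 10.0), (2.0, 3.0)
    print(leq(coarse, fine))       # True: the fine reading refines the coarse one
    print(meet((0, 5), (3, 8)))    # (0, 8): what both measurements agree on
    print(join((0, 5), (3, 8)))    # (3, 5): combining both measurements
```

A visualization mapping that preserves this ordering, so that a more precise data object never yields a less precise display, is the kind of structure-preserving (isomorphic) mapping the paper characterizes.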

  1. Small target detection using objectness and saliency

    NASA Astrophysics Data System (ADS)

    Zhang, Naiwen; Xiao, Yang; Fang, Zhiwen; Yang, Jian; Wang, Li; Li, Tao

    2017-10-01

We are motivated by the need for a generic object detection algorithm which achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm which has high localization quality at acceptable computational cost. First, we obtain the objectness map as in BING[1] and use non-maximum suppression (NMS) to get the top N points. Then, the k-means algorithm is used to cluster them into K classes according to their location, and we take the center points of the K classes as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm[2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations is used to locate the targets. Our method runs at 5 FPS on 1000×1000 images and significantly outperforms previous methods on small targets in cluttered backgrounds.

  2. Late electrophysiological modulations of feature-based attention to object shapes.

    PubMed

    Stojanoski, Bobby Boge; Niemeier, Matthias

    2014-03-01

    Feature-based attention has been shown to aid object perception. Our previous ERP effects revealed temporally late feature-based modulation in response to objects relative to motion. The aim of the current study was to confirm the timing of feature-based influences on object perception while cueing within the feature dimension of shape. Participants were told to expect either "pillow" or "flower" objects embedded among random white and black lines. Participants more accurately reported the object's main color for valid compared to invalid shapes. ERPs revealed modulation from 252-502 ms, from occipital to frontal electrodes. Our results are consistent with previous findings examining the time course for processing similar stimuli (illusory contours). Our results provide novel insights into how attending to features of higher complexity aids object perception presumably via feed-forward and feedback mechanisms along the visual hierarchy. Copyright © 2014 Society for Psychophysiological Research.

  3. Optimization of controlled processes in combined-cycle plant (new developments and researches)

    NASA Astrophysics Data System (ADS)

    Tverskoy, Yu S.; Muravev, I. K.

    2017-11-01

All modern complex technical systems, including the power units of thermal and nuclear power plants, operate within the system-forming structure of a multifunctional automated process control system (APCS). The development of mathematical support for modern APCSs makes it possible to extend automation to the solution of complex optimization problems for equipment heat and mass exchange processes in real time. The difficulty of efficiently managing a binary power unit stems from the need to solve at least three problems jointly. The first concerns the physical issues of combined-cycle technologies. The second is determined by the sensitivity of CCGT operation to changes in regime and climatic factors. The third is related to a precise description of the vector of controlled coordinates of a complex technological object. To obtain a joint solution of this set of interconnected problems, the methodology of generalized thermodynamic analysis and the methods of automatic control theory and mathematical modeling are used. This report presents the results of new developments and studies, which improve the principles of process control and the structural synthesis of automatic control systems for power units with combined-cycle plants, providing attainable technical and economic efficiency and operational reliability of equipment.

  4. Balancing emotion and cognition: a case for decision aiding in conservation efforts.

    PubMed

    Wilson, Robyn S

    2008-12-01

    Despite advances in the quality of participatory decision making for conservation, many current efforts still suffer from an inability to bridge the gap between science and policy. Judgment and decision-making research suggests this gap may result from a person's reliance on affect-based shortcuts in complex decision contexts. I examined the results from 3 experiments that demonstrate how affect (i.e., the instantaneous reaction one has to a stimulus) influences individual judgments in these contexts and identified techniques from the decision-aiding literature that help encourage a balance between affect-based emotion and cognition in complex decision processes. In the first study, subjects displayed a lack of focus on their stated conservation objectives and made decisions that reflected their initial affective impressions. Value-focused approaches may help individuals incorporate all the decision-relevant objectives by making the technical and value-based objectives more salient. In the second study, subjects displayed a lack of focus on statistical risk and again made affect-based decisions. Trade-off techniques may help individuals incorporate relevant technical data, even when it conflicts with their initial affective impressions or other value-based objectives. In the third study, subjects displayed a lack of trust in decision-making authorities when the decision involved a negatively affect-rich outcome (i.e., a loss). Identifying shared salient values and increasing procedural fairness may help build social trust in both decision-making authorities and the decision process.

  5. Optimal design of an alignment-free two-DOF rehabilitation robot for the shoulder complex.

    PubMed

    Galinski, Daniel; Sapin, Julien; Dehez, Bruno

    2013-06-01

This paper presents the optimal design of an alignment-free exoskeleton for rehabilitation of the shoulder complex. The robot structure consists of two actuated joints and is linked to the arm through passive degrees of freedom (DOFs) to drive the flexion-extension and abduction-adduction movements of the upper arm. The optimal design of this structure is performed in two steps. The first is a multi-objective optimization process aiming to find the best parameters characterizing the robot and its position relative to the patient. The second is a comparison process aiming to select the best solution from the optimization results on the basis of several criteria related to practical considerations. The optimal design process leads to a solution that outperforms an existing design in aspects such as kinematics and ergonomics while being simpler.

  6. A modular assembling platform for manufacturing of microsystems by optical tweezers

    NASA Astrophysics Data System (ADS)

    Ksouri, Sarah Isabelle; Aumann, Andreas; Ghadiri, Reza; Prüfer, Michael; Baer, Sebastian; Ostendorf, Andreas

    2013-09-01

Due to the increased complexity of microsystems in terms of materials and geometries, new assembly techniques are required. Assembly techniques from the semiconductor industry are often very specific and cannot fulfill all specifications of more complex microsystems. Therefore, holographic optical tweezers are applied to manipulate structures in the micrometer range with high flexibility and precision. As is well known, non-spherical assemblies can be trapped and controlled by laser light and assembled with an additional light modulator, where the incident laser beam is rearranged into flexible light patterns in order to generate multiple spots. The complementary building blocks are generated by a two-photon polymerization (2PP) process. The possibility of manufacturing arbitrary microstructures and the potential of optical tweezers lead to the idea of combining manufacturing techniques with manipulation processes into "microrobotic" processes. This work presents the manipulation of generated complex microstructures with optical tools as well as a storage solution for 2PP assemblies. A sample holder has been developed for the manual feeding of 2PP building blocks. Furthermore, a modular assembling platform has been constructed for an `all-in-one' 2PP manufacturing process with a dedicated storage system. The long-term objective is to automate the feeding and storage of several different 2PP micro-assemblies to realize an automated assembly process.

  7. Automatic anatomy recognition using neural network learning of object relationships via virtual landmarks

    NASA Astrophysics Data System (ADS)

    Yan, Fengxia; Udupa, Jayaram K.; Tong, Yubing; Xu, Guoping; Odhner, Dewey; Torigian, Drew A.

    2018-03-01

    The recently developed body-wide Automatic Anatomy Recognition (AAR) methodology depends on fuzzy modeling of individual objects, hierarchically arranging objects, constructing an anatomy ensemble of these models, and a dichotomous object recognition-delineation process. The parent-to-offspring spatial relationship in the object hierarchy is crucial in the AAR method. We have found this relationship to be quite complex, and as such any improvement in capturing this relationship information in the anatomy model will improve the process of recognition itself. Currently, the method encodes this relationship based on the layout of the geometric centers of the objects. Motivated by the concept of virtual landmarks (VLs), this paper presents a new one-shot AAR recognition method that utilizes the VLs to learn object relationships by training a neural network to predict the pose and the VLs of an offspring object given the VLs of the parent object in the hierarchy. We set up two neural networks for each parent-offspring object pair in a body region, one for predicting the VLs and another for predicting the pose parameters. The VL-based learning/prediction method is evaluated on two object hierarchies involving 14 objects. We utilize 54 computed tomography (CT) image data sets of head and neck cancer patients and the associated object contours drawn by dosimetrists for routine radiation therapy treatment planning. The VL neural network method is found to yield more accurate object localization than the currently used simple AAR method.
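A minimal stand-in for the relationship-learning step, predicting an offspring object's pose from the parent's virtual landmarks, can be sketched with a linear least-squares map on synthetic data. The paper trains neural networks; everything below (dimensions, data) is illustrative only:

```python
import numpy as np

# Stand-in sketch of the learning step above: fit a linear map (a minimal
# substitute for the paper's neural network) from parent virtual landmarks
# to the offspring object's pose. All data here are synthetic.

rng = np.random.default_rng(0)
n_scans, n_vl = 54, 6                     # 54 training scans, 6 landmark coords
parent_vls = rng.normal(size=(n_scans, n_vl))
true_w = rng.normal(size=(n_vl, 3))       # maps landmarks -> (x, y, z) pose
pose = parent_vls @ true_w                # noiseless synthetic pose targets

# least-squares fit of the landmark-to-pose map
w_hat, *_ = np.linalg.lstsq(parent_vls, pose, rcond=None)
pred = parent_vls @ w_hat

print(np.allclose(pred, pose))            # True: the linear map is recovered
```

The actual parent-offspring relationship is reported to be quite complex, which is why a nonlinear learner is used in the paper; the sketch only shows the shape of the regression problem.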

  8. Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1985-01-01

    A technique for the recognition of complex three-dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants: algebraic expressions that remain invariant under changes in the objects' orientation and location in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past. In this work we have extended this method to the representation of more complex objects. Two complex objects are represented digitally; their 3-D moment invariants have been calculated, and the invariance of these moment expressions is then verified by changing the orientation and the location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
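    A minimal sketch of the idea: the simplest second-order 3-D moment invariant, the trace of the second central-moment tensor, is unchanged when a point cloud is rotated and translated. This is an illustrative invariant, not necessarily one from the paper's specific set.

```python
import numpy as np

def central_moment(pts, p, q, r):
    # mu_pqr: central moment of a 3-D point cloud about its centroid
    d = pts - pts.mean(axis=0)
    return np.sum(d[:, 0]**p * d[:, 1]**q * d[:, 2]**r)

def j1(pts):
    # Simplest second-order invariant: mu_200 + mu_020 + mu_002,
    # the trace of the second-moment tensor (rotation/translation invariant)
    return (central_moment(pts, 2, 0, 0)
            + central_moment(pts, 0, 2, 0)
            + central_moment(pts, 0, 0, 2))

# Verify invariance: rotate (about z) and translate a random cloud
rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
moved = pts @ R.T + np.array([5.0, -2.0, 3.0])
same = np.isclose(j1(pts), j1(moved))     # True: invariant under the motion
```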

  9. Instructional Strategy: Didactic Media Presentation to Optimize Student Learning

    ERIC Educational Resources Information Center

    Schilling, Jim

    2017-01-01

    Context: Subject matter is presented to athletic training students in the classroom using various modes of media. The specific type of mode and when to use it should be considered to maximize learning effectiveness. Other factors to consider in this process include a student's knowledge base and the complexity of material. Objective: To introduce…

  10. Effects of LifeSkills Training on Medical Students' Performance in Dealing with Complex Clinical Cases

    ERIC Educational Resources Information Center

    Campo, Ana E.; Williams, Virginia; Williams, Redford B.; Segundo, Marisol A.; Lydston, David; Weiss, Stephen M.'

    2008-01-01

    Objective: Sound clinical judgment is the cornerstone of medical practice and begins early during medical education. The authors consider the effect of personality characteristics (hostility, anger, cynicism) on clinical judgment and whether a brief intervention can affect this process. Methods: Two sophomore medical classes (experimental,…

  11. Prioritization of forest restoration projects: Tradeoffs between wildfire protection, ecological restoration and economic objectives

    Treesearch

    Kevin C. Vogler; Alan A. Ager; Michelle A. Day; Michael Jennings; John D. Bailey

    2015-01-01

    The implementation of US federal forest restoration programs on national forests is a complex process that requires balancing diverse socioecological goals with project economics. Despite both the large geographic scope and substantial investments in restoration projects, a quantitative decision support framework to locate optimal project areas and examine...

  12. Modeling of Students' Profile and Learning Chronicle with Data Cubes

    ERIC Educational Resources Information Center

    Ola, Ade G.; Bai, Xue; Omojokun, Emmanuel E.

    2014-01-01

    Over the years, companies have relied on On-Line Analytical Processing (OLAP) to answer complex questions relating to issues in business environments such as identifying profitability, trends, correlations, and patterns. This paper addresses the application of OLAP in education and learning. The objective of the research presented in the paper is…
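    The core OLAP operation alluded to here is the roll-up: aggregating a measure over chosen dimensions of a data cube. A minimal sketch over hypothetical student records (the record fields are assumptions for illustration):

```python
from collections import defaultdict

# Hypothetical student-activity records; "course" and "term" are dimensions,
# "score" is the measure being aggregated
records = [
    {"course": "math", "term": "fall",   "score": 80},
    {"course": "math", "term": "fall",   "score": 90},
    {"course": "math", "term": "spring", "score": 70},
    {"course": "cs",   "term": "fall",   "score": 85},
]

def rollup(records, dims, measure="score"):
    # Group records by the chosen dimensions and take the mean per cube cell
    cube = defaultdict(list)
    for rec in records:
        cube[tuple(rec[d] for d in dims)].append(rec[measure])
    return {cell: sum(vals) / len(vals) for cell, vals in cube.items()}

by_course_term = rollup(records, ["course", "term"])
# e.g. cell ('math', 'fall') holds the mean of 80 and 90, i.e. 85.0
```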

  13. A Game Simulation of Multilateral Trade for Classroom Use.

    ERIC Educational Resources Information Center

    Thompson, Gary L.; Carter, Ronald L.

    An alternative to existing methods for teaching elementary economic geography courses was developed in a game format to teach the basic process of trade through role playing. The game simplifies the complexities of multilateral trade to a few basic decisions and acts; its cognitive objectives are to develop in the student: 1) an understanding of regional…

  14. Spatial patterns of throughfall isotopic composition at the event and seasonal timescales

    Treesearch

    Scott T. Allen; Richard F. Keim; Jeffrey J. McDonnell

    2015-01-01

    Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with the objectives: (1) to quantify the spatial variability...

  15. An Optimization Model for the Allocation of University Based Merit Aid

    ERIC Educational Resources Information Center

    Sugrue, Paul K.

    2010-01-01

    The allocation of merit-based financial aid during the college admissions process presents postsecondary institutions with complex and financially expensive decisions. This article describes the application of linear programming as a decision tool in merit based financial aid decisions at a medium size private university. The objective defined for…

  16. Teaching Supply Chain Management Complexities: A SCOR Model Based Classroom Simulation

    ERIC Educational Resources Information Center

    Webb, G. Scott; Thomas, Stephanie P.; Liao-Troth, Sara

    2014-01-01

    The SCOR (Supply Chain Operations Reference) Model Supply Chain Classroom Simulation is an in-class experiential learning activity that helps students develop a holistic understanding of the processes and challenges of supply chain management. The simulation has broader learning objectives than other supply chain related activities such as the…

  17. Simulating tracer transport in variably saturated soils and shallow groundwater

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to develop a realistic model to simulate the complex processes of flow and tracer transport in variably saturated soils and to compare simulation results with the detailed monitoring observations. The USDA-ARS OPE3 field site was selected for the case study due to ava...

  18. Linear Multimedia Benefits To Enhance Students' Ability To Comprehend Complex Subjects.

    ERIC Educational Resources Information Center

    Handal, Gilbert A.; Leiner, Marie A.; Gonzalez, Carlos; Rogel, Erika

    The main objective of this program was to produce animated educational material to stimulate students' interest and learning process related to the sciences and to measure their impact. The program material was designed to support middle school educators with an effective, accessible, and novel didactic tool produced specifically to enhance and…

  19. Considering Materiality in Educational Policy: Messy Objects and Multiple Reals

    ERIC Educational Resources Information Center

    Fenwick, Tara; Edwards, Richard

    2011-01-01

    Educational analysts need new ways to engage with policy processes in a networked world of complex transnational connections. In this discussion, Tara Fenwick and Richard Edwards argue for a greater focus on materiality in educational policy as a way to trace the heterogeneous interactions and precarious linkages that enact policy as complex…

  20. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    NASA Astrophysics Data System (ADS)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time-consuming task because BOs vary widely in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered for minimizing the processing time during synchronization. The prediction of the parsing time for BOs is a significant property for the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation, incorporating four different XML parsers, reveal the dependencies between the distribution of elements and the parsing time. Finally, a general cost model is validated and simplified according to the results of the experimental setup.
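    The dependency the paper measures can be illustrated with a toy experiment (a sketch, not the paper's setup): generate flat XML "business objects" with different element counts and time a standard parser on them.

```python
import time
import xml.etree.ElementTree as ET

def make_bo(n_items):
    # Hypothetical flat business-object document with n_items child elements
    body = "".join(f"<item id='{i}'>v{i}</item>" for i in range(n_items))
    return f"<bo>{body}</bo>"

def mean_parse_time(xml_text, repeats=10):
    # Average wall-clock time to parse the document string
    t0 = time.perf_counter()
    for _ in range(repeats):
        ET.fromstring(xml_text)
    return (time.perf_counter() - t0) / repeats

small, large = make_bo(100), make_bo(20000)
# A document with ~200x more elements takes measurably longer to parse
slower = mean_parse_time(large) > mean_parse_time(small)
```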

  1. [Parametabolism as Non-Specific Modifier of Supramolecular Interactions in Living Systems].

    PubMed

    Kozlov, V A; Sapozhnikov, S P; Sheptuhina, A I; Golenkov, A V

    2015-01-01

    As has recently become known, in addition to enzymatic reactions (catalyzed by enzymes and/or ribozymes), a large number of ordinary chemical reactions occur in living organisms without the participation of biological catalysts. These reactions are distinguished by low speed and, as a rule, irreversibility. For example, in diabetes mellitus, glycation and fructosylation of proteins are observed, resulting in posttranslational modification and the formation of low-functioning or nonfunctioning proteins that are poorly exposed to enzymatic proteolysis and therefore accumulate in the body. Other known processes of this kind include the nonenzymatic carbamoylation, pyridoxylation and thiamination of proteins. There is reasonable basis to believe that alcoholic injury is also realized through the parametabolic synthesis of secondary metabolites such as acetaldehyde. At the same time, progress in supramolecular chemistry proves that in biological objects there is another large group of parametabolic reactions caused by the formation of supramolecular complexes. Evidently, the known parametabolic interactions can modify the formation of supramolecular complexes in living objects. These processes are of considerable interest for fundamental biology and for fundamental and practical medicine, but they remain unexplored due to a lack of awareness among a wide range of researchers.

  2. Development studies of a novel wet oxidation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, T.W.; Dhooge, P.M.

    1995-10-01

    Many DOE waste streams and remediation wastes contain complex and variable mixtures of organic compounds, toxic metals, and radionuclides. These materials are often dispersed in organic or inorganic matrices, such as personal protective equipment, various sludges, soils, and water. Incineration and similar combustive processes do not appear to be viable options for treatment of these waste streams due to various considerations. The objective of this project is to develop a novel catalytic wet oxidation process for the treatment of multi-component wastes. The DETOX process uses a unique combination of metal catalysts to increase the rate of oxidation of organic materials.

  3. Building quality into medical product software design.

    PubMed

    Mallory, S R

    1993-01-01

    The software engineering and quality assurance disciplines are a requisite to the design of safe and effective software-based medical devices. It is in the areas of software methodology and process that the most beneficial application of these disciplines to software development can be made. Software is a product of complex operations and methodologies and is not amenable to the traditional electromechanical quality assurance processes. Software quality must be built in by the developers, with the software verification and validation engineers acting as the independent instruments for ensuring compliance with performance objectives and with development and maintenance standards. The implementation of a software quality assurance program is a complex process involving management support, organizational changes, and new skill sets, but the benefits are profound. Its rewards provide safe, reliable, cost-effective, maintainable, and manageable software, which may significantly speed the regulatory review process and therefore potentially shorten the overall time to market. The use of a trial project can greatly facilitate the learning process associated with the first-time application of a software quality assurance program.

  4. Three-Dimensional Computer Simulation as an Important Competence Based Aspect of a Modern Mining Professional

    NASA Astrophysics Data System (ADS)

    Aksenova, Olesya; Pachkina, Anna

    2017-11-01

    The article addresses the need to transform the educational process to meet the requirements of the modern mining industry, including the cooperative development of new educational programs and their implementation with modern manufacturing technology in mind. The paper argues for introducing into the training of mining professionals the study of three-dimensional models of the surface technological complex, ore reserves and underground workings, as well as the creation of these models in different graphic editors and work with the information-analysis models obtained from them. As a case study, the paper covers the technological process of manless coal mining at the Polysaevskaya mine, which is controlled by information-analysis models built from three-dimensional models of individual objects and of the technological process as a whole, and which requires staff able to use three-dimensional positioning programs in the global frame of reference of miners and equipment.

  5. RF tomography of metallic objects in free space: preliminary results

    NASA Astrophysics Data System (ADS)

    Li, Jia; Ewing, Robert L.; Berdanier, Charles; Baker, Christopher

    2015-05-01

    RF tomography has great potential in defense and homeland security applications. A distributed sensing research facility is under development at the Air Force Research Lab. To develop an RF tomographic imaging system for the facility, preliminary experiments have been performed in an indoor range with 12 radar sensors distributed on a circle of 3 m radius. Ultra-wideband pulses are used to illuminate single and multiple metallic targets. The echoes received by the distributed sensors were processed and combined for tomography reconstruction. The traditional matched filter algorithm and a truncated singular value decomposition (SVD) algorithm are compared in terms of their complexity, accuracy, and suitability for distributed processing. A new algorithm is proposed for shape reconstruction, which jointly estimates the object boundary and scatter points on the waveform's propagation path. The results show that the new algorithm allows accurate reconstruction of object shape, which is not available through the matched filter and truncated SVD algorithms.
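    The truncated SVD mentioned above regularizes an ill-posed inversion by discarding small singular values that would amplify measurement noise. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def tsvd_solve(A, y, k):
    # Truncated-SVD pseudo-inverse: invert only the k largest singular
    # values; the small ones are dropped to stabilize the reconstruction
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ y))

# Sanity check: on a clean, full-rank system, keeping all singular values
# recovers the exact solution
rng = np.random.default_rng(2)
A = rng.normal(size=(12, 6))      # stand-in measurement operator
x_true = rng.normal(size=6)       # stand-in scene
x_hat = tsvd_solve(A, A @ x_true, k=6)
exact = np.allclose(x_hat, x_true)
```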

  6. Clinical simulation as a boundary object in design of health IT-systems.

    PubMed

    Rasmussen, Stine Loft; Jensen, Sanne; Lyng, Karen Marie

    2013-01-01

    Healthcare organizations are very complex, holding numerous stakeholders with various approaches and goals towards the design of health IT-systems. Some of these differences may be approached by applying the concept of boundary objects in a participatory IT-design process. Traditionally clinical simulation provides the opportunity to evaluate the design and the usage of clinical IT-systems without endangering the patients and interrupting clinical work. In this paper we present how clinical simulation additionally holds the potential to function as a boundary object in the design process. The case points out that clinical simulation provides an opportunity for discussions and mutual learning among the various stakeholders involved in design of standardized electronic clinical documentation templates. The paper presents and discusses the use of clinical simulation in the translation, transfer and transformation of knowledge between various stakeholders in a large healthcare organization.

  7. Object-oriented software design in semiautomatic building extraction

    NASA Astrophysics Data System (ADS)

    Guelch, Eberhard; Mueller, Hardo

    1997-08-01

    Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only to the data but also to the software involved. We use the Unified Modeling Language (UML) to describe the object-oriented modeling of the system at different levels of detail. We can distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for the programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.
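    A minimal sketch of the model-view-controller structure the system is based on (class and method names here are hypothetical, not the authors' API): the controller translates user actions into model updates, and the model notifies registered views.

```python
# Hedged MVC sketch: illustrative names, not the building-extraction system's
# actual classes.
class BuildingModel:
    def __init__(self):
        self.buildings, self.observers = [], []
    def add(self, name):
        self.buildings.append(name)
        for obs in self.observers:            # notify registered views
            obs.update(self)

class TextView:
    def __init__(self):
        self.rendered = ""
    def update(self, model):                  # re-render on model change
        self.rendered = f"{len(model.buildings)} building(s) acquired"

class AcquisitionController:
    def __init__(self, model, view):
        self.model = model
        model.observers.append(view)
    def extract(self, name):                  # user action -> model update
        self.model.add(name)

model, view = BuildingModel(), TextView()
AcquisitionController(model, view).extract("house_01")
# view.rendered is now "1 building(s) acquired"
```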

  8. Multi-Objective Hybrid Optimal Control for Interplanetary Mission Planning

    NASA Technical Reports Server (NTRS)

    Englander, Jacob; Vavrina, Matthew; Ghosh, Alexander

    2015-01-01

    Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed and, in some cases, the final destination. In addition, a time-history of control variables must be chosen which defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather in the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required; an automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on a hypothetical mission to the main asteroid belt.
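    Exploring the trade space between objectives amounts to finding the Pareto front, the set of non-dominated solutions. A minimal filter for minimization problems (a sketch; the objective values are hypothetical):

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep only solutions not dominated by any other
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Hypothetical (flight time, propellant used) pairs for candidate trajectories
candidates = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
front = pareto_front(candidates)
# (2, 6) is dominated by (1, 5); (4, 4) by (3, 3); the rest form the front
```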

  9. Nuclear Criticality Safety Data Book

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollenbach, D. F.

    The objective of this document is to support the revision of criticality safety process studies (CSPSs) for the Uranium Processing Facility (UPF) at the Y-12 National Security Complex (Y-12). This design analysis and calculation (DAC) document contains development and justification for generic inputs typically used in Nuclear Criticality Safety (NCS) DACs to model both normal and abnormal conditions of processes at UPF to support CSPSs. This will provide consistency between NCS DACs and efficiency in preparation and review of DACs, as frequently used data are provided in one reference source.

  10. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2], requiring user input, have been developed to quickly segment objects in serial sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters to optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray scale range of the object of interest. The lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray scale values, while observing the corresponding changes to the image.
This interactive approach gives the user the power to make optimal choices in the contrast enhancement parameters.
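    The two contrast operations the GUI exposes, histogram stretching over a user-chosen gray range followed by gamma correction, can be sketched as follows (assuming NumPy; the values are illustrative):

```python
import numpy as np

def stretch_and_gamma(img, low, high, gamma=1.0):
    # Map the user-chosen gray range [low, high] to [0, 1] (histogram
    # stretching, clipping values outside the range), then apply gamma
    # correction for a non-linear redistribution of gray values
    out = np.clip((img.astype(float) - low) / (high - low), 0.0, 1.0)
    return out ** gamma

img = np.array([[50, 100, 150, 200]])
out = stretch_and_gamma(img, low=100, high=200)        # row -> 0, 0, 0.5, 1
out_g = stretch_and_gamma(img, 100, 200, gamma=2.0)    # midtone 0.5 -> 0.25
```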

  11. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.
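    A minimal sketch of component labeling with the selectable 4- or 8-neighbor connectivity noted in feature (3). This is a simple BFS flood fill for illustration, not the paper's single-pass contour-tracing algorithm:

```python
from collections import deque

def label_components(grid, connectivity=8):
    # grid: 2-D list of 0/1 pixels; returns (label image, component count)
    offs4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    offs = offs4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)] if connectivity == 8 else offs4
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not labels[y][x]:
                current += 1                      # start a new component
                labels[y][x] = current
                q = deque([(y, x)])
                while q:                          # BFS over connected pixels
                    cy, cx = q.popleft()
                    for dy, dx in offs:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

# Two diagonal pixels: one component under 8-connectivity, two under 4
diag = [[1, 0], [0, 1]]
n8 = label_components(diag, 8)[1]   # 1
n4 = label_components(diag, 4)[1]   # 2
```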

  12. Visuo-spatial orienting during active exploratory behavior: Processing of task-related and stimulus-related signals.

    PubMed

    Macaluso, Emiliano; Ogawa, Akitoshi

    2018-05-01

    Functional imaging studies have associated dorsal and ventral fronto-parietal regions with the control of visuo-spatial attention. Previous studies demonstrated that the activity of both the dorsal and the ventral attention systems can be modulated by many different factors, related both to the stimuli and to the task. However, the vast majority of this work utilized stereotyped paradigms with simple and repeated stimuli. This is at odds with real-life situations, which instead involve complex combinations of different types of co-occurring signals, thus raising the question of the ecological significance of the previous findings. Here we investigated how the brain responds to task-related and stimulus-related signals using an innovative approach that involved active exploration of a virtual environment. This enabled us to study visuo-spatial orienting in conditions entailing a dynamic and coherent flow of visual signals, to some extent analogous to real-life situations. The environment comprised colored/textured spheres and cubes, which allowed us to implement a standard feature-conjunction search task (task-related signals), and included one physically salient object that served to track the processing of stimulus-related signals. The imaging analyses showed that the posterior parietal cortex (PPC) activated when the participants' gaze was directed towards the salient object. By contrast, the right inferior parietal cortex was associated with the processing of the target objects and of distractors that shared the target color and shape, consistent with goal-directed template-matching operations. The study highlights the possibility of combining measures of gaze orienting and functional imaging to investigate the processing of different types of signals during active behavior in complex environments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed

    O'Neill, M A; Hilgetag, C C

    2001-08-29

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.


  15. Observations on Complexity and Costs for Over Three Decades of Communications Satellites

    NASA Astrophysics Data System (ADS)

    Bearden, David A.

    2002-01-01

    This paper takes an objective look at approximately thirty communications satellites built over three decades, using a complexity index as an economic model. The complexity index is derived from a number of technical parameters including dry mass, end-of-life power, payload type, communication bands, spacecraft lifetime, and attitude control approach. Complexity is then plotted versus total satellite cost and development time (defined as contract start to first launch). A comparison of the relative cost and development time for various classes of communications satellites and conclusions regarding dependence on system complexity are presented. Observations regarding inherent differences between commercially acquired systems and those procured by government organizations are also presented. A process is described where a new communications system in the formative stage may be compared against similarly "complex" missions of the recent past to balance risk within allotted time and funds.
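    One simple way to form such a complexity index is to normalize each technical parameter against the fleet's minimum and maximum and average the results. The sketch below uses that approach; the parameter names and the equal weighting are assumptions for illustration, not the paper's actual model.

```python
def complexity_index(sat, fleet, params):
    # Hedged sketch: normalize each parameter to [0, 1] against the fleet's
    # min/max, then average the normalized scores (equal weights assumed)
    scores = []
    for p in params:
        vals = [s[p] for s in fleet]
        lo, hi = min(vals), max(vals)
        scores.append((sat[p] - lo) / (hi - lo) if hi > lo else 0.0)
    return sum(scores) / len(scores)

# Hypothetical three-satellite fleet
fleet = [
    {"dry_mass_kg": 500,  "eol_power_w": 1000, "life_yr": 5},
    {"dry_mass_kg": 1500, "eol_power_w": 5000, "life_yr": 10},
    {"dry_mass_kg": 2500, "eol_power_w": 9000, "life_yr": 15},
]
params = ["dry_mass_kg", "eol_power_w", "life_yr"]
mid_index = complexity_index(fleet[1], fleet, params)   # 0.5: mid-fleet on every axis
```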

  16. Executive Overview of SEI MOSAIC: Managing for Success Using a Risk-Based Approach

    DTIC Science & Technology

    2007-03-01

    and provides the lens through which all potential outcomes are viewed and interpreted. Defining the context is thus an essential first step when...Success Analysis and Improvement Criteria (SEI MOSAIC)—a suite of advanced analysis methods for assessing complex, distributed programs, processes...achieve that set of objectives, four activities must be executed in the order shown, while also adhering to any cost and schedule constraints. Process

  17. Human Centered Computing for Mars Exploration

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2005-01-01

    The science objectives are to determine the aqueous, climatic, and geologic history of a site on Mars where conditions may have been favorable to the preservation of evidence of prebiotic or biotic processes. Human Centered Computing is a development process that starts with users and their needs, rather than with technology. The goal is a system design that serves the user, where the technology fits the task and the complexity is that of the task not of the tool.

  18. Weaving meanings from the deliberative process of collegiate management in nursing

    PubMed Central

    Higashi, Giovana Dorneles Callegaro; Erdmann, Alacoque Lorenzini

    2014-01-01

    Objective: to understand the meanings of the collegiate deliberations attributed by its members on an undergraduate nursing course. Method: Grounded Theory, with interviews held with 30 participants, making up 4 sample groups, between January and June 2012, in a public higher education institution. Results: 5 categories emerged, indicating the phenomenon and weaving the paradigmatic model: Understanding the experience of the complex relationships and interactions in the deliberations of collegiate management in nursing: intertwining divergences, convergences, dialogs, collectivities and diversities. This deliberative process presents various meanings involving discussion, and divergent, convergent and complementary positions, through dialog, commitment and negotiation. Conclusion: the deliberations in the collegiate of nursing, intertwining dialogs, collectivities and diversities, mold the complex relational fabrics. PMID:26107835

  19. Simplified power processing for ion-thruster subsystems

    NASA Technical Reports Server (NTRS)

    Wessel, F. J.; Hancock, D. J.

    1983-01-01

    Compared to chemical propulsion, ion propulsion offers distinct payload-mass increases for many future low-thrust earth-orbital and deep-space missions. Despite this advantage, the high initial cost and complexity of ion-propulsion subsystems reduce their attractiveness for most present and near-term spacecraft missions. Investigations have therefore been conducted with the objective of simplifying the power-processing unit (PPU), which is the single most complex and expensive component in the thruster subsystem. The present investigation is concerned with a program to simplify the design of the PPU employed in an 8-cm mercury-ion-thruster subsystem. In this program a dramatic simplification in the design of the PPU could be achieved, while retaining essential thruster control and subsystem operational flexibility.

  20. Markov decision processes in natural resources management: observability and uncertainty

    USGS Publications Warehouse

    Williams, Byron K.

    2015-01-01

    The breadth and complexity of stochastic decision processes in natural resources present a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.
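For readers unfamiliar with Markov decision processes, a minimal value-iteration sketch for a fully observable MDP is given below. The two-state habitat example and its numbers are hypothetical, not taken from the paper:

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """Solve a fully observable MDP by value iteration.
    P[s][a] maps next_state -> probability; R[s][a] is the expected reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy conservation example: a habitat is 'good' or 'degraded'; 'restore'
# costs effort but raises the chance of reaching or keeping the good state.
states = ["good", "degraded"]
actions = ["wait", "restore"]
P = {"good":     {"wait":    {"good": 0.70, "degraded": 0.30},
                  "restore": {"good": 0.95, "degraded": 0.05}},
     "degraded": {"wait":    {"good": 0.05, "degraded": 0.95},
                  "restore": {"good": 0.60, "degraded": 0.40}}}
R = {"good":     {"wait": 10.0, "restore": 8.0},
     "degraded": {"wait":  0.0, "restore": -2.0}}
V = value_iteration(states, actions, P, R)
```

Partial observability, as discussed in the paper, replaces the known state `s` with a belief distribution and is substantially harder to solve.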

  1. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr

    2016-03-01

    The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.

  2. Cation exchange concentration of the Americium product from TRUEX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barney, G.S.; Cooper, T.D.; Fisher, F.D.

    1991-06-01

    A transuranic extraction (TRUEX) process has been developed to separate and recover plutonium, americium, and other transuranic (TRU) elements from acid wastes. The main objective of the process is to reduce the effluent to below the TRU limit for actinide concentrations (<100 nCi/g of material) so it can be disposed of inexpensively. The process yields a dilute nitric acid stream containing low concentrations of the extracted americium product. This solution also contains residual plutonium and trace amounts of iron. The americium will be absorbed onto a cation exchange resin bed to concentrate it for disposal or for future use. The overall objective of these laboratory tests was to determine the performance of the cation exchange process under expected conditions of the TRUEX process. Effects of acid, iron, and americium concentrations on americium absorption on the resin were determined. Distribution coefficients for americium absorption from acid solutions on the resin were measured using batch equilibrations. Batch equilibrations were also used to measure americium absorption in the presence of complexants. These data will be used to identify complexants and solution conditions that can be used to elute the americium from the columns. The rate of absorption was measured by passing solutions containing americium through small columns of resin, varying the flowrates, and measuring the concentrations of americium in the effluent. The rate data will be used to estimate the minimum bed size of the columns required to concentrate the americium product. 11 refs., 10 figs., 2 tabs.

  3. Cortical systems mediating visual attention to both objects and spatial locations

    PubMed Central

    Shomstein, Sarah; Behrmann, Marlene

    2006-01-01

    Natural visual scenes consist of many objects occupying a variety of spatial locations. Given that the plethora of information cannot be processed simultaneously, the multiplicity of inputs compete for representation. Using event-related functional MRI, we show that attention, the mechanism by which a subset of the input is selected, is mediated by the posterior parietal cortex (PPC). Of particular interest is that PPC activity is differentially sensitive to the object-based properties of the input, with enhanced activation for those locations bound by an attended object. Of great interest too is the ensuing modulation of activation in early cortical regions, reflected as differences in the temporal profile of the blood oxygenation level-dependent (BOLD) response for within-object versus between-object locations. These findings indicate that object-based selection results from an object-sensitive reorienting signal issued by the PPC. The dynamic circuit between the PPC and earlier sensory regions then enables observers to attend preferentially to objects of interest in complex scenes. PMID:16840559

  4. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object

    PubMed Central

    A. Smith, Nicholas; A. Folland, Nicholas; Martinez, Diana M.; Trainor, Laurel J.

    2017-01-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
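The harmonicity cue described above can be sketched numerically. The sketch below (with illustrative values; it does not reproduce the study's exact stimuli) builds a complex tone whose harmonics sit at integer multiples of the fundamental, then shifts one harmonic upward by 8%:

```python
import math

def complex_tone(f0, n_harmonics, mistuned=None, shift=0.08,
                 sr=44100, dur=0.25):
    """Return (samples, harmonic_frequencies) for a complex tone with
    harmonics at integer multiples of f0; optionally mistune one harmonic
    upward by `shift` (8%, as in the study described above)."""
    freqs = []
    for h in range(1, n_harmonics + 1):
        f = f0 * h
        if h == mistuned:
            f *= 1.0 + shift
        freqs.append(f)
    n = int(sr * dur)
    samples = [sum(math.sin(2 * math.pi * f * t / sr) for f in freqs)
               for t in range(n)]
    return samples, freqs

_, in_tune_freqs = complex_tone(200, 10)               # 200, 400, ..., 2000 Hz
_, mistuned_freqs = complex_tone(200, 10, mistuned=3)  # 3rd harmonic at 648 Hz
```

The mistuned component no longer fits the harmonic series of the fundamental, which is the cue adults, and evidently 4-month-olds, use to hear it as a separate auditory object.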

  5. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating that no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive, and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides an interactive, goodness-of-fit-metric-based framework for identification of a small (typically fewer than 10), meaningful, and diverse subset of calibration alternatives from the numerous alternatives obtained in Stage 1.
Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to the calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
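The selection problem in Stages 2 and 3 starts from a set of non-dominated calibration alternatives. A minimal sketch of Pareto-front filtering (toy objective values, all minimized; this is not GOMORS itself) could look like:

```python
def pareto_front(alternatives):
    """Return the names of non-dominated calibration alternatives.
    Each alternative is (name, objectives); every objective is minimized.
    An alternative is dominated if another is no worse in all objectives
    and strictly better in at least one."""
    front = []
    for name, obj in alternatives:
        dominated = any(all(o2 <= o1 for o1, o2 in zip(obj, obj2)) and
                        any(o2 < o1 for o1, o2 in zip(obj, obj2))
                        for _, obj2 in alternatives)
        if not dominated:
            front.append(name)
    return front

# Hypothetical parameterizations with two error metrics to minimize:
alts = [("p1", (0.2, 0.9)), ("p2", (0.5, 0.5)),
        ("p3", (0.9, 0.2)), ("p4", (0.6, 0.6))]   # p4 is dominated by p2
```

The interactive analytics described in the abstract then help an expert pick a single parameterization from this front.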

  6. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.

    PubMed Central

    Malach, R; Reppas, J B; Benson, R R; Kwong, K K; Jiang, H; Kennedy, W A; Ledden, P J; Brady, T J; Rosen, B R; Tootell, R B

    1995-01-01

    The stages of integration leading from local feature analysis to object recognition were explored in human visual cortex by using the technique of functional magnetic resonance imaging. Here we report evidence for object-related activation. Such activation was located at the lateral-posterior aspect of the occipital lobe, just abutting the posterior aspect of the motion-sensitive area MT/V5, in a region termed the lateral occipital complex (LO). LO showed preferential activation to images of objects, compared to a wide range of texture patterns. This activation was not caused by a global difference in the Fourier spatial frequency content of objects versus texture images, since object images produced enhanced LO activation compared to textures matched in power spectra but randomized in phase. The preferential activation to objects also could not be explained by different patterns of eye movements: similar levels of activation were observed when subjects fixated on the objects and when they scanned the objects with their eyes. Additional manipulations such as spatial frequency filtering and a 4-fold change in visual size did not affect LO activation. These results suggest that the enhanced responses to objects were not a manifestation of low-level visual processing. A striking demonstration that activity in LO is uniquely correlated to object detectability was produced by the "Lincoln" illusion, in which blurring of objects digitized into large blocks paradoxically increases their recognizability. Such blurring led to significant enhancement of LO activation. Despite the preferential activation to objects, LO did not seem to be involved in the final, "semantic," stages of the recognition process. Thus, objects varying widely in their recognizability (e.g., famous faces, common objects, and unfamiliar three-dimensional abstract sculptures) activated it to a similar degree. 
These results are thus evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex. PMID:7667258

  7. Three-dimensional (3D) printing and its applications for aortic diseases.

    PubMed

    Hangge, Patrick; Pershad, Yash; Witting, Avery A; Albadawi, Hassan; Oklu, Rahmi

    2018-04-01

    Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases.

  8. Sorting of Streptomyces Cell Pellets Using a Complex Object Parametric Analyzer and Sorter

    PubMed Central

    Petrus, Marloes L. C.; van Veluw, G. Jerre; Wösten, Han A. B.; Claessen, Dennis

    2014-01-01

    Streptomycetes are filamentous soil bacteria that are used in industry for the production of enzymes and antibiotics. When grown in bioreactors, these organisms form networks of interconnected hyphae, known as pellets, which are heterogeneous in size. Here we describe a method to analyze and sort mycelial pellets using a Complex Object Parametric Analyzer and Sorter (COPAS). Detailed instructions are given for the use of the instrument and the basic statistical analysis of the data. We furthermore describe how pellets can be sorted according to user-defined settings, which enables downstream processing such as the analysis of the RNA or protein content. Using this methodology the mechanism underlying heterogeneous growth can be tackled. This will be instrumental for improving streptomycetes as a cell factory, considering the fact that productivity correlates with pellet size. PMID:24561666

  9. Methods for preparation of three-dimensional bodies

    DOEpatents

    Mulligan, Anthony C.; Rigali, Mark J.; Sutaria, Manish P.; Artz, Gregory J.; Gafner, Felix H.; Vaidyanathan, K. Ranji

    2004-09-28

    Processes for mechanically fabricating two and three-dimensional fibrous monolith composites include preparing a fibrous monolith filament from a core composition of a first powder material and a boundary material of a second powder material. The filament includes a first portion of the core composition surrounded by a second portion of the boundary composition. One or more filaments are extruded through a mechanically-controlled deposition nozzle onto a working surface to create a fibrous monolith composite object. The objects may be formed directly from computer models and have complex geometries.

  10. Methods for preparation of three-dimensional bodies

    DOEpatents

    Mulligan, Anthony C [Tucson, AZ; Rigali, Mark J [Carlsbad, NM; Sutaria, Manish P [Malden, MA; Artz, Gregory J [Tucson, AZ; Gafner, Felix H [Tucson, AZ; Vaidyanathan, K Ranji [Tucson, AZ

    2008-06-17

    Processes for mechanically fabricating two and three-dimensional fibrous monolith composites include preparing a fibrous monolith filament from a core composition of a first powder material and a boundary material of a second powder material. The filament includes a first portion of the core composition surrounded by a second portion of the boundary composition. One or more filaments are extruded through a mechanically-controlled deposition nozzle onto a working surface to create a fibrous monolith composite object. The objects may be formed directly from computer models and have complex geometries.

  11. Using the SWAT model to improve process descriptions and define hydrologic partitioning in South Korea

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.

    2014-02-01

    Watershed-scale modeling can be a valuable tool to aid in quantification of water quality and yield; however, several challenges remain. In many watersheds, it is difficult to adequately quantify hydrologic partitioning. Data scarcity is prevalent, accuracy of spatially distributed meteorology is difficult to quantify, forest encroachment and land use issues are common, and surface water and groundwater abstractions substantially modify watershed-based processes. Our objective is to assess the capability of the Soil and Water Assessment Tool (SWAT) model to capture event-based and long-term monsoonal rainfall-runoff processes in complex mountainous terrain. To accomplish this, we developed a unique quality-control, gap-filling algorithm for interpolation of high-frequency meteorological data. We used a novel multi-location, multi-optimization calibration technique to improve estimations of catchment-wide hydrologic partitioning. The interdisciplinary model was calibrated to a unique combination of statistical, hydrologic, and plant growth metrics. Our results indicate scale-dependent sensitivity of hydrologic partitioning and substantial influence of engineered features. The addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. While this study shows the challenges of applying the SWAT model to complex terrain and extreme environments, it also shows that by incorporating anthropogenic features into modeling scenarios, we can enhance our understanding of the hydroecological impact.

  12. Atmospheric processes over complex terrain

    NASA Astrophysics Data System (ADS)

    Banta, Robert M.; Berri, G.; Blumen, William; Carruthers, David J.; Dalu, G. A.; Durran, Dale R.; Egger, Joseph; Garratt, J. R.; Hanna, Steven R.; Hunt, J. C. R.

    1990-06-01

    A workshop on atmospheric processes over complex terrain, sponsored by the American Meteorological Society, was convened in Park City, Utah from 24 to 28 October 1988. The overall objective of the workshop was one of interaction and synthesis--interaction among atmospheric scientists carrying out research on a variety of orographic flow problems, and a synthesis of their results and points of view into an assessment of the current status of topical research problems. The final day of the workshop was devoted to an open discussion on the research directions that could be anticipated in the next decade because of new and planned instrumentation and observational networks, the recent emphasis on development of mesoscale numerical models, and continual theoretical investigations of thermally forced flows, orographic waves, and stratified turbulence. This monograph represents an outgrowth of the Park City Workshop. The authors have contributed chapters based on their lecture material. Workshop discussions indicated interest in both the remote sensing and predictability of orographic flows. These chapters were solicited following the workshop in order to provide a more balanced view of current progress and future directions in research on atmospheric processes over complex terrain.

  13. Objects and categories: feature statistics and object processing in the ventral stream.

    PubMed

    Tyler, Lorraine K; Chiu, Shannon; Zhuang, Jie; Randall, Billi; Devereux, Barry J; Wright, Paul; Clarke, Alex; Taylor, Kirsten I

    2013-10-01

    Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly more complex visual features being coded from posterior to anterior sites culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions-the fusiform gyrus and aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning. 
After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in fusiform gyri and category deficits in aMTL into a unifying neurocognitive framework.

  14. Gaze control for an active camera system by modeling human pursuit eye movements

    NASA Astrophysics Data System (ADS)

    Toelg, Sebastian

    1992-11-01

    The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., those moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.

  15. Sensitivity of Precipitation in Coupled Land-Atmosphere Models

    NASA Technical Reports Server (NTRS)

    Neelin, David; Zeng, N.; Suarez, M.; Koster, R.

    2004-01-01

    The project objective was to understand mechanisms by which atmosphere-land-ocean processes impact precipitation in the mean climate and interannual variations, focusing on tropical and subtropical regions. A combination of modeling tools was used: an intermediate complexity land-atmosphere model developed at UCLA known as the QTCM and the NASA Seasonal-to-Interannual Prediction Program general circulation model (NSIPP GCM). The intermediate complexity model was used to develop hypotheses regarding the physical mechanisms and theory for the interplay of large-scale dynamics, convective heating, cloud radiative effects and land surface feedbacks. The theoretical developments were to be confronted with diagnostics from the more complex GCM to validate or modify the theory.

  16. Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm

    NASA Astrophysics Data System (ADS)

    Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.

    2018-03-01

    Prime numbers have long attracted researchers because of their complexity. Many algorithms, ranging from simple to computationally complex, can be used to generate prime numbers; the Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can generate primes from randomly generated or sequential numbers. The testing in this study aims to determine which algorithm performs better for large primes in terms of time complexity. The tests were supported by an application written in Java, with code optimization and maximum memory usage configured so that the testing process could run concurrently and the results obtained could be objective.
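A minimal sketch of the two sieves compared in the study (written in Python here rather than the Java used by the authors):

```python
def sieve_eratosthenes(n):
    """Classic sieve: cross out multiples of each prime up to sqrt(n)."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [i for i, ok in enumerate(is_prime) if ok]

def sieve_sundaram(n):
    """Sundaram marks integers of the form i + j + 2*i*j (1 <= i <= j);
    each unmarked m then yields the odd prime 2*m + 1. The prime 2 is
    added separately."""
    k = (n - 1) // 2
    marked = [False] * (k + 1)
    for i in range(1, k + 1):
        j = i
        while i + j + 2 * i * j <= k:
            marked[i + j + 2 * i * j] = True
            j += 1
    return ([2] if n >= 2 else []) + \
           [2 * m + 1 for m in range(1, k + 1) if not marked[m]]
```

Both return identical prime lists; the comparison in the paper concerns how their running time scales as the upper limit grows.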

  17. Outline of a new approach to the analysis of complex systems and decision processes.

    NASA Technical Reports Server (NTRS)

    Zadeh, L. A.

    1973-01-01

    Development of a conceptual framework for dealing with systems which are too complex or too ill-defined to admit of precise quantitative analysis. The approach outlined is based on the premise that the key elements in human thinking are not numbers, but labels of fuzzy sets - i.e., classes of objects in which the transition from membership to nonmembership is gradual rather than abrupt. The approach in question has three main distinguishing features - namely, the use of so-called 'linguistic' variables in place of or in addition to numerical variables, the characterization of simple relations between variables by conditional fuzzy statements, and the characterization of complex relations by fuzzy algorithms.
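Zadeh's linguistic variables rest on graded membership rather than sharp class boundaries. A minimal sketch follows; the "temperature" labels and their breakpoints are hypothetical, chosen only to illustrate the gradual transition:

```python
def trapezoid(a, b, c, d):
    """Membership function of a fuzzy set: ramps up on [a, b], equals 1
    on [b, c], ramps down on [c, d], and is 0 outside [a, d]."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)
    return mu

# Hypothetical linguistic variable "temperature" with two fuzzy labels:
cool = trapezoid(5, 10, 18, 22)
warm = trapezoid(18, 22, 28, 32)

# The transition from membership to nonmembership is gradual:
# 20 degrees is partly 'cool' and partly 'warm' at the same time.
```

Conditional fuzzy statements ("if temperature is warm then ...") then combine such memberships instead of crisp truth values.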

  18. On Complex Networks Representation and Computation of Hydrologycal Quantities

    NASA Astrophysics Data System (ADS)

    Serafin, F.; Bancheri, M.; David, O.; Rigon, R.

    2017-12-01

    Water is our blue gold. Although results of discovery-based science keep warning public opinion about the looming worldwide water crisis, water is still treated as a resource not worth caring for. Could a different multi-scale perspective affect environmental decision-making more deeply? Could pairing it with a new graphical representation of process interactions sway decision-making, and consequently public opinion, more effectively? This abstract introduces a complex-networks-driven way to represent catchment eco-hydrology and the related flexible informatics to manage it. The representation is built upon mathematical category theory. A category is an algebraic structure that comprises "objects" linked by "arrows". It is an evolution of Petri Nets called Time Continuous Petri Nets (TCPN). It aims to display (water) budget processes and catchment interactions using an explicative and self-contained symbolism. The result improves the readability of physical processes compared to current descriptions. The IT perspective hinges on the Object Modeling System (OMS) v3, a non-invasive, flexible environmental modeling framework designed to support component-based model development. The implementation of a Directed Acyclic Graph (DAG) data structure, named Net3, has recently enhanced its flexibility. Net3 represents interacting systems as complex networks: vertices match up with any sort of time-evolving quantity; edges correspond to their data (flux) interchanges. It currently hosts JGrass-NewAge components, and those implementing travel time analysis of fluxes. Further bio-physical or management-oriented components can be easily added. This talk introduces both the graphical representation and the related informatics through actual applications and examples.
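The idea of evaluating a catchment as a DAG of interacting components can be sketched with a topological sort; the component names and edges below are hypothetical, not the actual Net3 API:

```python
from collections import defaultdict, deque

def topo_order(edges):
    """Kahn's algorithm: an evaluation order for a DAG whose vertices are
    budget components and whose edges are fluxes between them."""
    indeg = defaultdict(int)
    adj = defaultdict(list)
    nodes = set()
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle")
    return order

# Hypothetical catchment network: each edge is a flux between components.
edges = [("canopy", "snow"), ("snow", "root_zone"),
         ("root_zone", "groundwater"), ("root_zone", "runoff"),
         ("groundwater", "runoff")]
order = topo_order(edges)
```

Evaluating components in this order guarantees every upstream flux is known before a downstream budget is computed.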

  19. Complex organic matter in space: about the chemical composition of carriers of the Unidentified Infrared Bands (UIBs) and protoplanetary emission spectra recorded from certain astrophysical objects.

    PubMed

    Cataldo, Franco; Keheyan, Yeghis; Heymann, Dieter

    2004-02-01

    In this communication we present the basic concept that pure PAHs (Polycyclic Aromatic Hydrocarbons) can be considered only the ideal carriers of the UIBs (Unidentified Infrared Bands), the emission spectra coming from a large variety of astronomical objects. Instead, we propose that the carriers of the UIBs and of the protoplanetary nebulae (PPNe) emission spectra are much more complex molecular mixtures that also possess complex chemical structures comparable to certain petroleum fractions obtained from petroleum refining processes. The demonstration of our proposal is based on a comparison between the emission spectra recorded from the protoplanetary nebula IRAS 22272+5435 and the infrared absorption spectra of certain 'heavy' petroleum fractions. It is shown that the best match with the reference spectrum is achieved by highly aromatic petroleum fractions, and that the selected petroleum fractions used in the present study are able to match the band pattern of anthracite coal. Coal has been proposed previously as a model for the PPNe and UIBs, but it presents some drawbacks that could be overcome by adopting petroleum fractions as the model for PPNe and UIBs in place of coal. A brief discussion of the formation of the petroleum-like fractions in PPNe objects is included.

  20. DYNECHARM++: a toolkit to simulate coherent interactions of high-energy charged particles in complex structures

    NASA Astrophysics Data System (ADS)

    Bagli, Enrico; Guidi, Vincenzo

    2013-08-01

    A toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures, called DYNECHARM++, has been developed. The code is written in C++, taking advantage of object-oriented programming. It is capable of evaluating the electrical characteristics of complex atomic structures and of simulating and tracking particle trajectories within them. A calculation method for the electrical characteristics, based on their expansion in Fourier series, has been adopted. Two different approaches to simulating the interaction have been adopted, relying on full integration of particle trajectories under the continuum potential approximation and on the definition of cross-sections of coherent processes. Finally, the code has proved able to reproduce experimental results and to simulate the interaction of charged particles with complex structures.
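    The Fourier-series evaluation of a periodic crystal characteristic can be sketched in one dimension; the coefficients and the interplanar distance below are illustrative placeholders, not values used by DYNECHARM++:

```python
import math

def planar_potential(x, period, coeffs):
    """Evaluate a periodic planar quantity from a truncated Fourier series:
    U(x) = sum_k [ a_k * cos(2*pi*k*x/period) + b_k * sin(2*pi*k*x/period) ].
    `coeffs` is a list of (a_k, b_k) pairs for k = 0, 1, 2, ...
    """
    u = 0.0
    for k, (a, b) in enumerate(coeffs):
        phase = 2.0 * math.pi * k * x / period
        u += a * math.cos(phase) + b * math.sin(phase)
    return u

# Illustrative coefficients (not real crystal values): constant term + two harmonics.
coeffs = [(10.0, 0.0), (8.0, 0.0), (2.0, 0.0)]
d = 1.92  # example interplanar distance in angstroms

# Periodicity check: the value repeats one period apart.
print(planar_potential(0.3, d, coeffs), planar_potential(0.3 + d, d, coeffs))
```

Storing only a handful of (a_k, b_k) pairs per plane is what makes this representation cheap to evaluate along every step of a tracked trajectory.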

  1. Analysis of socket-prosthesis-blunt complex for lower limb amputee using objective measure of patient's gait cycle.

    PubMed

    Rotariu, Mariana; Filep, R; Turnea, M; Ilea, M; Arotăriţei, D; Popescu, Marilena

    2015-01-01

    Prosthetic fitting is a highly complex process. Modeling and simulation of biomechanical processes in orthopedics is certainly a field of interest in current medical research, and optimization of the socket in order to improve the patient's quality of life is a major objective in prosthetic rehabilitation. A variety of numerical methods for prosthetic applications have been developed and studied. An objective method is proposed to evaluate the performance of a prosthetic patient according to the surface pressure map over the residual limb. The friction coefficient due to the various liners used in transtibial and transfemoral prostheses is also taken into account. A bio-based model and mathematical simulation allow the design, construction, and optimization of the contact between the prosthetic socket and the amputated limb, using data collected and processed in real time and non-invasively. The von Mises stress distribution in the muscle flap tissue at the bone ends shows a larger region subjected to elevated von Mises stresses in the muscle tissue underlying longer truncated bones. The finite element method was used to conduct a stress analysis and show the force distribution along the device. The results contribute to a better understanding of the design of an optimized prosthesis that increases the patient's performance, along with a good choice of liner made of an appropriate material that fits a particular residual limb. The study of prosthetic applications is an exciting and important research topic and will profit considerably from theoretical input; interpreting these results calls for permanent collaboration between mathematicians and medical orthopedics specialists.
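    The von Mises stress referred to in this abstract is computed from the components of the Cauchy stress tensor by a standard formula; a minimal sketch, with illustrative stress values rather than results from the study:

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six independent components
    of the 3D Cauchy stress tensor (normal: sx, sy, sz; shear: txy, tyz, tzx).
    FE post-processors evaluate this at every element to map stress hot spots."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial tension: equivalent stress equals the applied normal stress.
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 100.0
# Pure shear: equivalent stress is sqrt(3) times the shear stress.
print(von_mises(0.0, 0.0, 0.0, 50.0, 0.0, 0.0))   # ~86.6
```

Comparing this scalar against the tissue's tolerance threshold is what turns a full tensor field into the single "elevated stress region" maps the abstract describes.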

  2. Child–Adult Differences in Using Dual-Task Paradigms to Measure Listening Effort

    PubMed Central

    Charles, Lauren M.; Ricketts, Todd A.

    2017-01-01

    Purpose The purpose of the project was to investigate the effects of modifying the secondary task in a dual-task paradigm to measure objective listening effort. To be specific, the complexity and depth of processing were increased relative to a simple secondary task. Method Three dual-task paradigms were developed for school-age children. The primary task was word recognition. The secondary task was a physical response to a visual probe (simple task), a physical response to a complex probe (increased complexity), or word categorization (increased depth of processing). Sixteen adults (22–32 years, M = 25.4) and 22 children (9–17 years, M = 13.2) were tested using the 3 paradigms in quiet and noise. Results For both groups, manipulations of the secondary task did not affect word recognition performance. For adults, increasing depth of processing increased the calculated effect of noise; however, for children, results with the deep secondary task were the least stable. Conclusions Manipulations of the secondary task differentially affected adults and children. Consistent with previous findings, increased depth of processing enhanced paradigm sensitivity for adults. However, younger participants were more likely to demonstrate the expected effects of noise on listening effort using a secondary task that did not require deep processing. PMID:28346816
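    Dual-task paradigms quantify listening effort as the decline in secondary-task performance when the primary listening task is added; a minimal sketch of the standard proportional dual-task cost, with invented reaction times rather than data from this study:

```python
def dual_task_cost(single, dual):
    """Proportional dual-task cost: how much secondary-task performance
    (here, reaction time) degrades when the primary listening task is added.
    A larger cost indicates greater listening effort."""
    return (dual - single) / single

# Illustrative reaction times in ms (not data from the study):
rt_single = 450.0          # secondary task performed alone
rt_dual_quiet = 495.0      # secondary task while listening in quiet
rt_dual_noise = 540.0      # secondary task while listening in noise

cost_quiet = dual_task_cost(rt_single, rt_dual_quiet)
cost_noise = dual_task_cost(rt_single, rt_dual_noise)
print(round(cost_quiet, 2), round(cost_noise, 2))   # 0.1 0.2

# The "calculated effect of noise" is the difference between the two costs.
print(round(cost_noise - cost_quiet, 2))            # 0.1
```

Normalizing by single-task performance is what allows the cost to be compared across groups (children vs. adults) with different baseline speeds.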

  3. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.

  4. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, frequency pruning and detector pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns.
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.

  5. Influence of early attentional modulation on working memory

    PubMed Central

    Gazzaley, Adam

    2011-01-01

    It is now established that attention influences working memory (WM) at multiple processing stages. This liaison between attention and WM poses several interesting empirical questions. Notably, does attention impact WM via its influences on early perceptual processing? If so, what are the critical factors at play in this attention-perception-WM interaction? I review recent data from our laboratory utilizing a variety of techniques (electroencephalography (EEG), functional MRI (fMRI) and transcranial magnetic stimulation (TMS)), stimuli (features and complex objects), novel experimental paradigms, and research populations (younger and older adults), which converge to support the conclusion that top-down modulation of visual cortical activity at early perceptual processing stages (100–200 ms after stimulus onset) impacts subsequent WM performance. Factors that affect attentional control at this stage include cognitive load, task practice, perceptual training, and aging. These developments highlight the complex and dynamic relationships among perception, attention, and memory. PMID:21184764

  6. Technical Efficiency and Organ Transplant Performance: A Mixed-Method Approach

    PubMed Central

    de-Pablos-Heredero, Carmen; Fernández-Renedo, Carlos; Medina-Merodio, Jose-Amelio

    2015-01-01

    Mixed-methods research is useful for understanding complex processes. Organ transplants are complex processes in need of improved final performance in times of budgetary restriction. The main objective of this article is to use a mixed-methods approach to quantify the technical efficiency and the excellence achieved in organ transplant systems, and to demonstrate the influence of organizational structures and internal processes on the observed technical efficiency. The results show that it is possible to implement mechanisms for the measurement of the different components by making use of quantitative and qualitative methodologies. The analysis shows a positive relationship between the levels of the Baldrige indicators and the observed technical efficiency in the donation and transplant units of the 11 analyzed hospitals. It is therefore possible to conclude that high levels on the Baldrige indexes are a necessary condition for reaching an increased level of service. PMID:25950653

  7. Lessons Learned from the 200 West Pump and Treatment Facility Construction Project at the US DOE Hanford Site - A Leadership for Energy and Environmental Design (LEED) Gold-Certified Facility - 13113

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorr, Kent A.; Freeman-Pollard, Jhivaun R.; Ostrom, Michael J.

    CH2M Hill Plateau Remediation Company (CHPRC) designed, constructed, commissioned, and began operation of the largest groundwater pump and treatment facility in the U.S. Department of Energy's (DOE) nationwide complex. This one-of-a-kind groundwater pump and treatment facility, located at the Hanford Nuclear Reservation Site (Hanford Site) in Washington State, was built to an accelerated schedule with American Recovery and Reinvestment Act (ARRA) funds. There were many contractual, technical, configuration management, quality, safety, and Leadership in Energy and Environmental Design (LEED) challenges associated with the design, procurement, construction, and commissioning of this $95 million, 52,000 ft groundwater pump and treatment facility to meet DOE's mission objective of treating contaminated groundwater at the Hanford Site with a new facility by June 28, 2012. The project team's successful integration of the project's core values and green energy technology throughout design, procurement, construction, and start-up of this complex, first-of-its-kind Bio Process facility resulted in successful achievement of DOE's mission objective, as well as attainment of LEED GOLD certification (Figure 1), which makes this Bio Process facility the first non-administrative building in the DOE Office of Environmental Management complex to earn such an award. (authors)

  8. Development of Fully-Integrated Micromagnetic Actuator Technologies

    DTIC Science & Technology

    2015-07-13

    nonexistent because of certain design and fabrication challenges, primarily the inability to integrate high-performance, permanent-magnet (magnetically... efficiency necessary for certain applications. To enable the development of high-performance magnetic actuator technologies, the original research plan... developed permanent-magnet materials in more complex microfabrication process flows. Objective 2: Design, model, and optimize a novel multi-magnet

  9. Planning the Fire Program for the Third Millennium

    Treesearch

    Richard A. Chase

    1987-01-01

    The fire program planner faces an increasingly complex task as diverse--and often contradictory--messages about objectives and constraints are received from political, administrative, budgetary, and social processes. Our principal challenge as we move into the 21st century is not one of looking for flashier technology to include in the planned fire program. Rather, we...

  10. Learning Boolean Networks in HepG2 cells using ToxCast High-Content Imaging Data (SOT annual meeting)

    EPA Science Inventory

    Cells adapt to their environment via homeostatic processes that are regulated by complex molecular networks. Our objective was to learn key elements of these networks in HepG2 cells using ToxCast High-content imaging (HCI) measurements taken over three time points (1, 24, and 72h...

  11. Complexity of Choice: Teachers' and Students' Experiences Implementing a Choice-Based Comprehensive School Health Model

    ERIC Educational Resources Information Center

    Sulz, Lauren; Gibbons, Sandra; Naylor, Patti-Jean; Wharf Higgins, Joan

    2016-01-01

    Background: Comprehensive School Health models offer a promising strategy to elicit changes in student health behaviours. To maximise the effect of such models, the active involvement of teachers and students in the change process is recommended. Objective: The goal of this project was to gain insight into the experiences and motivations of…

  12. Development of High Data Rate Acoustic Multiple-Input/Multiple-Output Modems

    DTIC Science & Technology

    2015-09-30

    communication capabilities of underwater platforms and facilitate real-time adaptive operations in the ocean. OBJECTIVES The... signaling at the transmitter and low-complexity time reversal processing at the receiver. APPROACH Underwater acoustic (UWA) communication is useful... digital communications in shallow water environments. The advancement has direct impacts on defense applications since underwater acoustic modems

  13. The Myth of Rational Objectivity and Leadership: The Realities of a Hospital Merger from a CEO's Perspective

    ERIC Educational Resources Information Center

    Tobin, John H.

    2009-01-01

    Executive power and status depends on others' belief in the executive's capacity for control via rational decision-making, "by the numbers" and above the fray of day to day minutia. By exploring his own experience in the complex social dynamics of a long, complicated merger process--characterised by misunderstanding, incomplete…

  14. Selection of Educational Materials in the United States Public Schools.

    ERIC Educational Resources Information Center

    Institute for Educational Development, New York, NY.

    The objective of this study was to collect "baseline" data with which to examine a complex process in the educational system--the selection of educational materials. The first part of the study analyzes the statutes of the fifty states which bear upon selection and purchase of educational materials. The purpose of this analysis is to…

  15. Mapping fuels at multiple scales: landscape application of the fuel characteristic classification system.

    Treesearch

    D. McKenzie; C.L. Raymond; L.-K.B. Kellogg; R.A. Norheim; A.G. Andreu; A.C. Bayard; K.E. Kopper; E. Elman

    2007-01-01

    Fuel mapping is a complex and often multidisciplinary process, involving remote sensing, ground-based validation, statistical modeling, and knowledge-based systems. The scale and resolution of fuel mapping depend both on objectives and availability of spatial data layers. We demonstrate use of the Fuel Characteristic Classification System (FCCS) for fuel mapping at two...

  16. Risk assessment and adaptive runoff utilization in water resource system considering the complex relationship among water supply, electricity generation and environment

    NASA Astrophysics Data System (ADS)

    Zhou, J.; Zeng, X.; Mo, L.; Chen, L.; Jiang, Z.; Feng, Z.; Yuan, L.; He, Z.

    2017-12-01

    Generally, the adaptive utilization and regulation of runoff in the source region of China's southwest rivers is a typical multi-objective collaborative optimization problem. There is intense competition and interdependence among the subsystems of water supply, electricity generation, and environment, which leads to a series of complex problems represented by hydrological process variation, blocked electricity output, and water environment risk. Mathematically, the difficulties of multi-objective collaborative optimization center on the description of the reciprocal relationships and the establishment of an evolving model of the adaptive system. Thus, based on the theory of complex systems science, this project carries out research on the following aspects: the changing trend of coupled water resources; the covariant factors and driving mechanisms; the dynamic evolution law of the mutual-feedback processes in the supply-generation-environment coupled system; the environmental response and influence mechanism of the coupled water resource system; the relationship between the leading risk factor and multiple risks, based on evolutionary stability and dynamic balance; the transfer mechanism of multiple risk responses as the leading risk factor varies; and a multiple-risk assessment index system and optimized decision theory for the multidimensional coupled feedback system. Based on these results, a dynamic method balancing the efficiency of multiple objectives in the coupled feedback system and an optimized regulation model of water resources are proposed, and an adaptive scheduling mode considering the internal characteristics and external response of the coupled water resource system is established. In this way, the project contributes to the theory and methodology of optimal water resource scheduling under uncertainty in the source region of the southwest rivers.

  17. Abstraction of information in repository performance assessments. Examples from the SKI project Site-94

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dverstorp, B.; Andersson, J.

    1995-12-01

    Performance assessment of a nuclear waste repository implies an analysis of a complex system with many interacting processes. Even if some of these processes are known in great detail, problems arise when combining all the information, and means of abstracting information from complex detailed models into models that couple different processes are needed. Clearly, one of the major objectives of performance assessment, to calculate doses or other performance indicators, implies an enormous abstraction of information compared with all the information used as input. Other problems are that the knowledge of different parts or processes varies strongly, and that adjustments and interpretations are needed when combining models from different disciplines. In addition, people as well as computers, even today, have a limited capacity to process information, and choices have to be made. However, because abstraction of information is clearly unavoidable in performance assessment, the validity of the choices made always needs to be scrutinized, and the judgements made need to be updated in an iterative process.

  18. On improved understanding of plasma-chemical processes in complex low-temperature plasmas

    NASA Astrophysics Data System (ADS)

    Röpcke, Jürgen; Loffhagen, Detlef; von Wahl, Eric; Nave, Andy S. C.; Hamann, Stephan; van Helden, Jean-Pierre H.; Lang, Norbert; Kersten, Holger

    2018-05-01

    Over the last few years, chemical sensing using optical emission spectroscopy (OES) in the visible spectral range has been combined with methods of mid-infrared laser absorption spectroscopy (MIR-LAS) in the molecular fingerprint region from 3 to 20 μm, which contains strong rotational-vibrational absorption bands of a large variety of gaseous species. This optical approach has established powerful in situ diagnostic tools to study the plasma-chemical processes of complex low-temperature plasmas. The methods of MIR-LAS make it possible to detect stable and transient molecular species in ground and excited states and to measure the concentrations and temperatures of reactive species in plasmas. Since kinetic processes are inherent to discharges ignited in molecular gases, high time resolution on sub-second timescales is frequently desired for fundamental studies as well as for process monitoring in applied research and industry. In addition to high sensitivity and good temporal resolution, the capacity for broad spectral coverage enabling multicomponent detection is further expanding the use of OES and MIR-LAS techniques. Based on selected examples, this paper reports on recent achievements in the understanding of complex low-temperature plasmas. Recently, a link with chemical modeling of the plasma has been provided, which is the ultimate objective for a better understanding of the chemical and reaction-kinetic processes occurring in the plasma. Contribution to the Topical Issue "Fundamentals of Complex Plasmas", edited by Jürgen Meichsner, Michael Bonitz, Holger Fehske, Alexander Piel.

  19. Ultra Rapid Object Categorization: Effects of Level, Animacy and Context

    PubMed Central

    Praß, Maren; Grimsen, Cathleen; König, Martina; Fahle, Manfred

    2013-01-01

    It is widely agreed that in object categorization bottom-up and top-down influences interact. How top-down processes affect categorization has been primarily investigated in isolation, with only one higher level process at a time being manipulated. Here, we investigate the combination of different top-down influences (by varying the level of category, the animacy and the background of the object) and their effect on rapid object categorization. Subjects participated in a two-alternative forced choice rapid categorization task, while we measured accuracy and reaction times. Subjects had to categorize objects on the superordinate, basic or subordinate level. Objects belonged to the category animal or vehicle and each object was presented on a gray, congruent (upright) or incongruent (inverted) background. The results show that each top-down manipulation impacts object categorization and that they interact strongly. The best categorization was achieved on the superordinate level, providing no advantage for basic level in rapid categorization. Categorization between vehicles was faster than between animals on the basic level and vice versa on the subordinate level. Objects on a homogeneous gray background (context) yielded better overall performance than objects embedded in complex scenes, an effect most prominent on the subordinate level. An inverted background had no negative effect on object categorization compared to upright scenes. These results show how different top-down manipulations, such as category level, category type and background information, are related. We discuss the implications of top-down interactions on the interpretation of categorization results. PMID:23840810

  1. Photogrammetry and Its Potential Application in Medical Science on the Basis of Selected Literature.

    PubMed

    Ey-Chmielewska, Halina; Chruściel-Nogalska, Małgorzata; Frączak, Bogumiła

    2015-01-01

    Photogrammetry is a science and technology which allows quantitative traits to be determined, i.e. the reproduction of object shapes, sizes and positions on the basis of their photographs. Images can be recorded in a wide range of wavelengths of electromagnetic radiation. The most common is the visible range, but near- and medium-infrared, thermal infrared, microwaves and X-rays are also used. The importance of photogrammetry has increased with the development of computer software. Digital image processing and real-time measurement have allowed the automation of many complex manufacturing processes. Photogrammetry has been widely used in many areas, especially in geodesy and cartography. In medicine, this method is used for measuring the widely understood human body for the planning and monitoring of therapeutic treatment and its results. Digital images obtained from optical-electronic sensors combined with computer technology have the potential of objective measurement thanks to the remote nature of the data acquisition, with no contact with the measured object and with high accuracy. Photogrammetry also allows the adoption of common standards for archiving and processing patient data.
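    In the simplest case, determining an object's size from a photograph reduces to the pinhole-camera scale relation; a minimal sketch with illustrative camera parameters (the specific values are invented, not from the cited literature):

```python
def object_size(image_size_px, pixel_pitch_mm, focal_length_mm, distance_mm):
    """Pinhole-camera scale relation:
    real size = image size on the sensor * (distance to object / focal length).
    image_size_px   -- extent of the object in the image, in pixels
    pixel_pitch_mm  -- physical size of one sensor pixel, in mm
    """
    image_size_mm = image_size_px * pixel_pitch_mm
    return image_size_mm * distance_mm / focal_length_mm

# Illustrative: a feature spanning 200 px on a sensor with 0.005 mm pixels,
# imaged through a 50 mm lens from 500 mm away.
print(object_size(200, 0.005, 50.0, 500.0))  # 10.0 (mm)
```

Full photogrammetric reconstruction generalizes this single-view relation to multiple calibrated views, which is what enables the remote, contact-free measurement of patients described above.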

  2. Human brain regions involved in recognizing environmental sounds.

    PubMed

    Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A

    2004-09-01

    To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.

  3. IT security evaluation - “hybrid” approach and risk of its implementation

    NASA Astrophysics Data System (ADS)

    Livshitz, I. I.; Neklyudov, A. V.; Lontsikh, P. A.

    2018-05-01

    Evaluation processes for IT security need to evolve. The creation and application of common evaluation approaches for IT components, as practiced by governmental and civil organizations, have still not solved the problem. It is suggested that a more precise and comprehensive assessment tool for IT security be created: the "hybrid" method of IT security evaluation for a particular object, which is based on a range of adequate assessment tools.

  4. Cognitive learning: a machine learning approach for automatic process characterization from design

    NASA Astrophysics Data System (ADS)

    Foucher, J.; Baderot, J.; Martinez, S.; Dervilllé, A.; Bernard, G.

    2018-03-01

    Cutting-edge innovation requires accurate and fast process control to achieve a fast learning rate and industry adoption. The tools currently available for this task are mainly manual and user dependent. In this paper we present cognitive learning, a new machine-learning-based technique that facilitates and speeds up complex characterization by using the design as input, providing fast training and detection times. We focus on the machine learning framework that enables object detection, defect traceability and automatic measurement tools.

  5. Artificial intelligence techniques for scheduling Space Shuttle missions

    NASA Technical Reports Server (NTRS)

    Henke, Andrea L.; Stottler, Richard H.

    1994-01-01

    Planning and scheduling of NASA Space Shuttle missions is a complex, labor-intensive process requiring the expertise of experienced mission planners. We have developed a planning and scheduling system using combinations of artificial intelligence knowledge representations and planning techniques to capture mission planning knowledge and automate the multi-mission planning process. Our integrated object oriented and rule-based approach reduces planning time by orders of magnitude and provides planners with the flexibility to easily modify planning knowledge and constraints without requiring programming expertise.

  6. Design of Gages for Direct Skin Friction Measurements in Complex Turbulent Flows with Shock Impingement Compensation

    DTIC Science & Technology

    2007-06-07

    100 kW/m2 for 0.1 s. Along with the material change, an oil leak problem required a geometric change. Initially, we considered TIG welding or...shear and moment, is addressed through the design, development, and testing of the CF1 and CF2 gages. Chapter 3 presents the evolutionary process...a shock. Chapter 4 examines the performance of each gage to the nominal load conditions. Through this process, objective 2 is met. The best

  7. Method for self reconstruction of holograms for secure communication

    NASA Astrophysics Data System (ADS)

    Babcock, Craig; Donkor, Eric

    2017-05-01

    We present the theory and experimental results behind using a 3D holographic signal for secure communications. A hologram of a complex 3D object is recorded to be used as a hard key for data encryption and decryption. The hologram is cut in half to be used at each end of the system. One piece is used for data encryption, while the other is used for data decryption. The first piece of hologram is modulated with the data to be encrypted. The hologram has an extremely complex phase distribution which encodes the data signal incident on the first piece of hologram. In order to extract the data from the modulated holographic carrier, the signal must be passed through the second hologram, removing the complex phase contributions of the first hologram. The signal beam from the first piece of hologram is used to illuminate the second piece of the same hologram, creating a self-reconstructing system. The 3D hologram's interference pattern is highly specific to the 3D object and conditions during the holographic writing process. With a sufficiently complex 3D object used to generate the holographic hard key, the data will be nearly impossible to recover without using the second piece of the same hologram. This method of producing a self-reconstructing hologram ensures that the pieces in use are from the same original hologram, providing a system hard key, making it an extremely difficult system to counterfeit.
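
    The pairing of the two hologram halves can be illustrated with a toy numerical model (not the optical system itself, and with invented array sizes and random phases): encryption multiplies the data wave by the first piece's complex phase distribution, and decryption multiplies by the conjugate phase of the matching piece, which removes the phase contribution exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.exp(1j * rng.uniform(0, 2 * np.pi, 64))  # unit-modulus data signal
key = np.exp(1j * rng.uniform(0, 2 * np.pi, 64))   # phase of hologram piece 1

encrypted = data * key                 # piece 1 imprints its complex phase
decrypted = encrypted * np.conj(key)   # piece 2 removes the matching phase
```

Without the conjugate key, `encrypted` is indistinguishable from a random phase field, which is the property the hard-key scheme relies on.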

  8. Near Earth Objects and Cascading Effects from the Policy Perspective: Implications from Problem and Solution Definition

    NASA Astrophysics Data System (ADS)

    Lindquist, Eric

    2016-04-01

    The characterization of near-Earth-objects (NEOs) in regard to physical attributes and potential risk and impact factors presents a complex and complicated scientific and engineering challenge. The societal and policy risks and impacts are no less complex, yet are rarely considered in the same context as material properties or related factors. Further, NEO impacts are typically considered as discrete events, not as initial events in a dynamic cascading system. The objective of this contribution is to position the characterization of NEOs within the public policy process domain as a means to reflect on the science-policy nexus in regard to risks and multi-hazard impacts associated with these hazards. This will be accomplished through, first, a brief overview of the science-policy nexus, followed by a discussion of policy process frameworks, such as agenda setting and the multiple streams model, focusing events, and punctuated equilibrium, and their application and appropriateness to the problem of NEOs. How, for example, does NEO hazard and risk compare with other low-probability, high-risk hazards in regard to public policy? Finally, we will reflect on the implications of alternative NEO "solutions" and the characterization of the NEO "problem," and the political and public acceptance of policy alternatives as a way to link NEO science and policy in the context of the overall NH9.12 panel.

  9. Boundary Conditions for the Paleoenvironment: Chemical and Physical Processes in the Pre-Solar Nebula

    NASA Technical Reports Server (NTRS)

    Irvine, William M.; Schloerb, F. Peter

    1997-01-01

    The basic theme of this program is the study of molecular complexity and evolution in interstellar clouds and in primitive solar system objects. Research has included the detection and study of a number of new interstellar molecules and investigation of reaction pathways for astrochemistry from a comparison of theory and observed molecular abundances. The latter includes studies of cold, dark clouds in which ion-molecule chemistry should predominate, searches for the effects of interchange of material between the gas and solid phases in interstellar clouds, unbiased spectral surveys of particular sources, and systematic investigation of the interlinked chemistry and physics of dense interstellar clouds. In addition, the study of comets has allowed a comparison between the chemistry of such minimally thermally processed objects and that of interstellar clouds, shedding light on the evolution of the biogenic elements during the process of solar system formation.

  10. ARK: Autonomous mobile robot in an industrial environment

    NASA Technical Reports Server (NTRS)

    Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.

    1994-01-01

    This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, and a novel combined range and vision sensor, along with our recent results in controlling the robot, in real-time detection of objects using their color, and in processing the robot's range and vision sensor data for navigation.

  11. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
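
    The circle-reconstruction step can be sketched with classical Hough voting for a known radius (the paper uses an improved Hough transform algorithm; the synthetic edge points and fixed radius below are assumptions for illustration). Each edge pixel votes for all candidate centres at the given radius, and the most-voted bin is taken as the bubble or drop centre.

```python
from collections import Counter
import math

def hough_circle_centers(edge_points, radius, n_angles=90):
    """Each edge point votes on the circle of the given radius
    around itself; true centres accumulate the most votes."""
    acc = Counter()
    for (x, y) in edge_points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            a = round(x - radius * math.cos(t))
            b = round(y - radius * math.sin(t))
            acc[(a, b)] += 1
    return acc

# Synthetic "bubble": edge pixels on a circle of radius 5 centred at (10, 10)
pts = [(round(10 + 5 * math.cos(2 * math.pi * i / 36)),
        round(10 + 5 * math.sin(2 * math.pi * i / 36))) for i in range(36)]
acc = hough_circle_centers(pts, radius=5)
centre, votes = acc.most_common(1)[0]
```

In practice the radius is unknown, so the accumulator gains a third dimension over candidate radii; the partial-edge robustness of this voting scheme is what lets the method reconstruct contours from broken edge segments.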

  12. Second Generation Crop Yield Models Review

    NASA Technical Reports Server (NTRS)

    Hodges, T. (Principal Investigator)

    1982-01-01

    Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.

  13. Design of virtual simulation experiment based on key events

    NASA Astrophysics Data System (ADS)

    Zhong, Zheng; Zhou, Dongbo; Song, Lingxiu

    2018-06-01

    Considering the complex content of, and lack of guidance in, virtual simulation experiments, the key-event technique from VR narrative theory was introduced to enhance the fidelity and vividness of the experimental process. Based on VR narrative technology, an event transition structure was designed to meet the needs of the experimental operation process, and an interactive event processing model was used to generate key events in an interactive scene. The experiment "margin value of bees foraging", based on biological morphology, was taken as an example, and many objects, behaviors and other contents were reorganized. The results show that this method can enhance the user's experience and ensure that the experimental process is complete and effective.

  14. Aerial surveillance based on hierarchical object classification for ground target detection

    NASA Astrophysics Data System (ADS)

    Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo

    2015-03-01

    Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; e.g., a camera sensor is now integrated. Mounted cameras provide the flexibility to monitor several regions of interest by displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The samples are encoded in a hierarchical tree with different sampling densities, where the coding space is a huge binary space. Properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features of the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, showing that the hierarchical algorithm works efficiently under several conditions.

  15. Fast and flexible 3D object recognition solutions for machine vision applications

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kühnle, Jens; Verl, Alexander

    2013-03-01

    In automation and handling engineering, supplying work pieces between different stages along the production process chain is of special interest. Often the parts are stored unordered in bins or lattice boxes and hence have to be separated and ordered for feeding purposes. An alternative to complex and spacious mechanical systems such as bowl feeders or conveyor belts, which are typically adapted to the parts' geometry, is using a robot to grip the work pieces out of a bin or from a belt. Such applications are in need of reliable and precise computer-aided object detection and localization systems. For a restricted range of parts, there exists a variety of 2D image processing algorithms that solve the recognition problem. However, these methods are often not well suited for the localization of randomly stored parts. In this paper we present a fast and flexible 3D object recognizer that localizes objects by identifying primitive features within the objects. Since technical work pieces typically consist to a substantial degree of geometric primitives such as planes, cylinders and cones, such features usually carry enough information in order to determine the position of the entire object. Our algorithms use 3D best-fitting combined with an intelligent data pre-processing step. The capability and performance of this approach is shown by applying the algorithms to real data sets of different industrial test parts in a prototypical bin picking demonstration system.
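
    The 3D best-fitting of primitive features can be illustrated for the simplest primitive, a plane, via an SVD least-squares fit (a generic sketch on synthetic points, not the authors' algorithm): the fitted normal and centroid are exactly the kind of primitive parameters that localize the whole work piece.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: centroid plus the normal taken from
    the direction of least variance of the centred point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # smallest singular direction
    return centroid, normal / np.linalg.norm(normal)

# Noise-free synthetic sample from the plane z = 1 (normal along z)
xs, ys = np.meshgrid(np.arange(4.0), np.arange(4.0))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.ones(16)])
centroid, normal = fit_plane(pts)
```

Cylinders and cones need nonlinear fitting, but the same pattern applies: a handful of fitted parameters stands in for thousands of raw range points.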

  16. Towards a unified theory of health-disease: II. Holopathogenesis

    PubMed Central

    Almeida-Filho, Naomar

    2014-01-01

    This article presents a systematic framework for modeling several classes of illness-sickness-disease named as Holopathogenesis. Holopathogenesis is defined as processes of over-determination of diseases and related conditions taken as a whole, comprising selected facets of the complex object Health. First, a conceptual background of Holopathogenesis is presented as a series of significant interfaces (biomolecular-immunological, physiopathological-clinical, epidemiological-ecosocial). Second, propositions derived from Holopathogenesis are introduced in order to allow drawing the disease-illness-sickness complex as a hierarchical network of networks. Third, a formalization of intra- and inter-level correspondences, over-determination processes, effects and links of Holopathogenesis models is proposed. Finally, the Holopathogenesis frame is evaluated as a comprehensive theoretical pathology taken as a preliminary step towards a unified theory of health-disease. PMID:24897040

  17. Developing a complex intervention for diet and activity behaviour change in obese pregnant women (the UPBEAT trial); assessment of behavioural change and process evaluation in a pilot randomised controlled trial.

    PubMed

    Poston, Lucilla; Briley, Annette L; Barr, Suzanne; Bell, Ruth; Croker, Helen; Coxon, Kirstie; Essex, Holly N; Hunt, Claire; Hayes, Louise; Howard, Louise M; Khazaezadeh, Nina; Kinnunen, Tarja; Nelson, Scott M; Oteng-Ntim, Eugene; Robson, Stephen C; Sattar, Naveed; Seed, Paul T; Wardle, Jane; Sanders, Thomas A B; Sandall, Jane

    2013-07-15

    Complex interventions in obese pregnant women should be theoretically based, feasible and shown to demonstrate anticipated behavioural change prior to inception of large randomised controlled trials (RCTs). The aim was to determine if a) a complex intervention in obese pregnant women leads to anticipated changes in diet and physical activity behaviours, and b) to refine the intervention protocol through process evaluation of intervention fidelity. We undertook a pilot RCT of a complex intervention in obese pregnant women, comparing routine antenatal care with an intervention to reduce dietary glycaemic load and saturated fat intake, and increase physical activity. Subjects included 183 obese pregnant women (mean BMI 36.3 kg/m2). Compared to women in the control arm, women in the intervention arm had a significant reduction in dietary glycaemic load (-33 points, 95% CI -47 to -20; p < 0.001) and saturated fat intake (-1.6% energy, 95% CI -2.8 to -0.3) at 28 weeks' gestation. Objectively measured physical activity did not change. Physical discomfort and sustained barriers to physical activity were common at 28 weeks' gestation. Process evaluation identified barriers to recruitment, group attendance and compliance, leading to modification of intervention delivery. This pilot trial of a complex intervention in obese pregnant women suggests greater potential for change in dietary intake than for change in physical activity, and through process evaluation illustrates the considerable advantage of performing an exploratory trial of a complex intervention in obese pregnant women before undertaking a large RCT. ISRCTN89971375.

  18. A conceptual lemon: theta burst stimulation to the left anterior temporal lobe untangles object representation and its canonical color.

    PubMed

    Chiou, Rocco; Sowman, Paul F; Etchell, Andrew C; Rich, Anina N

    2014-05-01

    Object recognition benefits greatly from our knowledge of typical color (e.g., a lemon is usually yellow). Most research on object color knowledge focuses on whether both knowledge and perception of object color recruit the well-established neural substrates of color vision (the V4 complex). Compared with the intensive investigation of the V4 complex, we know little about where and how neural mechanisms beyond V4 contribute to color knowledge. The anterior temporal lobe (ATL) is thought to act as a "hub" that supports semantic memory by integrating different modality-specific contents into a meaningful entity at a supramodal conceptual level, making it a good candidate zone for mediating the mappings between object attributes. Here, we explore whether the ATL is critical for integrating typical color with other object attributes (object shape and name), akin to its role in combining nonperceptual semantic representations. In separate experimental sessions, we applied TMS to disrupt neural processing in the left ATL and a control site (the occipital pole). Participants performed an object naming task that probes color knowledge and elicits a reliable color congruency effect as well as a control quantity naming task that also elicits a cognitive congruency effect but involves no conceptual integration. Critically, ATL stimulation eliminated the otherwise robust color congruency effect but had no impact on the numerical congruency effect, indicating a selective disruption of object color knowledge. Neither color nor numerical congruency effects were affected by stimulation at the control occipital site, ruling out nonspecific effects of cortical stimulation. Our findings suggest that the ATL is involved in the representation of object concepts that include their canonical colors.

  19. Three-dimensional (3D) printing and its applications for aortic diseases

    PubMed Central

    Hangge, Patrick; Pershad, Yash; Witting, Avery A.; Albadawi, Hassan

    2018-01-01

    Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases. PMID:29850416

  20. The Effects of Similarity on High-Level Visual Working Memory Processing.

    PubMed

    Yang, Li; Mo, Lei

    2017-01-01

    Similarity has been observed to have opposite effects on visual working memory (VWM) for complex images. How can these discrepant results be reconciled? To answer this question, we used a change-detection paradigm to test visual working memory performance for multiple real-world objects. We found that working memory for moderate similarity items was worse than that for either high or low similarity items. This pattern was unaffected by manipulations of stimulus type (faces vs. scenes), encoding duration (limited vs. self-paced), and presentation format (simultaneous vs. sequential). We also found that the similarity effects differed in strength in different categories (scenes vs. faces). These results suggest that complex real-world objects are represented using a centre-surround inhibition organization. These results support the category-specific cortical resource theory and further suggest that centre-surround inhibition organization may differ by category.

  1. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    PubMed

    Liu, Haofei; Sun, Wei

    2017-08-01

    Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of material models, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in Abaqus.
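
    The perturbation idea behind such approximations can be illustrated in a deliberately simplified 1-D setting (the paper works with full fourth-order tensors and the Green-Naghdi rate; the scalar stress law and coefficients below are invented for illustration): the tangent is estimated by finite differences of the stress function, so no closed-form derivative of the material model is needed.

```python
def numeric_tangent(stress, strain, eps=1e-6):
    """Central-difference approximation of the tangent dS/dE,
    mirroring the perturbation approach in a toy 1-D case."""
    return (stress(strain + eps) - stress(strain - eps)) / (2 * eps)

# Hypothetical 1-D stress law S(E) = mu*E + beta*E**3
mu, beta = 2.0, 0.5
stress = lambda E: mu * E + beta * E ** 3
E = 0.1
approx = numeric_tangent(stress, E)
exact = mu + 3 * beta * E ** 2       # analytic tangent for comparison
```

In the tensor setting each independent strain component is perturbed in turn, trading a handful of extra stress evaluations for independence from the material model's analytic form.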

  2. Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2011-09-26

    Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been realized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of the numerical phase retrieval from experimental diffraction patterns, a fact which stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes. Constraints in both the object and the detector plane are applied. While the constraint in the detector plane employed in most phase retrieval methods requires the amplitude of the complex wave to be equal to the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
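
    The conventional propagate-and-constrain scheme the authors improve upon can be sketched with the classical error-reduction loop (their novel Fourier-domain constraint replaces the plain amplitude substitution shown here; the object, support, and iteration count below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((32, 32))
obj[12:20, 12:20] = rng.random((8, 8))   # unknown object, known support
measured = np.abs(np.fft.fft2(obj))      # measured diffraction amplitudes

support = np.zeros(obj.shape, dtype=bool)
support[12:20, 12:20] = True

def err(g):
    """Detector-plane amplitude mismatch of the current estimate."""
    return np.linalg.norm(np.abs(np.fft.fft2(g)) - measured)

g = rng.random(obj.shape)                # random starting guess
e0 = err(g)
for _ in range(200):
    G = np.fft.fft2(g)
    G = measured * np.exp(1j * np.angle(G))  # detector-plane amplitude constraint
    g = np.fft.ifft2(G).real
    g = np.where(support & (g > 0), g, 0.0)  # object-plane support + positivity
e1 = err(g)
```

The error metric is non-increasing under this scheme, but convergence can stagnate, which is precisely the motivation for stronger Fourier-domain constraints such as the holography-inspired one proposed here.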

  3. Fast processing of microscopic images using object-based extended depth of field.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time.
This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
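
    The core merging idea (keep, at each pixel, the focal slice that is sharpest there) can be sketched with a simple per-pixel focus measure. This is a whole-image toy variant; OEDoF restricts the merge to detected foreground regions via the four modules described above, and the Laplacian focus measure and synthetic slices here are assumptions for illustration.

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian magnitude as a simple focus measure."""
    pad = np.pad(img, 1, mode="edge")
    return np.abs(pad[:-2, 1:-1] + pad[2:, 1:-1]
                  + pad[1:-1, :-2] + pad[1:-1, 2:] - 4 * img)

def edof_merge(stack):
    """Pixel-wise merge: pick the slice with the strongest focus
    response at each pixel."""
    focus = np.stack([laplacian(s) for s in stack])
    best = focus.argmax(axis=0)
    rows, cols = np.indices(best.shape)
    return np.stack(stack)[best, rows, cols]

# Two synthetic slices, each "in focus" (high detail) at a different spot
a = np.ones((8, 8)); a[2, 2] = 9.0
b = np.ones((8, 8)); b[5, 5] = 9.0
merged = edof_merge([a, b])
```

Restricting `edof_merge` to foreground pixels only, as OEDoF does, is what yields the reported four-fold speedup, since the focus measure and merge are skipped over the (typically dominant) background.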

  4. Multi-objective engineering design using preferences

    NASA Astrophysics Data System (ADS)

    Sanchis, J.; Martinez, M.; Blasco, X.

    2008-03-01

    System design is a complex task when design parameters have to satisfy a number of specifications and objectives which often conflict with one another. This challenging problem is called multi-objective optimization (MOO). The most common approximation consists of optimizing a single cost index with a weighted sum of objectives. However, once weights are chosen the solution does not guarantee the best compromise among specifications, because there is an infinite number of solutions. A new approach can be stated, based on the designer's experience regarding the required specifications and the associated problems. This valuable information can be translated into preferences for design objectives, and will lead the search process to the best solution in terms of these preferences. This article presents a new method which enumerates these a priori objective preferences. As a result, a single objective is built automatically and no weight selection need be performed. Problems occurring because of the multimodal nature of the generated single cost index are managed with genetic algorithms (GAs).
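
    The weighted-sum scalarization the article criticizes can be shown in a minimal sketch (two hypothetical one-variable objectives, not the article's benchmark): each weight choice yields a different Pareto-optimal compromise, which is why fixing weights in advance cannot guarantee the designer's preferred trade-off.

```python
def weighted_sum_argmin(w, xs):
    """Scalarize two conflicting objectives with a weighted sum and
    minimize over a grid of candidate designs."""
    f1 = lambda x: (x - 1) ** 2      # objective 1: prefers x = +1
    f2 = lambda x: (x + 1) ** 2      # objective 2: prefers x = -1
    return min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))

xs = [i / 100 for i in range(-200, 201)]
x_a = weighted_sum_argmin(0.9, xs)   # weighting favours objective 1
x_b = weighted_sum_argmin(0.1, xs)   # weighting favours objective 2
```

For this quadratic pair the analytic minimizer is x* = 2w - 1, so the two weightings land at 0.8 and -0.8: both Pareto-optimal, yet far apart. Preference-based formulations sidestep this sensitivity by encoding what "best compromise" means directly.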

  5. Classifying Structures in the ISM with Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Beaumont, Christopher; Goodman, A. A.; Williams, J. P.

    2011-01-01

    The processes which govern molecular cloud evolution and star formation often sculpt structures in the ISM: filaments, pillars, shells, outflows, etc. Because of their morphological complexity, these objects are often identified manually. Manual classification has several disadvantages; the process is subjective, not easily reproducible, and does not scale well to handle increasingly large datasets. We have explored to what extent machine learning algorithms can be trained to autonomously identify specific morphological features in molecular cloud datasets. We show that the Support Vector Machine algorithm can successfully locate filaments and outflows blended with other emission structures. When the objects of interest are morphologically distinct from the surrounding emission, this autonomous classification achieves >90% accuracy. We have developed a set of IDL-based tools to apply this technique to other datasets.
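
    A minimal illustration of the supervised-classification idea (a hand-rolled linear SVM trained by sub-gradient descent on invented 2-D feature vectors, not the authors' pipeline or real ISM data): given labeled examples of two morphological classes, the trained separator labels new feature vectors autonomously.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via sub-gradient descent on the hinge loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                       # margin violators
        w -= lr * (lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(y))
        b += lr * y[mask].sum() / len(y)
    return w, b

# Two linearly separable clusters of hypothetical feature descriptors
X = np.array([[2.0, 2.0], [3.0, 2.5], [2.5, 3.0],
              [-2.0, -2.0], [-3.0, -2.5], [-2.5, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

Real ISM classification feeds the SVM far richer descriptors (multi-scale intensity and velocity features), but the training-then-labeling workflow is the same, which is what makes the approach reproducible and scalable where manual classification is not.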

  6. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    PubMed

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

    High-energy X-ray computed tomography (CT) systems have been recently used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquire the metrology information directly. On the other hand, CT systems generate the sinogram, which is transformed mathematically into pixel-based images. The dimensional information of the scanned object is extracted later by performing edge detection on reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of CT images since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the resulting object boundaries from the edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits in the proposed approach. First, since the boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced. The iterative approach, which can be computationally intensive, will be practical with the parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved.
Third, since the parametric reconstruction approach shares its boundary representation with other conventional metrology modalities such as CMM, boundary information from those modalities can be directly incorporated as prior knowledge to improve the convergence of an iterative approach. In this paper, the feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental data from an industrial CT system.
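
    The paper's algorithm reconstructs boundary parameters directly from the sinogram; those details are not reproduced here. As a simpler illustration of why a parametric boundary can beat a pixel grid, the sketch below fits circle parameters (centre, radius) to edge points by the algebraic (Kåsa) least-squares method, recovering sub-pixel dimensions even from coordinates rounded to an integer pixel grid. The circle model and test data are assumptions for illustration only.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c); the centre is
    (a, b) and the radius is sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)
```

    Fitting exact boundary points recovers the parameters to machine precision; fitting the same points rounded to the pixel grid still recovers the radius to well under a pixel, which is the essential advantage of a parametric representation over edge detection on the grid.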

  7. DAMT - DISTRIBUTED APPLICATION MONITOR TOOL (HP9000 VERSION)

    NASA Technical Reports Server (NTRS)

    Keith, B.

    1994-01-01

    Typical network monitors measure the status of host computers and data traffic among hosts. A monitor that collects statistics about individual processes must be unobtrusive and able to locate and monitor processes, locate and monitor circuits between processes, and report traffic back to the user through a single application program interface (API). DAMT, the Distributed Application Monitor Tool, is a distributed application program that collects network statistics and makes them available to the user. This distributed application has one component (i.e., process) on each host the user wishes to monitor, as well as a set of components at a centralized location. DAMT provides the first known implementation of a network monitor at the application layer of abstraction. Users only need to know the process names of the distributed application they wish to monitor. The tool locates the processes and the circuit between them, and reports any traffic between them at a user-defined rate. The tool operates without the cooperation of the processes it monitors: application processes require no changes to be monitored, nor does DAMT require the UNIX kernel to be recompiled. The tool obtains process and circuit information by accessing the operating system's existing process database, which contains all information available about currently executing processes. The information monitored by the tool can be expanded by utilizing more information from the process database. Traffic on a circuit between processes is monitored by a low-level LAN analyzer that has access to the raw network data. The tool also provides features such as dynamic event reporting and virtual path routing. A reusable object approach was used in the design of DAMT. The tool has four main components: the Virtual Path Switcher, the Central Monitor Complex, the Remote Monitor, and the LAN Analyzer. 
All of DAMT's components are independent, asynchronously executing processes. The independent processes communicate with each other via UNIX sockets through a Virtual Path router, or Switcher. The Switcher maintains a routing table showing the host of each component process of the tool, eliminating the need for each process to do so. The Central Monitor Complex provides the single application program interface (API) to the user and coordinates the activities of DAMT. The Central Monitor Complex is itself divided into independent objects that perform its functions. The component objects are the Central Monitor, the Process Locator, the Circuit Locator, and the Traffic Reporter. Each of these objects is an independent, asynchronously executing process. User requests to the tool are interpreted by the Central Monitor. The Process Locator identifies whether a named process is running on a monitored host and which host that is. The circuit between any two processes in the distributed application is identified using the Circuit Locator. The Traffic Reporter handles communication with the LAN Analyzer and accumulates traffic updates until it must send a traffic report to the user. The Remote Monitor process is replicated on each monitored host. It serves the Central Monitor Complex processes with application process information. The Remote Monitor process provides access to operating systems information about currently executing processes. It allows the Process Locator to find processes and the Circuit Locator to identify circuits between processes. It also provides lifetime information about currently monitored processes. The LAN Analyzer consists of two processes. Low-level monitoring is handled by the Sniffer. The Sniffer analyzes the raw data on a single, physical LAN. It responds to commands from the Analyzer process, which maintains the interface to the Traffic Reporter and keeps track of which circuits to monitor. 
DAMT is written in C-language for HP-9000 series computers running HP-UX and Sun 3 and 4 series computers running SunOS. DAMT requires 1Mb of disk space and 4Mb of RAM for execution. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. The HP-9000 version (GSC-13589) includes sample HP-9000/375 and HP-9000/730 executables which were compiled under HP-UX, and the Sun version (GSC-13559) includes sample Sun3 and Sun4 executables compiled under SunOS. The standard distribution medium for the HP version of DAMT is a .25 inch HP pre-formatted streaming magnetic tape cartridge in UNIX tar format. It is also available on a 4mm magnetic tape in UNIX tar format. The standard distribution medium for the Sun version of DAMT is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. DAMT was developed in 1992.
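
    DAMT's own Process Locator targeted HP-UX and SunOS and is not reproduced here. As a rough modern analogue of "accessing the operating system's existing process database", the Linux-only sketch below scans /proc for processes whose command name matches a query. This is an assumption-laden stand-in for illustration, not DAMT's implementation.

```python
import os

def locate_processes(name):
    """Scan the Linux process table (/proc) for processes whose
    command name (/proc/<pid>/comm) matches `name`; return their PIDs."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # skip non-process entries such as /proc/meminfo
        try:
            with open(f"/proc/{entry}/comm") as f:
                if f.read().strip() == name:
                    pids.append(int(entry))
        except OSError:
            continue  # the process may have exited in the meantime
    return pids
```

    Searching for the current interpreter's own command name returns (at least) the current PID; a DAMT-like tool would then use such PIDs to identify the circuits between located processes.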

  8. Shape and color conjunction stimuli are represented as bound objects in visual working memory.

    PubMed

    Luria, Roy; Vogel, Edward K

    2011-05-01

    The integrated-object view of visual working memory (WM) argues that objects (rather than features) are the building blocks of visual WM, so that adding an extra feature to an object incurs no extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity, suggesting that they may not be represented as bound objects. Additionally, it has been argued that two features from the same dimension (e.g., color-color) do not form an integrated object in visual WM. This has led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (CDA) as an electrophysiological marker of WM capacity to test these alternatives to the integrated-object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance across displays that had one object but varying degrees of feature complexity. The results supported the integrated-object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects, as well as radiative interactions between them. A method based on a radiosity model, which captures these complex effects, has been developed to enable an accurate simulation of the radiance distribution of city scenes. First, the physical processes affecting the IR characteristics of city scenes were described. Second, heat balance equations were formed by combining the atmospheric conditions, shadow maps, and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces. A radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, we obtained the IR characteristics of the scene. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes: it effectively reproduces infrared shadow effects and the radiative interactions between objects.
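
    The paper's full thermal model is not given in the abstract, but the classical radiosity balance it builds on can be sketched. For n surface elements with emitted radiance E, reflectance ρ, and view-factor matrix F, the radiosities B satisfy B = E + diag(ρ) F B, which is a linear system. The symbol names and the toy numbers in the check below are assumptions for illustration.

```python
import numpy as np

def solve_radiosity(E, rho, F):
    """Solve the radiosity balance B = E + diag(rho) @ F @ B by
    rearranging it to (I - diag(rho) F) B = E and solving for B."""
    n = len(E)
    A = np.eye(n) - np.diag(rho) @ F
    return np.linalg.solve(A, E)
```

    For a small scene the solution can be verified by substituting it back into the balance; each surface's radiosity is its own emission plus whatever it reflects of the radiation arriving from the others.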

  10. An image processing and analysis tool for identifying and analysing complex plant root systems in 3D soil using non-destructive analysis: Root1.

    PubMed

    Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M

    2017-01-01

    The objective of this study was to develop a flexible and free image processing and analysis solution, based on the public-domain ImageJ platform, for the segmentation and analysis of complex plant root systems in soil from 3D x-ray tomography images. Contrasting root architectures from wheat, barley and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allows the user to (a) amend any aspect of the macro to account for specific preferences, and (b) take account of computational limitations of the platform. The platform is free, flexible and accurate in analysing root system metrics.

  11. The role of temporo-parietal junction (TPJ) in global Gestalt perception.

    PubMed

    Huberle, Elisabeth; Karnath, Hans-Otto

    2012-07-01

    Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration is not only required for the cortical representation of individual objects, but is also essential for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametric degrading of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex and the precuneus. The TPJ location corresponds well with the areas typically lesioned in stroke patients with simultanagnosia following bilateral brain damage; these patients characteristically fail to identify the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation in the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.

  12. Experimental and Numerical Simulations of Phase Transformations Occurring During Continuous Annealing of DP Steel Strips

    NASA Astrophysics Data System (ADS)

    Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej

    2016-04-01

    Due to their exceptional strength combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process that requires precise control of technological parameters during thermo-mechanical treatment. The design of these processes can be significantly improved by numerical models of phase transformations. The objective of the paper was to evaluate the predictive capabilities of such models as applied to the simulation of thermal cycles for AHSS. Two models were considered: the former an upgrade of the JMAK equation, the latter an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior under more complex cooling conditions. The second set of tests comprised experimental simulations of the thermal cycle characteristic of continuous annealing lines, evaluating the capability of the models to properly describe phase transformations in this process. The third set used data from an industrial continuous annealing line. Validation and verification confirmed the models' good predictive capabilities. Since it does not require application of the additivity rule, the upgrade of the Leblond model was selected as the better choice for simulating industrial processes in AHSS production.
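
    The abstract does not give the authors' upgraded equations, but the baseline isothermal JMAK (Avrami) kinetics, and the additivity-rule stepping that a Leblond-type model lets one avoid, can be sketched as follows. The rate constants and exponents here are hypothetical.

```python
import numpy as np

def jmak_fraction(t, k, n):
    """Isothermal JMAK (Avrami) transformed fraction: X(t) = 1 - exp(-k t^n)."""
    return 1.0 - np.exp(-k * np.power(t, n))

def jmak_stepwise(times, n, k_of_T, T_of_t):
    """Non-isothermal JMAK via the additivity rule: at each step, find the
    fictitious time that would have produced the current fraction under the
    instantaneous kinetics k(T), then advance it by the step length."""
    X = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        k = k_of_T(T_of_t(times[i]))
        t_fic = (-np.log(max(1.0 - X, 1e-12)) / k) ** (1.0 / n)
        X = 1.0 - np.exp(-k * (t_fic + dt) ** n)
    return X
```

    For a constant temperature, the stepwise scheme reproduces the closed form, which is a useful sanity check before applying it to a genuine cooling path.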

  13. Modern Paradigm of Star Formation in the Galaxy

    NASA Astrophysics Data System (ADS)

    Sobolev, A. M.

    2017-06-01

    The scientific community's understanding of star formation processes in the Galaxy has undergone significant changes in recent years. This is largely due to the development of the observational basis of astronomy in the infrared and submillimeter ranges. Analysis of new observational data obtained in the course of the Herschel project, by the ALMA radio interferometer and by other modern facilities has significantly advanced our understanding of the structure of star-forming regions and the vicinities of young stellar objects, and has provided comprehensive data on the mass function of proto-stellar objects in a number of star-forming complexes of the Galaxy. Mapping of the complexes in molecular radio lines has made it possible to study their spatial and kinematic structure on scales of tens and hundreds of parsecs. The next breakthrough in this field can be achieved as a result of the planned project “Spektr-MM” (Millimetron), which implies a significant improvement in angular resolution and sensitivity. The use of sensitive interferometers has allowed investigation of the details of star formation processes at small spatial scales, down to the size of the solar system (with the help of ALMA) and even of the Sun (in the course of the space project “Spektr-R” = RadioAstron). A significant contribution to the study of accretion processes is expected from the project “Spektr-UV” (WSO-UV = “World Space Observatory - Ultraviolet”). Complemented by significant theoretical achievements, the observational data obtained have greatly advanced our understanding of star formation processes.

  14. Pharmacometric Models for Characterizing the Pharmacokinetics of Orally Inhaled Drugs.

    PubMed

    Borghardt, Jens Markus; Weber, Benjamin; Staab, Alexander; Kloft, Charlotte

    2015-07-01

    During the last decades, the importance of modeling and simulation in clinical drug development, with the goal of qualitatively and quantitatively assessing and understanding mechanisms of pharmacokinetic processes, has strongly increased. However, this increase has not equally been observed for orally inhaled drugs. The objectives of this review are to understand the reasons for this gap and to demonstrate the opportunities that mathematical modeling of the pharmacokinetics of orally inhaled drugs offers. To achieve these objectives, this review (i) discusses pulmonary physiological processes and their impact on the pharmacokinetics after drug inhalation, (ii) provides a comprehensive overview of published pharmacokinetic models, (iii) categorizes these models into physiologically based pharmacokinetic (PBPK) and (clinical data-derived) empirical models, (iv) explores their (mechanistic) plausibility, and (v) addresses critical aspects of different pharmacometric approaches pertinent to drug inhalation. In summary, pulmonary deposition, dissolution, and absorption are highly complex processes and may represent the major challenge for modeling and simulation of PK after oral drug inhalation. Challenges in relating systemic pharmacokinetics to pulmonary efficacy may be another factor contributing to the limited number of existing pharmacokinetic models for orally inhaled drugs. Investigations comprising in vitro experiments, clinical studies, and more sophisticated mathematical approaches are considered necessary for elucidating these highly complex pulmonary processes. With this additional knowledge, the PBPK approach may gain additional attractiveness. Currently, (semi-)mechanistic modeling offers an alternative to generate and investigate hypotheses and to understand more mechanistically the pulmonary and systemic pharmacokinetics after oral drug inhalation, including the impact of pulmonary diseases.
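
    As a minimal instance of the "empirical" model class the review discusses, a one-compartment model with first-order absorption and elimination yields the classic Bateman profile. The parameter values below are hypothetical, and real inhaled-drug models add, among other things, pulmonary deposition and dissolution compartments.

```python
import numpy as np

def bateman(t, dose, F, ka, ke, V):
    """One-compartment plasma concentration after extravascular dosing:
    first-order absorption (ka) and elimination (ke), with ka != ke.
    F is the bioavailable fraction, V the volume of distribution."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
```

    The concentration starts at zero, rises to a peak at t_max = ln(ka/ke)/(ka - ke), and then declines; fitting such curves to plasma data is the simplest form of the clinical data-derived models the review categorizes.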

  15. Hypnosis for complex trauma survivors: four case studies.

    PubMed

    Poon, Maggie Wai-ling

    2009-01-01

    This report describes a phase-oriented treatment of complex trauma in four Chinese women. Two were survivors of childhood sexual abuse, one was a rape victim, and the other was a battered spouse. A phase-oriented treatment tailored to the needs of the clients was used, with a framework consisting of three phases: stabilization, trauma processing, and integration. Hypnotic techniques were used in these phases as means for grounding and stabilization, for accessing the traumatic memories, and for consolidating the gains. Data from self-reports, observation and objective measures indicate a significant reduction in trauma symptoms after treatment.

  16. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.

  17. What do we gain from simplicity versus complexity in species distribution models?

    USGS Publications Warehouse

    Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane

    2014-01-01

    Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
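
    The trade-off the authors describe can be made concrete with a toy example (all data and model choices here are hypothetical): a simulated unimodal occurrence-environment response fit with polynomials of increasing flexibility. Training error can only fall as flexibility grows, which is exactly why fit to the training data alone cannot arbitrate between a moderately flexible and an over-flexible model.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 60)                      # environmental gradient
true_response = np.exp(-((x - 0.2) ** 2) / 0.1)     # unimodal "niche" curve
y = true_response + rng.normal(0.0, 0.1, x.size)    # noisy occurrence index

def train_rmse(degree):
    """Root-mean-square error of a degree-`degree` polynomial fit,
    evaluated on its own training data."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

errs = {d: train_rmse(d) for d in (1, 3, 12)}       # under-, mid-, over-fit
```

    Because the polynomial families are nested, least squares guarantees the degree-12 training error is no larger than the degree-3 error, which in turn is no larger than the degree-1 error; only held-out data, penalties, or process understanding can reveal that the most flexible model is partly fitting noise.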

  18. Discovering Tradeoffs, Vulnerabilities, and Dependencies within Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Reed, P. M.

    2015-12-01

    There is growing recognition of, and interest in, using emerging computational tools for discovering the tradeoffs that emerge across complex combinations of infrastructure options, adaptive operations, and signposts. As a field concerned with "deep uncertainties", it is logically consistent to acknowledge more directly that our choices for dealing with computationally demanding simulations, advanced search algorithms, and sensitivity analysis tools are themselves subject to failures that could adversely bias our understanding of how systems' vulnerabilities change with proposed actions. Balancing simplicity versus complexity in our computational frameworks is nontrivial given that we are often exploring high-impact, irreversible decisions. It is not always clear that accepted models even encompass important failure modes. Moreover, as models become more complex and computationally demanding, the benefits and consequences of simplifications often go untested. This presentation discusses our efforts to address these challenges through our "many-objective robust decision making" (MORDM) framework for the design and management of water resources systems. The MORDM framework has four core components: (1) elicited problem conception and formulation, (2) parallel many-objective search, (3) interactive visual analytics, and (4) negotiated selection of robust alternatives. Problem conception and formulation is the process of abstracting a practical design problem into a mathematical representation. We build on emerging work in visual analytics to exploit interactive visualization of both the design space and the objective space in multiple heterogeneous linked views that permit exploration and discovery. Many-objective search produces tradeoff solutions from potentially competing problem formulations, each of which can consider up to ten conflicting objectives given current computational search capabilities. 
Negotiated design selection uses interactive visualization, reformulation, and optimization to discover desirable designs for implementation. Multi-city urban water supply portfolio planning will be used to illustrate the MORDM framework.
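
    The many-objective search step returns non-dominated (Pareto-optimal) tradeoff solutions. What "non-dominated" means can be sketched compactly; the objective values below are made up, and all objectives are minimized.

```python
import numpy as np

def pareto_front(points):
    """Indices of the non-dominated points (all objectives minimized):
    a point is dominated if some other point is <= in every objective
    and strictly < in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep
```

    In a two-objective example such as [[1, 4], [2, 3], [3, 2], [4, 1], [3, 3], [5, 5]], the first four points trade one objective against the other, while [3, 3] and [5, 5] are dominated and would be discarded by the search.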

  19. Process Consistency in Models: the Importance of System Signatures, Expert Knowledge and Process Complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Gascuel-Odoux, Chantal; Savenije, Hubert

    2014-05-01

    Hydrological models are frequently characterized by what is often considered adequate calibration performance. In many cases, however, these models suffer substantial uncertainty and performance decreases in validation periods, resulting in poor predictive power. Besides the likely presence of data errors, this observation can point towards wrong or insufficient representations of the underlying processes and their heterogeneity; in other words, the right results are generated for the wrong reasons. Ways are therefore sought to increase model consistency and thereby satisfy the contrasting priorities of (a) increasing model complexity and (b) limiting model equifinality. In this study a stepwise model development approach is chosen to test the value of an exhaustive and systematic combined use of hydrological signatures, expert knowledge and readily available, yet anecdotal and rarely exploited, hydrological information for increasing model consistency towards generating the right answer for the right reasons. A simple 3-box, 7-parameter, conceptual HBV-type model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph, with comparatively high values for the 4 objective functions in the 5-year calibration period. Closer inspection of the results, however, showed a dramatic decrease of model performance in the 5-year validation period. In addition, assessing the model's skill at reproducing a range of 20 hydrological signatures, including, amongst others, the flow duration curve, the autocorrelation function and the rising limb density, showed that it could not adequately reproduce the vast majority of these signatures, indicating a lack of model consistency. Subsequently, model complexity was increased in a stepwise way to allow for more process heterogeneity. 
To limit model equifinality, increase in complexity was counter-balanced by a stepwise application of "realism constraints", inferred from expert knowledge (e.g. unsaturated storage capacity of hillslopes should exceed the one of wetlands) and anecdotal hydrological information (e.g. long-term estimates of actual evaporation obtained from the Budyko framework and long-term estimates of baseflow contribution) to ensure that the model is well behaved with respect to the modeller's perception of the system. A total of 11 model set-ups with increased complexity and an increased number of realism constraints was tested. It could be shown that in spite of largely unchanged calibration performance, compared to the simplest set-up, the most complex model set-up (12 parameters, 8 constraints) exhibited significantly increased performance in the validation period while uncertainty did not increase. In addition, the most complex model was characterized by a substantially increased skill to reproduce all 20 signatures, indicating a more suitable representation of the system. The results suggest that a model, "well" constrained by 4 calibration objective functions may still be an inadequate representation of the system and that increasing model complexity, if counter-balanced by realism constraints, can indeed increase predictive performance of a model and its skill to reproduce a range of hydrological signatures, but that it does not necessarily result in increased uncertainty. The results also strongly illustrate the need to move away from automated model calibration towards a more general expert-knowledge driven strategy of constraining models if a certain level of model consistency is to be achieved.
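
    Two of the signatures the study checks, the flow duration curve and the autocorrelation function, are straightforward to compute from a discharge series. The sketch below is a generic illustration (the Weibull plotting position and the synthetic series are assumptions), not the authors' code.

```python
import numpy as np

def flow_duration_curve(q):
    """Sort discharges in descending order and attach Weibull
    plotting-position exceedance probabilities i / (N + 1)."""
    q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]
    exceedance = np.arange(1, q_sorted.size + 1) / (q_sorted.size + 1)
    return exceedance, q_sorted

def autocorrelation(q, lag):
    """Sample autocorrelation of the series at a non-negative lag."""
    q = np.asarray(q, dtype=float)
    qm = q - q.mean()
    if lag == 0:
        return 1.0
    return float(np.dot(qm[:-lag], qm[lag:]) / np.dot(qm, qm))
```

    A smooth seasonal series, for instance, yields a gently sloping flow duration curve and a lag-1 autocorrelation close to one; a model is then scored on how well it reproduces such signature values, not just the hydrograph itself.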

  20. Role-Playing Games for Capacity Building in Water and Land Management: Some Brazilian Experiences

    ERIC Educational Resources Information Center

    Camargo, Maria Eugenia; Jacobi, Pedro Roberto; Ducrot, Raphaele

    2007-01-01

    Role-playing games in natural resource management are currently being tested as research, training, and intervention tools all over the world. Various studies point out their potential to deal with complex issues and to contribute to training processes. The objective of this contribution is to analyze the limits and potentialities of this tool for…

  1. Beneath the Tip of the Iceberg: Exploring the Multiple Forms of University-Industry Linkages

    ERIC Educational Resources Information Center

    Ramos-Vielba, Irene; Fernandez-Esquinas, Manuel

    2012-01-01

    This article focuses on the wide variety of channels through which the process of knowledge transfer occurs. The overall objective is to show the complexity of relationships between researchers and firms in a university system, and to identify some specific factors that influence such interactions. Our case study involves a face-to-face survey of…

  2. The microorganisms used for working in microbial fuel cells

    NASA Astrophysics Data System (ADS)

    Konovalova, E. Yu.; Stom, D. I.; Zhdanova, G. O.; Yuriev, D. A.; Li, Youming; Barbora, Lepakshi; Goswami, Pranab

    2018-04-01

    We investigated the use of various microorganisms as the biological component of microbial fuel cells (MFC), in which they transport electrons while processing various substrates. Most MFCs use complex substrates, and such MFCs are filled with associations of microorganisms. The article examines particular types of microorganisms for use in MFCs and characterizes the molecular mechanisms by which microorganisms transfer electrons into the environment.

  3. Social scientist's viewpoint on conflict management

    USGS Publications Warehouse

    Ertel, Madge O.

    1990-01-01

    Social scientists can bring to the conflict-management process objective, reliable information needed to resolve increasingly complex issues. Engineers need basic training in the principles of the social sciences and in strategies for public involvement. All scientists need to be sure that the information they provide is unbiased by their own value judgments and that fair standards and open procedures govern its use.

  4. Art appreciation and aesthetic feeling as objects of explanation.

    PubMed

    Hogan, Patrick Colm

    2013-04-01

    The target article presents a thought-provoking approach to the relation of neuroscience and art. However, at least two issues pose potential difficulties. The first concerns whether "art appreciation" is a coherent topic for scientific study. The second concerns the degree to which processing fluency can explain aesthetic feeling or may simply be one component of a more complex account.

  5. Mapping out the ICT Integration Terrain in the School Context: Identifying the Challenges in an Innovative Project

    ERIC Educational Resources Information Center

    Judge, Miriam

    2013-01-01

    This article discusses the research findings from the start-up phase of an innovative information and communication technology (ICT) project focused on ICT integration as a complex process involving many factors such as leadership, school readiness and organisational culture. Known locally as Hermes, the project's core objective was to provide an…

  6. Numerical simulations of water flow and tracer transport in soils at the USDA-ARS Beltsville OPE3 field site

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to develop a realistic model to simulate the complex processes of flow and tracer transport in USDA-ARS OPE3 field site and to compare simulation results with the detailed monitoring observations. The site has been studied for over 10 years with the extensive availabl...

  7. Calibration and validation of the SWAT model for a forested watershed in coastal South Carolina

    Treesearch

    Devendra M. Amatya; Elizabeth B. Haley; Norman S. Levine; Timothy J. Callahan; Artur Radecki-Pawlik; Manoj K. Jha

    2008-01-01

    Modeling the hydrology of low-gradient coastal watersheds on shallow, poorly drained soils is a challenging task due to the complexities in watershed delineation, runoff generation processes and pathways, flooding, and submergence caused by tropical storms. The objective of the study is to calibrate and validate a GIS-based spatially-distributed hydrologic model, SWAT...

  8. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.

  9. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713
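The core of the approach above is an objective function that scores how well a candidate circle matches a WBC in the edge map. A minimal sketch of that idea follows: score a candidate circle by the fraction of its sampled perimeter points that land on edge pixels. The function name and sampling scheme are assumptions for illustration, not the paper's exact formulation; in the full algorithm, EMO would evolve the candidate parameters (cx, cy, r) to maximize this score.

```python
import numpy as np

def circle_objective(edge_map, cx, cy, r, n_samples=360):
    """Score a candidate circle by the fraction of its sampled
    perimeter points that land on edge pixels (higher = better fit)."""
    h, w = edge_map.shape
    angles = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    xs = np.round(cx + r * np.cos(angles)).astype(int)
    ys = np.round(cy + r * np.sin(angles)).astype(int)
    # Ignore perimeter samples that fall outside the image.
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    hits = edge_map[ys[inside], xs[inside]].sum()
    return hits / n_samples
```

A candidate circle coinciding with an edge-map circle scores near 1; a misplaced candidate scores near 0, giving the optimizer a smooth-enough landscape to search.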

  10. Characterization of NEOs from the Policy Perspective: Implications from Problem and Solution Definitions

    NASA Astrophysics Data System (ADS)

    Lindquist, E.

    2015-12-01

The characterization of near-Earth objects (NEOs) in regard to physical attributes and potential risk and impact factors presents a complex and complicated scientific and engineering challenge. The societal and policy risks and impacts are no less complex, yet are rarely considered in the same context as material properties or related factors. The objective of this contribution is to position the characterization of NEOs within the public policy process domain as a means to reflect on the science-policy nexus in regard to risks associated with NEOs. This will be accomplished through, first, a brief overview of the science-policy nexus, followed by a discussion of several policy process frameworks, such as agenda setting and the multiple streams model, focusing events, and punctuated equilibrium, and their application and appropriateness to the problem of NEOs. How, for example, does NEO hazard and risk compare with other low-probability, high-risk hazards in regard to public policy? Finally, we will reflect on the implications of alternative NEO "solutions" and the characterization of the NEO "problem," and the political and public acceptance of policy alternatives as a way to link NEO science and policy in the context of the overall NH004 panel.

  11. Solving the Big Data (BD) Problem in Advanced Manufacturing (Subcategory for work done at Georgia Tech. Study Process and Design Factors for Additive Manufacturing Improvement)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Brett W.; Diaz, Kimberly A.; Ochiobi, Chinaza Darlene

    2015-09-01

3D printing, originally known as additive manufacturing, is a process of making three-dimensional solid objects from a CAD file. This groundbreaking technology is widely used for industrial and biomedical purposes such as building objects, tools, body parts and cosmetics. An important benefit of 3D printing is cost reduction and manufacturing flexibility; complex parts are built at a fraction of the price. However, layer-by-layer printing of complex shapes adds error due to surface roughness, and any such error results in poor-quality products with inaccurate dimensions. The main purpose of this research is to measure the amount of printing error for parts with different geometric shapes and to analyze the errors to find optimal printing settings that minimize them. We use a Design of Experiments framework, and focus on studying parts with cone and ellipsoid shapes. We found that the orientation and the shape of geometric shapes have a significant effect on the printing error. From our analysis, we also determined the optimal orientation that gives the least printing error.

  12. Stroke-model-based character extraction from gray-level document images.

    PubMed

    Ye, X; Cheriet, M; Suen, C Y

    2001-01-01

    Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
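The double-edge stroke test above can be illustrated on a single scanline: a pixel counts as a stroke pixel only if a substantially brighter background pixel exists within the stroke-width limit on both sides, so wide dark regions fail the test. This is a hypothetical one-dimensional sketch of the idea (function name, thresholds, and window handling are assumptions), not the paper's 2-D implementation.

```python
import numpy as np

def stroke_mask_1d(scanline, max_width=5, contrast=50):
    """Mark dark-stroke pixels on a gray-level scanline: a pixel
    qualifies only if a pixel brighter by >= contrast exists within
    max_width positions on BOTH sides (the double-edge test)."""
    n = len(scanline)
    mask = np.zeros(n, dtype=bool)
    for i in range(n):
        left = any(scanline[j] - scanline[i] >= contrast
                   for j in range(max(0, i - max_width), i))
        right = any(scanline[j] - scanline[i] >= contrast
                    for j in range(i + 1, min(n, i + max_width + 1)))
        mask[i] = left and right
    return mask
```

Because the bright pixel must appear within `max_width` on both sides, a 3-pixel stroke is kept while the interior of a 10-pixel dark region is rejected, which is exactly how the stroke-width restriction suppresses large complex backgrounds.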

  13. State machine analysis of sensor data from dynamic processes

    DOEpatents

    Cook, William R.; Brabson, John M.; Deland, Sharon M.

    2003-12-23

A state machine model analyzes sensor data from dynamic processes at a facility to identify the actual processes that were performed at the facility during a period of interest for the purpose of remote facility inspection. An inspector can further input the expected operations into the state machine model and compare the expected, or declared, processes to the actual processes to identify undeclared processes at the facility. The state machine analysis enables the generation of knowledge about the state of the facility at all levels, from location of physical objects to complex operational concepts. Therefore, the state machine method and apparatus may benefit any agency or business with sensor-equipped facilities that store or manipulate expensive, dangerous, or controlled materials or information.
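The declared-versus-actual comparison described above can be sketched with a toy state machine that consumes sensor events, infers high-level processes, and flags any process the inspector did not declare. The event names, states, and transitions below are invented for illustration; the patented model is far richer.

```python
# Illustrative assumption: these states, events, and transitions are
# made up for the sketch, not taken from the patent.
TRANSITIONS = {
    ("idle", "door_open"): "occupied",
    ("occupied", "scale_change"): "material_moved",
    ("material_moved", "door_close"): "idle",
    ("occupied", "door_close"): "idle",
}

def infer_processes(events, start="idle"):
    """Run sensor events through the state machine and record the
    high-level processes implied by the states reached."""
    state, processes = start, []
    for event in events:
        state = TRANSITIONS.get((state, event), state)
        if state == "material_moved":
            processes.append("material_transfer")
    return processes

def undeclared(actual, declared):
    """Processes inferred from sensor data but absent from the
    inspector's declared list."""
    return [p for p in actual if p not in declared]
```

An event sequence of door-open, scale-change, door-close infers a material transfer; if the inspector declared no operations, that transfer is reported as undeclared.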

  14. A Bayesian alternative for multi-objective ecohydrological model specification

    NASA Astrophysics Data System (ADS)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
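The role of the error parameters can be seen in a minimal joint likelihood: each objective (streamflow, LAI) contributes a Gaussian term whose error standard deviation plays the part of an objective weight, and in the Bayesian setting those standard deviations receive priors and are sampled alongside the model parameters. This is a hedged sketch under an independent-Gaussian-errors assumption; the variable names are illustrative, not HYMOD/BGM code.

```python
import numpy as np

def joint_log_likelihood(q_obs, q_sim, lai_obs, lai_sim, sigma_q, sigma_lai):
    """Sum of independent Gaussian log-likelihoods for the two objectives.
    sigma_q and sigma_lai are the error parameters: their priors take the
    place of the ad-hoc objective weights of a Pareto calibration."""
    def gauss_ll(obs, sim, sigma):
        r = np.asarray(obs, dtype=float) - np.asarray(sim, dtype=float)
        return (-0.5 * r.size * np.log(2 * np.pi * sigma ** 2)
                - 0.5 * np.sum(r ** 2) / sigma ** 2)
    return gauss_ll(q_obs, q_sim, sigma_q) + gauss_ll(lai_obs, lai_sim, sigma_lai)
```

Shrinking one sigma amplifies that objective's influence on the posterior, which is the Bayesian analogue of raising its weight in a multi-objective calibration.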

  15. Early differential sensitivity of evoked-potentials to local and global shape during the perception of three-dimensional objects.

    PubMed

    Leek, E Charles; Roberts, Mark; Oliver, Zoe J; Cristino, Filipe; Pegna, Alan J

    2016-08-01

    Here we investigated the time course underlying differential processing of local and global shape information during the perception of complex three-dimensional (3D) objects. Observers made shape matching judgments about pairs of sequentially presented multi-part novel objects. Event-related potentials (ERPs) were used to measure perceptual sensitivity to 3D shape differences in terms of local part structure and global shape configuration - based on predictions derived from hierarchical structural description models of object recognition. There were three types of different object trials in which stimulus pairs (1) shared local parts but differed in global shape configuration; (2) contained different local parts but shared global configuration or (3) shared neither local parts nor global configuration. Analyses of the ERP data showed differential amplitude modulation as a function of shape similarity as early as the N1 component between 146-215 ms post-stimulus onset. These negative amplitude deflections were more similar between objects sharing global shape configuration than local part structure. Differentiation among all stimulus types was reflected in N2 amplitude modulations between 276-330 ms. sLORETA inverse solutions showed stronger involvement of left occipitotemporal areas during the N1 for object discrimination weighted towards local part structure. The results suggest that the perception of 3D object shape involves parallel processing of information at local and global scales. This processing is characterised by relatively slow derivation of 'fine-grained' local shape structure, and fast derivation of 'coarse-grained' global shape configuration. We propose that the rapid early derivation of global shape attributes underlies the observed patterns of N1 amplitude modulations. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Automated Rapid Prototyping of 3D Ceramic Parts

    NASA Technical Reports Server (NTRS)

    McMillin, Scott G.; Griffin, Eugene A.; Griffin, Curtis W.; Coles, Peter W. H.; Engle, James D.

    2005-01-01

    An automated system of manufacturing equipment produces three-dimensional (3D) ceramic parts specified by computational models of the parts. The system implements an advanced, automated version of a generic rapid-prototyping process in which the fabrication of an object having a possibly complex 3D shape includes stacking of thin sheets, the outlines of which closely approximate the horizontal cross sections of the object at their respective heights. In this process, the thin sheets are made of a ceramic precursor material, and the stack is subsequently heated to transform it into a unitary ceramic object. In addition to the computer used to generate the computational model of the part to be fabricated, the equipment used in this process includes: 1) A commercially available laminated-object-manufacturing machine that was originally designed for building woodlike 3D objects from paper and was modified to accept sheets of ceramic precursor material, and 2) A machine designed specifically to feed single sheets of ceramic precursor material to the laminated-object-manufacturing machine. Like other rapid-prototyping processes that utilize stacking of thin sheets, this process begins with generation of the computational model of the part to be fabricated, followed by computational sectioning of the part into layers of predetermined thickness that collectively define the shape of the part. Information about each layer is transmitted to rapid-prototyping equipment, where the part is built layer by layer. What distinguishes this process from other rapid-prototyping processes that utilize stacking of thin sheets are the details of the machines and the actions that they perform. In this process, flexible sheets of ceramic precursor material (called "green" ceramic sheets) suitable for lamination are produced by tape casting. The binder used in the tape casting is specially formulated to enable lamination of layers with little or no applied heat or pressure. The tape is cut into individual sheets, which are stacked in the sheet-feeding machine until used. The sheet-feeding machine can hold enough sheets for about 8 hours of continuous operation.

  17. Improving CNN Performance Accuracies With Min-Max Objective.

    PubMed

    Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning

    2017-06-09

    We propose a novel method for improving the performance accuracies of a convolutional neural network (CNN) without the need to increase the network complexity. We accomplish the goal by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with insignificant increases in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets with both the image classification and face verification tasks reveal that employing the proposed Min-Max objective in the training process can remarkably improve performance accuracies of a CNN model in comparison with the same model trained without using this objective.
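The min-within/max-between idea can be illustrated with a simplified batch penalty: the largest within-class pairwise distance minus the smallest between-class distance. Minimizing it compacts each object manifold while pushing different manifolds apart. This is a generic stand-in (the paper's per-layer, graph-based formulation differs), and it assumes each class in the batch has at least two samples.

```python
import numpy as np

def min_max_objective(features, labels):
    """Largest within-class pairwise distance minus smallest
    between-class distance on a feature batch. Lower is better:
    compact manifolds, well-separated classes."""
    # Full pairwise Euclidean distance matrix via broadcasting.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = d[same & off_diag].max()
    between = d[~same].min()
    return within - between
```

On a batch where classes form tight, separated clusters the penalty is negative; when class labels are interleaved across the same points it turns positive, which is the gradient signal the training objective exploits.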

  18. Moving Object Detection Using a Parallax Shift Vector Algorithm

    NASA Astrophysics Data System (ADS)

    Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.

    2018-07-01

    There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid search and the more sensitive matched filtering and synthetic tracking techniques, can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness to asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.
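The size of the parallax effect described above can be sketched with a small-angle, single-axis simplification: the pixel shift of a distant object is the sensor's baseline divided by the object's distance, converted to arcseconds and then pixels. This is a hypothetical illustration (the real shift-vector generator handles full 3-D geometry), but it shows why low-Earth-orbit motion makes asteroid tracks highly nonlinear.

```python
import numpy as np

def parallax_shifts(sensor_track_km, object_distance_km, pixel_scale_arcsec):
    """Per-frame pixel shift of a distant object caused purely by the
    sensor's own motion (parallax), under a small-angle, single-axis
    simplification. Illustrative sketch, not the described algorithm."""
    baseline_km = sensor_track_km - sensor_track_km[0]
    angle_rad = baseline_km / object_distance_km      # small-angle parallax
    angle_arcsec = np.degrees(angle_rad) * 3600.0
    return angle_arcsec / pixel_scale_arcsec
```

For a sensor traversing ~7000 km of its orbit while observing an object 0.1 au away at 1 arcsec/pixel, the induced shift is on the order of 100 pixels over the sequence: far too large for linear-motion search patterns to absorb.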

  19. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Low-Thrust Mission Design

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Ghosh, Alexander R.

    2015-01-01

    Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on a hypothetical mission to the main asteroid belt.

  20. A Systematic Review of Conceptual Frameworks of Medical Complexity and New Model Development.

    PubMed

    Zullig, Leah L; Whitson, Heather E; Hastings, Susan N; Beadles, Chris; Kravchenko, Julia; Akushevich, Igor; Maciejewski, Matthew L

    2016-03-01

    Patient complexity is often operationalized by counting multiple chronic conditions (MCC) without considering contextual factors that can affect patient risk for adverse outcomes. Our objective was to develop a conceptual model of complexity addressing gaps identified in a review of published conceptual models. We searched for English-language MEDLINE papers published between 1 January 2004 and 16 January 2014. Two reviewers independently evaluated abstracts and all authors contributed to the development of the conceptual model in an iterative process. From 1606 identified abstracts, six conceptual models were selected. One additional model was identified through reference review. Each model had strengths, but several constructs were not fully considered: 1) contextual factors; 2) dynamics of complexity; 3) patients' preferences; 4) acute health shocks; and 5) resilience. Our Cycle of Complexity model illustrates relationships between acute shocks and medical events, healthcare access and utilization, workload and capacity, and patient preferences in the context of interpersonal, organizational, and community factors. This model may inform studies on the etiology of and changes in complexity, the relationship between complexity and patient outcomes, and intervention development to improve modifiable elements of complex patients.

  1. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code--the reactive transport codes play a supporting role in this regard--but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  2. Beyond a series of security nets: Applying STAMP & STPA to port security

    DOE PAGES

    Williams, Adam D.

    2015-11-17

    Port security is an increasing concern considering the significant role of ports in global commerce and today’s increasingly complex threat environment. Current approaches to port security mirror traditional models of accident causality -- ‘a series of security nets’ based on component reliability and probabilistic assumptions. Traditional port security frameworks result in isolated and inconsistent improvement strategies. Recent work in engineered safety combines the ideas of hierarchy, emergence, control and communication into a new paradigm for understanding port security as an emergent complex system property. The ‘System-Theoretic Accident Model and Process (STAMP)’ is a new model of causality based on systems and control theory. The associated analysis process -- System Theoretic Process Analysis (STPA) -- identifies specific technical or procedural security requirements designed to work in coordination with (and be traceable to) overall port objectives. This process yields port security design specifications that can mitigate (if not eliminate) port security vulnerabilities related to an emphasis on component reliability, lack of coordination between port security stakeholders or economic pressures endemic in the maritime industry. As a result, this article aims to demonstrate how STAMP’s broader view of causality and complexity can better address the dynamic and interactive behaviors of social, organizational and technical components of port security.

  3. Beyond a series of security nets: Applying STAMP & STPA to port security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Adam D.

    Port security is an increasing concern considering the significant role of ports in global commerce and today’s increasingly complex threat environment. Current approaches to port security mirror traditional models of accident causality -- ‘a series of security nets’ based on component reliability and probabilistic assumptions. Traditional port security frameworks result in isolated and inconsistent improvement strategies. Recent work in engineered safety combines the ideas of hierarchy, emergence, control and communication into a new paradigm for understanding port security as an emergent complex system property. The ‘System-Theoretic Accident Model and Process (STAMP)’ is a new model of causality based on systems and control theory. The associated analysis process -- System Theoretic Process Analysis (STPA) -- identifies specific technical or procedural security requirements designed to work in coordination with (and be traceable to) overall port objectives. This process yields port security design specifications that can mitigate (if not eliminate) port security vulnerabilities related to an emphasis on component reliability, lack of coordination between port security stakeholders or economic pressures endemic in the maritime industry. As a result, this article aims to demonstrate how STAMP’s broader view of causality and complexity can better address the dynamic and interactive behaviors of social, organizational and technical components of port security.

  4. Material properties from contours: New insights on object perception.

    PubMed

    Pinna, Baingio; Deiana, Katia

    2015-10-01

    In this work we explored phenomenologically the visual complexity of the material attributes on the basis of the contours that define the boundaries of a visual object. The starting point is the rich and pioneering work done by Gestalt psychologists and, in more detail, by Rubin, who first demonstrated that contours contain most of the information related to object perception, like the shape, the color and the depth. In fact, by investigating simple conditions like those used by Gestalt psychologists, mostly consisting of contours only, we demonstrated that the phenomenal complexity of the material attributes emerges through appropriate manipulation of the contours. A phenomenological approach, analogous to the one used by Gestalt psychologists, was used to answer the following questions. What are contours? Which attributes can be phenomenally defined by contours? Are material properties determined only by contours? What is the visual syntactic organization of object attributes? The results of this work support the idea of a visual syntactic organization as a new kind of object formation process useful to understand the language of vision that creates well-formed attribute organizations. The syntax of visual attributes can be considered as a new way to investigate the modular coding and, more generally, the binding among attributes, i.e., the issue of how the brain represents the pairing of shape and material properties. Copyright © 2015. Published by Elsevier Ltd.

  5. Non-visual spatial tasks reveal increased interactions with stance postural control.

    PubMed

    Woollacott, Marjorie; Vander Velde, Timothy

    2008-05-07

    The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality-specific (visual vs. auditory) and code-specific (non-spatial vs. spatial) cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.

  6. Development of Three-Dimensional Completion of Complex Objects

    ERIC Educational Resources Information Center

    Soska, Kasey C.; Johnson, Scott P.

    2013-01-01

    Three-dimensional (3D) object completion, the ability to perceive the backs of objects seen from a single viewpoint, emerges at around 6 months of age. Yet, only relatively simple 3D objects have been used in assessing its development. This study examined infants' 3D object completion when presented with more complex stimuli. Infants…

  7. Decomposition-Based Decision Making for Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas K.; Mavris, DImitri N.

    2005-01-01

    Most practical engineering systems design problems have multiple and conflicting objectives. Furthermore, the satisfactory attainment level for each objective ( requirement ) is likely uncertain early in the design process. Systems with long design cycle times will exhibit more of this uncertainty throughout the design process. This is further complicated if the system is expected to perform for a relatively long period of time, as now it will need to grow as new requirements are identified and new technologies are introduced. These points identify a need for a systems design technique that enables decision making amongst multiple objectives in the presence of uncertainty. Traditional design techniques deal with a single objective or a small number of objectives that are often aggregates of the overarching goals sought through the generation of a new system. Other requirements, although uncertain, are viewed as static constraints to this single or multiple objective optimization problem. With either of these formulations, enabling tradeoffs between the requirements, objectives, or combinations thereof is a slow, serial process that becomes increasingly complex as more criteria are added. This research proposal outlines a technique that attempts to address these and other idiosyncrasies associated with modern aerospace systems design. The proposed formulation first recasts systems design into a multiple criteria decision making problem. The now multiple objectives are decomposed to discover the critical characteristics of the objective space. Tradeoffs between the objectives are considered amongst these critical characteristics by comparison to a probabilistic ideal tradeoff solution. The proposed formulation represents a radical departure from traditional methods. A pitfall of this technique is in the validation of the solution: in a multi-objective sense, how can a decision maker justify a choice between non-dominated alternatives? 
A series of examples help the reader to observe how this technique can be applied to aerospace systems design and compare the results of this so-called Decomposition-Based Decision Making to more traditional design approaches.
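The validation question raised above turns on Pareto non-dominance: among the non-dominated alternatives, no one design is better on every objective. A minimal sketch (objective names and values are illustrative, not from the proposal):

```python
# Minimal sketch of Pareto dominance for a set of design alternatives.
# All objectives are assumed to be minimized; values are illustrative.

def dominates(a, b):
    """True if alternative `a` Pareto-dominates `b`:
    no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(alternatives):
    """Return the non-dominated subset of a list of objective vectors."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b != a)]

designs = [(3.0, 5.0), (4.0, 4.0), (5.0, 3.0), (5.0, 5.0)]  # (cost, weight)
print(pareto_front(designs))  # the (5.0, 5.0) design is dominated by (4.0, 4.0)
```

Choosing among the three surviving designs is exactly the step that requires the decision maker's tradeoff preferences, which is what the proposed decomposition-based method aims to structure.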

  8. Project Integration Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2008-01-01

    The Project Integration Architecture (PIA) is a distributed, object-oriented, conceptual, software framework for the generation, organization, publication, integration, and consumption of all information involved in any complex technological process in a manner that is intelligible to both computers and humans. In the development of PIA, it was recognized that in order to provide a single computational environment in which all information associated with any given complex technological process could be viewed, reviewed, manipulated, and shared, it is necessary to formulate all the elements of such a process on the most fundamental level. In this formulation, any such element is regarded as being composed of any or all of three parts: input information, some transformation of that input information, and some useful output information. Another fundamental principle of PIA is the assumption that no consumer of information, whether human or computer, can be assumed to have any useful foreknowledge of an element presented to it. Consequently, a PIA-compliant computing system is required to be ready to respond to any questions, posed by the consumer, concerning the nature of the proffered element. In colloquial terms, a PIA-compliant system must be prepared to provide all the information needed to place the element in context. To satisfy this requirement, PIA extends the previously established object-oriented-programming concept of self-revelation and applies it on a grand scale. To enable pervasive use of self-revelation, PIA exploits another previously established object-oriented-programming concept - that of semantic infusion through class derivation. By means of self-revelation and semantic infusion through class derivation, a consumer of information can inquire about the contents of all information entities (e.g., databases and software) and can interact appropriately with those entities. Other key features of PIA are listed.
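The two ideas named above can be illustrated with a small sketch; this is not PIA's actual API, just a hypothetical rendering of self-revelation (an element answers questions about itself) and semantic infusion through class derivation (meaning carried by the inheritance chain):

```python
# Illustrative sketch only (not PIA's actual interface).
# "Self-revelation": every element can report its own nature on request.
# "Semantic infusion through class derivation": the class lineage itself
# tells a consumer what kind of thing the element is.

class Element:
    """Base element: input -> transformation -> output."""
    def reveal(self):
        # Report the semantic lineage plus the data this element carries,
        # so a consumer with no foreknowledge can place it in context.
        lineage = [c.__name__ for c in type(self).__mro__ if c is not object]
        return {"lineage": lineage, "contents": vars(self)}

class Pressure(Element):   # semantic infusion: a Pressure *is an* Element
    def __init__(self, value, units):
        self.value, self.units = value, units

p = Pressure(101.3, "kPa")
print(p.reveal())
# {'lineage': ['Pressure', 'Element'], 'contents': {'value': 101.3, 'units': 'kPa'}}
```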

  9. Symbolic play and language development.

    PubMed

    Orr, Edna; Geva, Ronny

    2015-02-01

    Symbolic play and language are known to be highly interrelated, but the developmental process involved in this relationship is not clear. Three hypothetical paths were postulated to explore how play and language drive each other: (1) direct paths, whereby initiation of basic forms in symbolic action or babbling will be directly related to all later-emerging language and motor outputs; (2) an indirect interactive path, whereby basic forms in symbolic action will be associated with more complex forms in symbolic play, as well as with babbling, and babbling mediates the relationship between symbolic play and speech; and (3) a dual path, whereby basic forms in symbolic play will be associated with basic forms of language, and complex forms of symbolic play will be associated with complex forms of language. We micro-coded 288 symbolic vignettes gathered during a yearlong prospective bi-weekly examination (N=14; from 6 to 18 months of age). Results showed that the age of initiation of single-object symbolic play correlates strongly with the age of initiation of later-emerging symbolic and vocal outputs; its frequency at initiation is correlated with frequency at initiation of babbling, later-emerging speech, and multi-object play. Results support the notion that single-object play relates to the development of other symbolic forms via a direct relationship and an indirect relationship, rather than via a dual path. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    PubMed

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of the seventeen objects is represented with more than 80 site-specific parameters, of which about 22 are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. 
In light of the uncertainties concerning the biosphere on very long timescales, stylised biosphere models are shown to provide a useful point of reference in themselves and remain a valuable tool for nuclear waste disposal licensing procedures. Copyright © 2015 Elsevier Ltd. All rights reserved.
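The kind of stylised bounding model contrasted with SKB's landscape approach can be as simple as a single well-mixed compartment with first-order loss. The sketch below is purely illustrative: the parameter values and the one-compartment structure are assumptions for exposition, not taken from the SR-Site assessment.

```python
# A minimal stylised biosphere compartment (illustrative parameters only).
# One well-mixed compartment receiving a constant release R (Bq/yr) into
# volume V (m^3) with first-order loss k (1/yr):
#   dC/dt = R/V - k*C,  solved analytically.

import math

def concentration(t, release_rate, volume, loss_rate, c0=0.0):
    """Concentration (Bq/m^3) after t years in a well-mixed compartment."""
    steady = release_rate / (volume * loss_rate)
    return steady + (c0 - steady) * math.exp(-loss_rate * t)

# Hypothetical numbers: 1e6 Bq/yr into 1e4 m^3, loss rate 0.05 /yr.
for t in (1, 10, 100, 1000):
    print(t, "yr:", round(concentration(t, 1e6, 1e4, 0.05), 1), "Bq/m^3")
```

A closed-form bound like this is the sense in which a stylised model can check the reasonableness of a model with thousands of site-specific inputs.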

  11. Challenges in Biomarker Discovery: Combining Expert Insights with Statistical Analysis of Complex Omics Data

    PubMed Central

    McDermott, Jason E.; Wang, Jing; Mitchell, Hugh; Webb-Robertson, Bobbie-Jo; Hafen, Ryan; Ramey, John; Rodland, Karin D.

    2012-01-01

    Introduction The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful molecular signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities for more sophisticated approaches to integrating purely statistical and expert knowledge-based approaches. Areas covered In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered in deriving valid and useful signatures of disease. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to identify predictive signatures of disease are key to future success in the biomarker field. We will describe our recommendations for possible approaches to this problem including metrics for the evaluation of biomarkers. PMID:23335946

  12. Challenges in Biomarker Discovery: Combining Expert Insights with Statistical Analysis of Complex Omics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Wang, Jing; Mitchell, Hugh D.

    2013-01-01

    The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities both for purely statistical and expert knowledge-based approaches and would benefit from improved integration of the two. Areas covered In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to biomarker discovery and characterization are key to future success in the biomarker field. We will describe our recommendations of possible approaches to this problem including metrics for the evaluation of biomarkers.

  13. Possible disruption of remote viewing by complex weak magnetic fields around the stimulus site and the possibility of accessing real phase space: a pilot study.

    PubMed

    Koren, S A; Persinger, M A

    2002-12-01

    In 2002 Persinger, Roll, Tiller, Koren, and Cook considered whether there are physical processes by which recondite information exists within the space and time of objects or events. The stimuli that compose this information might be directly detected within the whole brain without being processed by the typical sensory modalities. We tested the artist Ingo Swann, who could reliably draw and describe randomly selected photographs sealed in envelopes in another room. In the present experiment the photographs were immersed continuously in repeated presentations (5 times per sec.) of one of two types of computer-generated complex magnetic field patterns whose intensities were less than 20 nT over most of the area. WINDOWS-generated but not DOS-generated patterns were associated with a marked decrease in Mr. Swann's accuracy. Whereas the DOS software generated exactly the same pattern, the WINDOWS software phase-modulated the actual wave form, resulting in an infinite bandwidth and complexity. We suggest that information obtained by processes attributed to "paranormal" phenomena has physical correlates that can be masked by weak, infinitely variable magnetic fields.

  14. Aggregated Indexing of Biomedical Time Series Data

    PubMed Central

    Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.

    2016-01-01

    Remote and wearable medical sensing has the potential to create very large and high dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high dimensional indexing methods are a two stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. This algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. The algorithm takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitude faster when run on aggregated indexes. PMID:27617298
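The core aggregation idea can be sketched with a random-projection LSH: similar segments hash to the same bucket, and the index stores one representative per bucket rather than every segment. This is an illustrative simplification, not the paper's exact scheme.

```python
# Sketch: hash fixed-length segments with random-hyperplane LSH and keep one
# centroid per bucket, so the index grows with the number of distinct
# patterns rather than the number of segments. (Illustrative only.)

import random

random.seed(0)
DIM, BITS = 16, 8
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh(seg):
    """Sign of the projection onto each random hyperplane -> a BITS-bit key."""
    return tuple(int(sum(p * x for p, x in zip(plane, seg)) >= 0)
                 for plane in planes)

def aggregate(segments):
    """Group similar segments; the index keeps one centroid per bucket."""
    buckets = {}
    for seg in segments:
        buckets.setdefault(lsh(seg), []).append(seg)
    return {key: [sum(col) / len(group) for col in zip(*group)]
            for key, group in buckets.items()}

# Many noisy copies of the same cycle collapse into very few index entries.
base = [random.gauss(0, 1) for _ in range(DIM)]
noisy = [[x + random.gauss(0, 0.01) for x in base] for _ in range(50)]
index = aggregate(noisy)
print(len(index), "index entr(ies) for 50 similar segments")
```

Because each segment is hashed once, the grouping runs online; a query is then pruned against centroids instead of all stored objects.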

  15. Improved silicon carbide for advanced heat engines

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas J.

    1987-01-01

    This is the second annual technical report entitled, Improved Silicon Carbide for Advanced Heat Engines, and includes work performed during the period February 16, 1986 to February 15, 1987. The program is conducted for NASA under contract NAS3-24384. The objective is the development of high strength, high reliability silicon carbide parts with complex shapes suitable for use in advanced heat engines. The fabrication methods used are to be adaptable for mass production of such parts on an economically sound basis. Injection molding is the forming method selected. This objective is to be accomplished in a two-phase program: (1) to achieve a 20 percent improvement in strength and a 100 percent increase in Weibull modulus of the baseline material; and (2) to produce a complex shaped part, a gas turbine rotor, for example, with the improved mechanical properties attained in the first phase. Eight tasks are included in the first phase covering the characterization of the properties of a baseline material, the improvement of those properties and the fabrication of complex shaped parts. Activities during the first contract year concentrated on two of these areas: fabrication and characterization of the baseline material (Task 1) and improvement of material and processes (Task 7). Activities during the second contract year included an MOR bar matrix study to improve mechanical properties (Task 2), materials and process improvements (Task 7), and a Ford-funded task to mold a turbocharger rotor with an improved material (Task 8).

  16. Slow feature analysis: unsupervised learning of invariances.

    PubMed

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending only on the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
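The core of the method described (whitening followed by PCA of the time derivative, keeping the smallest-variance directions) can be sketched in its linear form; the quadratic expansion used in the full algorithm is omitted here, and the toy signal is an assumption for illustration:

```python
# Minimal linear SFA sketch: whiten the signal, then take the direction in
# which the time derivative has least variance (the slowest feature).
import numpy as np

def sfa(x):
    """x: (T, n) signal. Returns weights; first column = slowest feature."""
    x = x - x.mean(axis=0)
    # Whiten via PCA of the covariance matrix.
    d, e = np.linalg.eigh(np.cov(x.T))
    w_white = e / np.sqrt(d)              # columns scaled to unit variance
    z = x @ w_white
    # PCA of the derivative; smallest-eigenvalue directions vary slowest.
    dz = np.diff(z, axis=0)
    _, e2 = np.linalg.eigh(np.cov(dz.T))
    return w_white @ e2                   # eigh sorts ascending: col 0 slowest

# A slow sine mixed into both channels with a fast carrier:
# SFA should recover the slow component.
t = np.linspace(0, 2 * np.pi, 500)
slow, fast = np.sin(t), np.sin(37 * t)
x = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])
w = sfa(x)
y = (x - x.mean(axis=0)) @ w[:, 0]
print("correlation with slow source:", round(float(abs(np.corrcoef(y, slow)[0, 1])), 3))
```

Ordering features by the derivative-variance eigenvalues is what yields the "degree of invariance" ranking mentioned in the abstract.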

  17. Functional coupling of sensorimotor and associative areas during a catching ball task: a qEEG coherence study

    PubMed Central

    2012-01-01

    Background Catching an object is a complex movement that involves not only programming but also effective motor coordination. Such behavior is related to the activation and recruitment of cortical regions that participate in the sensorimotor integration process. This study aimed to elucidate the cortical mechanisms involved in anticipatory actions when performing a task of catching an object in free fall. Methods Quantitative electroencephalography (qEEG) was recorded using a 20-channel EEG system while 20 healthy right-handed participants performed the catching-ball task. We used EEG coherence analysis to investigate subdivisions of the alpha (8-12 Hz) and beta (12-30 Hz) bands, which are related to cognitive processing and sensorimotor integration. Results We found main effects for the factor block: for alpha-1, coherence decreased from the first to the sixth block, whereas the opposite occurred for alpha-2 and beta-2, with coherence increasing across blocks. Conclusion To perform our task successfully, which involved anticipatory processes (i.e., feedback mechanisms), subjects exhibited strong involvement of sensorimotor and associative areas, possibly due to organization of information to process visuospatial parameters and catch the falling object. PMID:22364485

  18. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Simulated 3D scenes of simple non-overlapping objects, as well as real camera data of images of 3D objects of low complexity, have been successfully interpreted.

  19. Identification of the dominant hydrological process and appropriate model structure of a karst catchment through stepwise simplification of a complex conceptual model

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Wu, Jichun; Jiang, Guanghui; Kang, Zhiqiang

    2017-05-01

    Conceptual models often suffer from over-parameterization due to limited data available for calibration. This leads to parameter non-uniqueness and equifinality, which may introduce much uncertainty into the simulation results. Finding the appropriate model structure supported by the available data to simulate a catchment is still a major challenge in hydrological research. In this paper, we adopt a multi-model framework to identify the dominant hydrological process and appropriate model structure of a karst spring located in Guilin city, China. For this catchment, the spring discharge is the only data available for model calibration. The framework starts with a relatively complex conceptual model based on the perception of the catchment; this complex model is then simplified into several different models by gradually removing model components. A multi-objective approach is used to compare the performance of these different models, and regional sensitivity analysis (RSA) is used to investigate parameter identifiability. The results show that this karst spring is mainly controlled by two different hydrological processes, one of which is threshold-driven, consistent with the fieldwork investigation. However, the appropriate model structure to simulate the discharge of this spring is much simpler than the actual aquifer structure and the hydrological process understanding from the fieldwork investigation. A simple linear reservoir with two different outlets is enough to simulate this spring discharge; the detailed runoff process in the catchment is not needed in the conceptual model. A more complex model would require additional data to avoid serious deterioration of model predictions.
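The identified structure, a single linear reservoir with a lower outlet that is always active and an upper outlet that activates above a storage threshold, can be sketched directly; all parameter values below are illustrative, not the paper's calibrated ones:

```python
# Sketch of a linear reservoir with two outlets, the upper one
# threshold-driven (parameter values are hypothetical).

def simulate(recharge, k1=0.05, k2=0.4, s_thr=50.0, s0=20.0):
    """Step a daily storage balance; returns spring discharge per step."""
    s, q = s0, []
    for r in recharge:
        q1 = k1 * s                        # lower outlet: slow, always active
        q2 = k2 * max(0.0, s - s_thr)      # upper outlet: fast, above threshold
        s = s + r - q1 - q2                # water balance update
        q.append(q1 + q2)
    return q

wet = [10.0] * 30                          # a month of steady recharge
dry = [0.0] * 30                           # followed by a dry month
q = simulate(wet + dry)
print("peak:", round(max(q), 2), " final recession:", round(q[-1], 2))
```

The threshold term is what produces the two distinct discharge regimes (flashy response in wet periods, slow recession in dry ones) that the multi-model comparison attributed to this spring.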

  20. Process-Improvement Cost Model for the Emergency Department.

    PubMed

    Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin

    2015-01-01

    The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.
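The simplification described, rolling up only process-activity costs and excluding diagnosis-specific costs, can be sketched as follows; the activity names and per-minute rates are hypothetical, not figures from the report:

```python
# Sketch of an activity-based ED process-cost roll-up with diagnostic and
# condition-specific costs stripped out. Rates are purely illustrative.

STAFF_RATE = {"triage": 1.2, "bed": 0.8, "boarding": 0.9}   # $ per minute

def visit_process_cost(minutes_by_activity):
    """Cost of one visit from process activities only (no diagnostics)."""
    return sum(STAFF_RATE[a] * m for a, m in minutes_by_activity.items())

# Cost impact of reducing boarding time from 90 to 30 minutes per visit.
before = visit_process_cost({"triage": 15, "bed": 120, "boarding": 90})
after = visit_process_cost({"triage": 15, "bed": 120, "boarding": 30})
print(f"savings per visit from reduced boarding: ${before - after:.2f}")
```

Because diagnosis-specific costs cancel out of such comparisons, the model isolates the savings attributable to the process change itself, which is the quantity a Lean Six Sigma project needs to validate.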

  1. Soft-sensing model of temperature for aluminum reduction cell on improved twin support vector regression

    NASA Astrophysics Data System (ADS)

    Li, Tao

    2018-06-01

    The complexity of the aluminum electrolysis process makes the temperature of aluminum reduction cells hard to measure directly. However, temperature is central to the control of aluminum production. To solve this problem, drawing on practice data from an aluminum plant, this paper presents a soft-sensing model of temperature for the aluminum electrolysis process based on Improved Twin Support Vector Regression (ITSVR). ITSVR eliminates the slow learning speed of Support Vector Regression (SVR) and the over-fitting risk of Twin Support Vector Regression (TSVR) by introducing a regularization term into the objective function of TSVR, which ensures the structural risk minimization principle and lower computational complexity. Finally, the model, with some other process variables as auxiliary inputs, predicts the temperature by ITSVR. The simulation results show that the soft-sensing model based on ITSVR is less time-consuming and generalizes better.

  2. Effects of Preretirement Work Complexity and Postretirement Leisure Activity on Cognitive Aging

    PubMed Central

    Finkel, Deborah; Pedersen, Nancy L.

    2016-01-01

    Objectives: We examined the influence of postretirement leisure activity on longitudinal associations between work complexity in main lifetime occupation and trajectories of cognitive change before and after retirement. Methods: Information on complexity of work with data, people, and things, leisure activity participation in older adulthood, and four cognitive factors (verbal, spatial, memory, and speed) was available from 421 individuals in the longitudinal Swedish Adoption/Twin Study of Aging. Participants were followed for an average of 14.2 years (SD = 7.1 years) and up to 23 years across eight cognitive assessments. Most of the sample (88.6%) completed at least three cognitive assessments. Results: Results of growth curve analyses indicated that higher complexity of work with people significantly attenuated cognitive aging in verbal skills, memory, and speed of processing controlling for age, sex, and education. When leisure activity was added, greater cognitive and physical leisure activity was associated with reduced cognitive aging in verbal skills, speed of processing, and memory (for cognitive activity only). Discussion: Engagement in cognitive or physical leisure activities in older adulthood may compensate for cognitive disadvantage potentially imposed by working in occupations that offer fewer cognitive challenges. These results may provide a platform to encourage leisure activity participation in those retiring from less complex occupations. PMID:25975289

  3. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  4. No Evidence for a Fixed Object Limit in Working Memory: Spatial Ensemble Representations Inflate Estimates of Working Memory Capacity for Complex Objects

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Alvarez, George A.

    2015-01-01

    A central question for models of visual working memory is whether the number of objects people can remember depends on object complexity. Some influential "slot" models of working memory capacity suggest that people always represent 3-4 objects and that only the fidelity with which these objects are represented is affected by object…

  5. The Scenario-Based Engineering Process (SEP): a user-centered approach for the development of health care systems.

    PubMed

    Harbison, K; Kelly, J; Burnell, L; Silva, J

    1995-01-01

    The Scenario-based Engineering Process (SEP) is a user-focused methodology for large and complex system design. This process supports new application development from requirements analysis with domain models to component selection, design and modification, implementation, integration, and archival placement. It is built upon object-oriented methodologies, domain modeling strategies, and scenario-based techniques to provide an analysis process for mapping application requirements to available components. We are using SEP in the health care applications that we are developing. The process has already achieved success in the manufacturing and military domains and is being adopted by many organizations. SEP should prove viable in any domain containing scenarios that can be decomposed into tasks.

  6. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations focused on a single-objective likelihood (streamflow or LAI) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
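A weighted multi-objective likelihood of the kind compared above can be sketched as a convex combination of per-variable Gaussian log-likelihoods. The weighting form and error variances below are assumptions for illustration; the study's exact error models may differ.

```python
# Sketch of a weighted multi-objective Gaussian log-likelihood combining
# streamflow and LAI residuals (weights and sigmas are illustrative).

import math

def gaussian_loglik(obs, sim, sigma):
    """Independent-Gaussian log-likelihood of residuals obs - sim."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (o - s)**2 / (2 * sigma**2) for o, s in zip(obs, sim))

def joint_loglik(q_obs, q_sim, lai_obs, lai_sim,
                 w=0.5, sigma_q=1.0, sigma_lai=0.2):
    """Weighted sum of streamflow and LAI log-likelihoods, w in [0, 1]."""
    return (w * gaussian_loglik(q_obs, q_sim, sigma_q)
            + (1 - w) * gaussian_loglik(lai_obs, lai_sim, sigma_lai))

# w = 1 reduces to the streamflow-only objective; w = 0 to LAI-only.
print(joint_loglik([2.0, 3.0], [2.1, 2.8], [1.0], [1.1], w=0.7))
```

In an MCMC sampler this joint log-likelihood (plus the log-prior) is the quantity evaluated at each proposal, so the choice of `w` directly shapes the posterior, which is the sensitivity the study examines.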

  7. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, could help engineers in designing a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data is utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, so there is a need for a surrogate simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is based on a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) models, as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. 
The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
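
    As a toy illustration of the surrogate-assisted optimization loop described above (a minimal sketch, not the authors' ELM-PSO implementation: `surrogate_cost` is a hypothetical stand-in for the trained ELM surrogate of BIOPLUME III, and the rates, costs, and penalty constants are invented), a basic particle swarm optimizer over a penalized cost function looks like this:

    ```python
    import random

    def surrogate_cost(rates):
        # Hypothetical stand-in for a trained surrogate: maps pumping rates to
        # operating cost plus a penalty when the predicted residual contaminant
        # concentration exceeds a regulatory limit of 1.0 (all values invented).
        cost = sum(r * 10.0 for r in rates)             # operating cost
        predicted_conc = 5.0 / (1.0 + sum(rates))       # toy "simulation"
        penalty = max(0.0, predicted_conc - 1.0) * 1e4  # constraint violation
        return cost + penalty

    def pso(objective, dim, n_particles=20, iters=200, lo=0.0, hi=2.0, seed=1):
        random.seed(seed)
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # inertia + cognitive + social velocity update
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    best_rates, best_cost = pso(surrogate_cost, dim=3)
    ```

    The penalty term keeps the swarm inside the regulatory constraint; in the paper's setting the objective would query the trained ELM rather than an analytic toy function.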

  8. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    PubMed Central

    Butler, Blake E.; Trainor, Laurel J.

    2012-01-01

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836

  9. STARS Proceedings (3-4 December 1991)

    DTIC Science & Technology

    1991-12-04

    PROJECT PROCESS OBJECTIVES & ASSOCIATED METRICS: Prioritize ECPs: complexity & error-history measures. Make vs Buy decisions: Effort & Quality (or...history measures; error-proneness and past histories of trouble with particular modules are very useful measures. Make vs Buy decisions: Does the...Effort offset the gain in Quality relative to buy... Effort and Quality (or defect rate) histories give helpful indications of how to make this decision

  10. Prescription Errors in Older Individuals with an Intellectual Disability: Prevalence and Risk Factors in the Healthy Ageing and Intellectual Disability Study

    ERIC Educational Resources Information Center

    Zaal, Rianne J.; van der Kaaij, Annemieke D. M.; Evenhuis, Heleen M.; van den Bemt, Patricia M. L. A.

    2013-01-01

    Prescribing pharmacotherapy for older individuals with an intellectual disability (ID) is a complex process, possibly leading to an increased risk of prescription errors. The objectives of this study were (1) to determine the prevalence of older individuals with an intellectual disability with at least one prescription error and (2) to identify…

  11. Interaction of chimera states in a multilayered network of nonlocally coupled oscillators

    NASA Astrophysics Data System (ADS)

    Goremyko, M. V.; Maksimenko, V. A.; Makarov, V. V.; Ghosh, D.; Bera, B.; Dana, S. K.; Hramov, A. E.

    2017-08-01

    The processes of formation and evolution of chimera states in a model multilayered network of nonlinear elements with complex coupling topology are studied. A two-layered network of nonlocally intralayer-coupled Kuramoto-Sakaguchi phase oscillators is taken as the object of investigation. The different regimes realized in this system as the degree of interlayer interaction is varied are demonstrated.
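
    A minimal sketch of the kind of system studied (illustrative parameter values only, not the paper's exact model or coupling constants): two rings of nonlocally coupled Kuramoto-Sakaguchi oscillators with weak one-to-one interlayer coupling, integrated with a simple Euler scheme.

    ```python
    import math, random

    def simulate(N=64, R=16, lam=0.5, sigma=0.1, alpha=1.45, dt=0.05, steps=400, seed=0):
        # Two rings of Kuramoto-Sakaguchi phase oscillators; each oscillator is
        # coupled to its 2R nearest neighbours within its layer (nonlocal
        # coupling) and to its counterpart in the other layer.
        random.seed(seed)
        theta = [[random.uniform(0, 2 * math.pi) for _ in range(N)] for _ in range(2)]
        for _ in range(steps):
            new = [row[:] for row in theta]
            for l in range(2):
                other = 1 - l
                for i in range(N):
                    coupling = sum(
                        math.sin(theta[l][(i + k) % N] - theta[l][i] - alpha)
                        for k in range(-R, R + 1) if k != 0
                    ) * lam / (2 * R)
                    inter = sigma * math.sin(theta[other][i] - theta[l][i])
                    new[l][i] = theta[l][i] + dt * (coupling + inter)
            theta = new
        return theta

    def order_parameter(phases):
        # |<e^{i*theta}>|: 1 for full synchrony, near 0 for incoherence.
        re = sum(math.cos(t) for t in phases) / len(phases)
        im = sum(math.sin(t) for t in phases) / len(phases)
        return math.hypot(re, im)

    theta = simulate()
    r0, r1 = order_parameter(theta[0]), order_parameter(theta[1])
    ```

    The per-layer order parameter distinguishes coherent (r near 1) from incoherent (r near 0) regimes; a chimera state shows coherent and incoherent groups coexisting within a single layer.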

  12. Joint Observational Research on Nocturnal Atmospheric Dispersion of Aerosols (JORNADA)

    DTIC Science & Technology

    2009-02-01

    physical processes in the NBL. Research Progress: July 2008-January 2009. Objective 1: Analysis of the Stationarity of Mesoscale Turbulence in the...data allows for a more complete understanding of the nocturnal boundary layer (NBL). We have analyzed lidar measurements of plume meander and...dispersion and their relationship to the complexities of NBL structure. Plume Dispersion: Vertical plume dispersion parameters (σz) were derived

  13. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
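
    The BPT stage can be conveyed with a toy 1-D analogue (a sketch of bottom-up region merging by spectral similarity, not the authors' implementation, which operates on SLIC super-pixels of a 2-D mosaic):

    ```python
    def bpt_merge(values, n_regions):
        # Toy 1-D analogue of Binary Partition Tree construction: start from
        # single-pixel regions and repeatedly merge the pair of *adjacent*
        # regions whose mean intensities are closest, until n_regions remain.
        regions = [[v] for v in values]  # each region is a list of pixel values
        while len(regions) > n_regions:
            means = [sum(r) / len(r) for r in regions]
            # pick the adjacent pair with the smallest difference in means
            i = min(range(len(regions) - 1), key=lambda k: abs(means[k] - means[k + 1]))
            regions[i:i + 2] = [regions[i] + regions[i + 1]]
        return regions

    pixels = [10, 11, 9, 10, 50, 52, 51, 90, 91]
    segments = bpt_merge(pixels, 3)
    # segments → [[10, 11, 9, 10], [50, 52, 51], [90, 91]]
    ```

    Recording the sequence of merges (rather than just the final partition) yields the tree itself, which can then be pruned by homogeneity or semantic criteria as described in the abstract.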

  14. False belief and counterfactual reasoning in a social environment.

    PubMed

    Van Hoeck, Nicole; Begtas, Elizabet; Steen, Johan; Kestemont, Jenny; Vandekerckhove, Marie; Van Overwalle, Frank

    2014-04-15

    Behavioral studies indicate that theory of mind and counterfactual reasoning are strongly related cognitive processes. In a neuroimaging study, we explored the common and distinct regions underlying these inference processes. We directly compared false belief reasoning (inferring an agent's false belief about an object's location or content) and counterfactual reasoning (inferring what the object's location or content would be if an agent had acted differently), both in contrast with a baseline condition of conditional reasoning (inferring what the true location or content of an object is). Results indicate that these three types of reasoning about social scenarios are supported by activations in the mentalizing network (left temporo-parietal junction and precuneus) and the executive control network (bilateral prefrontal cortex [PFC] and right inferior parietal lobule). In addition, representing a false belief or counterfactual state (both not directly observable in the external world) recruits additional activity in the executive control network (left dorsolateral PFC and parietal lobe). The results further suggest that counterfactual reasoning is a more complex cognitive process than false belief reasoning, showing stronger activation of the dorsomedial, left dorsolateral PFC, cerebellum and left temporal cortex. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Use of micro computed-tomography and 3D printing for reverse engineering of mouse embryo nasal capsule

    NASA Astrophysics Data System (ADS)

    Tesařová, M.; Zikmund, T.; Kaucká, M.; Adameyko, I.; Jaroš, J.; Paloušek, D.; Škaroupka, D.; Kaiser, J.

    2016-03-01

    Imaging of increasingly complex cartilage in vertebrate embryos is one of the key tasks of developmental biology. This is especially important for studying shape-organizing processes during initial skeletal formation and growth. Advanced imaging techniques that reflect biological needs give a powerful impulse to push the boundaries of biological visualization. Recently, techniques for contrasting tissues and organs have improved considerably, extending traditional 2D imaging approaches to 3D. X-ray micro computed tomography (μCT), which allows 3D imaging of biological objects including their internal structures with a resolution in the micrometer range, combined with contrasting techniques, seems to be the most suitable approach for non-destructive imaging of developing embryonic cartilage. Although there are many software-based ways to visualize 3D data sets, having a real solid model of the studied object might give novel opportunities to fully understand the shape-organizing processes in the developing body. In this feasibility study we demonstrate the full procedure of creating a real 3D object of a mouse embryo nasal capsule, i.e. staining, μCT scanning combined with advanced data processing, and 3D printing.

  16. Formalizing Knowledge in Multi-Scale Agent-Based Simulations

    PubMed Central

    Somogyi, Endre; Sluka, James P.; Glazier, James A.

    2017-01-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused. PMID:29338063

  17. 3D Model Generation From the Engineering Drawing

    NASA Astrophysics Data System (ADS)

    Vaský, Jozef; Eliáš, Michal; Bezák, Pavol; Červeňanská, Zuzana; Izakovič, Ladislav

    2010-01-01

    The contribution deals with the transformation of engineering drawings in paper form into a 3D computer representation. A 3D computer model can be further processed in a CAD/CAM system; it can be modified and archived, and a technical drawing can then be generated from it as well. The transformation process from paper form to digital form is complex and difficult, particularly owing to the different types of drawings, the forms of displayed objects, and encountered errors and deviations from technical standards. The algorithm for generating a 3D model from an orthogonal vector input representing a simplified technical drawing of a rotational part is described in this contribution. The algorithm was experimentally implemented as an ObjectARX application in the AutoCAD system, and a test sample representing the rotational part was used for verification.

  18. Formalizing Knowledge in Multi-Scale Agent-Based Simulations.

    PubMed

    Somogyi, Endre; Sluka, James P; Glazier, James A

    2016-10-01

    Multi-scale, agent-based simulations of cellular and tissue biology are increasingly common. These simulations combine and integrate a range of components from different domains. Simulations continuously create, destroy and reorganize constituent elements causing their interactions to dynamically change. For example, the multi-cellular tissue development process coordinates molecular, cellular and tissue scale objects with biochemical, biomechanical, spatial and behavioral processes to form a dynamic network. Different domain specific languages can describe these components in isolation, but cannot describe their interactions. No current programming language is designed to represent in human readable and reusable form the domain specific knowledge contained in these components and interactions. We present a new hybrid programming language paradigm that naturally expresses the complex multi-scale objects and dynamic interactions in a unified way and allows domain knowledge to be captured, searched, formalized, extracted and reused.

  19. Combustion research for gas turbine engines

    NASA Technical Reports Server (NTRS)

    Mularz, E. J.; Claus, R. W.

    1985-01-01

    Research on combustion is being conducted at Lewis Research Center to provide improved analytical models of the complex flow and chemical reaction processes which occur in the combustor of gas turbine engines and other aeropropulsion systems. The objective of the research is to obtain a better understanding of the various physical processes that occur in the gas turbine combustor in order to develop models and numerical codes which can accurately describe these processes. Activities include in-house research projects, university grants, and industry contracts and are classified under the subject areas of advanced numerics, fuel sprays, fluid mixing, and radiation-chemistry. Results are high-lighted from several projects.

  20. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it is fairly well established that humans use an optimal strategy when integrating low-level cues, weighting each in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
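
    The optimal-integration model tested here is, in essence, inverse-variance weighting of independent Gaussian cues. A minimal sketch (with invented numbers, not the study's data):

    ```python
    def integrate_cues(estimates, variances):
        # Maximum-likelihood combination of independent Gaussian cues:
        # weight each estimate by its inverse variance (its reliability).
        # The combined variance is lower than that of any single cue.
        weights = [1.0 / v for v in variances]
        total = sum(weights)
        mean = sum(w * e for w, e in zip(weights, estimates)) / total
        return mean, 1.0 / total

    # e.g. a "form" cue and a "motion" cue about the same identity dimension
    mean, var = integrate_cues([0.8, 1.2], [0.04, 0.09])
    ```

    The combined estimate lands closer to the more reliable cue (here the form cue, with the smaller variance), which is the signature behaviour the study tests for.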

  1. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    PubMed

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  2. Design and Use of a Learning Object for Finding Complex Polynomial Roots

    ERIC Educational Resources Information Center

    Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime

    2013-01-01

    Complex numbers are essential in many fields of engineering, but students often fail to develop a natural insight into them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomial has a root and, furthermore, is useful for finding the approximate roots of a complex polynomial. Moreover, we…
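
    Numerically, approximate roots of a complex polynomial can be obtained, for example, with the Durand-Kerner iteration, which refines estimates of all roots of a monic polynomial simultaneously (a generic sketch, not the learning object's own algorithm):

    ```python
    def poly_eval(coeffs, z):
        # Evaluate the monic polynomial z^n + c_{n-1} z^{n-1} + ... + c_0,
        # where coeffs = [c_0, ..., c_{n-1}] (low order to high order).
        result = 1.0 + 0j
        for c in reversed(coeffs):
            result = result * z + c
        return result

    def durand_kerner(coeffs, iters=200):
        # Refine all n root estimates simultaneously; each estimate is pushed
        # by Newton's correction divided by its distances to the other estimates.
        n = len(coeffs)
        roots = [(0.4 + 0.9j) ** (k + 1) for k in range(n)]  # standard initial guesses
        for _ in range(iters):
            new = []
            for i, r in enumerate(roots):
                denom = 1.0 + 0j
                for j, s in enumerate(roots):
                    if j != i:
                        denom *= r - s
                new.append(r - poly_eval(coeffs, r) / denom)
            roots = new
        return roots

    # z^3 - 1 = 0: the three cube roots of unity
    roots = durand_kerner([-1.0, 0.0, 0.0])
    ```

    For this example the iteration converges to 1 and (-1 ± i√3)/2, which is exactly the kind of root configuration such a learning object would display graphically.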

  3. Methodological approach and tools for systems thinking in health systems research: technical assistants' support of health administration reform in the Democratic Republic of Congo as an application.

    PubMed

    Ribesse, Nathalie; Bossyns, Paul; Marchal, Bruno; Karemere, Hermes; Burman, Christopher J; Macq, Jean

    2017-03-01

    In the field of development cooperation, interest in systems thinking and complex systems theories as a methodological approach is increasingly recognised, and so it is in health systems research, which informs health development aid interventions. However, practical applications remain scarce to date. The objective of this article is to contribute to the body of knowledge by presenting tools inspired by systems thinking and complexity theories, together with methodological lessons learned from their application. These tools were used in a case study; detailed results of this study are being prepared for publication in additional articles. Applying a complexity 'lens', the subject of the case study is the role of long-term international technical assistance in supporting health administration reform at the provincial level in the Democratic Republic of Congo. The Methods section presents the guiding principles of systems thinking and complex systems, their relevance and implications for the subject under study, and the existing tools associated with those theories which inspired the design of the data collection and analysis process. The tools and their application are presented in the Results section, followed in the Discussion section by a critical analysis of their innovative potential and emergent challenges. The overall methodology provides a coherent whole, each tool bringing a different and complementary perspective on the system.

  4. A mathematical model for foreign body reactions in 2D.

    PubMed

    Su, Jianzhong; Gonzales, Humberto Perez; Todorov, Michail; Kojouharov, Hristo; Tang, Liping

    2011-02-01

    Foreign body reactions commonly refer to the network of immune and inflammatory reactions of humans or animals to foreign objects placed in tissues. They are basic biological processes, and are also highly relevant to bioengineering applications in implants, as fibrotic tissue formation surrounding medical implants has been found to substantially reduce the effectiveness of devices. Despite intensive research on determining the mechanisms governing such complex responses, few mechanistic mathematical models have been developed to study such foreign body reactions. This study focuses on a kinetics-based predictive tool to analyze the outcomes of multiple interacting reactions of various cells/proteins and biochemical processes and to understand transient behavior over the entire period (up to several months). A computational model in two spatial dimensions is constructed to investigate the time dynamics as well as the spatial variation of foreign body reaction kinetics. The simulation results are consistent with experimental data, and the model can provide quantitative insights for the study of foreign body reaction processes in general.
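
    The flavour of such kinetics-based models can be conveyed by a toy two-compartment sketch (the equations and rate constants below are invented for illustration; the paper's model is a 2-D spatio-temporal system with many more species):

    ```python
    import math

    def simulate_fbr(days=100.0, dt=0.01):
        # Toy kinetics: the implant stimulus decays exponentially, recruiting
        # inflammatory cells M; fibrotic capsule F is deposited in proportion
        # to M and saturates at F_max (all constants invented).
        k_rec, k_decay = 1.0, 0.2   # recruitment / clearance rates
        k_fib, F_max = 0.05, 1.0    # deposition rate / saturation level
        M, F, t = 0.0, 0.0, 0.0
        while t < days:  # forward Euler integration
            dM = k_rec * math.exp(-0.05 * t) - k_decay * M
            dF = k_fib * M * (1.0 - F / F_max)
            M += dt * dM
            F += dt * dF
            t += dt
        return M, F

    M_end, F_end = simulate_fbr()
    ```

    Even this toy version reproduces the qualitative transient behavior the abstract mentions: an inflammatory peak that resolves while the fibrotic capsule accumulates and plateaus.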

  5. Processing and Structural Advantages of the Sylramic-iBN SiC Fiber for SiC/SiC Components

    NASA Technical Reports Server (NTRS)

    Yun, H. M.; Dicarlo, J. A.; Bhatt, R. T.; Hurst, J. B.

    2008-01-01

    The successful high-temperature application of complex-shaped SiC/SiC components will depend on achieving as high a fraction of the as-produced fiber strength as possible during component fabrication and service. Key issues center on a variety of component architecture, processing, and service-related factors that can reduce fiber strength, such as fiber-fiber abrasion during architecture shaping, surface chemical attack during interphase deposition and service, and intrinsic flaw growth during high-temperature matrix formation and composite creep. The objective of this paper is to show that the NASA-developed Sylramic-iBN SiC fiber minimizes many of these issues for state-of-the-art melt-infiltrated (MI) SiC/BN/SiC composites. To accomplish this, data from various mechanical tests are presented that compare how different high performance SiC fiber types retain strength during formation of complex architectures, during processing of BN interphases and MI matrices, and during simulated composite service at high temperatures.

  6. Exploration of dynamics in a complex person-centred intervention process based on health professionals' perspectives.

    PubMed

    Friberg, Febe; Wallengren, Catarina; Håkanson, Cecilia; Carlsson, Eva; Smith, Frida; Pettersson, Monica; Kenne Sarenmalm, Elisabeth; Sawatzky, Richard; Öhlén, Joakim

    2018-06-13

    The assessment and evaluation of practical and sustainable development of health care has become a major focus of investigation in health services research. A key challenge for researchers as well as decision-makers in health care is to understand mechanisms influencing how complex interventions work and become embedded in practice, which is significant for both evaluation and later implementation. In this study, we explored nurses' and surgeons' perspectives on performing and participating in a complex multi-centre person-centred intervention process that aimed to support patients diagnosed with colorectal cancer to feel prepared for surgery, discharge and recovery. Data consisted of retrospective interviews with 20 professionals after the intervention, supplemented with prospective conversational data and field notes from workshops and follow-up meetings (n = 51). The data were analysed to construct patterns in line with interpretive description. Although the participants highly valued components of the intervention, the results reveal influencing mechanisms underlying the functioning of the intervention, including multiple objectives, unclear mandates and competing professional logics. The results also reveal variations in processing the intervention focused on differences in using and talking about intervention components. The study indicates there are significant areas of ambiguity in understanding how theory-based complex clinical interventions work and in how interventions are socially constructed and co-created by professionals' experiences, assumptions about own professional practice, contextual conditions and the researchers' intentions. This process evaluation reveals insights into reasons for success or failure and contextual aspects associated with variations in outcomes. 
Thus, there is a need for further interpretive inquiry, and not only descriptive studies, of the multifaceted characters of complex clinical interventions and how the intervention components are actually shaped in constantly shifting contexts.

  7. The effort to close the gap: Tracking the development of illusory contour processing from childhood to adulthood with high-density electrical mapping

    PubMed Central

    Altschuler, Ted S.; Molholm, Sophie; Butler, John S.; Mercier, Manuel R.; Brandwein, Alice B.; Foxe, John J.

    2014-01-01

    The adult human visual system can efficiently fill-in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230-400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N = 63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent. PMID:24365674

  8. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2017-01-01

    Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging, because it is a 2D surface in a 3D volume which has strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. 
Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 μm.
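
    The backbone of the proposed prior, a homogeneous spatial Poisson process, can be sampled as follows (a generic sketch; the paper's marked model additionally attaches shape "marks" to each point and infers parameters by Gibbs sampling):

    ```python
    import math, random

    def sample_poisson_points(rate, width, height, seed=42):
        # Homogeneous spatial Poisson process on [0,width] x [0,height]:
        # draw the point count N ~ Poisson(rate * area), then place the N
        # points uniformly at random. In a marked process, each point would
        # then carry a mark (e.g. a local shape) drawn from a mark distribution.
        random.seed(seed)
        lam = rate * width * height
        # inverse-transform sampling of the Poisson count
        n, p, threshold = 0, math.exp(-lam), random.random()
        cumulative = p
        while cumulative < threshold:
            n += 1
            p *= lam / n
            cumulative += p
        return [(random.uniform(0, width), random.uniform(0, height)) for _ in range(n)]

    points = sample_poisson_points(rate=0.5, width=10.0, height=10.0)
    ```

    The Poisson count makes the number of "peaks and valleys" itself a random variable, which is what lets the model handle the highly variable, irregularly spaced DEJ structure described in the abstract.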

  9. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin

    PubMed Central

    Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.

    2016-01-01

    Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has strong but highly variable number of irregularly spaced and variably shaped “peaks and valleys”. In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. 
Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 µm. PMID:27723590

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali T-Raissi

    The aim of this work was to assess issues of cost and performance associated with the production and storage of hydrogen via the following three feedstocks: sub-quality natural gas (SQNG), ammonia (NH{sub 3}), and water. Three technology areas were considered: (1) hydrogen production utilizing SQNG resources, (2) hydrogen storage in ammonia and amine-borane complexes for fuel cell applications, and (3) hydrogen from solar thermochemical cycles for splitting water. This report summarizes our findings with the following objectives: technoeconomic analysis of the feasibility of technology areas 1-3; evaluation of the hydrogen production cost for technology area 1; and feasibility of ammonia and/or amine-borane complexes (technology area 2) as a means of hydrogen storage on-board fuel cell powered vehicles. For each technology area, we reviewed the open literature with respect to the following criteria: process efficiency, cost, safety, ease of implementation, and impact of the latest materials innovations, if any. We employed various process analysis platforms including FactSage chemical equilibrium software and Aspen Technology's AspenPlus and HYSYS chemical process simulation programs for determining the performance of the prospective hydrogen production processes.

  11. Impact on CDC Guideline Compliance After Incorporating Pharmacy in a Pneumococcal Vaccination Screening Process.

    PubMed

    Pickren, Elizabeth; Crane, Brad

    2016-12-01

    Background: Centers for Disease Control and Prevention (CDC) guidelines for pneumococcal vaccinations were updated in 2014. Given the complexity of the guidelines and the fact that hospitals are no longer required to keep records for pneumococcal vaccinations, many hospitals are determining whether to continue this service. Objective: The primary objective of this study was to determine the impact on compliance with the revised pneumococcal vaccination guidelines from the CDC after involving pharmacy in the screening and selection processes. Secondary objectives were to determine the impact of the new process on inappropriate vaccination duplications, the time spent by pharmacy on assessments, and financial outcomes. Methods: This institutional review board (IRB)-approved, retrospective, cohort study examined all patients who received a pneumococcal vaccination from January to February 2016 after implementing a new process whereby pharmacy performed pneumococcal vaccination screening and selection (intervention group). These patients were compared to patients who received a pneumococcal vaccination from January to February 2015 (control group). Results: Of 274 patients who received a pneumococcal vaccine, 273 were included in the study. Compliance with CDC guidelines increased from 42% to 97%. Noncompliant duplications decreased from 16% to 2%. In the intervention group, labor cost for assessments and expenditure for vaccines increased. For Medicare patients, the increased reimbursement balanced the increased expenditure in the intervention group. Conclusions: Involving pharmacy in the pneumococcal vaccine screening and selection process improves compliance with CDC guidelines, but further clinical and financial analysis is needed to determine the financial sustainability of the new process.

  12. Biomimetics: lessons from nature--an overview.

    PubMed

    Bhushan, Bharat

    2009-04-28

    Nature has developed materials, objects and processes that function from the macroscale to the nanoscale. These have evolved over 3.8 Gyr. The emerging field of biomimetics allows one to mimic biology or nature to develop nanomaterials, nanodevices and processes. Properties of biological materials and surfaces result from a complex interplay between surface morphology and physical and chemical properties. Hierarchical structures with feature dimensions ranging from the macroscale to the nanoscale are extremely common in nature and provide properties of interest. Molecular-scale devices, superhydrophobicity, self-cleaning, drag reduction in fluid flow, energy conversion and conservation, high adhesion, reversible adhesion, aerodynamic lift, materials and fibres with high mechanical strength, biological self-assembly, antireflection, structural coloration, thermal insulation, self-healing and sensory-aid mechanisms are some of the examples found in nature that are of commercial interest. This paper provides a broad overview of the various objects and processes of interest found in nature and applications under development or available in the marketplace.

  13. Data Quality Objectives Process for Designation of K Basins Debris

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WESTCOTT, J.L.

    2000-05-22

    The U.S. Department of Energy has developed a schedule and approach for the removal of spent fuels, sludge, and debris from the K East (KE) and K West (KW) Basins, located in the 100 Area at the Hanford Site. The project that is the subject of this data quality objective (DQO) process is focused on the removal of debris from the K Basins and onsite disposal of the debris at the Environmental Restoration Disposal Facility (ERDF). This material previously has been dispositioned at the Hanford Low-Level Burial Grounds (LLBGs) or Central Waste Complex (CWC). The goal of this DQO process and the resulting Sampling and Analysis Plan (SAP) is to provide the strategy for characterizing and designating the K-Basin debris to determine if it meets the Environmental Restoration Disposal Facility Waste Acceptance Criteria (WAC), Revision 3 (BHI 1998). A critical part of the DQO process is to agree on regulatory and WAC interpretation, to support preparation of the DQO workbook and SAP.

  14. Neuro-inspired smart image sensor: analog Hmax implementation

    NASA Astrophysics Data System (ADS)

    Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman

    2015-03-01

    The neuro-inspired vision approach, based on models from biology, makes it possible to reduce computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the area of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) with such processing on-chip, we studied and realized, in 0.35 μm CMOS technology, prototypes of two image sensors that perform the V1 and V2 processing of the Hmax model.
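The V1 (directional filtering) and V2 (local maxima) stages described above can be sketched in a toy software form. This is only an illustration of the two-stage idea, not the sensor implementation: Sobel filters stand in for the Gabor/wavelet bank mentioned in the abstract, and the image size and pooling window are arbitrary.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def v1_v2_sketch(image, pool=4):
    """Toy sketch of the first two Hmax stages: directional filtering (V1)
    followed by local-maximum pooling (V2)."""
    # V1: two directional (Sobel) filters; Hmax typically uses Gabor banks
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    s1 = np.stack([np.abs(convolve(image, sobel_x)),      # vertical edges
                   np.abs(convolve(image, sobel_x.T))])   # horizontal edges
    # V2: local maximum over a pool x pool neighbourhood, per orientation
    c1 = maximum_filter(s1, size=(1, pool, pool))
    return c1

img = np.zeros((16, 16))
img[:, 8:] = 1.0            # image containing a single vertical edge
resp = v1_v2_sketch(img)
```

The vertical-edge channel responds strongly along the step, while the horizontal channel stays silent, which is the orientation selectivity the V1 stage is meant to provide.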

  15. The 1990 update to strategy for exploration of the inner planets

    NASA Technical Reports Server (NTRS)

    Esposito, Larry W.; Pepin, Robert O.; Cheng, Andrew F.; Jakosky, Bruce M.; Lunine, Jonathan I.; Mcfadden, Lucy-Ann; Mckay, Christopher P.; Mckinnon, William B.; Muhleman, Duane O.; Nicholson, Philip

    1990-01-01

    The Committee on Planetary and Lunar Exploration (COMPLEX) has undertaken to review and revise the 1978 report Strategy for Exploration of the Inner Planets, 1977-1987. The committee has found the 1978 report to be generally still pertinent. COMPLEX therefore issues its new report in the form of an update. The committee reaffirms the basic objectives for exploration of the planets: to determine the present state of the planets and their satellites, to understand the processes active now and at the origin of the solar system, and to understand planetary evolution, including appearance of life and its relation to the chemical history of the solar system.

  16. Data Relationships: Towards a Conceptual Model of Scientific Data Catalogs

    NASA Astrophysics Data System (ADS)

    Hourcle, J. A.

    2008-12-01

    As the amount of data, types of processing and storage formats increase, the total number of record permutations increases dramatically. The result is an overwhelming number of records that make identifying the best data object to answer a user's needs more difficult. The issue is further complicated as each archive's data catalog may be designed around different concepts -- anything from individual files to be served, series of similarly generated and processed data, or something entirely different. Catalogs may not only be flat tables, but may be structured as multiple tables with each table being a different data series, or a normalized structure of the individual data files. Merging federated search results from archives with different catalog designs can create situations where the data object of interest is difficult to find due to an overwhelming number of seemingly similar or entirely unwanted records. We present a reference model for discussing data catalogs and the complex relationships between similar data objects. We show how the model can be used to improve scientists' ability to quickly identify the best data object for their purposes, and discuss technical issues required to use this model in a federated system.

  17. A multi-faceted approach to promote knowledge translation platforms in eastern Mediterranean countries: climate for evidence-informed policy

    PubMed Central

    2012-01-01

    Objectives Limited work has been done to promote knowledge translation (KT) in the Eastern Mediterranean Region (EMR). The objectives of this study are to: (1) assess the climate for evidence use in policy; (2) explore views and practices about current processes and weaknesses of health policymaking; (3) identify priorities including short-term requirements for policy briefs; and (4) identify country-specific requirements for establishing KT platforms. Methods Senior policymakers, stakeholders and researchers from Algeria, Bahrain, Egypt, Iran, Jordan, Lebanon, Oman, Sudan, Syria, Tunisia, and Yemen participated in this study. Questionnaires were used to assess the climate for use of evidence and to identify windows of opportunity and requirements for policy briefs and for establishing KT platforms. Current processes and weaknesses of policymaking were appraised using case study scenarios. Closed-ended questions were analyzed descriptively. Qualitative data were analyzed using thematic analysis. Results KT activities were not frequently undertaken by policymakers and researchers in EMR countries, research evidence about high-priority policy issues was rarely made available, interaction between policymakers and researchers was limited, and policymakers rarely identified or created places for utilizing research evidence in decision-making processes. Findings emphasized the complexity of policymaking. Donors, political regimes, economic goals and outdated laws were identified as key drivers. Lack of policymakers’ abilities to think strategically, the constant need to make quick decisions, limited financial resources, and lack of competent and trained human resources were suggested as the main weaknesses. 
Conclusion Despite the complexity of policymaking processes in countries from this region, the absence of a structured process for decision making, and the limited engagement of policymakers and researchers in KT activities, there are windows of opportunity for moving towards more evidence-informed policymaking. PMID:22559007

  18. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning in a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem starting from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationship among different components, a multi-objective optimization mathematical model is set up, taking minimizing the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. 
Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for solving the problem of multi-constraint and multi-objective fault diagnosis and reasoning in complex systems.
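The Pareto optimal (noninferior) set mentioned in this abstract is the subset of candidate solutions that no other solution dominates. A generic non-dominated filter, not the paper's IMACO, can be sketched as follows (path labels and objective values are illustrative; all objectives are minimized, so maximized quantities would be negated first):

```python
def pareto_front(solutions):
    """Return the labels of the non-dominated candidates.
    Each solution is (label, objectives); all objectives are minimized."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    front = []
    for name, obj in solutions:
        if not any(dominates(other, obj) for _, other in solutions):
            front.append(name)
    return front

# Hypothetical reasoning paths scored on (cost, test complexity)
paths = [("p1", (0.2, 3.0)), ("p2", (0.5, 1.0)), ("p3", (0.6, 3.5))]
best = pareto_front(paths)
```

Here p3 is dominated by both p1 and p2, so only p1 and p2 survive into the Pareto set; the paper's evaluation functions would then pick among them according to decision-making demands.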

  19. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described by the scattering process through an inhomogeneous medium. Due to multiple scattering, an inherent nonlinearity relates the scattering medium and the scattered field. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of BPM are similar to neural networks; we therefore refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.

  20. Contribution of the Multi Attribute Value Theory to conflict resolution in groundwater management. Application to the Mancha Oriental groundwater system, Spain

    NASA Astrophysics Data System (ADS)

    Apperl, B.; Andreu, J.; Karjalainen, T. P.; Pulido-Velazquez, M.

    2014-09-01

    The implementation of the EU Water Framework Directive demands participatory water resource management approaches. Decision making in groundwater quantity and quality management is complex because of the existence of many independent actors, heterogeneous stakeholder interests, multiple objectives, different potential policies, and uncertain outcomes. Conflicting stakeholder interests have often been identified as an impediment to the realization and success of water regulations and policies. The management of complex groundwater systems requires clarifying stakeholders' positions (identifying stakeholder preferences and values), improving transparency with respect to outcomes of alternatives, and moving the discussion from the selection of alternatives towards the definition of fundamental objectives (value-thinking approach), which facilitates negotiation. The aims of the study are to analyse the potential of the multi-attribute value theory for conflict resolution in groundwater management and to evaluate the benefit of stakeholder incorporation into the different stages of the planning process to find an overall satisfying solution for groundwater management. The research was conducted in the Mancha Oriental groundwater system (Spain), subject to intensive use of groundwater for irrigation. A complex set of objectives and attributes was defined, and the management alternatives were created by a combination of different fundamental actions, considering different implementation stages and future changes in water resource availability. Interviews were conducted with representative stakeholder groups using an interactive platform, showing simultaneously the consequences of changes in preferences to the alternative ranking. Results show that the acceptance of alternatives depends strongly on the combination of measures and the implementation stages. Uncertainties in the results were notable but did not heavily influence the alternative ranking. 
The expected reduction of future groundwater resources under climate change increases the conflict potential. The implementation of the method in a very complex case study, with many conflicting objectives and alternatives and uncertain outcomes, including future scenarios under water-limiting conditions, illustrates the potential of the method for supporting management decisions.

  1. Contribution of the multi-attribute value theory to conflict resolution in groundwater management - application to the Mancha Oriental groundwater system, Spain

    NASA Astrophysics Data System (ADS)

    Apperl, B.; Pulido-Velazquez, M.; Andreu, J.; Karjalainen, T. P.

    2015-03-01

    The implementation of the EU Water Framework Directive demands participatory water resource management approaches. Decision making in groundwater quantity and quality management is complex because of the existence of many independent actors, heterogeneous stakeholder interests, multiple objectives, different potential policies, and uncertain outcomes. Conflicting stakeholder interests have often been identified as an impediment to the realisation and success of water regulations and policies. The management of complex groundwater systems requires the clarification of stakeholders' positions (identifying stakeholder preferences and values), improving transparency with respect to outcomes of alternatives, and moving the discussion from the selection of alternatives towards the definition of fundamental objectives (value-thinking approach), which facilitates negotiation. The aims of the study are to analyse the potential of the multi-attribute value theory for conflict resolution in groundwater management and to evaluate the benefit of stakeholder incorporation into the different stages of the planning process, to find an overall satisfying solution for groundwater management. The research was conducted in the Mancha Oriental groundwater system (Spain), subject to intensive use of groundwater for irrigation. A complex set of objectives and attributes was defined, and the management alternatives were created by a combination of different fundamental actions, considering different implementation stages and future changes in water resource availability. Interviews were conducted with representative stakeholder groups using an interactive platform, showing simultaneously the consequences of changes in preferences to the alternative ranking. Results show that the approval of alternatives depends strongly on the combination of measures and the implementation stages. Uncertainties in the results were notable, but did not influence the alternative ranking heavily. 
The expected reduction in future groundwater resources by climate change increases the conflict potential. The implementation of the method in a very complex case study, with many conflicting objectives and alternatives and uncertain outcomes, including future scenarios under water limiting conditions, illustrates the potential of the method for supporting management decisions.
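The core aggregation rule of the multi-attribute value theory applied in the two records above is a weighted additive value function: each alternative's attribute values are rescaled to a common [0, 1] scale and combined with stakeholder weights. A minimal sketch (alternative names, attribute values and weights are hypothetical, not taken from the Mancha Oriental study):

```python
def rank_alternatives(alternatives, weights):
    """Rank alternatives by a weighted additive value function, the basic
    aggregation rule of multi-attribute value theory (MAVT).
    `alternatives` maps name -> attribute values already rescaled to [0, 1]
    (1 = best); `weights` should sum to 1."""
    scores = {name: sum(w * v for w, v in zip(weights, values))
              for name, values in alternatives.items()}
    # Highest overall value first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical attributes: (aquifer recovery, farm income, implementation cost)
alts = {"A1": (0.9, 0.2, 0.5), "A2": (0.4, 0.8, 0.6), "A3": (0.6, 0.5, 0.4)}
ranking = rank_alternatives(alts, weights=(0.5, 0.3, 0.2))
```

Re-running the ranking as stakeholders adjust the weights is exactly the kind of interactive "what if" exploration the abstracts describe, since the consequences of preference changes on the alternative ranking are visible immediately.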

  2. Analysis of CAD Model-based Visual Tracking for Microassembly using a New Block Set for MATLAB/Simulink

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, Andrey V.; Laurent, Guillaume J.; Clévy, Cédric; Tamadazte, Brahim; Lutz, Philippe

    2015-10-01

    Microassembly is an innovative alternative to the microfabrication process of MOEMS, which is quite complex. It usually implies the use of microrobots controlled by an operator. The reliability of this approach has already been confirmed for micro-optical technologies. However, the characterization of assemblies has shown that the operator is the main source of inaccuracies in teleoperated microassembly. Therefore, there is great interest in automating the microassembly process. One of the constraints of automation at the microscale is the lack of high-precision sensors capable of providing full information about the object position. Thus, the use of visual feedback represents a very promising approach to automating the microassembly process. The purpose of this article is to characterize techniques of object position estimation based on visual data, i.e., visual tracking techniques from the ViSP library. These algorithms estimate the 3D pose of an object using a single view of the scene and the CAD model of the object. The performance of three main types of model-based trackers is analyzed and quantified: edge-based, texture-based and hybrid trackers. The problems of visual tracking at the microscale are discussed. The control of the micromanipulation station used in the framework of our project is performed using a new Simulink block set. Experimental results are shown and demonstrate the possibility of obtaining a repeatability below 1 µm.

  3. Top-down control of visual perception: attention in natural vision.

    PubMed

    Rolls, Edmund T

    2008-01-01

    Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though they are still present. It is suggested that the reduced receptive-field size in natural scenes and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when in perceptual systems they take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.

  4. Optical digital microscopy for cyto- and hematological studies in vitro

    NASA Astrophysics Data System (ADS)

    Ganilova, Yu. A.; Dolmashkin, A. A.; Doubrovski, V. A.; Yanina, I. Yu.; Tuchin, V. V.

    2013-08-01

    The dependence of the spatial resolution and field of view of an optical microscope equipped with a CCD camera on the objective magnification has been experimentally investigated. Measurement of these characteristics has shown that a spatial resolution of 20-25 px/μm at a field of view of about 110 μm is quite realistic; this resolution is acceptable for a detailed study of the processes occurring in cells. It is proposed to expand the dynamic range of the digital camera by measuring and approximating its light characteristics, with subsequent plotting of the corresponding calibration curve. The biological objects of study were human adipose tissue cells, as well as erythrocytes and their immune complexes in human blood; both objects have been investigated in vitro. Application of optical digital microscopy to specific problems of cytology and hematology can be useful both in biomedical studies and in experiments with objects of nonbiological origin.

  5. The BEFWM system for detection and phase conjugation of a weak laser beam

    NASA Astrophysics Data System (ADS)

    Khizhnyak, Anatoliy; Markov, Vladimir

    2007-09-01

    Real environmental conditions, such as atmospheric turbulence and aero-optics effects, make practical implementation of the target-in-the-loop (TIL) algorithm a very difficult task, especially when the system is set to operate with a signal from a diffuse-surface, image-resolved object. The problem becomes even more complex since, for a remote object, the intensity of the returned signal is extremely low. This presentation discusses the results of an analysis and experimental verification of a thresholdless coherent signal receiving system, capable not only of high-sensitivity detection of ultra-weak object-scattered light, but also of its high-gain amplification and phase conjugation. The process of coherent detection using Brillouin enhanced four-wave mixing (BEFWM) enables retrieval of complete information on the received signal, including accurate measurement of its wavefront. This information can be used for direct real-time control of the adaptive mirror.

  6. Free-form geometric modeling by integrating parametric and implicit PDEs.

    PubMed

    Du, Haixia; Qin, Hong

    2007-01-01

    Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.

  7. Sparse intervertebral fence composition for 3D cervical vertebra segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the task of segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance and completely eliminates inter-process overlap.
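The Dice similarity index quoted in this abstract is a standard overlap measure between a segmentation and its reference, 2|A ∩ B| / (|A| + |B|). A minimal sketch (the two small 2D masks are illustrative; the study's masks are 3D):

```python
import numpy as np

def dice_index(seg, ref):
    """Dice similarity index between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    denom = seg.sum() + ref.sum()
    # Two empty masks are conventionally treated as a perfect match
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((10, 10), int); a[2:8, 2:8] = 1   # 6x6 square
b = np.zeros((10, 10), int); b[3:9, 3:9] = 1   # same square shifted by 1
score = dice_index(a, b)
```

The two 36-pixel squares overlap in a 5x5 region, giving 2·25/72 ≈ 0.69; a value of 95.47% therefore indicates near-complete overlap with the reference.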

  8. Control Software for the VERITAS Cerenkov Telescope System

    NASA Astrophysics Data System (ADS)

    Krawczynski, H.; Olevitch, M.; Sembroski, G.; Gibbs, K.

    2003-07-01

    The VERITAS collab oration is developing a system of initially 4 and ˇ eventually 7 Cerenkov telescopes of the 12 m diameter class for high sensitivity gamma-ray astronomy in the >50 GeV energy range. In this contribution we describe the software that controls and monitors the various VERITAS subsystems. The software uses an object-oriented approach to cop e with the complexities that arise from using sub-groups of the 7 VERITAS telescopes to observe several sources at the same time. Inter-pro cess communication is based on the CORBA object Request Broker proto col and watch-dog processes monitor the sub-system performance.

  9. Least-squares luma-chroma demultiplexing algorithm for Bayer demosaicking.

    PubMed

    Leung, Brian; Jeon, Gwanggil; Dubois, Eric

    2011-07-01

    This paper addresses the problem of interpolating missing color components at the output of a Bayer color filter array (CFA), a process known as demosaicking. A luma-chroma demultiplexing algorithm is presented in detail, using a least-squares design methodology for the required bandpass filters. A systematic study of objective demosaicking performance and system complexity is carried out, and several system configurations are recommended. The method is compared with other benchmark algorithms in terms of CPSNR and S-CIELAB ∆E∗ objective quality measures and demosaicking speed. It was found to provide excellent performance and the best quality-speed tradeoff among the methods studied.
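The CPSNR measure used for comparison in this abstract is ordinary peak signal-to-noise ratio with the mean squared error pooled over all three colour channels. A minimal sketch (image contents are illustrative):

```python
import numpy as np

def cpsnr(reference, test, peak=255.0):
    """Colour peak signal-to-noise ratio: PSNR with the mean squared error
    pooled over all three colour channels of an H x W x 3 image."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    mse = np.mean((reference - test) ** 2)   # pooled over H x W x 3
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4, 3), 100.0)
noisy = ref + 5.0        # uniform error of 5 levels -> MSE = 25
value = cpsnr(ref, noisy)
```

With MSE = 25 this gives 10·log10(255²/25) ≈ 34.15 dB; higher CPSNR means a demosaicked image closer to the ground-truth full-colour image.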

  10. COMPLEX CONDITIONAL CONTROL BY PIGEONS IN A CONTINUOUS VIRTUAL ENVIRONMENT

    PubMed Central

    Qadri, Muhammad A. J.; Reid, Sean; Cook, Robert G.

    2016-01-01

    We tested two pigeons in a continuously streaming digital environment. Using animation software that constantly presented a dynamic, three-dimensional (3D) environment, the animals were tested with a conditional object identification task. The correct object at a given time depended on the virtual context currently streaming in front of the pigeon. Pigeons were required to accurately peck correct target objects in the environment for food reward, while suppressing any pecks to intermixed distractor objects which delayed the next object’s presentation. Experiment 1 established that the pigeons’ discrimination of two objects could be controlled by the surface material of the digital terrain. Experiment 2 established that the pigeons’ discrimination of four objects could be conjunctively controlled by both the surface material and topography of the streaming environment. These experiments indicate that pigeons can simultaneously process and use at least two context cues from a streaming environment to control their identification behavior of passing objects. These results add to the promise of testing interactive digital environments with animals to advance our understanding of cognition and behavior. PMID:26781058

  11. The neural basis of precise visual short-term memory for complex recognisable objects.

    PubMed

    Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri

    2017-10-01

    Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Distributed service-based approach for sensor data fusion in IoT environments.

    PubMed

    Rodríguez-Valenzuela, Sandra; Holgado-Terriza, Juan A; Gutiérrez-Guerrero, José M; Muros-Cobos, Jesús L

    2014-10-15

    The Internet of Things (IoT) enables communication among smart objects, promoting the pervasive presence around us of a variety of things or objects that are able to interact and cooperate to reach common goals. IoT objects can obtain data from their context, such as the home, office, industry or body. These data can be combined into new and more complex information by applying data fusion processes. However, to apply data fusion algorithms in IoT environments, the full system must deal with distributed nodes and decentralized communication, and must support scalability and node dynamicity, among other constraints. In this paper, a novel method to manage data acquisition and fusion based on a distributed service composition model is presented, improving data treatment in IoT pervasive environments.
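    The service-composition idea can be illustrated with a toy sketch in which sensor services are registered with a fusion service that averages their current readings; the class and method names are hypothetical and not from the paper:

```python
class SensorService:
    """One IoT node exposing a read() operation."""
    def __init__(self, name, read):
        self.name = name
        self.read = read  # callable returning the current measurement

class FusionService:
    """Composes registered sensor services and fuses their readings.

    A toy stand-in for a service composition model: fusion here is a
    simple average over whatever services are currently registered,
    so nodes can join or leave at runtime.
    """
    def __init__(self):
        self.services = []

    def register(self, service):
        self.services.append(service)

    def fuse(self):
        readings = [s.read() for s in self.services]
        return sum(readings) / len(readings)
```

    For example, registering two temperature services that return 20.0 and 22.0 yields a fused reading of 21.0.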

  13. A neighboring structure reconstructed matching algorithm based on LARK features

    NASA Astrophysics Data System (ADS)

    Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa

    2015-11-01

    To address the low contrast and high noise of infrared images, and the randomness and partial occlusion of the objects they contain, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of local windows are modelled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body dataset, with a lower false detection rate than conventional methods in complex natural scenes.
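    The final detection step, fusing two similarity images and keeping local maxima, can be sketched as follows. This is an illustrative reconstruction under assumed details (elementwise-product fusion, a 3x3 suppression window), not the authors' implementation:

```python
def fuse_and_detect(sim_a, sim_b, thresh=0.5):
    """Fuse two similarity maps elementwise, then apply 3x3
    non-maximum suppression to mark detected objects.

    sim_a, sim_b: 2D lists of equal shape with values in [0, 1].
    Returns the fused map and a list of (x, y, score) peaks.
    """
    h, w = len(sim_a), len(sim_a[0])
    fused = [[sim_a[y][x] * sim_b[y][x] for x in range(w)] for y in range(h)]
    peaks = []
    for y in range(h):
        for x in range(w):
            v = fused[y][x]
            if v < thresh:
                continue
            neighbours = [fused[j][i]
                          for j in range(max(0, y - 1), min(h, y + 2))
                          for i in range(max(0, x - 1), min(w, x + 2))
                          if (i, j) != (x, y)]
            if all(v >= n for n in neighbours):  # only local maxima survive
                peaks.append((x, y, v))
    return fused, peaks
```

    Multiplicative fusion means a location must score well in both similarity images to be detected, which is one way to suppress single-cue false alarms.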

  14. Contour fractal analysis of grains

    NASA Astrophysics Data System (ADS)

    Guida, Giulia; Casini, Francesca; Viggiani, Giulia MB

    2017-06-01

    Fractal analysis has been shown to be useful in image processing for characterising shape and grey-scale complexity in applications spanning electronic to medical engineering (e.g. [1]). Fractal analysis comprises several methods for assigning a dimension and other fractal characteristics to a dataset describing geometric objects. Limited studies have been conducted on the application of fractal analysis to the classification of the shape characteristics of soil grains. The main objective of the work described in this paper is to obtain, from systematic fractal analysis of simple artificial shapes, a characterisation of particle morphology at different scales. The long-term objective of the research is to link the microscopic features of granular media with the mechanical behaviour observed in the laboratory and in situ.
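    A common way to assign a fractal dimension to a grain contour is box counting: count the boxes of size s occupied by the contour and fit log N(s) against log s. A minimal sketch under that assumption (the abstract does not specify which fractal estimator the authors use):

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of a 2D point set.

    For each box size s, count the occupied boxes N(s), then fit
    log N(s) = -D log s + c by least squares; D is the estimate.
    """
    logs, logn = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    ms, mn = sum(logs) / n, sum(logn) / n
    slope = (sum((a - ms) * (b - mn) for a, b in zip(logs, logn))
             / sum((a - ms) ** 2 for a in logs))
    return -slope
```

    A straight segment of points recovers a dimension close to 1, as expected for a smooth contour; rougher contours score higher.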

  15. Distributed Service-Based Approach for Sensor Data Fusion in IoT Environments

    PubMed Central

    Rodríguez-Valenzuela, Sandra; Holgado-Terriza, Juan A.; Gutiérrez-Guerrero, José M.; Muros-Cobos, Jesús L.

    2014-01-01

    The Internet of Things (IoT) enables communication among smart objects, promoting the pervasive presence around us of a variety of things or objects that are able to interact and cooperate to reach common goals. IoT objects can obtain data from their context, such as the home, office, industry or body. These data can be combined into new and more complex information by applying data fusion processes. However, to apply data fusion algorithms in IoT environments, the full system must deal with distributed nodes and decentralized communication, and must support scalability and node dynamicity, among other constraints. In this paper, a novel method to manage data acquisition and fusion based on a distributed service composition model is presented, improving data treatment in IoT pervasive environments. PMID:25320907

  16. An fMRI Study of Episodic Memory: Retrieval of Object, Spatial, and Temporal Information

    PubMed Central

    Hayes, Scott M.; Ryan, Lee; Schnyer, David M.; Nadel, Lynn

    2011-01-01

    Sixteen participants viewed a videotaped tour of 4 houses, highlighting a series of objects and their spatial locations. Participants were tested for memory of object, spatial, and temporal order information while undergoing functional magnetic resonance imaging. Preferential activation was observed in right parahippocampal gyrus during the retrieval of spatial location information. Retrieval of contextual information (spatial location and temporal order) was associated with activation in right dorsolateral prefrontal cortex. In bilateral posterior parietal regions, greater activation was associated with processing of visual scenes, regardless of the memory judgment. These findings support current theories positing roles for frontal and medial temporal regions during episodic retrieval and suggest a specific role for the hippocampal complex in the retrieval of spatial location information. PMID:15506871

  17. Reflecting on explanatory ability: A mechanism for detecting gaps in causal knowledge.

    PubMed

    Johnson, Dan R; Murphy, Meredith P; Messer, Riley M

    2016-05-01

    People frequently overestimate their understanding, with a particularly large blind spot for gaps in their causal knowledge. We introduce a metacognitive approach to reducing overestimation, termed reflecting on explanatory ability (REA): briefly thinking about how well one could explain something in a mechanistic, step-by-step, causally connected manner. Nine experiments demonstrated that engaging in REA just before estimating one's understanding substantially reduced overestimation. Moreover, REA reduced overestimation with nearly the same potency as generating full explanations, but did so 20 times faster (although only for high-complexity objects). REA substantially reduced overestimation by inducing participants to quickly evaluate an object's inherent causal complexity (Experiments 4-7). REA also reduced overestimation by fostering step-by-step, causally connected processing (Experiments 2 and 3). Alternative explanations for REA's effects were ruled out, including a general conservatism account (Experiments 4 and 5) and a covert explanation account (Experiment 8). REA's overestimation-reduction effect generalized beyond objects (Experiments 1-8) to sociopolitical policies (Experiment 9). REA efficiently detects gaps in our causal knowledge, with implications for improving self-directed learning, enhancing self-insight into vocational and academic abilities, and even reducing extremist attitudes. (c) 2016 APA, all rights reserved.

  18. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.
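    The two-layer architecture (GUI on top, processing-core filters below) can be sketched as a filter pipeline that the GUI drives without knowing the filters' internals. The class and function names here are hypothetical, not from the described platform:

```python
from typing import Callable, List

Image = List[List[int]]  # toy stand-in for a 2D medical image

class SegmentationCore:
    """Bottom layer: a chain of composable image filters.

    The top layer (a GUI) only calls add_filter() and run(), so
    segmentation strategies can be swapped transparently.
    """
    def __init__(self) -> None:
        self._filters: List[Callable[[Image], Image]] = []

    def add_filter(self, f: Callable[[Image], Image]) -> "SegmentationCore":
        self._filters.append(f)
        return self  # allow chained configuration

    def run(self, image: Image) -> Image:
        for f in self._filters:
            image = f(image)
        return image

def threshold(level: int) -> Callable[[Image], Image]:
    """Simplest possible segmentation stand-in: binarize at a level."""
    def f(img: Image) -> Image:
        return [[1 if px >= level else 0 for px in row] for row in img]
    return f
```

    Keeping the core free of GUI dependencies is what makes the same pipeline reusable from scripts, batch jobs, or an interactive front end.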

  19. Towards physical principles of biological evolution

    NASA Astrophysics Data System (ADS)

    Katsnelson, Mikhail I.; Wolf, Yuri I.; Koonin, Eugene V.

    2018-03-01

    Biological systems reach organizational complexity that far exceeds the complexity of any known inanimate objects. Biological entities undoubtedly obey the laws of quantum physics and statistical mechanics. However, is modern physics sufficient to adequately describe, model and explain the evolution of biological complexity? Detailed parallels have been drawn between statistical thermodynamics and the population-genetic theory of biological evolution. Based on these parallels, we outline new perspectives on biological innovation and major transitions in evolution, and introduce a biological equivalent of thermodynamic potential that reflects the innovation propensity of an evolving population. Deep analogies have also been suggested between the properties of biological entities and processes and those of frustrated states in physics, such as glasses. Such systems are characterized by frustration, whereby local states with minimal free energy conflict with the global minimum, resulting in ‘emergent phenomena’. We extend such analogies by examining frustration-type phenomena, such as conflicts between different levels of selection, in biological evolution. These frustration effects appear to drive the evolution of biological complexity. We further address evolution in multidimensional fitness landscapes from the point of view of percolation theory and suggest that percolation at a level above the critical threshold dictates the tree-like evolution of complex organisms. Taken together, these multiple connections between fundamental processes in physics and biology imply that constructing a meaningful physical theory of biological evolution might not be a futile effort. However, it is unrealistic to expect such a theory to be created in a single sweep; if it ever comes into being, this can only happen through integration of multiple physical models of evolutionary processes. Furthermore, the existing framework of theoretical physics is unlikely to suffice for adequate modeling of the biological level of complexity, and new developments within physics itself are likely to be required.

  20. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

    In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering with geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has an attractive theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours are modeled by cubic splines. An affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy, which avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor coupled with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
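    The first stage, unsharp masking, amplifies detail by adding back the difference between the image and a blurred copy of itself. A minimal 1D sketch of the classical linear form (the paper's nonlinear variant is not specified in the abstract):

```python
def unsharp_mask_1d(signal, k=1.0):
    """Enhance a 1D signal: blur with a 3-tap moving average,
    then add back k times the high-frequency residual."""
    n = len(signal)
    blurred = []
    for i in range(n):
        lo, hi = max(0, i - 1), min(n, i + 2)
        blurred.append(sum(signal[lo:hi]) / (hi - lo))
    return [s + k * (s - b) for s, b in zip(signal, blurred)]
```

    Flat regions pass through unchanged while isolated peaks are amplified, which is what makes faint features such as microcalcifications easier to detect in the later stages.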
