Studies on combined model based on functional objectives of large scale complex engineering
NASA Astrophysics Data System (ADS)
Yuting, Wang; Jingchun, Feng; Jiabao, Sun
2018-03-01
Large-scale complex engineering involves a variety of functions, and each function is realized through the completion of one or more projects, so the combinations of projects that affect each function must be identified. Based on the types of project portfolio, the relationships between projects and their functional objectives were analyzed. On that premise, portfolio techniques based on the functional objectives of projects were introduced, and the principles of these techniques were studied and formulated. In addition, the processes for combining projects were constructed. These portfolio techniques based on functional objectives lay a good foundation for the portfolio management of large-scale complex engineering.
The Complex Exposure History of a Very Large L/LL5 Chondrite Shower: Queen Alexandra Range 90201
NASA Technical Reports Server (NTRS)
Welten, K. C.; Nishiizumi, K.; Caffee, M. W.; Hillegonds, D. J.; Leya, I.; Wieler, R.; Masarik, J.
2004-01-01
Compared to iron meteorites, large stony meteorites (>100 kg) are relatively rare. Most large stony meteoroids fragment during atmospheric entry, producing large meteorite showers. Interestingly, many of these large chondrites, such as Bur Gheluai, Gold Basin, Jilin and Tsarev, appear to have a complex exposure history with a first-stage exposure on the parent body. The question is whether complex exposure histories are simply more readily detected in large objects or large objects are more likely to experience a complex exposure. Investigation of these two hypotheses is the motivation for this work, in which we report on the exposure history of QUE 90201, a large L/LL5 chondrite shower found near Queen Alexandra Range, Antarctica. Previous cosmogenic nuclide studies have led to the consensus that most of the approx. 2000 L5 and LL5 chondrites from the QUE area are derived from a single object with a pre-atmospheric radius of 1-2 m. The terrestrial age of the QUE 90201 shower was determined at 125 ± 20 kyr. Here, we present a more complete set of cosmogenic radionuclide results for the metal and stone fractions of eleven L/LL5 chondrites from the QUE stranding area, as well as noble gases in seven of these samples. The main goal of this work is to unravel the cosmic-ray exposure history of the QUE 90201 meteoroid. In addition, we will discuss the pre-atmospheric size and exposure history of QUE 93013 (H5) and 93081 (H4), which have shielding conditions similar to those of the QUE 90201 shower and a terrestrial age of 145 ± 25 kyr.
Community detection in complex networks by using membrane algorithm
NASA Astrophysics Data System (ADS)
Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren
Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve the community detection problem in complex networks. The proposed algorithm is based on membrane systems, which consist of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects and employ the differential evolution algorithm to evolve objects. Communication rules implement the exchange of information among membranes. Finally, the proposed algorithm is evaluated on synthetic networks, real-world networks with known partitions, and large-scale networks with unknown partitions. The experimental results indicate the superior performance of the proposed algorithm in comparison with other experimental algorithms.
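As a concrete illustration of the quality function named above, the following minimal sketch (our own, not the authors' code) scores a candidate partition by Newman modularity, computed directly from an adjacency matrix with NumPy; the toy graph and labels are assumptions.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a hard partition, from the adjacency matrix."""
    m = A.sum() / 2.0                                  # number of edges
    k = A.sum(axis=1)                                  # node degrees
    same = labels[:, None] == labels[None, :]          # co-membership mask
    return ((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)

# Toy example: a 4-node graph split into two candidate communities.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
labels = np.array([0, 0, 0, 1])
print(modularity(A, labels))
```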
ERIC Educational Resources Information Center
Dorrepaal, Ethy; Thomaes, Kathleen; Smit, Johannes H.; van Balkom, Anton J. L. M.; van Dyck, Richard; Veltman, Dick J.; Draijer, Nel
2010-01-01
Objective: This study tests a Stabilizing Group Treatment protocol, designed for the management of the long-term sequelae of child abuse, that is, Complex Posttraumatic Stress Disorder (Complex PTSD). Evidence-based treatment for this subgroup of PTSD patients is largely lacking. This stabilizing treatment aims at improving Complex PTSD using…
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and the manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software became larger and more complex, the management of resources such as data, rather than numeric complexity, became the major software design problem. Object-oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object-oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The object-oriented features of the CLIPS Object Oriented Language (COOL) are more versatile than those of C++. A software design methodology appropriate for engineering software, based on both object-oriented and procedural approaches and intended for implementation in CLIPS, is outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.
Dzyubak, Oleksandr P; Ritman, Erik L
2011-01-01
Blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure that spans a range of structural scales, from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern-day 3D imagers, manual tracking of the complex multiscale parameters from those large image data sets is almost impossible. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automation of an adaptive, nonsupervised system for tracking tubular objects based on a multiscale framework, using a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
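For readers who want to experiment with Hessian-based tubularity, a minimal sketch follows using scikit-image's Frangi vesselness filter as a stand-in for the ITK pipeline described in the abstract; the placeholder volume, sigma range, and threshold are assumptions.

```python
import numpy as np
from skimage.filters import frangi

# Placeholder 3-D image; in practice this would be the scanned volume.
volume = np.random.rand(64, 64, 64)

# Multiscale Hessian-based tubularity response (bright tubes on a dark
# background); the sigma range sets the range of vessel radii probed.
tubularity = frangi(volume, sigmas=np.arange(1, 6), black_ridges=False)
vessel_mask = tubularity > 0.05   # hypothetical threshold
```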
Prediction of Human Activity by Discovering Temporal Sequence Patterns.
Li, Kang; Fu, Yun
2014-08-01
Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Different from early detection of short-duration simple actions, we propose a novel framework for long-duration complex activity prediction by discovering three key aspects of activity: causality, context-cue, and predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) the probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large- and small-order Markov dependencies between action units are captured; (3) the context-cue, especially interactive object information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrences is encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets for each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.
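A minimal sketch of the variable-order prediction idea behind the PST component (our simplification, not the authors' implementation): count next-action frequencies for every suffix up to a maximum order, then predict from the longest suffix seen in training. The toy action sequences are assumptions.

```python
from collections import defaultdict, Counter

def train(sequences, max_order=3):
    """Count next-action frequencies for every context up to max_order."""
    model = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq)):
            for o in range(1, max_order + 1):
                if i >= o:
                    model[tuple(seq[i - o:i])][seq[i]] += 1
    return model

def predict(model, history, max_order=3):
    """Predict the next action from the longest matching suffix."""
    for o in range(min(max_order, len(history)), 0, -1):
        ctx = tuple(history[-o:])
        if ctx in model:
            return model[ctx].most_common(1)[0][0]
    return None

m = train([["reach", "grasp", "lift", "drink"], ["reach", "grasp", "push"]])
print(predict(m, ["reach", "grasp"]))   # most likely next action unit
```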
Reliability Standards of Complex Engineering Systems
NASA Astrophysics Data System (ADS)
Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.
2017-11-01
Production and manufacturing play an important role in modern society. Industrial production is nowadays characterized by increased and complex communications between its parts, and the problem of preventing accidents in a large industrial enterprise has become especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance: potential damage caused by an accident at such an enterprise may lead to substantial material losses and, in some cases, can even cause a loss of human lives. In terms of their reliability, industrial facilities (objects) are divided into simple and complex. Simple objects are characterized by only two conditions: operable and non-operable. A complex object exists in more than two conditions, and the main characteristic here is the stability of its operation. This paper develops a reliability indicator combining set-theory methodology and a state-space method, both of which are widely used to analyze dynamically developing probability processes. The research also introduces a set of reliability indicators for complex technical systems.
From path models to commands during additive printing of large-scale architectural designs
NASA Astrophysics Data System (ADS)
Chepchurov, M. S.; Zhukov, E. M.; Yakovlev, E. A.; Matveykin, V. G.
2018-05-01
The article considers the problem of automating the formation of large complex parts, products, and structures, especially for unique or small-batch objects produced by additive technology [1]. Research into the optimal design of a robotic complex, its modes of operation, and the structure of its control system helped to establish the technical requirements for the manufacturing process and the installation design of the robotic complex. Research on virtual models of the robotic complexes made it possible to define the main directions of design improvement and the main goal of testing the manufactured prototype: checking the positioning accuracy of the working part.
Actions, Objectives & Concerns. Human Parameters for Architectural Design.
ERIC Educational Resources Information Center
Lasswell, Thomas E.; And Others
An experiment conducted at California State College, Los Angeles, to test the value of social-psychological research in defining building needs is described. The problems of how to identify and synthesize the disparate objectives, concerns, and actions of the groups who use or otherwise have an interest in large and complex buildings are discussed.…
NASA Astrophysics Data System (ADS)
Kolkoori, S.; Wrobel, N.; Osterloh, K.; Zscherpel, U.; Ewert, U.
2013-09-01
Radiological inspection is, in general, the nondestructive testing (NDT) method used to detect bulk explosives in large objects. In contrast to personal luggage, cargo or building components constitute a complexity that may significantly hinder the detection of a threat by conventional X-ray transmission radiography. In this article, a novel X-ray backscatter technique is presented for detecting suspicious objects in a densely packed large object with only single-sided access. It consists of an X-ray backscatter camera with a special twisted-slit collimator for imaging backscattering objects. The new X-ray backscatter camera images objects not only on the basis of their densities but also by including the influences of surrounding objects. This unique feature of the X-ray backscatter camera provides new insights for identifying the internal features of the inspected object. Experimental mock-ups were designed imitating containers with threats hidden in complex packing, as they may be encountered in reality. We investigated the dependence of the quality of the X-ray backscatter image on (a) the exposure time, (b) multiple exposures, (c) the distance between object and slit camera, and (d) the width of the slit. Finally, the significant advantages of the presented X-ray backscatter camera in the context of aviation and port security are discussed.
C++, object-oriented programming, and astronomical data models
NASA Technical Reports Server (NTRS)
Farris, A.
1992-01-01
Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method
NASA Astrophysics Data System (ADS)
Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong
2017-09-01
This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate under multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, the sample complexity is greatly reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as in other disciplines where PR is needed.
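For orientation, here is a minimal plain Wirtinger-flow sketch (the baseline the hybrid method builds on, not the HWF algorithm itself) for a 1-D Fourier-intensity toy model; the random initialization and step-size heuristic are crude assumptions, whereas real Wirtinger flow uses a spectral initialization and careful step sizes.

```python
import numpy as np

def wirtinger_flow(y, steps=500, mu=0.1, seed=0):
    """Toy recovery of z (up to global phase) from y = |FFT(z)|^2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # crude init
    for _ in range(steps):
        Fz = np.fft.fft(z)
        # Wirtinger gradient of (1/2) * sum((|Fz|^2 - y)^2), up to a constant
        grad = np.fft.ifft((np.abs(Fz) ** 2 - y) * Fz)
        z = z - (mu / max(y.max(), 1e-12)) * grad             # gradient step
    return z

# Toy check: intensities of a random signal's spectrum.
x = np.random.default_rng(1).standard_normal(64)
z_hat = wirtinger_flow(np.abs(np.fft.fft(x)) ** 2)
```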
Large nebular complexes in the northern portion of the galaxy
NASA Technical Reports Server (NTRS)
Webster, W. J., Jr.
1971-01-01
Several northern complexes of ionized hydrogen, stars, and possibly nonthermal radio emission are known whose properties are similar to those of the Gum nebula. Among the best known complexes are the Ori I and Ceph IV associations and IC 1795, IC 1805, and IC 1848. Each of these complexes contains an extended ring structure and requires more excitation than is available from the known early stars. The properties of these objects were examined, and many of the properties of the Gum nebula were found to be common to such galactic complexes.
Memory for Complex Visual Objects but Not for Allocentric Locations during the First Year of Life
ERIC Educational Resources Information Center
Dupierrix, Eve; Hillairet de Boisferon, Anne; Barbeau, Emmanuel; Pascalis, Olivier
2015-01-01
Although human infants demonstrate early competence to retain visual information, memory capacities during infancy remain largely undocumented. In three experiments, we used a Visual Paired Comparison (VPC) task to examine abilities to encode identity (Experiment 1) and spatial properties (Experiments 2a and 2b) of unfamiliar complex visual…
High-frequency CAD-based scattering model: SERMAT
NASA Astrophysics Data System (ADS)
Goupil, D.; Boutillier, M.
1991-09-01
Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have proven their efficiency on simple objects for a long time. Difficult geometric problems occur when objects with very complex shapes have to be computed. Only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to wavelength; and (2) the implementation of these objects in a software package (SERMAT) allows fast and sufficiently precise domain RCS calculations to meet industry requirements in the domain of stealth.
2014-05-21
simulating air-water free-surface flow, fluid-object interaction (FOI), and fluid-structure interaction (FSI) phenomena for complex geometries, and...with no limitations on the motion of the free surface, and with particular emphasis on ship hydrodynamics. The following specific research objectives...were identified for this project: 1) Development of a theoretical framework for free-surface flow, FOI and FSI that is a suitable starting point
NASA Astrophysics Data System (ADS)
Tsuboi, Masato; Kitamura, Yoshimi; Tsutsumi, Takahiro; Uehara, Kenta; Miyoshi, Makoto; Miyawaki, Ryosuke; Miyazaki, Atsushi
2017-11-01
The Galactic Center is the nuclear region of the nearest spiral galaxy, the Milky Way, and contains the supermassive black hole with M ∼ 4 × 10^6 M⊙, Sagittarius A* (Sgr A*). One of the basic questions about the Galactic Center is whether or not Sgr A* is the only "massive" black hole in the region. The IRS13E complex is a very intriguing infrared (IR) object that contains a large dark mass, comparable to the mass of an intermediate-mass black hole (IMBH), inferred from the proper motions of the main member stars. However, the existence of the IMBH remains controversial, and there are some objections to accepting it. In this study, we detected ionized gas with a very large velocity width (Δv_FWZI ∼ 650 km s^-1) and a very compact size (r ∼ 400 au) in the complex using the Atacama Large Millimeter/submillimeter Array (ALMA). We also found an extended component connecting with the compact ionized gas. The properties suggest that this is an ionized gas flow on a Keplerian orbit with high eccentricity. The enclosed mass is estimated to be 10^4 M⊙ by the analysis of the orbit. The mass does not conflict with the upper limit mass of the IMBH around Sgr A*, which is derived from long-term astrometry with the Very Long Baseline Array (VLBA). In addition, the object probably has an X-ray counterpart. Consequently, a very fascinating possibility is that the detected ionized gas is rotating around an IMBH embedded in the IRS13E complex.
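As a sanity check on the quoted enclosed mass, a back-of-envelope circular-orbit estimate (our own arithmetic, not the paper's eccentric-orbit analysis) with v ≈ Δv_FWZI/2 and r ≈ 400 au gives the same order of magnitude:

```python
# Crude circular-orbit enclosed-mass estimate: M ~ v^2 r / G.
G, au, Msun = 6.674e-11, 1.496e11, 1.989e30   # SI units
v, r = 325e3, 400 * au                         # v ~ (650 km/s) / 2
M = v**2 * r / G / Msun
print(f"M ~ {M:.1e} Msun")   # a few 1e4 Msun, consistent with the cited 1e4 Msun
```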
Trajectory-probed instability and statistics of desynchronization events in coupled chaotic systems
NASA Astrophysics Data System (ADS)
de Oliveira, Gilson F.; Chevrollier, Martine; Passerat de Silans, Thierry; Oriá, Marcos; de Souza Cavalcante, Hugo L. D.
2015-11-01
Complex systems, such as financial markets, earthquakes, and neurological networks, exhibit extreme events whose mechanisms of formation are still not completely understood. These mechanisms may be identified and better studied in simpler systems with dynamical features similar to the ones encountered in the complex system of interest. For instance, sudden and brief departures from the synchronized state observed in coupled chaotic systems were shown to display non-normal statistical distributions similar to events observed in the complex systems cited above. The currently accepted hypothesis is that these desynchronization events are influenced by the presence of unstable objects in the phase space of the system. Here, we present further evidence that the occurrence of large events is triggered by the visitation of the system's phase-space trajectory to the vicinity of these unstable objects. In the system studied here, this visitation is controlled by a single parameter, and we exploit this feature to observe the effect of the visitation rate on the overall instability of the synchronized state. We find that the probability of escapes from the synchronized state and the size of those desynchronization events are enhanced in attractors whose shapes permit the chaotic trajectories to approach the region of strong instability. This result shows that the occurrence of large events requires not only a large local instability to amplify noise, or to amplify the effect of parameter mismatch between the coupled subsystems, but also that the trajectories of the system wander close to this local instability.
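The bursting phenomenon described above can be reproduced in miniature with two noisily coupled logistic maps; this sketch (all parameters are assumptions chosen near the stability boundary) counts excursions of the synchronization error above a threshold.

```python
import numpy as np

def coupled_logistic(n=100_000, r=4.0, eps=0.28, noise=1e-6, seed=0):
    """Two symmetrically coupled logistic maps near the blowout boundary;
    returns the synchronization error time series |x - y|."""
    rng = np.random.default_rng(seed)
    x, y = 0.30, 0.31
    d = np.empty(n)
    for i in range(n):
        fx, fy = r * x * (1 - x), r * y * (1 - y)
        x = (1 - eps) * fx + eps * fy + noise * rng.standard_normal()
        y = (1 - eps) * fy + eps * fx + noise * rng.standard_normal()
        x, y = float(np.clip(x, 0.0, 1.0)), float(np.clip(y, 0.0, 1.0))
        d[i] = abs(x - y)
    return d

d = coupled_logistic()
print("desync events:", (d > 0.1).sum(), "max excursion:", d.max())
```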
ERIC Educational Resources Information Center
Gan, Zhengdong
2012-01-01
This study, which is part of a large-scale study of using objective measures to validate assessment rating scales and assessment tasks in a high-profile school-based assessment initiative in Hong Kong, examined how grammatical complexity measures relate to task type and analytic evaluations of students' speaking proficiency in a classroom-based…
ERIC Educational Resources Information Center
Fallon, Barbara; Trocme, Nico; MacLaurin, Bruce
2011-01-01
Objective: To examine evidence available in large-scale North American datasets on child abuse and neglect that can assist in understanding the complexities of child protection case classifications. Methods: A review of child abuse and neglect data from large North American epidemiological studies including the Canadian Incidence Study of Reported…
R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove
2016-01-01
The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...
Patel, Mohak; Leggett, Susan E; Landauer, Alexander K; Wong, Ian Y; Franck, Christian
2018-04-03
Spatiotemporal tracking of tracer particles or objects of interest can reveal localized behaviors in biological and physical systems. However, existing tracking algorithms are most effective for relatively low numbers of particles that undergo displacements smaller than their typical interparticle separation distance. Here, we demonstrate a single particle tracking algorithm to reconstruct large complex motion fields with large particle numbers, orders of magnitude larger than previously tractably resolvable, thus opening the door for attaining very high Nyquist spatial frequency motion recovery in the images. Our key innovations are feature vectors that encode nearest neighbor positions, a rigorous outlier removal scheme, and an iterative deformation warping scheme. We test this technique for its accuracy and computational efficacy using synthetically and experimentally generated 3D particle images, including non-affine deformation fields in soft materials, complex fluid flows, and cell-generated deformations. We augment this algorithm with additional particle information (e.g., color, size, or shape) to further enhance tracking accuracy for high gradient and large displacement fields. These applications demonstrate that this versatile technique can rapidly track unprecedented numbers of particles to resolve large and complex motion fields in 2D and 3D images, particularly when spatial correlations exist.
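A minimal sketch of the neighbor-based matching idea (a simplification of the paper's feature vectors, not the authors' algorithm): describe each particle by the relative positions of its k nearest neighbors, which is invariant to large uniform displacements, and match descriptors across frames with a k-d tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbor_features(pts, k=4):
    """Descriptor: relative positions of each point's k nearest neighbors."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k + 1)        # first hit is the point itself
    rel = pts[idx[:, 1:]] - pts[:, None, :]
    return rel.reshape(len(pts), -1)

def match(frame0, frame1, k=4):
    f0, f1 = neighbor_features(frame0, k), neighbor_features(frame1, k)
    dist, j = cKDTree(f1).query(f0)          # closest descriptor in frame 1
    return j, dist                           # candidate links + residuals

# Toy test: frame 1 is frame 0 under a translation much larger than the
# typical interparticle spacing, where naive nearest-neighbor linking fails.
frame0 = np.random.rand(500, 3)
frame1 = frame0 + np.array([0.3, 0.0, 0.0])
links, res = match(frame0, frame1)
print((links == np.arange(500)).mean())     # fraction of correct links
```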
Platform options for the Space Station program
NASA Technical Reports Server (NTRS)
Mangano, M. J.; Rowley, R. W.
1986-01-01
Platforms for polar and 28.5 deg orbits were studied to determine the platform requirements and characteristics necessary to support the science objectives. Large platforms supporting the Earth-Observing System (EOS) were initially studied. Co-orbiting platforms were derived from these designs. Because cost estimates indicated that the large platform approach was likely to be too expensive, require several launches, and generally be excessively complex, studies of small platforms were undertaken. Results of these studies show the small platform approach to be technically feasible at lower overall cost. All designs maximized hardware inheritance from the Space Station program to reduce costs. Science objectives as defined at the time of these studies are largely achievable.
A probabilistic framework for identifying biosignatures using Pathway Complexity
NASA Astrophysics Data System (ADS)
Marshall, Stuart M.; Murray, Alastair R. G.; Cronin, Leroy
2017-11-01
One thing that discriminates living things from inanimate matter is their ability to generate similarly complex or non-random structures in large abundance. From DNA sequences to folded protein structures, living cells, microbial communities and multicellular structures, the material configurations in biology can easily be distinguished from non-living material assemblies. Many complex artefacts, from ordinary bioproducts to human tools, though they are not living things, are ultimately produced by biological processes: whether those processes occur at the scale of cells or societies, they are the consequences of living systems. While these objects are not living, they cannot form randomly, as they are the product of a biological organism and hence are either technological or cultural biosignatures. A generalized approach that aims to evaluate complex objects as possible biosignatures could be useful for exploring the cosmos for new life forms, but it is not obvious how to create such a self-contained approach, since it would require us to prove rigorously that a given artefact is too complex to have formed by chance. In this paper, we present a new type of complexity measure, which we call 'Pathway Complexity', that allows us not only to threshold the abiotic-biotic divide, but also to demonstrate a probabilistic approach based on object abundance and complexity which can be used to unambiguously assign complex objects as biosignatures. We hope that this approach will not only open up the search for biosignatures beyond the Earth, but also allow us to explore the Earth for new types of biology, and to determine when a complex chemical system discovered in the laboratory could be considered alive. This article is part of the themed issue 'Reconceptualizing the origins of life'.
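To make the abundance-times-complexity logic concrete, here is a toy probabilistic threshold (our own construction following the abstract's idea, not the paper's measure): if each assembly step has abiotic chance probability p, then finding N identical copies of a c-step object by chance has probability roughly p^(cN), and a detection rule can threshold that number.

```python
import math

def log10_chance(c_steps, n_copies, p_step=0.01):
    """log10 probability that n identical c-step objects arise by chance."""
    return n_copies * c_steps * math.log10(p_step)

# Hypothetical numbers: pathway complexity of 15 steps, found in 10 copies.
lg = log10_chance(15, 10)
is_biosignature = lg < -30        # hypothetical decision threshold
print(lg, is_biosignature)
```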
Industrial metrology as applied to large physics experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veal, D.
1993-05-01
A physics experiment is a large complex 3-D object (typ. 1200 m^3, 35000 tonnes) with sub-millimetric alignment requirements. Two generic survey alignment tasks can be identified: first, an iterative positioning of the apparatus subsystems in space and, second, a quantification of as-built parameters. The most convenient measurement technique is industrial triangulation, but the complexity of the measured object and the constraints of the measurement environment frequently require a more sophisticated approach. To enlarge the "survey alignment toolbox", measurement techniques commonly associated with other disciplines, such as geodesy, applied geodesy for accelerator alignment, and mechanical engineering, are also used. Disparate observables require a heavy reliance on least squares programs for campaign pre-analysis and calculation. This paper will offer an introduction to the alignment of physics experiments and will identify trends for the next generation of SSC experiments.
Stand density index as a tool to assess the maximization of forest carbon and biomass
Christopher W. Woodall; Anthony W. D’Amato; John B. Bradford; Andrew O. Finley
2012-01-01
Given the ability of forests to mitigate greenhouse gas emissions and provide feedstocks to energy utilities, there is an emerging need to assess forest biomass/carbon accretion opportunities over large areas. Techniques for objectively quantifying stand stocking of biomass/carbon are lacking for large areas given the complexity of tree species composition in the U.S....
Range 7 Scanner Integration with PaR Robot Scanning System
NASA Technical Reports Server (NTRS)
Schuler, Jason; Burns, Bradley; Carlson, Jeffrey; Minich, Mark
2011-01-01
An interface bracket and coordinate transformation matrices were designed to allow the Range 7 scanner to be mounted on the PaR Robot detector arm for scanning the heat shield or another object placed in the test cell. A process was designed for using Rapid Form XOR to stitch data from multiple scans together to provide an accurate 3D model of the object scanned. An accurate model was required for the design and verification of an existing heat shield. The large physical size and complex shape of the heat shield do not allow for direct measurement of certain features in relation to other features. Any imaging device capable of imaging the heat shield in its entirety suffers reduced resolution and cannot image sections that are blocked from view. Prior methods involved tools such as commercial measurement arms, taking images with cameras, and then performing manual measurements. These prior methods were tedious, could not provide a 3D model of the object being scanned, and were typically limited to a few tens of measurement points at prominent locations. Integration of the scanner with the robot allows large complex objects to be scanned at high resolution, and 3D Computer Aided Design (CAD) models to be generated for verification of items against the original design and for generating models of previously undocumented items. The main components are the mounting bracket for the scanner to the robot and the coordinate transformation matrices used for stitching the scanner data into a 3D model. The steps involve mounting the interface bracket to the robot's detector arm, mounting the scanner to the bracket, and then scanning sections of the object while recording the location of the tool tip (in this case the center of the scanner's focal point). A novel feature is the ability to stitch images together by coordinates instead of requiring each scan data set to have overlapping identifiable features. This setup allows models of complex objects to be developed even if the object is large and featureless, or has sections that lack line of sight to other parts of the object for use as a reference. In addition, millions of points can be used for creation of an accurate model [i.e., within 0.03 in. (≈0.8 mm) over a span of 250 in. (≈6.35 m)].
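The coordinate-based stitching described above reduces to applying each scan's robot-reported pose as a homogeneous transform; a minimal sketch follows (toy data and poses are assumptions, not the actual system's calibration).

```python
import numpy as np

def to_world(points_scanner, T_world_tool):
    """Transform Nx3 scanner-frame points into the world frame using the
    robot-reported 4x4 homogeneous pose of the scanner focal point."""
    pts_h = np.hstack([points_scanner, np.ones((len(points_scanner), 1))])
    return (T_world_tool @ pts_h.T).T[:, :3]

# Toy demo: the same surface patch seen from two poses offset in x.
T0, T1 = np.eye(4), np.eye(4)
T1[0, 3] = 0.5                                   # 0.5 m translation
scan0 = np.random.rand(100, 3)
scan1 = to_world(scan0, np.linalg.inv(T1))       # same points, other pose
model = np.vstack([to_world(scan0, T0), to_world(scan1, T1)])  # stitched cloud
```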
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
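A minimal sketch of a Genetic Algorithm over discrete subsystem choices of the kind described above; the option counts, the fitness function, and the GA parameters are all stand-ins for a real system model.

```python
import random

# Hypothetical genes: number of technology options per subsystem choice.
OPTIONS = {"power": 3, "comms": 4, "adcs": 3, "redundancy": 3}
KEYS = list(OPTIONS)

def fitness(design):
    """Stand-in objective; a real model would score mass, cost, risk, etc."""
    return -sum(design[k] for k in KEYS)

def random_design():
    return {k: random.randrange(n) for k, n in OPTIONS.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in KEYS}

def mutate(d, p=0.1):
    return {k: random.randrange(OPTIONS[k]) if random.random() < p else v
            for k, v in d.items()}

pop = [random_design() for _ in range(40)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)          # elitist selection
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
best = max(pop, key=fitness)
print(best, fitness(best))
```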
Dependency visualization for complex system understanding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, J. Allison Cory
1994-09-01
With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now sought and developed for reverse engineering and design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs. The list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem. The ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed-graph-based concepts in software tool design has been demonstrated in the literature, these tools still lack the capabilities necessary for large system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.
A high-level object-oriented model for representing relationships in an electronic medical record.
Dolin, R. H.
1994-01-01
The importance of electronic medical records in improving the quality and cost-effectiveness of medical care continues to be realized. This growing importance has spawned efforts to define the structure and content of medical data, which is heterogeneous, highly inter-related, and complex. Computer-assisted data modeling tools have greatly facilitated the process of representing medical data; however, the complex inter-relationships of medical information can result in data models that are large and cumbersome to manipulate and view. This report presents a high-level object-oriented model for representing the relationships between objects or entities that might exist in an electronic medical record. By defining the relationships between objects at a high level and providing for inheritance, this model enables relating any medical entity to any other medical entity, even though the relationships were not directly specified or known during data model design. PMID:7949981
Cellular automata with object-oriented features for parallel molecular network modeling.
Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan
2005-06-01
Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond the large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include object structure to define molecular programs within a cell, floating-point data type and mathematical functions to perform quantitative computation, message passing capability to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at cell, tissue, organ, or even organism levels, the program will help improve our understanding of how complex and dynamic biological activities are generated and controlled by parallel functioning of molecular networks. Index Terms-Cellular automata, modeling, molecular network, object-oriented.
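The flavor of object-oriented cells with floating-point state and message passing can be illustrated in a few lines (our illustration in Python, not Cellang syntax): a 1-D ring of cells diffusing a concentration through explicit messages.

```python
class Cell:
    """A CA cell with encapsulated molecular state and a message inbox."""
    def __init__(self, conc):
        self.conc = conc          # floating-point molecular concentration
        self.inbox = []

    def emit(self, neighbors, rate=0.1):
        share = rate * self.conc
        for n in neighbors:
            n.inbox.append(share)             # message passing
        self.conc -= share * len(neighbors)   # conserve mass

    def absorb(self):
        self.conc += sum(self.inbox)
        self.inbox.clear()

# Toy simulation: a pulse diffusing around a ring of 100 cells.
cells = [Cell(1.0 if i == 50 else 0.0) for i in range(100)]
for step in range(200):
    for i, c in enumerate(cells):
        c.emit([cells[(i - 1) % 100], cells[(i + 1) % 100]])
    for c in cells:
        c.absorb()
print(max(c.conc for c in cells))
```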
Some thoughts on the management of large, complex international space ventures
NASA Technical Reports Server (NTRS)
Lee, T. J.; Kutzer, Ants; Schneider, W. C.
1992-01-01
Management issues relevant to the development and deployment of large international space ventures are discussed with particular attention given to previous experience. Management approaches utilized in the past are labeled as either simple or complex, and signs of efficient management are examined. Simple approaches include those in which experiments and subsystems are developed for integration into spacecraft, and the Apollo-Soyuz Test Project is given as an example of a simple multinational approach. Complex approaches include those for ESA's Spacelab Project and the Space Station Freedom in which functional interfaces cross agency and political boundaries. It is concluded that individual elements of space programs should be managed by individual participating agencies, and overall configuration control is coordinated by level with a program director acting to manage overall objectives and project interfaces.
KBGIS-II: A knowledge-based geographic information system
NASA Technical Reports Server (NTRS)
Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj
1986-01-01
The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.
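Since the spatial data bases above are stored in quadtree form, a generic point-quadtree sketch may help fix ideas (our own toy structure, not the KBGIS-II code; assumes distinct points): recursive insertion with node splitting and rectangular range queries.

```python
class Quad:
    """Toy point quadtree over the box (x0, y0)-(x1, y1)."""
    def __init__(self, x0, y0, x1, y1, cap=4):
        self.b, self.pts, self.kids, self.cap = (x0, y0, x1, y1), [], None, cap

    def insert(self, p):
        if self.kids is None:
            self.pts.append(p)
            if len(self.pts) > self.cap:     # split into four quadrants
                x0, y0, x1, y1 = self.b
                mx, my = (x0 + x1) / 2, (y0 + y1) / 2
                self.kids = [Quad(x0, y0, mx, my), Quad(mx, y0, x1, my),
                             Quad(x0, my, mx, y1), Quad(mx, my, x1, y1)]
                for q in self.pts:
                    self._child(q).insert(q)
                self.pts = []
        else:
            self._child(p).insert(p)

    def _child(self, p):
        x0, y0, x1, y1 = self.b
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        return self.kids[(p[0] >= mx) + 2 * (p[1] >= my)]

    def query(self, r):
        """Return all stored points inside rectangle r = (x0, y0, x1, y1)."""
        rx0, ry0, rx1, ry1 = r
        x0, y0, x1, y1 = self.b
        if rx0 > x1 or rx1 < x0 or ry0 > y1 or ry1 < y0:
            return []                        # prune non-overlapping subtrees
        hits = [p for p in self.pts if rx0 <= p[0] <= rx1 and ry0 <= p[1] <= ry1]
        if self.kids:
            for k in self.kids:
                hits += k.query(r)
        return hits
```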
Multi-Object Spectroscopy with MUSE
NASA Astrophysics Data System (ADS)
Kelz, A.; Kamann, S.; Urrutia, T.; Weilbacher, P.; Bacon, R.
2016-10-01
Since 2014, MUSE, the Multi-Unit Spectroscopic Explorer, has been in operation at the ESO-VLT. It combines superb spatial sampling with a large wavelength coverage. By design, MUSE is an integral-field instrument, but its field of view and large multiplex make it a powerful tool for multi-object spectroscopy too. Every data-cube consists of 90,000 image-sliced spectra and 3700 monochromatic images. In autumn 2014, the observing programs with MUSE commenced, with targets ranging from distant galaxies in the Hubble Deep Field to local stellar populations, star formation regions, and globular clusters. This paper provides a brief summary of the key features of the MUSE instrument and its complex data reduction software. Selected examples are given of how multi-object spectroscopy for hundreds of continuum and emission-line objects can be obtained in wide, deep, and crowded fields with MUSE, without the classical need for any target pre-selection.
Object Individuation and Physical Reasoning in Infancy: An Integrative Account
Baillargeon, Renée; Stavans, Maayan; Wu, Di; Gertner, Yael; Setoh, Peipei; Kittredge, Audrey K.; Bernard, Amélie
2012-01-01
Much of the research on object individuation in infancy has used a task in which two different objects emerge in alternation from behind a large screen, which is then removed to reveal either one or two objects. In their seminal work, Xu and Carey (1996) found that it is typically not until the end of the first year that infants detect a violation when a single object is revealed. Since then, a large number of investigations have modified the standard task in various ways and found that young infants succeed with some but not with other modifications, yielding a complex and unwieldy picture. In this article, we argue that this confusing picture can be better understood by bringing to bear insights from a related subfield of infancy research, physical reasoning. By considering how infants reason about object information within and across physical events, we can make sense of apparently inconsistent findings from different object-individuation tasks. In turn, object-individuation findings deepen our understanding of how physical reasoning develops in infancy. Integrating the insights from physical-reasoning and object-individuation investigations thus enriches both subfields and brings about a clearer account of how infants represent objects and events. PMID:23204946
Feasibility study for hydrocarbon complex in southern seaboard. Petroleum Authority of Thailand
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This study, conducted by Fluor Daniel, was funded by the U.S. Trade and Development Agency on behalf of the Petroleum Authority of Thailand. The primary objective of the study was to investigate the economic viability of the related facilities and determine how each could help to industrialize and build up the Southern Seaboard area of Thailand. The focus of the report is on three areas: the crude oil transportation system, the refinery, and the petrochemical complex. Another objective of the study was to offer an alternative to large crude carrier traffic by proposing the completion of a crude oil pipeline. The report is divided into the following sections: (1) Executive Summary; (2) Introduction; (3) Crude Oil Transportation System; (4) Refinery Project; (5) Petrochemical Complex; (6) Key Issues & Considerations; (7) Financial Evaluations; (8) Summary & Conclusions.
NASA Astrophysics Data System (ADS)
Principe, David A.; Cieza, Lucas; Hales, Antonio; Zurlo, Alice; Williams, Jonathan; Ruíz-Rodríguez, Dary; Canovas, Hector; Casassus, Simon; Mužić, Koraljka; Perez, Sebastian; Tobin, John J.; Zhu, Zhaohuan
2018-01-01
We present Atacama Large Millimeter/sub-millimeter Array (ALMA) observations of the star-forming environment surrounding V1647 Ori, an outbursting FUor/EXor pre-main-sequence star. The dust continuum and the (J = 2-1) 12CO, 13CO, and C18O molecular emission lines were observed to characterize the V1647 Ori circumstellar disc and any large-scale molecular features present. We detect continuum emission from the circumstellar disc and determine a radius r = 40 au, an inclination i = 17° (+6°, -9°), and a total disc mass Mdisc of ∼0.1 M⊙. We do not identify any disc structures associated with nearby companions, massive planets, or fragmentation. The molecular cloud environment surrounding V1647 Ori is both structured and complex. We confirm the presence of an excavated cavity north of V1647 Ori and have identified dense material at the base of the optical reflection nebula (McNeil's Nebula) that is actively shaping its surrounding environment. Two distinct outflows have been detected with dynamical ages of ∼11,700 and 17,200 yr. These outflows are misaligned, suggesting that disc precession over ∼5500 yr, as a result of anisotropic accretion events, is responsible. The collimated outflows exhibit velocities of ∼2 km s^-1, similar in velocity to those of other FUor objects presented in this series, but significantly slower than previous observations and model predictions. The V1647 Ori system is seemingly connected by an 'arm' of material to a large unresolved structure located ∼20 arcsec to the west. The complex environment surrounding V1647 Ori suggests it is in the early stages of star formation, which may relate to its classification as both a FUor- and EXor-type object.
Implicit Multibody Penalty-Based Distributed Contact.
Xu, Hongyi; Zhao, Yili; Barbic, Jernej
2014-09-01
The penalty method is a simple and popular approach to resolving contact in computer graphics and robotics. Penalty-based contact, however, suffers from stability problems due to the highly variable and unpredictable net stiffness, and this is particularly pronounced in simulations with time-varying, distributed, geometrically complex contact. We employ semi-implicit integration, exact analytical contact gradients, symbolic Gaussian elimination, and an SVD solver to simulate stable penalty-based frictional contact with large, time-varying contact areas, involving many rigid objects and articulated rigid objects in complex conforming contact and self-contact. We also derive implicit proportional-derivative control forces for real-time control of articulated structures with loops. We present challenging contact scenarios such as screwing a hex bolt into a hole, bowls stacked in perfectly conforming configurations, and manipulating many objects using actively controlled articulated mechanisms in real time.
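A one-degree-of-freedom sketch of penalty contact with semi-implicit (symplectic Euler) integration, the combination named above; the stiffness, damping, and time step are assumptions, and real multibody systems need the careful treatment the paper develops.

```python
# Point mass dropped onto a floor at y = 0; contact is a penalty spring.
k, c, dt, g, m = 5e3, 5.0, 1e-3, 9.81, 1.0   # assumed parameters (SI)
y, v = 1.0, 0.0
for step in range(2000):
    f = -m * g
    if y < 0:                     # penetration depth is -y
        f += k * (-y) - c * v     # penalty spring plus damping
    v += dt * f / m               # semi-implicit: update velocity first,
    y += dt * v                   # then position with the new velocity
print(y, v)                       # mass settles near the surface
```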
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.; Chew, Y. T.
2008-07-01
In this paper, an object-oriented, quadrilateral-mesh-based solution-adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured mesh, and it is extended to second-order accuracy by using MUSCL extrapolation. The nodes, edges, and cells are arranged in such an object-oriented manner that each of them inherits from a basic object. A home-made doubly linked list is designed to manage these objects, so that inserting new objects and removing existing ones (nodes, edges, and cells) is independent of the number of objects, with complexity of only O(1). In addition, cells with different levels are stored in different lists. This avoids the recursive calculation of the solution of mother (non-leaf) cells, and thus high efficiency is obtained. Besides, compared to other cell-edge adaptive methods, the separation of nodes reduces the memory required for redundant nodes, especially in cases where the number of levels is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include a vortex evolution problem, an interface-only problem under structured and unstructured meshes, a bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that it is feasible to apply the method to compressible multi-fluid flows with large density ratio (1000) and strong shock wave (pressure ratio of 10,000) interaction with the interface.
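The O(1) object management described above reduces to an intrusive doubly linked list; a minimal sketch follows (our own, independent of the paper's implementation).

```python
class Node:
    """List node wrapping a mesh object (cell, edge, or vertex)."""
    __slots__ = ("obj", "prev", "next")
    def __init__(self, obj):
        self.obj, self.prev, self.next = obj, None, None

class DList:
    """Doubly linked list with O(1) insert and O(1) unlink of a known node."""
    def __init__(self):
        self.head = None

    def push(self, node):                # O(1) insert at the front
        node.prev, node.next = None, self.head
        if self.head:
            self.head.prev = node
        self.head = node

    def remove(self, node):              # O(1), independent of list length
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
```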
SimGen: A General Simulation Method for Large Systems.
Taylor, William R
2017-02-03
SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers, with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described. Copyright © 2016 The Francis Crick Institute. Published by Elsevier Ltd. All rights reserved.
Gamifying Video Object Segmentation.
Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela
2017-10-01
Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas ranging from improving usability of technical displays or websites up to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: A quantitative dimension that increases complexity through number of elements, and a structural dimension representing order negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
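A minimal sketch of fitting such a two-factor complexity model on synthetic stand-in data (the coefficients, noise level, and sample size below are fabricated for illustration only, not the study's values):

```python
import numpy as np

# Synthetic stand-in data: perceived complexity rises with element count
# and falls with mirror symmetry, plus rating noise.
rng = np.random.default_rng(0)
n_elem = rng.integers(5, 200, size=300).astype(float)   # quantitative factor
symmetry = rng.random(300)                              # structural factor
rating = 0.02 * n_elem - 1.5 * symmetry + rng.normal(0, 0.2, 300)

# Fit the two-factor linear model by least squares.
X = np.column_stack([n_elem, symmetry, np.ones(300)])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(beta)   # recovered weights: positive for quantity, negative for order
```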
Benefits of an Object-oriented Database Representation for Controlled Medical Terminologies
Gu, Huanying; Halper, Michael; Geller, James; Perl, Yehoshua
1999-01-01
Objective: Controlled medical terminologies (CMTs) have been recognized as important tools in a variety of medical informatics applications, ranging from patient-record systems to decision-support systems. Controlled medical terminologies are typically organized in semantic network structures consisting of tens to hundreds of thousands of concepts. This overwhelming size and complexity can be a serious barrier to their maintenance and widespread utilization. The authors propose the use of object-oriented databases to address the problems posed by the extensive scope and high complexity of most CMTs for maintenance personnel and general users alike. Design: The authors present a methodology that allows an existing CMT, modeled as a semantic network, to be represented as an equivalent object-oriented database. Such a representation is called an object-oriented health care terminology repository (OOHTR). Results: The major benefit of an OOHTR is its schema, which provides an important layer of structural abstraction. Using the high-level view of a CMT afforded by the schema, one can gain insight into the CMT's overarching organization and begin to better comprehend it. The authors' methodology is applied to the Medical Entities Dictionary (MED), a large CMT developed at Columbia-Presbyterian Medical Center. Examples of how the OOHTR schema facilitated updating, correcting, and improving the design of the MED are presented. Conclusion: The OOHTR schema can serve as an important abstraction mechanism for enhancing comprehension of a large CMT, and thus promotes its usability. PMID:10428002
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.
Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz
2012-01-01
Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.
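The virtual-echo technique rests on ordinary linear convolution: convolving the bat's call with a reflector's impulse response yields the echo that a real object with that response would return. A minimal sketch (sampling rate, call design, and impulse-response taps are all invented for illustration):

```python
import numpy as np

fs = 192_000                        # sampling rate in Hz (illustrative)
t = np.arange(0, 0.003, 1 / fs)     # 3 ms synthetic bat call
call = np.sin(2 * np.pi * (80_000 - 1e7 * t) * t)   # FM sweep ~80 -> ~20 kHz

# Impulse response of a hypothetical reflector: delayed, attenuated taps.
ir = np.zeros(int(0.002 * fs))
ir[[100, 130, 170]] = [0.5, 0.3, 0.2]

echo = np.convolve(call, ir)        # virtual object echo played to the bat
```

Because a single loudspeaker radiates this echo from one point, the spread of arrival angles (the sonar aperture) of a physically large reflector is precisely what such a playback cannot reproduce.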
Advanced functional network analysis in the geosciences: The pyunicorn package
NASA Astrophysics Data System (ADS)
Donges, Jonathan F.; Heitzig, Jobst; Runge, Jakob; Schultz, Hanna C. H.; Wiedermann, Marc; Zech, Alraune; Feldhoff, Jan; Rheinwalt, Aljoscha; Kutza, Hannes; Radebach, Alexander; Marwan, Norbert; Kurths, Jürgen
2013-04-01
Functional networks are a powerful tool for analyzing large geoscientific datasets such as global fields of climate time series originating from observations or model simulations. pyunicorn (pythonic unified complex network and recurrence analysis toolbox) is an open-source, fully object-oriented and easily parallelizable package written in the language Python. It allows for constructing functional networks (aka climate networks) representing the structure of statistical interrelationships in large datasets and, subsequently, investigating this structure using advanced methods of complex network theory such as measures for networks of interacting networks, node-weighted statistics or network surrogates. Additionally, pyunicorn makes it possible to study the complex dynamics of geoscientific systems, as recorded by time series, by means of recurrence networks and visibility graphs. The range of possible applications of the package is outlined, drawing on several examples from climatology.
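pyunicorn provides optimized classes for these analyses; rather than guess at its API, the concept behind one of them, the recurrence network, can be sketched generically with numpy and networkx (the embedding parameters and the 10th-percentile threshold below are arbitrary choices):

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

def recurrence_network(x, dim=3, tau=5, eps=None):
    """Nodes are delay-embedded state vectors of the series x; edges link
    states closer than eps (default: 10th percentile of all distances)."""
    n = len(x) - (dim - 1) * tau
    states = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    d = squareform(pdist(states))
    if eps is None:
        eps = np.percentile(d[d > 0], 10)
    adj = (d < eps) & ~np.eye(n, dtype=bool)
    return nx.from_numpy_array(adj.astype(int))

x = np.sin(np.linspace(0, 60, 1500)) + 0.1 * np.random.randn(1500)
print(nx.transitivity(recurrence_network(x)))
```

Network measures computed on such graphs (transitivity, average path length, and so on) then characterize the dynamics recorded by the time series.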
Automatic trajectory measurement of large numbers of crowded objects
NASA Astrophysics Data System (ADS)
Li, Hui; Liu, Ye; Chen, Yan Qiu
2013-06-01
Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, the cost matrix of assignments between consecutive frames is learned via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
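The linking step, solving assignment problems over a cost matrix, is standard; a minimal frame-to-frame association sketch with scipy, in which a Euclidean feature distance stands in for the paper's random-forest-learned cost and the gate value is invented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(feats_prev, feats_curr, gate=0.5):
    """Match detections in consecutive frames by solving a linear
    assignment problem over a pairwise cost matrix; matches whose cost
    exceeds the gate are rejected as unassigned."""
    cost = np.linalg.norm(feats_prev[:, None, :] - feats_curr[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```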
Handling Practicalities in Agricultural Policy Optimization for Water Quality Improvements
Bilevel and multi-objective optimization methods are often useful for spatially targeting agri-environmental policy throughout a watershed. This type of problem is complex and comprises a number of practicalities: (i) a large number of decision variables, (ii) at least two inte...
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
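The model's key assumption, cortical activity encoding the difference between the observed signal and an internal estimate, can be caricatured as an error-driven estimation loop over a dictionary of candidate source spectra. Everything below (the dictionary D, learning rate, non-negativity constraint) is an illustrative assumption, not the paper's circuit:

```python
import numpy as np

def identify_sources(x, D, n_iter=200, lr=0.05):
    """Estimate coefficients a so the internal prediction D @ a matches
    the observed spectrum x; the residual e plays the role of the
    cortical error signal. Columns of D are candidate object spectra."""
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        e = x - D @ a              # error: observed minus internal estimate
        a += lr * D.T @ e          # gradient step on the squared error
        a = np.maximum(a, 0.0)     # source levels cannot be negative
    return a, e
```

Level invariance in the actual model goes beyond this sketch, but the residual e is exactly the kind of quantity the reanalyzed cortical recordings were found to resemble.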
Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback
NASA Astrophysics Data System (ADS)
Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai
2012-01-01
With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large share of these scenes is never retrieved and used. Automatic retrieval of desired image data using query by image content, to fully utilize the huge repository volume, is therefore becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze all the image information mining (IIM) methods, making it easier for a user to select a method depending upon his or her requirements. We concentrate our study only on high-resolution SAR images, and we propose to use InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with less coherence, such as areas with vegetation cover, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov random fields as texture descriptors for image patches and SVM relevance feedback.
Progress in developing Poisson-Boltzmann equation solvers
Li, Chuan; Li, Lin; Petukh, Marharyta; Alexov, Emil
2013-01-01
This review outlines the recent progress made in developing more accurate and efficient solutions to model electrostatics in systems comprising bio-macromolecules and nano-objects, the latter referring to objects that do not have a biological function themselves but are nowadays frequently used in biophysical and medical approaches in conjunction with bio-macromolecules. The problem of modeling macromolecular electrostatics is reviewed from two different angles: as a mathematical task, given the specific definition of the system to be modeled, and as a physical problem aiming to better capture the phenomena occurring in real experiments. In addition, specific attention is paid to methods that extend the capabilities of existing solvers to model large systems, toward applications such as calculations of the electrostatic potential and energies in molecular motors, mitochondrial complexes, photosynthetic machinery, and systems involving large nano-objects. PMID:24199185
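For orientation, the equation such solvers discretize is the nonlinear Poisson-Boltzmann equation, which for a 1:1 electrolyte is commonly written (a standard textbook form in Gaussian units, not quoted from the review) as

```latex
\nabla \cdot \left[ \epsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
  \;-\; \bar{\kappa}^{2}(\mathbf{r}) \, \sinh \phi(\mathbf{r})
  \;=\; -\, \frac{4 \pi e}{k_{B} T} \, \rho_{f}(\mathbf{r})
```

where phi is the electrostatic potential in units of k_B T / e, epsilon(r) the position-dependent dielectric permittivity, kappa-bar^2(r) the ion-accessibility-modified Debye screening parameter, and rho_f the fixed charge density of the solute. Linearizing sinh(phi) ≈ phi gives the linearized form many solvers use for weakly charged systems.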
Multi-camera digital image correlation method with distributed fields of view
NASA Astrophysics Data System (ADS)
Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata
2017-11-01
A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between the local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects viewed from multiple directions, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
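Stitching non-overlapping fields of view reduces to estimating the rigid transformation between each local 3D DIC frame and the global laser-tracker frame from matched fiducial positions. A minimal SVD-based (Kabsch) sketch, assuming at least three non-collinear markers per unit:

```python
import numpy as np

def rigid_transform(local_pts, global_pts):
    """Least-squares rotation R and translation t mapping (N, 3) fiducial
    coordinates in a local DIC frame onto their (N, 3) laser-tracker
    coordinates, via the Kabsch/SVD method."""
    cl, cg = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - cl).T @ (global_pts - cg)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ S @ U.T
    return R, cg - R @ cl

# Any point p measured by that DIC unit is then mapped as R @ p + t.
```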
Server-Side JavaScript Debugging: Viewing the Contents of an Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, J.; Simons, R.
1999-04-21
JavaScript allows the definition and use of large, complex objects. Unlike some other object-oriented languages, it also allows run-time modifications not only of the values of object components, but also of the very structure of the object itself. This feature is powerful and sometimes very convenient, but it can be difficult to keep track of the object's structure and values throughout program execution. What's needed is a simple way to view the current state of an object at any point during execution. There is a debug function that is included in the Netscape server-side JavaScript environment. The function outputs the value(s) of the expression given as the argument to the function in the JavaScript Application Manager's debug window [SSJS].
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defies analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
USDA-ARS?s Scientific Manuscript database
The science of Insect Pathology encompasses a diverse assemblage of pathogens from a large and varied group of hosts. Microscopy techniques and protocols for these organisms are complex and varied and often require modifications and adaptations of standard procedures. The objective of this chapter...
75 FR 47606 - Strategic Plan for Consumer Education via Cooperative Agreement (U18)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-06
... or quantitative research with stakeholders and meetings with stakeholder groups and consumer experts... and resulting from an extensive consumer research process. In 2007, PFSE joined with USDA to create... responsibilities of FDA. B. Research Objectives PFSE supports a large, complex, and multi-faceted consumer food...
DOE Office of Scientific and Technical Information (OSTI.GOV)
MYERS DA
This report documents the results of preliminary surface geophysical exploration activities performed between October and December 2006 at the B, BX, and BY tank farms (B Complex). The B Complex is located in the 200 East Area of the U. S. Department of Energy's Hanford Site in Washington State. The objective of the preliminary investigation was to collect background characterization information with magnetic gradiometry and electromagnetic induction to understand the spatial distribution of metallic objects that could potentially interfere with the results from high resolution resistivity survey. Results of the background characterization show there are several areas located around the site with large metallic subsurface debris or metallic infrastructure.
Remediation management of complex sites using an adaptive site management approach.
Price, John; Spreng, Carl; Hawley, Elisabeth L; Deeb, Rula
2017-12-15
Complex sites require a disproportionate amount of resources for environmental remediation and long timeframes to achieve remediation objectives, due to their complex geologic conditions, hydrogeologic conditions, geochemical conditions, contaminant-related conditions, large scale of contamination, and/or non-technical challenges. A recent team of state and federal environmental regulators, federal agency representatives, industry experts, community stakeholders, and academia worked together as an Interstate Technology & Regulatory Council (ITRC) team to compile resources and create new guidance on the remediation management of complex sites. This article summarizes the ITRC team's recommended process for addressing complex sites through an adaptive site management approach. The team provided guidance for site managers and other stakeholders to evaluate site complexities and determine site remediation potential, i.e., whether an adaptive site management approach is warranted. Adaptive site management was described as a comprehensive, flexible approach to iteratively evaluate and adjust the remedial strategy in response to remedy performance. Key aspects of adaptive site management were described, including tools for revising and updating the conceptual site model (CSM), the importance of setting interim objectives to define short-term milestones on the journey to achieving site objectives, establishing a performance model and metrics to evaluate progress towards meeting interim objectives, and comparing actual with predicted progress during scheduled periodic evaluations, and establishing decision criteria for when and how to adapt/modify/revise the remedial strategy in response to remedy performance. Key findings will be published in an ITRC Technical and Regulatory guidance document in 2017 and free training webinars will be conducted. More information is available at www.itrc-web.org. Copyright © 2017 Elsevier Ltd. All rights reserved.
Java Performance for Scientific Applications on LLNL Computer Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapfer, C; Wissink, A
2002-05-10
Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.
Bae, Seung-Hwan; Yoon, Kuk-Jin
2018-03-01
Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide a representation rich enough to distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.
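The confidence-based decomposition might be sketched as below; the paper defines tracklet confidence from detectability and continuity, whereas the weights, threshold, and dictionary fields here are invented for illustration:

```python
def tracklet_confidence(tracklet, w_len=0.5, w_aff=0.5):
    """Illustrative confidence: long tracklets with strong average
    detection affinity are trusted more."""
    length_term = min(len(tracklet["boxes"]) / 20.0, 1.0)
    affinity_term = sum(tracklet["scores"]) / max(len(tracklet["scores"]), 1)
    return w_len * length_term + w_aff * affinity_term

def split_by_confidence(tracklets, thresh=0.6):
    """Two-stage routing: confident tracklets are associated locally with
    new detections; weak ones become candidates for global
    tracklet-to-tracklet linking."""
    high = [t for t in tracklets if tracklet_confidence(t) >= thresh]
    low = [t for t in tracklets if tracklet_confidence(t) < thresh]
    return high, low
```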
On sine dwell or broadband methods for modal testing
NASA Technical Reports Server (NTRS)
Chen, Jay-Chung; Wada, Ben K.
1987-01-01
For large, complex spacecraft structural systems, the objectives of the modal test are outlined. Based on these objectives, comparison criteria for the modal test methods, namely the broadband excitation and the sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantages and disadvantages of each method are examined. The usefulness and shortcomings of the methods are assessed from a practicing engineer's viewpoint.
Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.
Madan, Christopher R; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B; Sommer, Tobias
2017-01-01
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity can be described as, "a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components." Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
Considering Complex Objectives and Scarce Resources in Information Systems' Analysis.
ERIC Educational Resources Information Center
Crowther, Warren
The low efficacy of many of the library and large-scale information systems that have been implemented in the developing countries has been disappointing, and their appropriateness is often questioned in the governmental and educational institutions of more industrialized countries beset by budget-crunching and a very dynamic transformation of…
Co-Creating a Tailored Public Health Intervention to Reduce Older Adults' Sedentary Behaviour
ERIC Educational Resources Information Center
Leask, Calum F.; Sandlund, Marlene; Skelton, Dawn A.; Chastin, Sebastien F. M.
2017-01-01
Objective: The increasing health care costs associated with an ageing population and chronic disease burden are largely attributable to modifiable lifestyle factors that are complex and vary between individuals and settings. Traditional approaches to promoting healthy lifestyles have so far had limited success. Recently, co-creating public health…
Kevin C. Vogler; Alan A. Ager; Michelle A. Day; Michael Jennings; John D. Bailey
2015-01-01
The implementation of US federal forest restoration programs on national forests is a complex process that requires balancing diverse socioecological goals with project economics. Despite both the large geographic scope and substantial investments in restoration projects, a quantitative decision support framework to locate optimal project areas and examine...
3D Visible-Light Invisibility Cloak.
Zheng, Bin; Zhu, Rongrong; Jing, Liqiao; Yang, Yihao; Shen, Lian; Wang, Huaping; Wang, Zuojia; Zhang, Xianmin; Liu, Xu; Li, Erping; Chen, Hongsheng
2018-06-01
The concept of an invisibility cloak is a fixture of science fiction, fantasy, and the collective imagination. However, a real device that can hide an object from sight in visible light from absolutely any viewpoint would be extremely challenging to build. The main obstacle to creating such a cloak is the coupling of the electromagnetic components of light, which would necessitate the use of complex materials with specific permittivity and permeability tensors. Previous cloaking solutions have involved circumventing this obstacle by functioning either in static (or quasistatic) fields where these electromagnetic components are uncoupled or in diffusive light scattering media where complex materials are not required. In this paper, concealing a large-scale spherical object from human sight from three orthogonal directions is reported. This result is achieved by developing a 3D homogeneous polyhedral transformation and a spatially invariant refractive index discretization that considerably reduce the coupling of the electromagnetic components of visible light. This approach allows for a major simplification in the design of 3D invisibility cloaks, which can now be created at a large scale using homogeneous and isotropic materials.
[Parametabolism as Non-Specific Modifier of Supramolecular Interactions in Living Systems].
Kozlov, V A; Sapozhnikov, S P; Sheptuhina, A I; Golenkov, A V
2015-01-01
As has recently become known, in addition to enzymatic reactions (catalyzed by enzymes and/or ribozymes), a large number of ordinary chemical reactions occur in living organisms without the participation of biological catalysts. These reactions are distinguished by low speed and, as a rule, irreversibility. For example, in diabetes mellitus, glycation and fructosylation of proteins are observed, resulting in post-translational modifications that form poorly functioning or non-functioning proteins; such proteins are poorly susceptible to enzymatic proteolysis and therefore accumulate in the body. Also known are nonenzymatic processes such as the carbamoylation, pyridoxylation, and thiamination of proteins. There are reasonable grounds to believe that alcoholic injury is likewise realized through the parametabolic synthesis of secondary metabolites such as acetaldehyde. At the same time, progress in supramolecular chemistry shows that in biological objects there is another large group of parametabolic reactions, caused by the formation of supramolecular complexes. Evidently, known parametabolic interactions can modify the formation of supramolecular complexes in living objects. These processes are of considerable interest for fundamental biology and for fundamental and practical medicine, but they remain unexplored owing to a lack of awareness among a wide range of researchers.
NASA Technical Reports Server (NTRS)
Djorgovski, George
1993-01-01
The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resource.
NASA Technical Reports Server (NTRS)
Djorgovski, Stanislav
1992-01-01
The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.
Bayesian approach to MSD-based analysis of particle motion in live cells.
Monnier, Nilah; Guo, Syuan-Ming; Mori, Masashi; He, Jun; Lénárt, Péter; Bathe, Mark
2012-08-08
Quantitative tracking of particle motion using live-cell imaging is a powerful approach to understanding the mechanism of transport of biological molecules, organelles, and cells. However, inferring complex stochastic motion models from single-particle trajectories in an objective manner is nontrivial due to noise from sampling limitations and biological heterogeneity. Here, we present a systematic Bayesian approach to multiple-hypothesis testing of a general set of competing motion models based on particle mean-square displacements that automatically classifies particle motion, properly accounting for sampling limitations and correlated noise while appropriately penalizing model complexity according to Occam's Razor to avoid over-fitting. We test the procedure rigorously using simulated trajectories for which the underlying physical process is known, demonstrating that it chooses the simplest physical model that explains the observed data. Further, we show that computed model probabilities provide a reliability test for the downstream biological interpretation of associated parameter values. We subsequently illustrate the broad utility of the approach by applying it to disparate biological systems including experimental particle trajectories from chromosomes, kinetochores, and membrane receptors undergoing a variety of complex motions. This automated and objective Bayesian framework easily scales to large numbers of particle trajectories, making it ideal for classifying the complex motion of large numbers of single molecules and cells from high-throughput screens, as well as single-cell-, tissue-, and organism-level studies. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
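The flavor of the model-selection step can be conveyed with a least-squares stand-in: compute a time-averaged MSD, fit nested motion models, and let an information criterion supply the Occam penalty (the paper computes full Bayesian model probabilities; BIC is only a crude proxy, and all parameters below are illustrative):

```python
import numpy as np

def msd(traj, max_lag):
    """Time-averaged mean-square displacement of an (N, 2) trajectory."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def bic(residuals, n_params):
    n = len(residuals)
    return n * np.log(np.mean(residuals ** 2)) + n_params * np.log(n)

rng = np.random.default_rng(1)
traj = np.cumsum(0.05 * rng.standard_normal((500, 2)), axis=0)  # pure diffusion
taus = 0.1 * np.arange(1, 21)                                   # lag times (s)
m = msd(traj, 20)

# Competing models: MSD = 4*D*tau (diffusion) vs 4*D*tau + (v*tau)^2 (+ flow)
A_d = 4 * taus[:, None]
A_dv = np.column_stack([4 * taus, taus ** 2])
fit_d = np.linalg.lstsq(A_d, m, rcond=None)[0]
fit_dv = np.linalg.lstsq(A_dv, m, rcond=None)[0]
print(bic(m - A_d @ fit_d, 1), bic(m - A_dv @ fit_dv, 2))  # lower BIC wins
```

On this synthetic diffusive trajectory the extra flow parameter should not pay for itself, so the simpler model wins, which is the behavior the paper's Bayesian procedure guarantees more rigorously.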
HIGH-EFFICIENCY AUTONOMOUS LASER ADAPTIVE OPTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baranec, Christoph; Riddle, Reed; Tendulkar, Shriharsh
2014-07-20
As new large-scale astronomical surveys greatly increase the number of objects targeted and discoveries made, the requirement for efficient follow-up observations is crucial. Adaptive optics imaging, which compensates for the image-blurring effects of Earth's turbulent atmosphere, is essential for these surveys, but the scarcity, complexity and high demand of current systems limit their availability for following up large numbers of targets. To address this need, we have engineered and implemented Robo-AO, a fully autonomous laser adaptive optics and imaging system that routinely images over 200 objects per night with an acuity 10 times sharper at visible wavelengths than typically possible from the ground. By greatly improving the angular resolution, sensitivity, and efficiency of 1-3 m class telescopes, we have eliminated a major obstacle in the follow-up of the discoveries from current and future large astronomical surveys.
NASA Astrophysics Data System (ADS)
Reis, Itamar; Poznanski, Dovi; Baron, Dalya; Zasowski, Gail; Shahaf, Sahar
2018-05-01
In this work, we apply and expand on a recently introduced outlier detection algorithm that is based on an unsupervised random forest. We use the algorithm to calculate a similarity measure for stellar spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE). We show that the similarity measure traces non-trivial physical properties and contains information about complex structures in the data. We use it for visualization and clustering of the data set, and discuss its ability to find groups of highly similar objects, including spectroscopic twins. Using the similarity matrix to search the data set for objects allows us to find objects that are impossible to find using their best-fitting model parameters. This includes extreme objects for which the models fail, and rare objects that are outside the scope of the model. We use the similarity measure to detect outliers in the data set, and find a number of previously unknown Be-type stars, spectroscopic binaries, carbon rich stars, young stars, and a few that we cannot interpret. Our work further demonstrates the potential for scientific discovery when combining machine learning methods with modern survey data.
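One common construction for an unsupervised random-forest similarity (in the style of Shi and Horvath, on which such approaches build) trains a forest to separate the real data from a synthetic copy with independently shuffled feature marginals, then scores pairs by how often they share a leaf. A sketch, with spectra replaced by random placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def urf_similarity(X, n_trees=200, seed=0):
    """Pairwise similarity = fraction of trees in which two samples land
    in the same leaf, from a forest trained to tell real data from a
    marginal-shuffled synthetic copy."""
    rng = np.random.default_rng(seed)
    synth = np.column_stack([rng.permutation(col) for col in X.T])
    X_all = np.vstack([X, synth])
    y = np.r_[np.ones(len(X)), np.zeros(len(synth))]
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    forest.fit(X_all, y)
    leaves = forest.apply(X)          # (n_samples, n_trees) leaf indices
    return (leaves[:, None, :] == leaves[None, :, :]).mean(axis=-1)

X = np.random.rand(100, 8)            # stand-in for continuum-normalized spectra
sim = urf_similarity(X)
outlier_score = 1 - sim.mean(axis=1)  # high = dissimilar to everything
```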
Imaging Young Stellar Objects with VLTi/PIONIER
NASA Astrophysics Data System (ADS)
Kluska, J.; Malbet, F.; Berger, J.-P.; Benisty, M.; Lazareff, B.; Le Bouquin, J.-B.; Baron, F.; Dominik, C.; Isella, A.; Juhasz, A.; Kraus, S.; Lachaume, R.; Ménard, F.; Millan-Gabet, R.; Monnier, J.; Pinte, C.; Soulez, F.; Tallon, M.; Thi, W.-F.; Thiébaut, É.; Zins, G.
2014-04-01
Optical interferometric imaging is designed to reveal complex astronomical sources without a prior model. Among these complex objects are young stars and their environments, which have a typical morphology of a point-like source surrounded by circumstellar material of unknown morphology. To image them, we have developed a numerical method that completely removes the stellar point source and reconstructs the rest of the image, using the differences in spectral behavior between the star and its circumstellar material. We aim to reveal the first astronomical units of these objects, where many physical phenomena may interplay: dust sublimation causing a puffed-up inner rim, a dusty halo, a dusty wind, or an inner gaseous component. To investigate these regions more deeply, we carried out the first Large Program survey of HAeBe stars with two main goals: statistics on the geometry of these objects at the scale of the first astronomical unit, and imaging of their very close environments. The images reveal the environment without pollution from the star and allow us to derive the best fit for the flux ratio and the spectral slope. We present the first images from this survey and the application of the imaging method to other astronomical objects.
Effects of Voice Harmonic Complexity on ERP Responses to Pitch-Shifted Auditory Feedback
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.
2011-01-01
Objective The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Methods Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. Results During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. Conclusions These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. Significance This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. PMID:21719346
Complexity in language acquisition.
Clark, Alexander; Lappin, Shalom
2013-01-01
Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular on the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form. Copyright © 2013 Cognitive Science Society, Inc.
The Earliest Lead Object in the Levant
Yahalom-Mack, Naama; Langgut, Dafna; Dvir, Omri; Tirosh, Ofir; Eliyahu-Behar, Adi; Erel, Yigal; Langford, Boaz; Frumkin, Amos; Ullman, Mika; Davidovich, Uri
2015-01-01
In the deepest section of a large complex cave in the northern Negev desert, Israel, a bi-conical lead object was found lodged on a wooden shaft. Associated material remains and radiocarbon dating of the shaft place the object within the Late Chalcolithic period, in the late 5th millennium BCE. Based on chemical and lead isotope analysis, we show that this unique object was made of almost pure metallic lead, likely smelted from lead ores originating in the Taurus range in Anatolia. Either the finished object, or the raw material, was brought to the southern Levant, adding another major component to the already-rich Late Chalcolithic metallurgical corpus known to date. The paper also discusses possible uses of the object, suggesting that it may have been used as a spindle whorl, at least towards its deposition. PMID:26630666
Recent experience in simultaneous control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Ramaker, R.; Milman, M.
1989-01-01
To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structural optimization alone or control optimization alone. Examples include a larger design parameter space, optimization that may combine continuous and combinatoric variables, and a combined objective function that may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Areas requiring more efficient tools than currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters in both the model space and the design space.
System for decision analysis support on complex waste management issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shropshire, D.E.
1997-10-01
A software system called the Waste Flow Analysis has been developed and applied to complex environmental management processes for the United States Department of Energy (US DOE). The system can evaluate proposed methods of waste retrieval, treatment, storage, transportation, and disposal. Analysts can evaluate various scenarios to see the impacts on waste flows and schedules, costs, and health and safety risks. Decision analysis capabilities have been integrated into the system to help identify preferred alternatives based on specific objectives: for example, to maximize the waste moved to final disposition during a given time period, minimize health risks, minimize costs, or combinations of objectives. The decision analysis capabilities can support evaluation of large and complex problems rapidly, and under conditions of variable uncertainty. The system is being used to evaluate environmental management strategies to safely disposition wastes in the next ten years and reduce the environmental legacy resulting from nuclear material production over the past forty years.
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high-fidelity, physics-based simulation of spacecraft, robotic manipulation, and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
Energy Center Structure Optimization by using Smart Technologies in Process Control System
NASA Astrophysics Data System (ADS)
Shilkina, Svetlana V.
2018-03-01
The article deals with the practical application of fuzzy logic methods in process control systems. The control object considered is an agro-industrial greenhouse complex that includes its own energy center. The paper analyzes the facility's power supply options, taking into account connection to external power grids and/or installation of its own power generating equipment in various layouts. The main problem of the basic greenhouse process is extremely uneven power consumption, which forces the purchase of redundant generating equipment that idles most of the time and significantly reduces project profitability. Optimizing the energy center structure therefore depends largely on how the facility's process control system is built. To cut the investor's costs, it was proposed to optimize power consumption by building an energy-saving production control system based on a fuzzy logic controller. The developed algorithm for the automated process control system ensured more even electric and thermal energy consumption and allowed the facility's energy center to be built with fewer units, owing to their more even utilization. As a result, it is shown how the practical use of a fuzzy microclimate control system during operation leads to optimization of the agro-industrial complex's energy facility structure, which contributes to a significant reduction in construction and operating costs.
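A fuzzy controller of the kind described reduces to membership functions plus a small rule base. A toy single-loop sketch (the article's actual variables, rules, and membership functions are not published here; these are invented):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def heater_power(temp_err):
    """Mamdani-style toy rule base for one greenhouse heating loop:
       error NEGATIVE (too warm) -> power LOW  (0.1)
       error ZERO                -> power MED  (0.5)
       error POSITIVE (too cold) -> power HIGH (0.9)
    Defuzzified as a weighted average of singleton outputs."""
    w = np.array([tri(temp_err, -6, -3, 0),
                  tri(temp_err, -2, 0, 2),
                  tri(temp_err, 0, 3, 6)])
    out = np.array([0.1, 0.5, 0.9])
    return float(w @ out / w.sum()) if w.sum() > 0 else 0.5

print(heater_power(1.5))   # mildly cold -> above-medium power
```

Smoothing control actions this way, instead of switching equipment hard on and off, is what evens out the electric and thermal load profiles.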
KBGIS-2: A knowledge-based geographic information system
NASA Technical Reports Server (NTRS)
Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.
1986-01-01
The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2), designed to satisfy several general criteria for geographic information systems, are described. The system's major functions include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus-based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn definitions of new spatial objects inductively from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.
Inertial objects in complex flows
NASA Astrophysics Data System (ADS)
Syed, Rayhan; Ho, George; Cavas, Samuel; Bao, Jialun; Yecko, Philip
2017-11-01
Chaotic advection and finite-time Lyapunov exponents (FTLE) both describe stirring and transport in complex and time-dependent flows, but FTLE analysis has been largely limited to either purely kinematic flow models or high-Reynolds-number flow field data. The neglect of dynamic effects in FTLE and Lagrangian coherent structure studies has stymied detailed information about the roles of pressure, Coriolis effects, and object inertia. We present results of laboratory and numerical experiments on time-dependent and multi-gyre Stokes flows. In the lab, an effectively two-dimensional, time-dependent low-Re flow is used to distinguish the transport properties of a passive tracer from those of small paramagnetic spheres. Companion FTLE calculations for inertial particles in a time-dependent multi-gyre flow are presented, illustrating the critical roles of density, Stokes number, and Coriolis forces in their transport. Results of direct numerical simulations of fully resolved inertial objects (spheroids) immersed in a three-dimensional (ABC) flow show the role of shape and finite size in inertial transport at small finite Re. We acknowledge support of NSF DMS-1418956.
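A standard testbed for such computations is the time-dependent double gyre; a minimal forward-FTLE sketch for passive tracers (inertial particles would need their own equation of motion, e.g. Maxey-Riley, which this sketch deliberately omits):

```python
import numpy as np

A, EPS, OM = 0.1, 0.25, 2 * np.pi / 10        # common double-gyre parameters

def vel(x, y, t):
    a = EPS * np.sin(OM * t)
    b = 1 - 2 * EPS * np.sin(OM * t)
    f = a * x ** 2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def ftle(nx=201, ny=101, T=15.0, dt=0.1):
    """Forward FTLE field on [0,2]x[0,1]: advect a grid of tracers,
    differentiate the flow map, and take the largest eigenvalue of the
    Cauchy-Green tensor (Euler advection keeps the sketch short)."""
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    px, py = x.copy(), y.copy()
    for t in np.arange(0, T, dt):
        u, v = vel(px, py, t)
        px, py = px + dt * u, py + dt * v
    dxdx = np.gradient(px, axis=1) / np.gradient(x, axis=1)
    dxdy = np.gradient(px, axis=0) / np.gradient(y, axis=0)
    dydx = np.gradient(py, axis=1) / np.gradient(x, axis=1)
    dydy = np.gradient(py, axis=0) / np.gradient(y, axis=0)
    c11 = dxdx ** 2 + dydx ** 2
    c12 = dxdx * dxdy + dydx * dydy
    c22 = dxdy ** 2 + dydy ** 2
    lam = 0.5 * (c11 + c22 + np.sqrt((c11 - c22) ** 2 + 4 * c12 ** 2))
    return np.log(np.sqrt(lam)) / T
```

Ridges of the returned field approximate repelling Lagrangian coherent structures; density, Stokes number, and Coriolis effects enter only once the tracer kinematics are replaced by an inertial equation of motion.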
Deterministic object tracking using Gaussian ringlet and directional edge features
NASA Astrophysics Data System (ADS)
Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.
2017-10-01
Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker, while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability with additional challenges in long, complex sequences including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
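The directional-edge ingredient can be illustrated with the classical Kirsch compass kernels, eight 45-degree rotations of one 3x3 mask (this shows the standard operator, not necessarily DRIFT's exact filtering details):

```python
import numpy as np
from scipy.ndimage import convolve

K_N = np.array([[ 5,  5,  5],
                [-3,  0, -3],
                [-3, -3, -3]])          # north-facing Kirsch mask

def kirsch_kernels():
    """All eight compass kernels: rotate the border of the base mask in
    45-degree steps; the center element stays zero."""
    ring = [0, 1, 2, 5, 8, 7, 6, 3]     # clockwise border positions
    base = K_N.flatten()
    kernels = []
    for r in range(8):
        k = base.copy()
        k[ring] = np.roll(base[ring], r)
        kernels.append(k.reshape(3, 3))
    return kernels

def kirsch_edges(img):
    """Per-pixel maximum directional response and its direction index."""
    resp = np.stack([convolve(img.astype(float), k) for k in kirsch_kernels()])
    return resp.max(axis=0), resp.argmax(axis=0)
```

Histograms of these responses accumulated over concentric rings around the target center give rotation-invariant descriptors, which is the intuition behind the ringlet mapping.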
Etoile Project : Social Intelligent ICT-System for very large scale education in complex systems
NASA Astrophysics Data System (ADS)
Bourgine, P.; Johnson, J.
2009-04-01
The project will devise new theory and implement new ICT-based methods of delivering high-quality low-cost postgraduate education to many thousands of people in a scalable way, with the cost of each extra student being negligible (< a few Euros). The research will create an in vivo laboratory of one to ten thousand postgraduate students studying courses in complex systems. This community is chosen because it is large and interdisciplinary and there is a known requirement for courses for thousands of students across Europe. The project involves every aspect of course production and delivery. Within this, the research focuses on the creation of a Socially Intelligent Resource Mining system to gather large volumes of high quality educational resources from the internet; new methods to deconstruct these to produce a semantically tagged Learning Object Database; a Living Course Ecology to support the creation and maintenance of evolving course materials; systems to deliver courses; and a 'socially intelligent assessment system'. The system will be tested on one to ten thousand postgraduate students in Europe working towards the Complex System Society's title of European PhD in Complex Systems. Étoile will have a very high impact both scientifically and socially by (i) the provision of new scalable ICT-based methods for providing very low cost scientific education, (ii) the creation of new mathematical and statistical theory for the multiscale dynamics of complex systems, (iii) the provision of a working example of adaptation and emergence in complex socio-technical systems, and (iv) making a major educational contribution to European complex systems science and its applications.
A Parameterized Pattern-Error Objective for Large-Scale Phase-Only Array Pattern Design
2016-03-21
[Report excerpt; section 4.4: "Example 3: Sector Beam w/ Nonuniform Amplitude"] ...fixed uniform amplitude illumination, phase-only optimization can also find application to arrays with fixed but nonuniform tapers. Such fixed tapers... arbitrary element locations, nonuniform FFT algorithms exist [43-45] that have the same asymptotic complexity as the conventional FFT, although the...
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Secondary Curriculum Development.
Designed to prepare students to be engine mechanics working on automotive and large stationary diesel engines, this instructor's guide contains eight units arranged from simple to complex to facilitate student learning. Each contains behavioral objectives, a content outline, understandings and teaching approaches necessary to develop the content,…
Exome genotyping, linkage disequilibrium and population structure in loblolly pine (Pinus taeda L.)
Mengmeng Lu; Konstantin V. Krutovsky; C. Dana Nelson; Tomasz E. Koralewski; Thomas D. Byram; Carol A. Loopstra
2016-01-01
Background: Loblolly pine (Pinus taeda L.) is one of the most widely planted and commercially important forest tree species in the USA and worldwide, and is an object of intense genomic research. However, whole genome resequencing in loblolly pine is hampered by its large size and complexity and a lack of a good...
A NASTRAN Model of a Large Flexible Swing-Wing Bomber. Volume 1: NASTRAN Model Plane
NASA Technical Reports Server (NTRS)
Mock, W. D.
1982-01-01
A review was conducted of B-1 aircraft no. 2 (A/C-2) internal loads models to determine the minimum model complexity necessary to fulfill all of the airloads research study objectives. Typical model sizings were tabulated at selected vehicle locations, and scale layouts were prepared of the NASTRAN structural analysis model.
Ecological Origins of Object Salience: Reward, Uncertainty, Aversiveness, and Novelty
Ghazizadeh, Ali; Griggs, Whitney; Hikosaka, Okihide
2016-01-01
Among many objects around us, some are more salient than others (i.e., attract our attention automatically). Some objects may be inherently salient (e.g., brighter), while others may become salient by virtue of their ecological relevance through experience. However, the role of ecological experience in automatic attention has not been studied systematically. To address this question, we let subjects (macaque monkeys) view a large number of complex objects (>300), each experienced repeatedly (>5 days) with rewarding, aversive or no outcome association (mere-perceptual exposure). Test of salience was done on separate days using free viewing with no outcome. We found that gaze was biased among the objects from the outset, affecting saccades to objects or fixations within objects. When the outcome was rewarding, gaze preference was stronger (i.e., positive) for objects with larger or equal but uncertain rewards. The effects of aversive outcomes were variable. Gaze preference was positive for some outcome associations (e.g., airpuff), but negative for others (e.g., time-out), possibly due to differences in threat levels. Finally, novel objects attracted gaze, but mere perceptual exposure of objects reduced their salience (learned negative salience). Our results show that, in primates, object salience is strongly influenced by previous ecological experience and is supported by a large memory capacity. Owing to such high capacity for learned salience, the ability to rapidly choose important objects can grow during the entire life to promote biological fitness. PMID:27594825
Intelligent mobility research for robotic locomotion in complex terrain
NASA Astrophysics Data System (ADS)
Trentini, Michael; Beckman, Blake; Digney, Bruce; Vincent, Isabelle; Ricard, Benoit
2006-05-01
The objective of the Autonomous Intelligent Systems Section of Defence R&D Canada - Suffield is best described by its mission statement, which is "to augment soldiers and combat systems by developing and demonstrating practical, cost effective, autonomous intelligent systems capable of completing military missions in complex operating environments." The mobility requirement for ground-based mobile systems operating in urban settings must increase significantly if robotic technology is to augment human efforts in these roles and environments. The intelligence required for autonomous systems to operate in complex environments demands advances in many fields of robotics. This has resulted in large bodies of research in areas of perception, world representation, and navigation, but the problem of locomotion in complex terrain has largely been ignored. In order to achieve its objective, the Autonomous Intelligent Systems Section is pursuing research that explores the use of intelligent mobility algorithms designed to improve robot mobility. Intelligent mobility uses sensing, control, and learning algorithms to extract measured variables from the world, control vehicle dynamics, and learn by experience. These algorithms seek to exploit available world representations of the environment and the inherent dexterity of the robot to allow the vehicle to interact with its surroundings and produce locomotion in complex terrain. The primary focus of the paper is to present the intelligent mobility research within the framework of the research methodology, plan and direction defined at Defence R&D Canada - Suffield. It discusses the progress and future direction of intelligent mobility research and presents the research tools, topics, and plans to address this critical research gap. This research will create effective intelligence to improve the mobility of ground-based mobile systems operating in urban settings to assist the Canadian Forces in their future urban operations.
Evidence of Lake Trout reproduction at Lake Michigan's mid-lake reef complex
Janssen, J.; Jude, D.J.; Edsall, T.A.; Paddock, R.W.; Wattrus, N.; Toneys, M.; McKee, P.
2006-01-01
The Mid-Lake Reef Complex (MLRC), a large area of deep (> 40 m) reefs, was a major site where indigenous lake trout (Salvelinus namaycush) in Lake Michigan aggregated during spawning. As part of an effort to restore Lake Michigan's lake trout, which were extirpated in the 1950s, yearling lake trout have been released over the MLRC since the mid-1980s and fall gill net censuses began to show large numbers of lake trout in spawning condition beginning about 1999. We report the first evidence of viable egg deposition and successful lake trout fry production at these deep reefs. Because the area's existing bathymetry and habitat were too poorly known for a priori selection of sampling sites, we used hydroacoustics to locate concentrations of large fish in the fall; fish were congregating around slopes and ridges. Subsequent observations via unmanned submersible confirmed the large fish to be lake trout. Our technological objectives were driven by biological objectives of locating where lake trout spawn, where lake trout fry were produced, and what fishes ate lake trout eggs and fry. The unmanned submersibles were equipped with a suction sampler and electroshocker to sample eggs deposited on the reef, draw out and occasionally catch emergent fry, and collect egg predators (slimy sculpin Cottus cognatus). We observed slimy sculpin to eat unusually high numbers of lake trout eggs. Our qualitative approaches are a first step toward quantitative assessments of the importance of lake trout spawning on the MLRC.
Developing science gateways for drug discovery in a grid environment.
Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra
2016-01-01
Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly we are faced with an increasing problem to provide the research community of life sciences with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.
Study of the formation of soluble complexes of sodium caseinate and xanthan in solution.
Bouhannache, Bouchra; HadjSadok, Abdelkader; Touabet, Abdelkrim
2017-09-01
The main objective of this work was to determine the optimum conditions for the formation of soluble complexes between sodium caseinate and xanthan in solution at neutral pH, in the presence of NaCl. The study of the influence of the concentrations of these three substances showed that salt was the most influential factor. It worsens the thermodynamic incompatibility of the two biopolymers in solution when they are present in large amounts. However, it contributes to soluble complex formation when the sodium caseinate concentration is below 5.5%. In this case, gels with enhanced rheological properties were obtained. Infrared spectroscopy confirmed that complex formation within these gels involves hydrophobic interactions. On the other hand, dynamic light scattering revealed that dilution causes their dissociation. These soluble complexes are promising ingredients for providing new texturing properties.
Sensitivity of Precipitation in Coupled Land-Atmosphere Models
NASA Technical Reports Server (NTRS)
Neelin, David; Zeng, N.; Suarez, M.; Koster, R.
2004-01-01
The project objective was to understand mechanisms by which atmosphere-land-ocean processes impact precipitation in the mean climate and interannual variations, focusing on tropical and subtropical regions. A combination of modeling tools was used: an intermediate complexity land-atmosphere model developed at UCLA known as the QTCM and the NASA Seasonal-to-Interannual Prediction Program general circulation model (NSIPP GCM). The intermediate complexity model was used to develop hypotheses regarding the physical mechanisms and theory for the interplay of large-scale dynamics, convective heating, cloud radiative effects and land surface feedbacks. The theoretical developments were to be confronted with diagnostics from the more complex GCM to validate or modify the theory.
Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm
NASA Astrophysics Data System (ADS)
Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.
2018-03-01
Prime numbers appeal to researchers because of their complexity, and many algorithms, ranging from simple to computationally complex, can be used to generate them. The Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can be used to generate primes from sequential or randomly generated numbers. The tests in this study determine which algorithm is better suited to generating large primes in terms of time complexity. The tests were assisted by an application written in Java with code optimization and maximum memory usage, so that the test runs could proceed simultaneously and the results obtained could be objective.
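As a hedged illustration of the comparison described above (the abstract's Java test harness is not available), the following Python sketch implements both sieves and times them on the same input; the bound of 1,000,000 is an arbitrary assumption.

```python
import time

def sieve_of_eratosthenes(n):
    """Return all primes <= n by crossing out multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]

def sieve_of_sundaram(n):
    """Return all primes <= n; marks i + j + 2ij, then maps k -> 2k + 1."""
    m = (n - 1) // 2
    marked = [False] * (m + 1)
    for i in range(1, m + 1):
        j = i
        while i + j + 2 * i * j <= m:
            marked[i + j + 2 * i * j] = True
            j += 1
    primes = [2] if n >= 2 else []
    primes += [2 * k + 1 for k in range(1, m + 1) if not marked[k]]
    return primes

for sieve in (sieve_of_eratosthenes, sieve_of_sundaram):
    start = time.perf_counter()
    primes = sieve(1_000_000)
    print(f"{sieve.__name__}: {len(primes)} primes, "
          f"{time.perf_counter() - start:.3f} s")
```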
Burles, Ford; Slone, Edward; Iaria, Giuseppe
2017-04-01
The retrosplenial complex is a region within the posterior cingulate cortex implicated in spatial navigation. Here, we investigated the functional specialization of this large and anatomically heterogeneous region using fMRI and resting-state functional connectivity combined with a spatial task with distinct phases of spatial 'updating' (i.e., integrating and maintaining object locations in memory during spatial displacement) and 'orienting' (i.e., recalling unseen locations from the current position in space). Both spatial 'updating' and 'orienting' produced bilateral activity in the retrosplenial complex, among other areas. However, spatial 'updating' produced slightly greater activity in ventro-lateral portions of the retrosplenial complex, whereas spatial 'orienting' produced greater activity in a more dorsal and medial portion of it (both regions localized along the parieto-occipital fissure). At rest, both the ventro-lateral and dorso-medial subregions of the retrosplenial complex were functionally connected to the hippocampus and parahippocampus, regions both involved in spatial orientation and navigation. However, the ventro-lateral subregion of the retrosplenial complex displayed more positive functional connectivity with ventral occipital and temporal object recognition regions, whereas the dorso-medial subregion's activity was more strongly correlated with dorsal and frontal activity, as well as negatively correlated with more ventral parietal structures. These findings provide evidence for a dorso-medial to ventro-lateral functional specialization within the human retrosplenial complex that may shed more light on the complex neural mechanisms underlying spatial orientation and navigation in humans.
Interactive computer graphics and its role in control system design of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics to the design of control systems that maintain the attitude and shape of large space structures in order to accomplish the required mission objectives. The typical phases of control system design, from modeling the dynamics of the physical system through modal analysis to the control system design methodology itself, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.
Advanced Hypervelocity Aerophysics Facility Workshop
NASA Technical Reports Server (NTRS)
Witcofski, Robert D. (Compiler); Scallion, William I. (Compiler)
1989-01-01
The primary objective of the workshop was to obtain a critical assessment of a concept for a large, advanced hypervelocity ballistic range test facility powered by an electromagnetic launcher, which was proposed by the Langley Research Center. It was concluded that the subject large-scale facility was feasible and would provide the required ground-based capability for performing tests at entry flight conditions (velocity and density) on large, complex, instrumented models. It was also concluded that advances in remote measurement techniques and particularly onboard model instrumentation, light-weight model construction techniques, and model electromagnetic launcher (EML) systems must be made before any commitment for the construction of such a facility can be made.
Large Bodies Associated with Meteoroid Streams
NASA Technical Reports Server (NTRS)
Badadzhanov, P. B.; William, I. P.; Kokhirova, G. I.
2011-01-01
It is now accepted that some near-Earth objects (NEOs) may be dormant or dead comets. One strong indicator of cometary nature is the existence of an associated meteoroid stream with its consequently observed meteor showers. Complexes of NEOs that have very similar orbits and a likely common progenitor have been identified. The theoretical parameters for any meteor shower that may be associated with these complexes were calculated. A search of existing catalogues of meteor showers found observed activity corresponding to each of the theoretically predicted showers. We conclude that these asteroid-meteoroid complexes, comprising four NEOs moving within the Piscids stream, three NEOs moving within the Iota Aquariids stream, and six new NEOs added to the Taurid complex, are the result of cometary break-up.
REVIEWS OF TOPICAL PROBLEMS: Axisymmetric stationary flows in compact astrophysical objects
NASA Astrophysics Data System (ADS)
Beskin, Vasilii S.
1997-07-01
A review is presented of the analytical results available for a large class of axisymmetric stationary flows in the vicinity of compact astrophysical objects. The determination of the two-dimensional structure of the poloidal magnetic field (hydrodynamic flow field) faces severe difficulties, due to the complexity of the trans-field equation for stationary axisymmetric flows. However, an approach exists which enables direct problems to be solved even within the balance law framework. This possibility arises when an exact solution to the equation is available and flows close to it are investigated. As a result, with the use of simple model problems, the basic features of supersonic flows past real compact objects are determined.
The Gould's Belt Very Large Array Survey. I. The Ophiuchus Complex
NASA Astrophysics Data System (ADS)
Dzib, Sergio A.; Loinard, Laurent; Mioduszewski, Amy J.; Rodríguez, Luis F.; Ortiz-León, Gisela N.; Pech, Gerardo; Rivera, Juana L.; Torres, Rosa M.; Boden, Andrew F.; Hartmann, Lee; Evans, Neal J., II; Briceño, Cesar; Tobin, John
2013-09-01
We present large-scale (~2000 arcmin2), deep (~20 μJy), high-resolution (~1'') radio observations of the Ophiuchus star-forming complex obtained with the Karl G. Jansky Very Large Array at λ = 4 and 6 cm. In total, 189 sources were detected, 56 of them associated with known young stellar sources, and 4 with known extragalactic objects; the other 129 remain unclassified, but most of them are most probably background quasars. The vast majority of the young stars detected at radio wavelengths have spectral types K or M, although we also detect four objects of A/F/B types and two brown dwarf candidates. At least half of these young stars are non-thermal (gyrosynchrotron) sources, with active coronas characterized by high levels of variability, negative spectral indices, and (in some cases) significant circular polarization. As expected, there is a clear tendency for the fraction of non-thermal sources to increase from the younger (Class 0/I or flat spectrum) to the more evolved (Class III or weak line T Tauri) stars. The young stars detected both in X-rays and at radio wavelengths broadly follow a Güdel-Benz relation, but with a different normalization than the most magnetically active types of stars. Finally, we detect a ~70 mJy compact extragalactic source near the center of the Ophiuchus core, which should be used as gain calibrator for any future radio observations of this region.
I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.
Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
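The BDBag tooling itself is not reproduced here; as a rough sketch of the underlying idea, the following Python fragment builds a minimal BagIt-style bag with a checksum manifest using only the standard library. The function name `make_simple_bag` and the file contents are illustrative assumptions, not the actual BDBag API, which additionally supports Research Object metadata, remote members, and Minid resolution.

```python
import hashlib
from pathlib import Path

def make_simple_bag(bag_dir: Path, files: dict) -> None:
    """Write a minimal BagIt-style layout: data/ payload plus checksum manifest."""
    payload = bag_dir / "data"
    payload.mkdir(parents=True, exist_ok=True)
    manifest_lines = []
    for name, content in files.items():
        path = payload / name
        path.write_bytes(content)
        digest = hashlib.sha256(content).hexdigest()
        manifest_lines.append(f"{digest}  data/{name}")
    # The manifest makes the bag's membership explicit and verifiable.
    (bag_dir / "manifest-sha256.txt").write_text("\n".join(manifest_lines) + "\n")
    (bag_dir / "bagit.txt").write_text(
        "BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n")

make_simple_bag(Path("example_bag"), {"genome.txt": b"ACGT" * 10})
```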
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size (combinatorial explosion in experimentation and model building with the number of variables), and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
Large space telescope engineering scale model optical design
NASA Technical Reports Server (NTRS)
Facey, T. A.
1973-01-01
The objective is to develop the detailed design and tolerance data for the LST engineering scale model optical system. This will enable MSFC to move forward to the optical element procurement phase and also to evaluate tolerances, manufacturing requirements, assembly/checkout procedures, reliability, operational complexity, stability requirements of the structure and thermal system, and the flexibility to change and grow.
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
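As an illustration of the single-constraint case discussed above, the following Python sketch shows a conventional optimality criteria resizing step of the kind used in structural sizing, with the Lagrange multiplier found by bisection so the single constraint (here, a volume limit) is satisfied; the sensitivities, bounds, and constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def oc_update(x, dc, dv, volume_limit, move=0.2, damping=0.5):
    """One OC resizing step; dc <= 0 is the objective gradient, dv > 0 the
    constraint gradient; lam is the Lagrange multiplier found by bisection."""
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-6:
        lam = 0.5 * (lo + hi)
        # Classic OC recurrence: scale each variable by a damped power of
        # the ratio of objective to weighted constraint sensitivities.
        ratio = np.power(np.maximum(-dc / (lam * dv), 1e-12), damping)
        x_new = np.clip(x * ratio, np.maximum(x - move, 1e-3),
                        np.minimum(x + move, 1.0))
        if x_new @ dv > volume_limit:
            lo = lam        # constraint violated: raise the multiplier
        else:
            hi = lam
    return x_new

x = np.full(8, 0.5)
dc = -np.linspace(2.0, 0.5, 8)     # compliance-like objective sensitivities
dv = np.ones(8)                    # uniform volume sensitivities
print(oc_update(x, dc, dv, volume_limit=4.0))
```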
The Gould's Belt Very Large Array Survey. IV. The Taurus-Auriga Complex
NASA Astrophysics Data System (ADS)
Dzib, Sergio A.; Loinard, Laurent; Rodríguez, Luis F.; Mioduszewski, Amy J.; Ortiz-León, Gisela N.; Kounkel, Marina A.; Pech, Gerardo; Rivera, Juana L.; Torres, Rosa M.; Boden, Andrew F.; Hartmann, Lee; Evans, Neal J., II; Briceño, Cesar; Tobin, John
2015-03-01
We present a multi-epoch radio study of the Taurus-Auriga star-forming complex made with the Karl G. Jansky Very Large Array at frequencies of 4.5 GHz and 7.5 GHz. We detect a total of 610 sources, 59 of which are related to young stellar objects (YSOs) and 18 to field stars. The properties of 56% of the young stars are compatible with non-thermal radio emission. We also show that the radio emission of more evolved YSOs tends to be more non-thermal in origin and, in general, that their radio properties are compatible with those found in other star-forming regions. By comparing our results with previously reported X-ray observations, we notice that YSOs in Taurus-Auriga follow a Güdel-Benz relation with κ = 0.03, as we previously suggested for other regions of star formation. In general, YSOs in Taurus-Auriga and in all the previous studied regions seem to follow this relation with a dispersion of ~1 dex. Finally, we propose that most of the remaining sources are related with extragalactic objects but provide a list of 46 unidentified radio sources whose radio properties are compatible with a YSO nature.
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
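As a hedged, one-dimensional illustration of the double-edge stroke model (the paper works on 2-D gray-level images), the following Python sketch flags only those falling/rising edge pairs that fit within a predefined stroke width, so a thin dark stroke is kept while a slowly varying background step is ignored; all names and thresholds are assumptions.

```python
import numpy as np

def detect_strokes_1d(profile, max_width, min_contrast):
    """Return (start, end) spans whose edge pair fits the stroke model."""
    grad = np.diff(profile.astype(float))
    falling = np.where(grad < -min_contrast)[0]   # entering a dark stroke
    rising = np.where(grad > min_contrast)[0]     # leaving it again
    spans = []
    for f in falling:
        # A valid stroke ends with a rising edge within max_width pixels.
        partners = rising[(rising > f) & (rising - f <= max_width)]
        if partners.size:
            spans.append((f + 1, partners[0] + 1))
    return spans

# Background with two thin dark strokes and a slow step; only strokes match.
line = np.full(60, 200.0)
line[10:13] = 50      # 3-px stroke
line[40:44] = 60      # 4-px stroke
line[25:] -= 30       # slowly varying background step (below contrast)
print(detect_strokes_1d(line, max_width=5, min_contrast=40))
# -> [(10, 13), (40, 44)]
```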
Uncovering hidden nodes in complex networks in the presence of noise
Su, Ri-Qi; Lai, Ying-Cheng; Wang, Xiao; Do, Younghae
2014-01-01
Ascertaining the existence of hidden objects in a complex system, objects that cannot be observed from the external world, not only is curiosity-driven but also has significant practical applications. Generally, uncovering a hidden node in a complex network requires successful identification of its neighboring nodes, but a challenge is to differentiate its effects from those of noise. We develop a completely data-driven, compressive-sensing based method to address this issue by utilizing complex weighted networks with continuous-time oscillatory or discrete-time evolutionary-game dynamics. For any node, compressive sensing enables accurate reconstruction of the dynamical equations and coupling functions, provided that time series from this node and all its neighbors are available. For a neighboring node of the hidden node, this condition cannot be met, resulting in abnormally large prediction errors that, counterintuitively, can be used to infer the existence of the hidden node. Based on the principle of differential signal, we demonstrate that, when strong noise is present, insofar as at least two neighboring nodes of the hidden node are subject to weak background noise only, unequivocal identification of the hidden node can be achieved. PMID:24487720
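The following Python sketch illustrates the residual-based detection principle under strong simplifying assumptions: linear diffusive dynamics, exact derivatives, a single hidden node, and scikit-learn's Lasso standing in for the compressive-sensing reconstruction. Network size and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 6                                    # observable nodes; index 6 is hidden
A = (rng.random((n + 1, n + 1)) < 0.3).astype(float)
np.fill_diagonal(A, 0.0)
A[:, n] = 0.0
A[0, n] = 1.0                            # only node 0 neighbors the hidden node

x = rng.standard_normal(n + 1)
states, derivs = [], []
for _ in range(400):                     # simulate noisy coupled dynamics
    dx = -x + 0.2 * A @ x + 0.05 * rng.standard_normal(n + 1)
    states.append(x.copy())
    derivs.append(dx.copy())
    x = x + 0.05 * dx

X = np.array(states)[:, :n]              # the hidden column is never observed
D = np.array(derivs)[:, :n]
for i in range(n):
    # Sparse reconstruction of node i's equation from observed nodes only.
    model = Lasso(alpha=1e-3).fit(X, D[:, i])
    residual = np.mean((model.predict(X) - D[:, i]) ** 2)
    print(f"node {i}: residual {residual:.2e}")
# Node 0 shows an abnormally large residual, flagging its hidden neighbor.
```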
Jing, Xia; Cimino, James J.
2011-01-01
Objective: To explore new graphical methods for reducing and analyzing large data sets in which the data are coded with a hierarchical terminology. Methods: We use a hierarchical terminology to organize a data set and display it in a graph. We reduce the size and complexity of the data set by considering the terminological structure and the data set itself (using a variety of thresholds) as well as contributions of child level nodes to parent level nodes. Results: We found that our methods can reduce large data sets to manageable size and highlight the differences among graphs. The thresholds used as filters to reduce the data set can be used alone or in combination. We applied our methods to two data sets containing information about how nurses and physicians query online knowledge resources. The reduced graphs make the differences between the two groups readily apparent. Conclusions: This is a new approach to reduce size and complexity of large data sets and to simplify visualization. This approach can be applied to any data sets that are coded with hierarchical terminologies. PMID:22195119
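A minimal Python sketch of the reduction idea follows: usage counts are rolled up from child to parent terms of a toy hierarchy, and nodes whose aggregated contribution falls below a threshold are pruned from the displayed graph. The hierarchy, counts, and threshold are invented for illustration.

```python
children = {                      # toy hierarchical terminology
    "Findings": ["Cardiac", "Renal"],
    "Cardiac": ["Arrhythmia", "Murmur"],
    "Renal": ["Proteinuria"],
}
counts = {"Arrhythmia": 40, "Murmur": 3, "Proteinuria": 12}  # query counts

def aggregate(node, totals):
    """Roll child-level counts up to parent-level nodes."""
    total = counts.get(node, 0)
    for child in children.get(node, []):
        total += aggregate(child, totals)
    totals[node] = total
    return total

totals = {}
aggregate("Findings", totals)
threshold = 5                     # filter used to reduce the displayed graph
kept = {n: t for n, t in totals.items() if t >= threshold}
print(kept)   # 'Murmur' (3 queries) drops out; parents keep rolled-up totals
```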
A Novel Interdisciplinary Approach to Socio-Technical Complexity
NASA Astrophysics Data System (ADS)
Bassetti, Chiara
The chapter presents a novel interdisciplinary approach that integrates micro-sociological analysis into computer-vision and pattern-recognition modeling and algorithms, the purpose being to tackle socio-technical complexity at a systemic yet micro-grounded level. The approach is empirically grounded and both theoretically and analytically driven, yet systemic and multidimensional, semi-supervised and computable, and oriented towards large-scale applications. The chapter describes the proposed approach especially as regards its sociological foundations, and as applied to the analysis of a particular setting, i.e., sport-spectator crowds. Crowds, better defined as large gatherings, are almost ever-present in our societies, and capturing their dynamics is crucial. From social sciences to public safety management and emergency response, modeling and predicting large gatherings' presence and dynamics, thus possibly preventing critical situations and being able to react to them properly, is fundamental. This is where semi-automated technologies can make the difference. The work presented in this chapter is intended as a scientific step towards such an objective.
Cataldo, Franco; Keheyan, Yeghis; Heymann, Dieter
2004-02-01
In this communication we present the basic concept that pure PAHs (polycyclic aromatic hydrocarbons) can be considered only the ideal carriers of the UIBs (unidentified infrared bands), the emission spectra coming from a large variety of astronomical objects. Instead, we propose that the carriers of UIBs and of protoplanetary nebulae (PPNe) emission spectra are much more complex molecular mixtures, with complex chemical structures comparable to certain petroleum fractions obtained from petroleum refining processes. The demonstration of our proposal is based on the comparison between the emission spectra recorded from the protoplanetary nebula IRAS 22272+5435 and the infrared absorption spectra of certain 'heavy' petroleum fractions. The best match with the reference spectrum is achieved by highly aromatic petroleum fractions. The selected petroleum fractions used in the present study are also able to match the band pattern of anthracite coal. Coal has been proposed previously as a model for PPNe and UIBs but presents some drawbacks, which could be overcome by adopting petroleum fractions as a model for PPNe and UIBs in place of coal. A brief discussion of the formation of petroleum-like fractions in PPNe objects is included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardage, B.A.; Carr, D.L.; Finley, R.J.
1995-07-01
The objectives of this project are to define undrained or incompletely drained reservoir compartments controlled primarily by depositional heterogeneity in a low-accommodation, cratonic Midcontinent depositional setting and, afterwards, to develop and transfer to producers strategies for infield reserve growth of natural gas. Integrated geologic, geophysical, reservoir engineering, and petrophysical evaluations are described for complex, difficult-to-characterize fluvial and deltaic reservoirs in Boonsville (Bend Conglomerate Gas) field, a large, mature gas field located in the Fort Worth Basin of North Texas. The purpose of this project is to demonstrate approaches to overcoming the reservoir complexity, targeting the gas resource, and doing so using state-of-the-art technologies being applied by a large cross section of Midcontinent operators.
The double quasar 0957+561: a radio study at 6-centimeters wavelength.
Roberts, D H; Greenfield, P E; Burke, B F
1979-08-31
The optical double quasar 0957+561 has been interpreted as the gravitational double image of a single object. A radio map made with the Very Large Array of the National Radio Astronomy Observatory shows unresolved sources coincident with the optical images, as well as a complex of related extended emission. Although the results cannot rule out the gravitational lens hypothesis, the complex radio structure is more easily interpreted as two separate quasars. The optical and radio properties of the two quasars are so similar that the two must have been formed at the same time with similar initial conditions.
A 100,000 Scale Factor Radar Range.
Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser
2017-12-19
The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutters, this common approach is not practical. The use of computer simulations is also not viable, since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000 and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility of performing fast, convenient, and inexpensive measurements.
STT Doubles with Large δM - Part VI: Cygnus Multiples
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2016-10-01
The results of visual double star observing sessions suggested a pattern of STT doubles with large delta_M being harder to resolve than would be expected based on the WDS catalog data. It was felt this might be a problem with expectations on the one hand, and on the other an indication of a need for new precise measurements, so we decided to take a closer look at a selected sample of STT doubles and do some research. Among these objects we found three rather complex multiples in Cygnus of special interest, so we decided to write a separate report to have more room to include the non-STT components as well. As with the other objects covered so far, several of the components show parameters quite different from the current WDS data.
Development and Evaluation of a Pharmacogenomics Educational Program for Pharmacists
Formea, Christine M.; Nicholson, Wayne T.; McCullough, Kristen B.; Berg, Kevin D.; Berg, Melody L.; Cunningham, Julie L.; Merten, Julianna A.; Ou, Narith N.; Stollings, Joanna L.
2013-01-01
Objectives. To evaluate hospital and outpatient pharmacists’ pharmacogenomics knowledge before and 2 months after participating in a targeted, case-based pharmacogenomics continuing education program. Design. As part of a continuing education program accredited by the Accreditation Council for Pharmacy Education (ACPE), pharmacists were provided with a fundamental pharmacogenomics education program. Evaluation. An 11-question, multiple-choice, electronic survey instrument was distributed to 272 eligible pharmacists at a single campus of a large, academic healthcare system. Pharmacists improved their pharmacogenomics test scores by 0.7 questions (pretest average 46%; posttest average 53%, p=0.0003). Conclusions. Although pharmacists demonstrated improvement, overall retention of educational goals and objectives was marginal. These results suggest that the complex topic of pharmacogenomics requires a large educational effort in order to increase pharmacists’ knowledge and comfort level with this emerging therapeutic opportunity. PMID:23459098
NASA Astrophysics Data System (ADS)
Bouter, Anton; Alderliesten, Tanja; Bosman, Peter A. N.
2017-02-01
Taking a multi-objective optimization approach to deformable image registration has recently gained attention, because such an approach removes the requirement of manually tuning the weights of all the involved objectives. Especially for problems that require large complex deformations, this is a non-trivial task. From the resulting Pareto set of solutions one can then much more insightfully select a registration outcome that is most suitable for the problem at hand. The multi-objective algorithms currently used as internal optimization engines are competent, but rather inefficient. In this paper we largely improve upon this by introducing a multi-objective real-valued adaptation of the recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) for discrete optimization. In this work, GOMEA is tailored specifically to the problem of deformable image registration to obtain substantially improved efficiency. This improvement is achieved by exploiting a key strength of GOMEA: iteratively improving small parts of solutions, which allows the impact of such updates on the objectives at hand to be exploited faster through partial evaluations. We performed experiments on three registration problems: an artificial problem containing a disappearing structure, a pair of pre- and post-operative breast CT scans, and a pair of breast MRI scans acquired in prone and supine position. Results show that compared to the previously used evolutionary algorithm, GOMEA obtains a speed-up of up to a factor of 1600 on the tested registration problems while achieving registration outcomes of similar quality.
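The following Python sketch illustrates the partial-evaluation mechanism in isolation, assuming an objective that decomposes into local terms (a per-element fit plus a neighbor smoothness coupling, loosely analogous to a registration objective). It is not GOMEA itself, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.standard_normal(1000)

def local_term(x, i):
    # Per-element fit plus a smoothness coupling to the right neighbor.
    fit = (x[i] - target[i]) ** 2
    smooth = (x[i] - x[i + 1]) ** 2 if i + 1 < len(x) else 0.0
    return fit + 0.1 * smooth

x = np.zeros_like(target)
terms = np.array([local_term(x, i) for i in range(len(x))])
total = terms.sum()                     # full evaluation happens once

def partial_update(x, indices, new_values):
    """Apply a small change and recompute only the affected local terms."""
    global total
    affected = set()
    for i in indices:
        # Changing x[i] touches term i and the neighbor coupling in term i-1.
        affected.update(j for j in (i - 1, i) if 0 <= j < len(x))
    x[list(indices)] = new_values
    for j in affected:
        total += local_term(x, j) - terms[j]
        terms[j] = local_term(x, j)

partial_update(x, [5, 6], target[[5, 6]])   # cost O(|indices|), not O(n)
print(f"objective after partial update: {total:.3f}")
```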
NASA Astrophysics Data System (ADS)
Giri, Chaitanya; McKay, Christopher P.; Goesmann, Fred; Schäfer, Nadine; Li, Xiang; Steininger, Harald; Brinckerhoff, William B.; Gautier, Thomas; Reitner, Joachim; Meierhenrich, Uwe J.
2016-07-01
Astronomical observations of Centaurs and trans-Neptunian objects (TNOs) yield two characteristic features - near-infrared (NIR) reflectance and low geometric albedo. The first feature apparently originates from complex organic material on their surfaces, but the origin of the material contributing to low albedo is not well understood. Titan tholins synthesized to simulate aerosols in the atmosphere of Saturn's moon Titan have also been used for simulating the NIR reflectances of several Centaurs and TNOs. Here, we report novel detections of large polycyclic aromatic hydrocarbons, nanoscopic soot aggregates, and cauliflower-like graphite within Titan tholins. We put forth a proof of concept that the surfaces of Centaurs and TNOs may be composed of highly 'carbonized' complex organic material, analogous to the tholins we investigated. Such material would be capable of contributing to the NIR reflectances and to the low geometric albedos simultaneously.
Systems Proteomics for Translational Network Medicine
Arrell, D. Kent; Terzic, Andre
2012-01-01
Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016
A Java application for tissue section image analysis.
Kamalov, R; Guillaud, M; Haskins, D; Harrison, A; Kemp, R; Chiu, D; Follen, M; MacAulay, C
2005-02-01
The medical industry has taken advantage of Java and Java technologies over the past few years, in large part due to the language's platform-independence and object-oriented structure. As such, Java provides powerful and effective tools for developing tissue section analysis software. The background and execution of this development are discussed in this publication. Object-oriented structure allows for the creation of "Slide", "Unit", and "Cell" objects to simulate the corresponding real-world objects. Different functions may then be created to perform various tasks on these objects, thus facilitating the development of the software package as a whole. At the current time, substantial parts of the initially planned functionality have been implemented. Getafics 1.0 is fully operational and currently supports a variety of research projects; however, there are certain features of the software that currently introduce unnecessary complexity and inefficiency. In the future, we hope to include features that obviate these problems.
Computational complexity of Boolean functions
NASA Astrophysics Data System (ADS)
Korshunov, Aleksei D.
2012-02-01
Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.
Universal robotic gripper based on the jamming of granular material
Brown, Eric; Rodenberg, Nicholas; Amend, John; Mozeika, Annan; Steltz, Erik; Zakin, Mitchell R.; Lipson, Hod; Jaeger, Heinrich M.
2010-01-01
Gripping and holding of objects are key tasks for robotic manipulators. The development of universal grippers able to pick up unfamiliar objects of widely varying shape and surface properties remains, however, challenging. Most current designs are based on the multifingered hand, but this approach introduces hardware and software complexities. These include large numbers of controllable joints, the need for force sensing if objects are to be handled securely without crushing them, and the computational overhead to decide how much stress each finger should apply and where. Here we demonstrate a completely different approach to a universal gripper. Individual fingers are replaced by a single mass of granular material that, when pressed onto a target object, flows around it and conforms to its shape. Upon application of a vacuum the granular material contracts and hardens quickly to pinch and hold the object without requiring sensory feedback. We find that volume changes of less than 0.5% suffice to grip objects reliably and hold them with forces exceeding many times their weight. We show that the operating principle is the ability of granular materials to transition between an unjammed, deformable state and a jammed state with solid-like rigidity. We delineate three separate mechanisms, friction, suction, and interlocking, that contribute to the gripping force. Using a simple model we relate each of them to the mechanical strength of the jammed state. This advance opens up new possibilities for the design of simple, yet highly adaptive systems that excel at fast gripping of complex objects.
Deane-Coe, Kirsten K; Sarvary, Mark A; Owens, Thomas G
2017-01-01
In an undergraduate introductory biology laboratory course, we used a summative assessment to directly test the learning objective that students will be able to apply course material to increasingly novel and complex situations. Using a factorial framework, we developed multiple true-false questions to fall along axes of novelty and complexity, which resulted in four categories of questions: familiar content and low complexity (category A); novel content and low complexity (category B); familiar content and high complexity (category C); and novel content and high complexity (category D). On average, students scored more than 70% on all questions, indicating that the course largely met this learning objective. However, students scored highest on questions in category A, likely because they were most similar to course content, and lowest on questions in categories C and D. While we anticipated students would score equally on questions for which either novelty or complexity was altered (but not both), we observed that student scores in category C were lower than in category B. Furthermore, students performed equally poorly on all questions for which complexity was higher (categories C and D), even those containing familiar content, suggesting that application of course material to increasingly complex situations is particularly challenging to students.
NASA Astrophysics Data System (ADS)
Sycheva, Elena A.; Vasilev, Aleksandr S.; Lashmanov, Oleg U.; Korotaev, Valery V.
2017-06-01
The article is devoted to the optimization of optoelectronic systems for monitoring the spatial position of objects. Probabilistic characteristics of the detection of an active structured mark on a random noisy background are investigated. The developed computer model and the results of the study allow us to estimate the probabilistic characteristics of detecting a complex structured mark on a random gradient background and to estimate the error of the spatial coordinates. The results of the study make it possible to improve the accuracy of measuring the coordinates of the object. Based on the research, recommendations are given on the choice of parameters of the optimal mark structure for use in optical-electronic systems for monitoring the spatial position of large-sized structures.
NASA Astrophysics Data System (ADS)
Selsam, Peter; Schwartze, Christian
2016-10-01
Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". Many business units accept the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage, but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communication structures and enabled to run on a high-power server, benefiting from Taverna workflow software. On top, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object-oriented image segmentation with pattern recognition features. Basic image elements form a construction set to model large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent of the segmentation. The object definition is done completely by the software.
Improving CNN Performance Accuracies With Min-Max Objective.
Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning
2017-06-09
We propose a novel method for improving performance accuracies of convolutional neural network (CNN) without the need to increase the network complexity. We accomplish the goal by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and able to be applied to different CNNs with insignificant increases in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets with both the image classification and face verification tasks reveal that employing the proposed Min-Max objective in the training process can remarkably improve performance accuracies of a CNN model in comparison with the same model trained without using this objective.
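The paper's exact formulation is not reproduced here; as a hedged sketch, the following NumPy fragment computes a margin-based penalty that decreases with within-manifold distances and increases when between-manifold distances fall below a margin. The function name and constants are assumptions.

```python
import numpy as np

def min_max_penalty(features, labels, margin=10.0):
    """features: (N, d) activations of the penalized layer; labels: (N,)."""
    diff = features[:, None, :] - features[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)          # pairwise squared distances
    same = labels[:, None] == labels[None, :]
    between = d2[~same].mean()               # pairs from different manifolds
    np.fill_diagonal(same, False)            # drop self-pairs
    within = d2[same].mean()                 # pairs within one manifold
    # Push within-manifold distances down, between-manifold distances up.
    return within + max(0.0, margin - between)

feats = np.random.default_rng(2).standard_normal((8, 4))
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(min_max_penalty(feats, labels))
```

In training, a penalty of this kind would be added to the usual classification loss on the chosen layer's activations for each minibatch.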
Tri-track: free software for large-scale particle tracking.
Vallotton, Pascal; Olivier, Sandra
2013-04-01
The ability to correctly track objects in time-lapse sequences is important in many applications of microscopy. Individual object motions typically display a level of dynamic regularity reflecting the existence of an underlying physics or biology. Best results are obtained when this local information is exploited. Additionally, if the particle number is known to be approximately constant, a large number of tracking scenarios may be rejected on the basis that they are not compatible with a known maximum particle velocity. This represents information of a global nature, which should ideally be exploited too. Some time ago, we devised an efficient algorithm that exploited both types of information. The tracking task was reduced to a max-flow min-cost problem instance through a novel graph structure that comprised vertices representing objects from three consecutive image frames. The algorithm is explained here for the first time. A user-friendly implementation is provided, and the specific relaxation mechanism responsible for the method's effectiveness is uncovered. The software is particularly competitive for complex dynamics such as dense antiparallel flows, or in situations where object displacements are considerable. As an application, we characterize a remarkable vortex structure formed by bacteria engaged in interstitial motility.
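Tri-track's three-frame graph is more involved than can be shown briefly; as a simplified two-frame sketch of the same max-flow min-cost idea, the following Python fragment uses networkx to link detections globally while rejecting candidate links that exceed a maximum displacement. Coordinates and thresholds are illustrative.

```python
import networkx as nx

frame_a = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
frame_b = [(0.5, 0.4), (5.2, 4.9), (9.7, 0.3)]
max_disp = 2.0                                 # maximum particle displacement

G = nx.DiGraph()
G.add_node("s", demand=-len(frame_a))          # source supplies all particles
G.add_node("t", demand=len(frame_b))           # sink absorbs them
for i, (xa, ya) in enumerate(frame_a):
    G.add_edge("s", f"a{i}", capacity=1, weight=0)
    for j, (xb, yb) in enumerate(frame_b):
        d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        if d <= max_disp:                      # global velocity constraint
            G.add_edge(f"a{i}", f"b{j}", capacity=1,
                       weight=int(1000 * d))   # integer costs for the solver
for j in range(len(frame_b)):
    G.add_edge(f"b{j}", "t", capacity=1, weight=0)

flow = nx.min_cost_flow(G)                     # globally optimal assignment
links = [(i, j) for i in range(len(frame_a)) for j in range(len(frame_b))
         if flow.get(f"a{i}", {}).get(f"b{j}", 0) == 1]
print(links)   # [(0, 0), (1, 1), (2, 2)]
```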
Feng, Zhou-yan; Zheng, Xiao-xiang
2002-08-01
Objective. To study the complexity and the power spectrum of cortical EEG and hippocampal potential in rats under waking and sleep states. Method. Cortical EEG and hippocampal potentials were collected by implanted electrodes in freely moving rats. Algorithmic complexity (Kc), approximate entropy (ApEn), power spectral density (PSD), and the gravity frequency of the PSD of the potential waves were calculated. Result. The complexity of the hippocampal potential was higher than that of the cortical EEG under every state. The complexity of the cortical EEG was lowest under the state of non-rapid eye movement (NREM) sleep. The complexity of the hippocampal potential was highest under the waking state. The total power of both potentials in the 0.5-30 Hz frequency band showed the highest values under the NREM state. Conclusion. The values of Kc and ApEn are closely related to the distributions of the PSD. When there are evident peaks in the PSD, the complexity of the signal decreases. The complexity measures may be used to distinguish the difference between cortical EEG and hippocampal potential, or large differences between the same kind of potential under different behavioral states.
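Two of the measures named above can be sketched compactly in Python: approximate entropy with the common defaults m = 2 and r = 0.2 times the standard deviation (assumed here, not taken from the paper), and the gravity frequency computed as the spectral centroid of the PSD over the 0.5-30 Hz band.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """ApEn(m, r) with the common defaults m=2 and r=0.2*SD."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def phi(m):
        n = len(x) - m + 1
        windows = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of embedding windows.
        dist = np.max(np.abs(windows[:, None] - windows[None, :]), axis=-1)
        c = (dist <= r).mean(axis=1)
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

def gravity_frequency(x, fs, band=(0.5, 30.0)):
    """Spectral centroid of the PSD restricted to the given band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(freqs[sel] * psd[sel]) / np.sum(psd[sel])

fs = 200.0
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) \
    + 0.5 * np.random.default_rng(3).standard_normal(t.size)
print(f"ApEn: {approximate_entropy(eeg[:500]):.3f}")
print(f"gravity frequency: {gravity_frequency(eeg, fs):.2f} Hz")
```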
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our HD-MTL algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
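As a hedged sketch of the visual-tree construction step, the following Python fragment clusters class-level feature vectors (stand-ins for mean deep-CNN activations per class) so that visually similar classes land in the same group; the class names and features are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
class_names = ["husky", "wolf", "tabby", "siamese", "sedan", "truck"]
# Two visually similar classes per underlying cluster of class features.
centers = np.repeat(rng.standard_normal((3, 16)), 2, axis=0)
class_features = centers + 0.1 * rng.standard_normal((6, 16))

# Agglomerative clustering of class features yields the grouping that a
# visual tree would use to define interrelated learning tasks.
Z = linkage(class_features, method="average", metric="cosine")
groups = fcluster(Z, t=3, criterion="maxclust")
for name, g in zip(class_names, groups):
    print(f"{name}: group {g}")
```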
Digital Library Storage using iRODS Data Grids
NASA Astrophysics Data System (ADS)
Hedges, Mark; Blanke, Tobias; Hasan, Adil
Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g. digital preservation functionality.
Learning invariance from natural images inspired by observations in the primary visual cortex.
Teichmann, Michael; Wiltschut, Jan; Hamker, Fred
2012-05-01
The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates the signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
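A minimal Python sketch of a Hebbian rule with a homeostatic element follows; the sliding-threshold term is a simple stand-in for the calcium-based regulation described above, not the paper's actual rule, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, eta, tau = 20, 0.01, 0.1
w = rng.random(n_in) * 0.1
target_rate, avg_rate = 0.5, 0.0

for step in range(2000):
    x = rng.random(n_in)                 # presynaptic activity
    y = max(0.0, w @ x)                  # rectified postsynaptic response
    avg_rate += tau * (y - avg_rate)     # running estimate of activity
    # Hebbian growth gated by how far activity sits below the target rate:
    # weights grow when the neuron is underactive, shrink when overactive.
    w += eta * x * y * (target_rate - avg_rate)
    w = np.clip(w, 0.0, 1.0)

print(f"mean weight {w.mean():.3f}, average rate {avg_rate:.3f}")
```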
New Abstraction Networks and a New Visualization Tool in Support of Auditing the SNOMED CT Content
Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan
2012-01-01
Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT. PMID:23304293
Robust position estimation of a mobile vehicle
NASA Astrophysics Data System (ADS)
Conan, Vania; Boulanger, Pierre; Elgazzar, Shadia
1994-11-01
The ability to estimate the position of a mobile vehicle is a key task for navigation over large distances in complex indoor environments such as nuclear power plants. Schematics of the plants are available, but they are incomplete, as real settings contain many objects, such as pipes, cables or furniture, that mask part of the model. The position estimation method described in this paper matches 3-D data with a simple schematic of a plant. It is basically independent of odometry information and viewpoint, robust to noisy data and spurious points and largely insensitive to occlusions. The method is based on a hypothesis/verification paradigm and its complexity is polynomial; it runs in O(m^4 n^4), where m represents the number of model patches and n the number of scene patches. Heuristics are presented to speed up the algorithm. Results on real 3-D data show good behavior even when the scene is very occluded.
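As a hedged 2-D illustration of the hypothesis/verification paradigm (the paper matches 3-D planar patches), the following Python sketch proposes a rigid transform from each pair of correspondences and verifies it by counting the model points it explains, remaining robust to a spurious scene point. The data and tolerances are invented.

```python
import numpy as np
from itertools import permutations

model = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
theta, t = 0.3, np.array([2.0, 1.0])          # unknown ground-truth pose
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
scene = model @ R.T + t
scene = np.vstack([scene, [[9.0, 9.0]]])      # spurious point (clutter)

def transform_from(pair_m, pair_s):
    """Hypothesize a rotation+translation from two point correspondences."""
    vm, vs = pair_m[1] - pair_m[0], pair_s[1] - pair_s[0]
    ang = np.arctan2(vs[1], vs[0]) - np.arctan2(vm[1], vm[0])
    Rh = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    return Rh, pair_s[0] - pair_m[0] @ Rh.T

best = (0, None)
for (i, j) in permutations(range(len(model)), 2):        # hypothesis step
    for (k, l) in permutations(range(len(scene)), 2):
        Rh, th = transform_from(model[[i, j]], scene[[k, l]])
        projected = model @ Rh.T + th
        support = sum(np.min(np.linalg.norm(scene - p, axis=1)) < 0.1
                      for p in projected)                 # verification step
        if support > best[0]:
            best = (support, (Rh, th))
print(f"best hypothesis explains {best[0]} of {len(model)} model points")
```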
NASA Astrophysics Data System (ADS)
Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria
2008-11-01
Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare-pattern detection is the main objective in such applications as fault/abnormality discovery in complex technical and biological systems, fraud detection, and risk management in the financial and insurance industries. Although many interdisciplinary approaches for rare-event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. An even more challenging and much less formalized problem is novelty detection in complex strategies and models, where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or the proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider-trading or other illegal transactions mixed with valid operations and distributed over a long time period according to a well-designed strategy than a single, large fraudulent transaction. Recently proposed boosting-based optimization has been shown to be an effective generic tool for the discovery of stable multi-component strategies/models from existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.
Kumashiro, Mikihiko; Sakai, Masaki
2016-12-01
Three types of genital movement, their neural controls, and functional roles were investigated to gain a better understanding of the mechanism underlying autocleaning in the male cricket. The membrane complex consisting of the median pouch and genital chamber floor shows peculiar undulation that is composed of two types of movements: a right-left large shift and small crease-like movements. The large shift was caused by contraction of a pair of muscles (MPA) located anterior to the median pouch, while the crease-like movements were caused by numerous muscle fibers extending over the membrane complex. The MPA and muscle fibers were each innervated by efferent neurons in the terminal abdominal ganglion. Experiments with artificial dirt mimicking a foreign object revealed that the crease-like movements were responsible for dirt transport, while the large shift participated in sweeping the dirt into the lateral pouch as a trash container. On the other hand, the dorsal pouch serving as a template for the spermatophore showed a jerky bending movement. Simultaneous monitoring of the membrane complex and dorsal pouch activities suggested that their movements cooperate to enable the efficient evacuation of waste in the dorsal pouch. Based on the results, we conclude that genital autocleaning supports the production of the spermatophore.
The ODD protocol: A review and first update
Grimm, Volker; Berger, Uta; DeAngelis, Donald L.; Polhill, J. Gary; Giske, Jarl; Railsback, Steve F.
2010-01-01
The 'ODD' (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques of ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models.
An Analysis of the Second Project High Water Data
NASA Technical Reports Server (NTRS)
Woodbridge, David D.; Lasater, James A.; Fultz, Bennett M.; Clark, Richard E.; Wylie, Nancy
1963-01-01
Early in 1962 NASA established "Project High Water" to investigate the sudden release of large quantities of water into the upper atmosphere. The primary objectives of these experiments were to obtain information on the behavior of liquids released in the ionosphere and the localized effects on the ionosphere produced by the injection of large quantities of water. The data obtained in the two Project High Water experiments have yielded an extensive amount of information concerning the complex phenomena associated with the sudden release of liquids in the ionosphere. The detailed analysis of data obtained during the second Project High Water experiment (i.e., the third Saturn I vehicle test, SA-3) presented in this report demonstrates that the objectives of Project High Water were achieved. In addition, Project High Water has provided essential information relevant to a number of problems vital to manned exploration of space.
Control of fluxes in metabolic networks
Basler, Georg; Nikoloski, Zoran; Larhlimi, Abdelhalim; Barabási, Albert-László; Liu, Yang-Yu
2016-01-01
Understanding the control of large-scale metabolic networks is central to biology and medicine. However, existing approaches either require specifying a cellular objective or can only be used for small networks. We introduce new coupling types describing the relations between reaction activities, and develop an efficient computational framework, which does not require any cellular objective for systematic studies of large-scale metabolism. We identify the driver reactions facilitating control of 23 metabolic networks from all kingdoms of life. We find that unicellular organisms require a smaller degree of control than multicellular organisms. Driver reactions are under complex cellular regulation in Escherichia coli, indicating their preeminent role in facilitating cellular control. In human cancer cells, driver reactions play pivotal roles in malignancy and represent potential therapeutic targets. The developed framework helps us gain insights into regulatory principles of diseases and facilitates design of engineering strategies at the interface of gene regulation, signaling, and metabolism. PMID:27197218
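The coupling analysis described above can be loosely illustrated with a toy flux-coupling test in Python/SciPy: block one reaction at steady state and ask whether another can still carry flux. This is a generic directional-coupling check under assumed stoichiometry and bounds, not the paper's new coupling types or its objective-free framework.

```python
import numpy as np
from scipy.optimize import linprog

def directionally_coupled(S, i, j, bounds, tol=1e-9):
    """Toy check: does blocking reaction i force reaction j to zero flux?

    S: stoichiometric matrix (metabolites x reactions); steady state S v = 0.
    bounds: list of (lo, hi) flux bounds per reaction.
    """
    m, n = S.shape
    b = list(bounds)
    b[i] = (0.0, 0.0)                                    # block reaction i
    c = np.zeros(n); c[j] = 1.0
    lo = linprog(c, A_eq=S, b_eq=np.zeros(m), bounds=b)  # min v_j
    hi = linprog(-c, A_eq=S, b_eq=np.zeros(m), bounds=b) # max v_j (negated)
    return abs(lo.fun) < tol and abs(hi.fun) < tol
```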
NASA Astrophysics Data System (ADS)
Dinu, M. I.
2017-11-01
This article describes the complexation of metal ions with humus substances in natural waters (small lakes). Humus substances, as major biochemical components of natural water, have a significant impact on the forms and migration of metals and on the toxicity of natural objects. The article presents the results of large-scale chemical experiments: the study of the structural features (zonal aspects) of humus substances extracted from soil and water across the natural climatic zones of Russia (more than 300 objects, in European Russia and West Siberia), and the influence of these structural features on the physicochemical parameters of humus acids, in particular on their complexing ability. The functional specifics of humus matter extracted from soils are estimated using spectrometric techniques. The conditional stability constants for Fe(III), Cu(II), Pb(II), Cd(II), Zn(II), Ni(II), Co(II), Mn(II), Cr(III), Ca(II), Mg(II), Sr(II), and Al(III) are determined experimentally with electrochemical and spectroscopic analysis methods. The activities of the metals are classified according to their affinity to humus compounds in soils and water. The determined conditional stability constants of the complexes are tested in model experiments, and it is demonstrated that Fe and Al ions have higher conditional stability constants than the ions of alkaline earth metals, Pb, Cu, and Zn. Furthermore, the influence of aluminium and iron ions on the complexation of copper and lead, as well as the influence of lead and copper on the complexation of cobalt and nickel, has been identified. The metal forms in a large number of lakes are calculated based on the experimental results. The main chemical mechanisms of the distribution of metals by forms in the water of lakes in European Russia and West Siberia are described.
Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex
NASA Technical Reports Server (NTRS)
Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)
2002-01-01
The large test sections of the National Full-Scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments to test ground-based objects and vehicles. Though this facility was designed and provisioned primarily for aeronautical testing requirements, several experiments have been designed to utilize existing model mount structures to support "non-flying" systems. This presentation will discuss some of the ground-based testing capabilities of the facility and provide examples of ground-based tests conducted in the facility to date. It will also address some future work envisioned and solicit input from the SATA membership on ways to improve the service that NASA makes available to customers.
Group Decision Support System to Aid the Process of Design and Maintenance of Large Scale Systems
1992-03-23
from a fuzzy set of user requirements. The overall objective of the project is to develop a system combining the characteristics of a compact computer... (AHP) for hierarchical prioritization. 4) Individual Evaluation and Selection of Alternatives - Allows the decision maker to individually evaluate... its concept of outranking relations. The AHP method supports complex decision problems by successively decomposing and synthesizing various elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable, for the first time, measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and the visualization system, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
Marin, Manuela M.; Leder, Helmut
2013-01-01
Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains. Moreover, the affective content of stimuli has been largely neglected so far in the study of complexity but is crucial in many everyday contexts and in aesthetic experiences. We thus propose a cross-domain approach that acknowledges the multidimensional nature of complexity and that uses a wide range of objective complexity measures combined with subjective ratings. In four experiments, we employed pictures of affective environmental scenes, representational paintings, and Romantic solo and chamber music excerpts. Stimuli were pre-selected to vary in emotional content (pleasantness and arousal) and complexity (low versus high number of elements). For each set of stimuli, in a between-subjects design, ratings of familiarity, complexity, pleasantness and arousal were obtained for a presentation time of 25 s from 152 participants. In line with Berlyne’s collative-motivation model, statistical analyses controlling for familiarity revealed a positive relationship between subjective complexity and arousal, and the highest correlations were observed for musical stimuli. Evidence for a mediating role of arousal in the complexity-pleasantness relationship was demonstrated in all experiments, but was only significant for females with regard to music. The direction and strength of the linear relationship between complexity and pleasantness depended on the stimulus type and gender. For environmental scenes, the root mean square contrast measures and measures of compressed file size correlated best with subjective complexity, whereas only edge detection based on phase congruency yielded equivalent results for representational paintings. Measures of compressed file size and event density also showed positive correlations with complexity and arousal in music, which is relevant for the discussion on which aspects of complexity are domain-specific and which are domain-general. PMID:23977295
Matsumoto, Narihisa; Eldridge, Mark A G; Saunders, Richard C; Reoli, Rachel; Richmond, Barry J
2016-01-06
In primates, visual recognition of complex objects depends on the inferior temporal lobe. By extension, categorizing visual stimuli based on similarity ought to depend on the integrity of the same area. We tested three monkeys before and after bilateral anterior inferior temporal cortex (area TE) removal. Although mildly impaired after the removals, they retained the ability to assign stimuli to previously learned categories, e.g., cats versus dogs, and human versus monkey faces, even with trial-unique exemplars. After the TE removals, they learned in one session to classify members from a new pair of categories, cars versus trucks, as quickly as they had learned the cats versus dogs before the removals. As with the dogs and cats, they generalized across trial-unique exemplars of cars and trucks. However, as seen in earlier studies, these monkeys with TE removals had difficulty learning to discriminate between two simple black and white stimuli. These results raise the possibility that TE is needed for memory of simple conjunctions of basic features, but that it plays only a small role in generalizing overall configural similarity across a large set of stimuli, such as would be needed for perceptual categorical assignment. The process of seeing and recognizing objects is attributed to a set of sequentially connected brain regions stretching forward from the primary visual cortex through the temporal lobe to the anterior inferior temporal cortex, a region designated area TE. Area TE is considered the final stage for recognizing complex visual objects, e.g., faces. It has been assumed, but not tested directly, that this area would be critical for visual generalization, i.e., the ability to place objects such as cats and dogs into their correct categories. Here, we demonstrate that monkeys rapidly and seemingly effortlessly categorize large sets of complex images (cats vs dogs, cars vs trucks), surprisingly, even after removal of area TE, leaving a puzzle about how this generalization is done.
Fabry-Perot confocal resonator optical associative memory
NASA Astrophysics Data System (ADS)
Burns, Thomas J.; Rogers, Steven K.; Vogel, George A.
1993-03-01
A unique optical associative memory architecture is presented that combines the optical processing environment of a Fabry-Perot confocal resonator with the dynamic storage and recall properties of volume holograms. The confocal resonator reduces the size and complexity of previous associative memory architectures by folding a large number of discrete optical components into an integrated, compact optical processing environment. Experimental results demonstrate the system is capable of recalling a complete object from memory when presented with partial information about the object. A Fourier optics model of the system's operation shows it implements a spatially continuous version of a discrete, binary Hopfield neural network associative memory.
Slow feature analysis: unsupervised learning of invariances.
Wiskott, Laurenz; Sejnowski, Terrence J
2002-04-01
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
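The SFA recipe in this abstract (nonlinear expansion, whitening, then principal components of the time derivative) is concrete enough to sketch; below is a minimal NumPy version with a quadratic expansion. The expansion degree and tolerances are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def sfa(x, n_features=2):
    """Minimal slow feature analysis sketch.

    x: (T, d) signal. Returns the n_features slowest decorrelated features.
    """
    T, d = x.shape
    iu = np.triu_indices(d)
    # Quadratic expansion h(x): linear terms plus all monomials x_i * x_j.
    h = np.hstack([x, (x[:, :, None] * x[:, None, :])[:, iu[0], iu[1]]])
    h -= h.mean(axis=0)
    # Whiten the expanded signal via SVD (principal component analysis).
    u, s, vt = np.linalg.svd(h, full_matrices=False)
    keep = s > 1e-10 * s[0]
    z = (h @ vt[keep].T) / s[keep] * np.sqrt(T)
    # Slowest directions = smallest-variance components of the derivative.
    dz = np.diff(z, axis=0)
    w, v = np.linalg.eigh(dz.T @ dz / (T - 1))   # ascending eigenvalues
    return z @ v[:, :n_features]                 # ordered by slowness
```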
Classifier-Guided Sampling for Complex Energy System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
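A toy version of the classifier-guided filtering loop might look like the following; a naive Bayes classifier stands in for the Bayesian network classifier the report names, and the objective, variable ranges, and thresholds are made-up placeholders.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def objective(x):                    # hypothetical expensive evaluation
    return float(np.sum((x - 3) ** 2))

rng = np.random.default_rng(0)
designs = rng.integers(0, 7, size=(40, 5))        # initial random designs
scores = np.array([objective(x) for x in designs])

for generation in range(10):
    promising = scores <= np.median(scores)       # label the better half
    clf = GaussianNB().fit(designs, promising)
    candidates = rng.integers(0, 7, size=(200, 5))
    keep = candidates[clf.predict(candidates)][:20]  # filter pre-evaluation
    if len(keep) == 0:
        continue
    designs = np.vstack([designs, keep])
    scores = np.concatenate([scores, [objective(x) for x in keep]])

print(designs[scores.argmin()], scores.min())
```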
Prevention of data duplication for high throughput sequencing repositories
Gabdank, Idan; Chan, Esther T; Davidson, Jean M; Hilton, Jason A; Davis, Carrie A; Baymuradov, Ulugbek K; Narayanan, Aditi; Onate, Kathrina C; Graham, Keenan; Miyasato, Stuart R; Dreszer, Timothy R; Strattan, J Seth; Jolanki, Otto; Tanaka, Forrest Y; Hitz, Benjamin C
2018-01-01
Prevention of unintended duplication is one of the ongoing challenges many databases have to address. Working with high-throughput sequencing data, the complexity of that challenge increases with the complexity of the definition of a duplicate. In a computational data model, a data object represents a real entity like a reagent or a biosample. This representation is similar to how a card represents a book in a paper library catalog. Duplicated data objects not only waste storage, they can mislead users into assuming the model represents more than the single entity. Even if it is clear that two objects represent a single entity, data duplication opens the door to potential inconsistencies between the objects, since the content of the duplicated objects can be updated independently, allowing divergence of the metadata associated with the objects. This is analogous to a paper library catalog mistakenly containing two cards for a single copy of a book: if the cards simultaneously list two different individuals as the current borrower, it is difficult to determine which of the two actually has the book. Unfortunately, in a large database with multiple submitters, unintended duplication is to be expected. In this article, we present three principal guidelines the Encyclopedia of DNA Elements (ENCODE) Portal follows in order to prevent unintended duplication of both actual files and data objects: definition of identifiable data objects (I), object uniqueness validation (II) and a de-duplication mechanism (III). In addition to explaining our modus operandi, we elaborate on the methods used for identification of sequencing data files. A comparison of the approach taken by the ENCODE Portal vs other widely used biological data repositories is provided. Database URL: https://www.encodeproject.org/ PMID:29688363
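The three guidelines can be illustrated with a small sketch: content hashing identifies duplicate files, and a uniqueness key over identifying fields rejects duplicate data objects at submission time. The function names and the choice of md5 are illustrative assumptions, not the ENCODE Portal's actual implementation.

```python
import hashlib
import json

def file_digest(path, chunk=1 << 20):
    """Content hash of a (possibly large) sequencing file; two files with
    the same digest are treated as the same underlying file."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

_registry = {}

def register(obj_type, identifying_fields, record):
    """Guidelines I and II: each object type declares identifying fields,
    and a second object with the same key is rejected, not stored."""
    key = (obj_type,
           json.dumps({k: record[k] for k in identifying_fields},
                      sort_keys=True))
    if key in _registry:
        raise ValueError(f"duplicate of existing object: {_registry[key]}")
    _registry[key] = record
```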
Aggregated Indexing of Biomedical Time Series Data
Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.
2016-01-01
Remote and wearable medical sensing has the potential to create very large and high dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high dimensional indexing methods are a two stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. This algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. The algorithm takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitude faster when run on aggregated indexes. PMID:27617298
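The aggregation idea (hash similar segments together and index one representative per group) can be sketched with random-projection LSH; the hyperplane count, segment length, and centroid update below are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
PLANES = rng.normal(size=(16, 64))   # 16 hyperplanes, 64-sample segments

def lsh_key(segment):
    """Signs of random projections; similar segments tend to share keys."""
    return tuple((PLANES @ segment > 0).astype(int))

index = {}                           # aggregated index: key -> (centroid, n)

def insert(segment):
    k = lsh_key(segment)
    if k in index:
        centroid, n = index[k]
        index[k] = ((centroid * n + segment) / (n + 1), n + 1)
    else:
        index[k] = (segment.astype(float), 1)
```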
NASA Astrophysics Data System (ADS)
Steinberg, Marc
2011-06-01
This paper presents a selective survey of theoretical and experimental progress in the development of biologically inspired approaches for complex surveillance and reconnaissance problems with multiple, heterogeneous autonomous systems. The focus is on approaches that may address ISR problems that can quickly become mathematically intractable or otherwise impractical to implement using traditional optimization techniques as the size and complexity of the problem is increased. These problems require dealing with complex spatiotemporal objectives and constraints at a variety of levels, from motion planning to task allocation. There is also a need to ensure solutions are reliable and robust to uncertainty and communications limitations. First, the paper will provide a short introduction to the current state of relevant biological research as it relates to collective animal behavior. Second, the paper will describe research on largely decentralized, reactive, or swarm approaches that have been inspired by biological phenomena such as schools of fish, flocks of birds, ant colonies, and insect swarms. Next, the paper will discuss approaches towards more complex organizational and cooperative mechanisms in team and coalition behaviors in order to provide mission coverage of large, complex areas. Relevant team behavior may be derived from recent advances in understanding of the social and cooperative behaviors used for collaboration by tens of animals with higher-level cognitive abilities such as mammals and birds. Finally, the paper will briefly discuss challenges involved in user interaction with these types of systems.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz A.
1985-01-01
A technique for the recognition of complex three dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants, algebraic expressions that remain invariant regardless of the 3-D objects' orientations and locations in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past. In this work we have extended this method to the representation of more complex objects. Two complex objects are represented digitally; their 3-D moment invariants have been calculated, and the invariance of these 3-D moment expressions is then verified by changing the orientation and the location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
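For concreteness, here is one standard second-order 3-D moment invariant computed on a voxel grid: the trace of the central second-moment tensor, normalized for scale. This is a textbook example of the kind of invariant expression the abstract refers to, not the specific set used in the paper.

```python
import numpy as np

def central_moment(vox, p, q, r):
    """Central 3-D moment mu_pqr of a voxelized object (occupancy grid)."""
    x, y, z = np.indices(vox.shape)
    m000 = vox.sum()
    cx, cy, cz = ((vox * a).sum() / m000 for a in (x, y, z))
    return (vox * (x - cx) ** p * (y - cy) ** q * (z - cz) ** r).sum()

def j1(vox):
    """Trace of the second-order central moment tensor, invariant to
    rotation and translation; dividing by mu_000^(5/3) removes scale."""
    mu = sum(central_moment(vox, *pqr)
             for pqr in [(2, 0, 0), (0, 2, 0), (0, 0, 2)])
    return mu / central_moment(vox, 0, 0, 0) ** (5 / 3)
```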
Integrating science into management of ecosystems in the Greater Blue Mountains.
Chapple, Rosalie S; Ramp, Daniel; Bradstock, Ross A; Kingsford, Richard T; Merson, John A; Auld, Tony D; Fleming, Peter J S; Mulley, Robert C
2011-10-01
Effective management of large protected conservation areas is challenged by political, institutional and environmental complexity and inconsistency. Knowledge generation and its uptake into management are crucial to address these challenges. We reflect on practice at the interface between science and management of the Greater Blue Mountains World Heritage Area (GBMWHA), which covers approximately 1 million hectares west of Sydney, Australia. Multiple government agencies and other stakeholders are involved in its management, and decision-making is confounded by numerous plans of management and competing values and goals, reflecting the different objectives and responsibilities of stakeholders. To highlight the complexities of the decision-making process for this large area, we draw on the outcomes of a recent collaborative research project and focus on fire regimes and wild-dog control as examples of how existing knowledge is integrated into management. The collaborative research project achieved the objectives of collating and synthesizing biological data for the region; however, transfer of the project's outcomes to management has proved problematic. Reasons attributed to this include lack of clearly defined management objectives to guide research directions and uptake, and scientific information not being made more understandable and accessible. A key role of a local bridging organisation (e.g., the Blue Mountains World Heritage Institute) in linking science and management is ensuring that research results with management significance can be effectively transmitted to agencies and that outcomes are explained for nonspecialists as well as more widely distributed. We conclude that improved links between science, policy, and management within an adaptive learning-by-doing framework for the GBMWHA would assist the usefulness and uptake of future research.
Newborn chickens generate invariant object representations at the onset of visual object experience
Wood, Justin N.
2013-01-01
To recognize objects quickly and accurately, mature visual systems build invariant object representations that generalize across a range of novel viewing conditions (e.g., changes in viewpoint). To date, however, the origins of this core cognitive ability have not yet been established. To examine how invariant object recognition develops in a newborn visual system, I raised chickens from birth for 2 weeks within controlled-rearing chambers. These chambers provided complete control over all visual object experiences. In the first week of life, subjects’ visual object experience was limited to a single virtual object rotating through a 60° viewpoint range. In the second week of life, I examined whether subjects could recognize that virtual object from novel viewpoints. Newborn chickens were able to generate viewpoint-invariant representations that supported object recognition across large, novel, and complex changes in the object’s appearance. Thus, newborn visual systems can begin building invariant object representations at the onset of visual object experience. These abstract representations can be generated from sparse data, in this case from a visual world containing a single virtual object seen from a limited range of viewpoints. This study shows that powerful, robust, and invariant object recognition machinery is an inherent feature of the newborn brain. PMID:23918372
Overcoming Dynamic Disturbances in Imaging Systems
NASA Technical Reports Server (NTRS)
Young, Eric W.; Dente, Gregory C.; Lyon, Richard G.; Chesters, Dennis; Gong, Qian
2000-01-01
We develop and discuss a methodology with the potential to yield a significant reduction in complexity, cost, and risk of space-borne optical systems in the presence of dynamic disturbances. More robust systems almost certainly will be a result as well. Many future space-based and ground-based optical systems will employ optical control systems to enhance imaging performance. The goal of the optical control subsystem is to determine the wavefront aberrations and remove them, ideally reducing an aberrated image of the object under investigation to a sufficiently clear (usually diffraction-limited) image. Control will likely be distributed over several elements. These elements may include telescope primary segments, telescope secondary, telescope tertiary, deformable mirror(s), fine steering mirror(s), etc. The last two elements, in particular, may have to provide dynamic control. These control subsystems may become elaborate indeed. But robust system performance will require evaluation of the image quality over a substantial range and in a dynamic environment. Candidate systems for improvement in the Earth Sciences Enterprise could include next generation Landsat systems or atmospheric sensors for dynamic imaging of individual, severe storms. The technology developed here could have a substantial impact on the development of new systems in the Space Science Enterprise, such as the Next Generation Space Telescope (NGST) and its follow-on, the Next NGST. Large interferometric systems of non-zero field, such as Planet Finder and the Submillimeter Probe of the Evolution of Cosmic Structure, could benefit. These systems most likely will contain large, flexible optomechanical structures subject to dynamic disturbance. Furthermore, large systems for high resolution imaging of planets or the sun from space may also benefit. Tactical and Strategic Defense systems will need to image very small targets as well and could benefit from the technology developed here. We discuss a novel speckle imaging technique with the potential to separate dynamic aberrations from static aberrations. Post-processing of a set of image data, using an algorithm based on this technique, should work for all but the lowest light levels and highest frequency dynamic environments. This technique may serve to reduce the complexity of the control system and provide for robust, fault-tolerant, reduced-risk operation. For a given object, a short exposure image is "frozen" on the focal plane in the presence of the environmental disturbance (turbulence, jitter, etc.). A key factor is that this imaging data exhibits frame-to-frame linear shift invariance. Therefore, although the point spread function is varying from frame to frame, the source is fixed; and each short exposure contains object spectrum data out to the diffraction limit of the imaging system. This novel speckle imaging technique uses the Knox-Thompson method. The magnitude of the complex object spectrum is straightforward to determine by well-established approaches. The phase of the complex object spectrum is decomposed into two parts. One is a single-valued function determined by the divergence of the optical phase gradient. The other is a multi-valued function determined by the circulation of the optical phase gradient, the "hidden phase." Finite difference equations are developed for the phase. The novelty of this approach is captured in the inclusion of this "hidden phase."
This technique allows the diffraction-limited reconstruction of the object from the ensemble of short exposure frames while simultaneously estimating the phase as a function of time from a set of exposures.
The IRMIS object model and services API.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, C.; Dohan, D. A.; Arnold, N. D.
2005-01-01
The relational model developed for the Integrated Relational Model of Installed Systems (IRMIS) toolkit has been successfully used to capture the Advanced Photon Source (APS) control system software (EPICS process variables and their definitions). The relational tables are populated by a crawler script that parses each Input/Output Controller (IOC) start-up file when an IOC reboot is detected. User interaction is provided by a Java Swing application that acts as a desktop for viewing the process variable information. Mapping between the display objects and the relational tables was carried out with the Hibernate Object Relational Mapping (ORM) framework. Work is well underway at the APS to extend the relational modeling to include control system hardware. For this work, due in part to the complex user interaction required, the primary application development environment has shifted from the relational database view to the object-oriented (Java) perspective. With this approach, the business logic is executed in Java rather than in SQL stored procedures. This paper describes the object model used to represent control system software, hardware, and interconnects in IRMIS. We also describe the services API used to encapsulate the required behaviors for creating and maintaining the complex data. In addition to the core schema and object model, many important concepts in IRMIS are captured by the services API. IRMIS is an ambitious collaborative effort for defining and developing a relational database and associated applications to comprehensively document the large and complex EPICS-based control systems of today's accelerators. The documentation effort includes process variables, control system hardware, and interconnections. The approach could also be used to document all components of the accelerator, including mechanical, vacuum, power supplies, etc. One key aspect of IRMIS is that it is a documentation framework, not a design and development tool. We do not generate EPICS control system configurations from IRMIS, and hence do not impose any additional requirements on EPICS developers.
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance, and completely eliminates inter-process overlap.
Three-dimensional nanoscale imaging by plasmonic Brownian microscopy
NASA Astrophysics Data System (ADS)
Labno, Anna; Gladden, Christopher; Kim, Jeongmin; Lu, Dylan; Yin, Xiaobo; Wang, Yuan; Liu, Zhaowei; Zhang, Xiang
2017-12-01
Three-dimensional (3D) imaging at the nanoscale is a key to the understanding of nanomaterials and complex systems. While scanning probe microscopy (SPM) has been the workhorse of nanoscale metrology, its slow scanning speed with a single probe tip can limit the application of SPM to wide-field imaging of 3D complex nanostructures. Both electron microscopy and optical tomography allow 3D imaging, but are limited to use in a vacuum environment, due to electron scattering, and to micron-scale optical resolution, respectively. Here we demonstrate plasmonic Brownian microscopy (PBM) as a way to improve the imaging speed of SPM. Unlike photonic force microscopy, where a single trapped particle is used for serial scanning, PBM utilizes a massive number of plasmonic nanoparticles (NPs) under Brownian diffusion in solution to scan in parallel around the unlabeled sample object. The motion of NPs under an evanescent field is three-dimensionally localized to reconstruct the super-resolution topology of 3D dielectric objects. Our method allows high throughput imaging of complex 3D structures over a large field of view, even with internal structures such as cavities that cannot be accessed by conventional mechanical tips in SPM.
Reflecting on explanatory ability: A mechanism for detecting gaps in causal knowledge.
Johnson, Dan R; Murphy, Meredith P; Messer, Riley M
2016-05-01
People frequently overestimate their understanding, with a particularly large blind spot for gaps in their causal knowledge. We introduce a metacognitive approach to reducing overestimation, termed reflecting on explanatory ability (REA), which is briefly thinking about how well one could explain something in a mechanistic, step-by-step, causally connected manner. Nine experiments demonstrated that engaging in REA just before estimating one's understanding substantially reduced overestimation. Moreover, REA reduced overestimation with nearly the same potency as generating full explanations, but did so 20 times faster (although only for high complexity objects). REA substantially reduced overestimation by inducing participants to quickly evaluate an object's inherent causal complexity (Experiments 4-7). REA reduced overestimation by also fostering step-by-step, causally connected processing (Experiments 2 and 3). Alternative explanations for REA's effects were ruled out, including a general conservatism account (Experiments 4 and 5) and a covert explanation account (Experiment 8). REA's overestimation-reduction effect generalized beyond objects (Experiments 1-8) to sociopolitical policies (Experiment 9). REA efficiently detects gaps in our causal knowledge, with implications for improving self-directed learning, enhancing self-insight into vocational and academic abilities, and even reducing extremist attitudes.
Quantifying quality in DNA self-assembly
Wagenbauer, Klaus F.; Wachauf, Christian H.; Dietz, Hendrik
2014-01-01
Molecular self-assembly with DNA is an attractive route for building nanoscale devices. The development of sophisticated and precise objects with this technique requires detailed experimental feedback on the structure and composition of assembled objects. Here we report a sensitive assay for the quality of assembly. The method relies on measuring the content of unpaired DNA bases in self-assembled DNA objects using a fluorescent de-Bruijn probe for three-base ‘codons’, which enables a comparison with the designed content of unpaired DNA. We use the assay to measure the quality of assembly of several multilayer DNA origami objects and illustrate the use of the assay for the rational refinement of assembly protocols. Our data suggests that large and complex objects like multilayer DNA origami can be made with high strand integration quality up to 99%. Beyond DNA nanotechnology, we speculate that the ability to discriminate unpaired from paired nucleic acids in the same macromolecule may also be useful for analysing cellular nucleic acids. PMID:24751596
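The "de-Bruijn probe for three-base 'codons'" rests on a classic combinatorial object: a cyclic sequence in which every three-base word over {A, C, G, T} appears exactly once (length 64). The sketch below generates such a sequence with the standard Lyndon-word construction; the probe chemistry and fluorescence readout in the paper are of course not captured here.

```python
def de_bruijn(alphabet, n):
    """Shortest cyclic sequence containing every length-n word exactly once
    (standard Lyndon-word construction of the de Bruijn sequence B(k, n))."""
    k = len(alphabet)
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in seq)

probe = de_bruijn("ACGT", 3)   # 64 nt; covers all 64 three-base codons
```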
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Jiangye
Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels from images, as they are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of training data that are manually labeled.
Identifying online user reputation of user-object bipartite networks
NASA Astrophysics Data System (ADS)
Liu, Xiao-Lu; Liu, Jian-Guo; Yang, Kai; Guo, Qiang; Han, Jing-Ti
2017-02-01
Identifying online user reputation based on the rating information of user-object bipartite networks is important for understanding online user collective behaviors. Based on Bayesian analysis, we present a parameter-free algorithm for ranking online user reputation, where the user reputation is calculated from the probability that their ratings are consistent with the main part of all user opinions. The experimental results show that the AUC values of the presented algorithm reach 0.8929 and 0.8483 for the MovieLens and Netflix data sets, respectively, which is better than the results generated by the CR and IARR methods. Furthermore, the experimental results for different user groups indicate that the presented algorithm outperforms the iterative ranking methods in both ranking accuracy and computational complexity. Moreover, the results for synthetic networks show that the computational complexity of the presented algorithm is a linear function of the network size, which suggests that the presented algorithm is very effective and efficient for large-scale dynamic online systems.
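A drastically simplified, parameter-free stand-in for the ranking idea, reputation as agreement with the mainstream opinion on each object, is sketched below; the paper's Bayesian formulation and its AUC evaluation are not reproduced here.

```python
import numpy as np
from scipy import stats

def reputations(R):
    """R: (users x objects) rating matrix with np.nan for missing ratings.
    A user's reputation is the fraction of their ratings that agree with
    the per-object majority rating."""
    majority = stats.mode(R, axis=0, nan_policy="omit").mode
    rated = ~np.isnan(R)
    agree = (R == majority) & rated
    return agree.sum(axis=1) / np.maximum(rated.sum(axis=1), 1)
```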
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Ancel, Ersin; Jones, Sharon M.
2012-01-01
The concern for reducing aviation safety risk is rising as the National Airspace System in the United States transforms to the Next Generation Air Transportation System (NextGen). The NASA Aviation Safety Program is committed to developing an effective aviation safety technology portfolio to meet the challenges of this transformation and to mitigate relevant safety risks. This paper focuses on the reasoning behind selecting Object-Oriented Bayesian Networks (OOBN) as the technique and commercial software for accident modeling and portfolio assessment. To illustrate the benefits of OOBN in a large and complex aviation accident model, the in-flight Loss-of-Control Accident Framework (LOCAF), constructed as an influence diagram, is presented. An OOBN approach not only simplifies the construction and maintenance of complex causal networks for modelers, but also offers a well-organized hierarchical network that makes it easier for decision makers to use the model to examine the effectiveness of risk mitigation strategies through technology insertions.
NASA Astrophysics Data System (ADS)
Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris
2015-04-01
Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
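As a hedged illustration of objective (b), linearizing the per-time-step water/energy allocation and solving it as a small linear program, consider the following sketch; the network, coefficients, and bounds are invented placeholders, not the authors' model.

```python
from scipy.optimize import linprog

# One time step of a toy water-energy allocation, linearized.
# Variables x = [release_to_turbine, release_to_supply, spill] (hm3/step).
storage = 120.0          # water available this step (hm3), assumed
turbine_cap = 60.0       # max water through the turbine (hm3/step), assumed
demand = 50.0            # downstream water demand (hm3/step), assumed
energy_per_hm3 = 0.35    # GWh per hm3, a made-up linearized coefficient

# Maximize energy production plus met demand -> minimize the negative.
c = [-energy_per_hm3, -1.0, 0.0]

# Total release cannot exceed storage; supply cannot exceed demand.
A_ub = [[1.0, 1.0, 1.0]]
b_ub = [storage]
bounds = [(0, turbine_cap), (0, demand), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # e.g. [60., 50., 10.]: turbine at capacity, demand fully met
```

Each simulation step solves one such small problem, which is why the per-step cost stays low even over long synthetic input series; the paper's framework uses fast network-flow formulations rather than a generic LP solver.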
A. Smith, Nicholas; A. Folland, Nicholas; Martinez, Diana M.; Trainor, Laurel J.
2017-01-01
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data
Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.
2013-01-01
Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the “big data” challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719
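The authors' actual MapReduce implementation is not shown here; the single-process Python sketch below illustrates the partition-merge idea behind their parallel spatial joins, with a toy grid partitioner and invented bounding boxes.

```python
from collections import defaultdict

def map_phase(objects, tile=100):
    """Partition: emit (tile, object). Objects whose bounding box touches
    several tiles are replicated into each, so local joins stay complete."""
    parts = defaultdict(list)
    for oid, (xmin, ymin, xmax, ymax) in objects.items():
        for tx in range(int(xmin // tile), int(xmax // tile) + 1):
            for ty in range(int(ymin // tile), int(ymax // tile) + 1):
                parts[(tx, ty)].append((oid, (xmin, ymin, xmax, ymax)))
    return parts

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def reduce_phase(parts):
    """Local join per tile, then merge: the set de-duplicates pairs found
    in several tiles by objects that span tile boundaries."""
    result = set()
    for items in parts.values():
        for i, (id1, box1) in enumerate(items):
            for id2, box2 in items[i + 1:]:
                if overlaps(box1, box2):
                    result.add(tuple(sorted((id1, id2))))
    return result

objects = {"n1": (10, 10, 20, 20), "n2": (15, 15, 30, 30), "v1": (95, 95, 105, 105)}
print(reduce_phase(map_phase(objects)))  # {('n1', 'n2')}
```

In the real system the map and reduce phases run on separate cluster nodes, and cost-based optimization rebalances skewed tiles; both aspects are omitted here.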
Visual short-term memory capacity for simple and complex objects.
Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto
2010-03-01
Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) and are not related to storage limitations of VSTM per se. We used ERPs to track neuronal activity specifically related to retention in VSTM by measuring the sustained posterior contralateral negativity during a change detection task (which required detecting whether an item had changed between a memory and a test array). The sustained posterior contralateral negativity during the retention interval was larger for complex objects than for simple objects, suggesting that neurons mediating VSTM needed to work harder to maintain more complex objects. This, in turn, is consistent with the view that VSTM capacity depends on complexity.
Local laser-strengthening: Customizing the forming behavior of car body steel sheets
NASA Astrophysics Data System (ADS)
Wagner, M.; Jahn, A.; Beyer, E.; Balzani, D.
2018-05-01
Future trends in designing lightweight components, especially for automotive applications, increasingly require complex and delicate structures with the highest possible level of capacity [1]. The manufacturing of metallic car body components is primarily realized by deep or stretch drawing. The forming of especially cold-rolled and large-sized components is typically characterized by inhomogeneous stress and strain distributions. As a result, the avoidance of undesirable deep-drawing effects like earing and local necking is among the greatest challenges in forming complex car body structures [2]. Hence, a novel local laser-treatment approach with the objective of customizing the forming behavior of car body steel sheets is currently being explored.
Utilizing OODB schema modeling for vocabulary management.
Gu, H.; Cimino, J. J.; Halper, M.; Geller, J.; Perl, Y.
1996-01-01
Comprehension of complex controlled vocabularies is often difficult. We present a method, facilitated by an object-oriented database, for depicting such a vocabulary (the Medical Entities Dictionary (MED) from the Columbia-Presbyterian Medical Center) in a schematic way which uses a sparse inheritance network of area classes. The resulting Object-Oriented Health Vocabulary Repository (OOHVR) allows visualization of the 43,000 MED concepts as 90 area classes. This view has provided valuable information to those responsible for maintaining the MED. As a result, the MED organization has been improved and some previously unrecognized errors and inconsistencies have been removed. We believe that this schematic approach allows improved comprehension of the gestalt of a large controlled medical vocabulary. PMID:8947671
Laser-assisted guiding of electric discharges around objects
Clerici, Matteo; Hu, Yi; Lassonde, Philippe; Milián, Carles; Couairon, Arnaud; Christodoulides, Demetrios N.; Chen, Zhigang; Razzari, Luca; Vidal, François; Légaré, François; Faccio, Daniele; Morandotti, Roberto
2015-01-01
Electric breakdown in air occurs for electric fields exceeding 34 kV/cm and results in a large current surge that propagates along unpredictable trajectories. Guiding such currents across specific paths in a controllable manner could allow protection against lightning strikes and high-voltage capacitor discharges. Such capabilities can be used for delivering charge to specific targets, for electronic jamming, or for applications associated with electric welding and machining. We show that judiciously shaped laser radiation can be effectively used to manipulate the discharge along a complex path and to produce electric discharges that unfold along a predefined trajectory. Remarkably, such laser-induced arcing can even circumvent an object that completely occludes the line of sight. PMID:26601188
Object-oriented fault tree evaluation program for quantitative analyses
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1988-01-01
Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor which was modified to display and edit the fault trees.
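The original program ran in Flavors LISP on a TI Explorer. As a loose modern analogue of the idea (reliability data stored in fault-tree objects that evaluate themselves), here is a minimal Python sketch; the tree structure and failure probabilities are invented.

```python
class Event:
    """Basic event with a failure probability; gates compose events."""
    def __init__(self, name, p=0.0):
        self.name, self.p = name, p

    def probability(self):
        return self.p

class Gate(Event):
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

class AndGate(Gate):
    def probability(self):
        # Independent inputs: all children must fail.
        prob = 1.0
        for c in self.children:
            prob *= c.probability()
        return prob

class OrGate(Gate):
    def probability(self):
        # Independent inputs: at least one child fails.
        prob = 1.0
        for c in self.children:
            prob *= (1.0 - c.probability())
        return 1.0 - prob

# Toy tree: the system fails if the pump fails OR both valves fail.
tree = OrGate("system", [
    Event("pump", 0.01),
    AndGate("valves", [Event("valve_a", 0.05), Event("valve_b", 0.05)]),
])
print(tree.probability())  # 0.012475
```

Because each node is an object, intermediate results and reliability data can live on the nodes themselves, which is the storage-and-retrieval benefit the abstract describes.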
Nie, Yan; Viola, Cristina; Bieniossek, Christoph; Trowitzsch, Simon; Vijay-achandran, Lakshmi Sumitra; Chaillet, Maxime; Garzoni, Frederic; Berger, Imre
2009-01-01
We are witnessing tremendous advances in our understanding of the organization of life. Complete genomes are being deciphered with ever increasing speed and accuracy, thereby setting the stage for addressing the entire gene product repertoire of cells, towards understanding whole biological systems. Advances in bioinformatics and mass spectrometric techniques have revealed the multitude of interactions present in the proteome. Multiprotein complexes are emerging as a paramount cornerstone of biological activity, as many proteins appear to participate, stably or transiently, in large multisubunit assemblies. Analysis of the architecture of these assemblies and their manifold interactions is imperative for understanding their function at the molecular level. Structural genomics efforts have fostered the development of many technologies towards achieving the throughput required for studying system-wide single proteins and small interaction motifs at high resolution. The present shift in focus towards large multiprotein complexes, in particular in eukaryotes, now calls for a likewise concerted effort to develop and provide new technologies that are urgently required to produce in quality and quantity the plethora of multiprotein assemblies that form the complexome, and to routinely study their structure and function at the molecular level. Current efforts towards this objective are summarized and reviewed in this contribution. PMID:20514218
Comparison of sine dwell and broadband methods for modal testing
NASA Technical Reports Server (NTRS)
Chen, Jay-Chung
1989-01-01
The objectives of modal tests for large complex spacecraft structural systems are outlined. The comparison criteria for the modal test methods, namely, the broadband excitation and the sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantage or disadvantage of each method is examined. The usefulness or shortcomings of the methods are given from a practical engineering viewpoint.
1990-05-01
of static and dynamic resource allocation. * Develop a wide-spectrum requirements engineering language that meets the objectives defined in this... within the next few years. The TTCP Panel will closely monitor future developments in this area, and will fully consider this suggestion. Chairman... experience has shown that, especially for large and complex system developments, it is rare that the true needs of all stakeholders are fully stated
The Complexity of Quantitative Concurrent Parity Games
2004-11-01
for each player. In this paper we study only zero-sum games [20, 11], where the objectives of the two players are strictly competitive. In other words... Aided Verification, volume 1102 of LNCS, pages 75–86. Springer, 1996. [14] R.J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple... strategies. In EC 03: Electronic Commerce, pages 36–41. ACM Press, 2003. 28 [15] D.A. Martin. The determinacy of Blackwell games. The Journal of Symbolic
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a-priori grid alignment with image structures/areas that are expected to deform more. This allows (far) less grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system-level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system.
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
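The fully factorized Boyen-Koller update mentioned above can be sketched compactly: propagate the product of per-variable marginals through the exact transition model, then project the resulting joint back onto its marginals. The two-variable DBN below and all of its numbers are invented, and the evidence-update step is omitted.

```python
import numpy as np

# Fully factorized Boyen-Koller sketch for a DBN with two binary state
# variables A and B. Transition model (invented): A' depends on A, and
# B' depends on both A and B, so the exact joint becomes correlated.
T_A = np.array([[0.9, 0.1],           # T_A[a, a'] = P(A'=a' | A=a)
                [0.2, 0.8]])
T_B = np.zeros((2, 2, 2))             # T_B[a, b, b'] = P(B'=b' | A=a, B=b)
T_B[0] = [[0.7, 0.3], [0.4, 0.6]]
T_B[1] = [[0.5, 0.5], [0.1, 0.9]]

def bk_step(bel_a, bel_b):
    # Form the approximate joint from the current product of marginals
    # and push it through the exact transition ...
    joint = np.einsum("a,b,ax,aby->xy", bel_a, bel_b, T_A, T_B)
    # ... then project: keep only the marginals (the BK approximation).
    return joint.sum(axis=1), joint.sum(axis=0)

bel_a, bel_b = np.array([1.0, 0.0]), np.array([1.0, 0.0])
for _ in range(3):
    bel_a, bel_b = bk_step(bel_a, bel_b)
print(bel_a, bel_b)   # marginal beliefs after three filtering steps
```

The projection deliberately discards the correlation between A and B after each step; that loss is the price paid for keeping each agent's belief compact.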
NASA Astrophysics Data System (ADS)
Masson, Andre; Schulte In den Baeumen, J.; Zuegge, Hannfried
1989-04-01
Recent advances in the design of large optical components are discussed in reviews and reports. Sections are devoted to calculation and optimization methods, optical-design software, IR optics, diagnosis and tolerancing, image formation, lens design, and large optics. Particular attention is given to the use of the pseudoeikonal in optimization, design with nonsequential ray tracing, aspherics and color-correcting elements in the thermal IR, on-line interferometric mirror-deforming measurement with an Ar-ion laser, and the effect of ametropia on laser-interferometric visual acuity. Also discussed are a holographic head-up display for air and ground applications, high-performance objectives for a digital CCD telecine, the optics of the ESO Very Large Telescope, static wavefront correction by Linnik interferometry, and memory-saving techniques in damped least-squares optimization of complex systems.
Large-eddy simulation of a boundary layer with concave streamwise curvature
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1994-01-01
Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.
Control of fluxes in metabolic networks.
Basler, Georg; Nikoloski, Zoran; Larhlimi, Abdelhalim; Barabási, Albert-László; Liu, Yang-Yu
2016-07-01
Understanding the control of large-scale metabolic networks is central to biology and medicine. However, existing approaches either require specifying a cellular objective or can only be used for small networks. We introduce new coupling types describing the relations between reaction activities, and develop an efficient computational framework, which does not require any cellular objective for systematic studies of large-scale metabolism. We identify the driver reactions facilitating control of 23 metabolic networks from all kingdoms of life. We find that unicellular organisms require a smaller degree of control than multicellular organisms. Driver reactions are under complex cellular regulation in Escherichia coli, indicating their preeminent role in facilitating cellular control. In human cancer cells, driver reactions play pivotal roles in malignancy and represent potential therapeutic targets. The developed framework helps us gain insights into regulatory principles of diseases and facilitates design of engineering strategies at the interface of gene regulation, signaling, and metabolism. © 2016 Basler et al.; Published by Cold Spring Harbor Laboratory Press.
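The paper's coupling types are richer than what follows, but the flavor of objective-free analysis can be conveyed by a standard flux-coupling test: pin one reaction's flux and bound another's by solving two small linear programs. The toy pathway is invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites, cols: reactions):
# R0: -> M1,   R1: M1 -> M2,   R2: M2 ->   (a linear pathway, invented)
S = np.array([[1, -1,  0],
              [0,  1, -1]])
n = S.shape[1]

def flux_range(i, j, vmax=1000.0):
    """Min and max of v_i subject to S v = 0, v_j = 1, 0 <= v <= vmax.
    A fixed value (min == max) means reactions i and j are fully coupled."""
    bounds = [(0, vmax)] * n
    bounds[j] = (1.0, 1.0)            # pin v_j = 1
    b_eq = np.zeros(S.shape[0])
    lo = linprog(np.eye(n)[i], A_eq=S, b_eq=b_eq, bounds=bounds)
    hi = linprog(-np.eye(n)[i], A_eq=S, b_eq=b_eq, bounds=bounds)
    return lo.fun, -hi.fun

print(flux_range(0, 2))  # ≈ (1.0, 1.0): R0 and R2 are fully coupled
```

No cellular objective function appears anywhere: only the steady-state constraint S v = 0 is used, which is the property the abstract emphasizes.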
Omoto, Jaison Jiro; Keleş, Mehmet Fatih; Nguyen, Bao-Chau Minh; Bolanos, Cheyenne; Lovick, Jennifer Kelly; Frye, Mark Arthur; Hartenstein, Volker
2017-04-24
The Drosophila central brain consists of stereotyped neural lineages, developmental-structural units of macrocircuitry formed by the sibling neurons of single progenitors called neuroblasts. We demonstrate that the lineage principle guides the connectivity and function of neurons, providing input to the central complex, a collection of neuropil compartments important for visually guided behaviors. One of these compartments is the ellipsoid body (EB), a structure formed largely by the axons of ring (R) neurons, all of which are generated by a single lineage, DALv2. Two further lineages, DALcl1 and DALcl2, produce neurons that connect the anterior optic tubercle, a central brain visual center, with R neurons. Finally, DALcl1/2 receive input from visual projection neurons of the optic lobe medulla, completing a three-legged circuit that we call the anterior visual pathway (AVP). The AVP bears a fundamental resemblance to the sky-compass pathway, a visual navigation circuit described in other insects. Neuroanatomical analysis and two-photon calcium imaging demonstrate that DALcl1 and DALcl2 form two parallel channels, establishing connections with R neurons located in the peripheral and central domains of the EB, respectively. Although neurons of both lineages preferentially respond to bright objects, DALcl1 neurons have small ipsilateral, retinotopically ordered receptive fields, whereas DALcl2 neurons share a large excitatory receptive field in the contralateral hemifield. DALcl2 neurons become inhibited when the object enters the ipsilateral hemifield and display an additional excitation after the object leaves the field of view. Thus, the spatial position of a bright feature, such as a celestial body, may be encoded within this pathway. Copyright © 2017 Elsevier Ltd. All rights reserved.
Master-slave system with force feedback based on dynamics of virtual model
NASA Technical Reports Server (NTRS)
Nojima, Shuji; Hashimoto, Hideki
1994-01-01
A master-slave system can extend manipulating and sensing capabilities of a human operator to a remote environment. But the master-slave system has two serious problems: one is the mechanically large impedance of the system; the other is the mechanical complexity of the slave for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can help the human operator by using its good points like fast calculation and large memory. The authors suggest that the slave is a dextrous hand with many degrees of freedom able to manipulate an object of known shape. It is further suggested that the dimensions of the remote work space be shared by the human operator and the slave. The effect of the large impedance of the system can be reduced in a virtual model, a physical model constructed in a computer with physical parameters as if it were in the real world. A method to determine the damping parameter dynamically for the virtual model is proposed. Experimental results show that this virtual model is better than the virtual model with fixed damping.
The cosmic ray muon tomography facility based on large scale MRPC detectors
NASA Astrophysics Data System (ADS)
Wang, Xuewu; Zeng, Ming; Zeng, Zhi; Wang, Yi; Zhao, Ziran; Yue, Xiaoguang; Luo, Zhifei; Yi, Hengguan; Yu, Baihui; Cheng, Jianping
2015-06-01
Cosmic ray muon tomography is a novel technology for detecting high-Z materials. A prototype of TUMUTY with 73.6 cm×73.6 cm large-scale position-sensitive MRPC detectors has been developed and is introduced in this paper. Three test kits have been tested and images reconstructed using the MAP algorithm. The reconstruction results show that the prototype is working well and that objects with complex structure and small size (20 mm) can be imaged with it, while high-Z material is distinguishable from low-Z material. This prototype provides a good platform for our further studies of the physical characteristics and performance of cosmic ray muon tomography.
NASA Astrophysics Data System (ADS)
Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen
2015-11-01
We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
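The snippet below is not pyunicorn's API; it is a bare-bones numpy illustration of one structure the package constructs, a recurrence network, built here from a toy random-walk series via time-delay embedding.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=300))          # a toy univariate series

# Time-delay embedding (dimension 2, delay 3), as in recurrence analysis.
m, tau = 2, 3
emb = np.column_stack([x[i * tau: len(x) - (m - 1 - i) * tau] for i in range(m)])

# Recurrence matrix: 1 where two state vectors are within epsilon.
dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = np.percentile(dist, 5)                 # fix the recurrence rate near 5%
A = (dist < eps).astype(int)
np.fill_diagonal(A, 0)                       # no self-loops

# The adjacency matrix A now defines a network whose nodes are time
# points; ordinary network measures become time-series diagnostics.
print("mean degree:", A.sum(axis=1).mean())
```

pyunicorn wraps this kind of construction (and climate networks, visibility graphs, surrogates, and much more) behind optimized, parallelizable classes; consult its documentation for the actual interfaces.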
Shape and color conjunction stimuli are represented as bound objects in visual working memory.
Luria, Roy; Vogel, Edward K
2011-05-01
The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building blocks of visual WM, so that adding an extra feature to an object does not result in any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity, so that they may not be represented as bound objects. Additionally, it has been argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (the CDA) as an electrophysiological marker of WM capacity to test these alternative hypotheses to the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
2014-07-31
Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e. the alpha dynamo effect)? How could the topological properties and the complexity of the magnetic field as a global quantity, the so-called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (Magnetohydrodynamics - MHD model), where plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (so-called MRI small-scale and large-scale dynamos) and its role in Magneto-rotational Instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.
Reengineering legacy software to object-oriented systems
NASA Technical Reports Server (NTRS)
Pitman, C.; Braley, D.; Fridge, E.; Plumb, A.; Izygon, M.; Mears, B.
1994-01-01
NASA has a legacy of complex software systems that are becoming increasingly expensive to maintain. Reengineering is one approach to modernizing these systems. Object-oriented technology, other modern software engineering principles, and automated tools can be used to reengineer the systems and will help to keep maintenance costs of the modernized systems down. The Software Technology Branch at the NASA/Johnson Space Center has been developing and testing reengineering methods and tools for several years. The Software Technology Branch is currently providing training and consulting support to several large reengineering projects at JSC, including the Reusable Objects Software Environment (ROSE) project, which is reengineering the flight analysis and design system (over 2 million lines of FORTRAN code) into object-oriented C++. Many important lessons have been learned during the past years; one of these is that the design must never be allowed to diverge from the code during maintenance and enhancement. Future work on open, integrated environments to support reengineering is being actively planned.
Structure, thermodynamics, and solubility in tetromino fluids.
Barnes, Brian C; Siderius, Daniel W; Gelb, Lev D
2009-06-16
To better understand the self-assembly of small molecules and nanoparticles adsorbed at interfaces, we have performed extensive Monte Carlo simulations of a simple lattice model based on the seven hard "tetrominoes", connected shapes that occupy four lattice sites. The equations of state of the pure fluids and all of the binary mixtures are determined over a wide range of density, and a large selection of multicomponent mixtures are also studied at selected conditions. Calculations are performed in the grand canonical ensemble and are analogous to real systems in which molecules or nanoparticles reversibly adsorb to a surface or interface from a bulk reservoir. The model studied is athermal; objects in these simulations avoid overlap but otherwise do not interact. As a result, all of the behavior observed is entropically driven. The one-component fluids all exhibit marked self-ordering tendencies at higher densities, with quite complex structures formed in some cases. Significant clustering of objects with the same rotational state (orientation) is also observed in some of the pure fluids. In all of the binary mixtures, the two species are fully miscible at large scales, but exhibit strong species-specific clustering (segregation) at small scales. This behavior persists in multicomponent mixtures; even in seven-component mixtures of all the shapes there is significant association between objects of the same shape. To better understand these phenomena, we calculate the second virial coefficients of the tetrominoes and related quantities, extract thermodynamic volume of mixing data from the simulations of binary mixtures, and determine Henry's law solubilities for each shape in a variety of solvents. The overall picture obtained is one in which complementarity of both the shapes of individual objects and the characteristic structures of different fluids are important in determining the overall behavior of a fluid of a given composition, with sometimes counterintuitive results. Finally, we note that no sharp phase transitions are observed but that this appears to be due to the small size of the objects considered. It is likely that complex phase behavior may be found in systems of larger polyominoes.
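A stripped-down sketch of the kind of simulation described, grand-canonical Monte Carlo of hard lattice shapes where acceptance depends only on overlap, is given below. It uses a single tetromino in a fixed orientation on a periodic lattice; the activity and lattice size are arbitrary, and rotations and multiple species are omitted.

```python
import random

L = 20                        # lattice side (periodic), toy size
z = 0.5                       # reservoir activity, assumed
SHAPE = [(0, 0), (1, 0), (2, 0), (1, 1)]   # the "T" tetromino, one orientation

occupied = set()              # lattice sites covered by placed pieces
pieces = []                   # list of site-tuples, one entry per piece

def sites_at(x, y):
    return [((x + dx) % L, (y + dy) % L) for dx, dy in SHAPE]

def mc_sweep(steps):
    for _ in range(steps):
        if random.random() < 0.5:                       # insertion attempt
            cells = sites_at(random.randrange(L), random.randrange(L))
            if all(c not in occupied for c in cells):   # hard core: no overlap
                # Grand-canonical acceptance: min(1, zV / (N + 1)).
                if random.random() < min(1.0, z * L * L / (len(pieces) + 1)):
                    pieces.append(cells)
                    occupied.update(cells)
        elif pieces:                                    # deletion attempt
            idx = random.randrange(len(pieces))         # pick a random piece
            # Grand-canonical acceptance: min(1, N / (zV)).
            if random.random() < min(1.0, len(pieces) / (z * L * L)):
                occupied.difference_update(pieces.pop(idx))

mc_sweep(20000)
print(len(pieces), "pieces;", len(occupied) / L**2, "coverage")
```

Because the model is athermal, the Metropolis ratio contains no energy term; packing entropy alone drives the ordering the abstract reports, which only emerges once rotations and all seven shapes are included.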
Building a global business continuity programme.
Lazcano, Michael
2014-01-01
Business continuity programmes provide an important function within organisations, especially when aligned with and supportive of the organisation's goals, objectives and organisational culture. Continuity programmes for large, complex international organisations, unlike those for compact national companies, are more difficult to design, build, implement and maintain. Programmes for international organisations require attention to structural design, support across organisational leadership and hierarchy, seamless integration with the organisation's culture, measured success and demonstrated value. This paper details practical, but sometimes overlooked considerations for building successful global business continuity programmes.
Science information systems: Archive, access, and retrieval
NASA Technical Reports Server (NTRS)
Campbell, William J.
1991-01-01
The objective of this research is to develop technology for the automated characterization and interactive retrieval and visualization of very large, complex scientific data sets. Technologies will be developed for the following specific areas: (1) rapidly archiving data sets; (2) automatically characterizing and labeling data in near real-time; (3) providing users with the ability to browse contents of databases efficiently and effectively; (4) providing users with the ability to access and retrieve system-independent data sets electronically; and (5) automatically alerting scientists to anomalies detected in data.
An Analysis of the Impact of Multi-Year Procurement on Weapon System Acquisition
1981-09-01
contractor's accounting system. The cost must be identifiable to all cost objectives and allocated to each based on consistent and equitable... ity of the U.S. Defense industry to react in times of crisis. Growing evidence indicates that these cost and efficiency problems have been caused, in... politics [84:4]." Increased complexity and expanded technology of today's defense systems account for a large portion of this cost growth (14:4; 85:26
Conservation of design knowledge. [of large complex spaceborne systems
NASA Technical Reports Server (NTRS)
Sivard, Cecilia; Zweben, Monte; Cannon, David; Lakin, Fred; Leifer, Larry
1989-01-01
This paper presents an approach for acquiring knowledge about a design during the design process. The objective is to increase the efficiency of the lifecycle management of a space-borne system by providing operational models of the system's structure and behavior, as well as the design rationale, to human and automated operators. A design knowledge acquisition system is under development that compares how two alternative design versions meet the system requirements as a means for automatically capturing rationale for design changes.
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object oriented, easy-to-use, high performance, C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Segmentation of Unstructured Datasets
NASA Technical Reports Server (NTRS)
Bhat, Smitha
1996-01-01
Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
NASA Astrophysics Data System (ADS)
Bezruczko, N.; Stanley, T.; Battle, M.; Latty, C.
2016-11-01
Despite broad sweeping pronouncements by international research organizations that social sciences are being integrated into global research programs, little attention has been directed toward obstacles blocking productive collaborations. In particular, social sciences routinely implement nonlinear, ordinal measures, which fundamentally inhibit integration with overarching scientific paradigms. The widely promoted general linear model in contemporary social science methods is largely based on untransformed scores and ratings, which are neither objective nor linear. This issue has historically separated physical and social sciences, which this report now asserts is unnecessary. In this research, nonlinear, subjective caregiver ratings of confidence to care for children supported by complex medical technologies were transformed to an objective scale defined by logits (N=70). Transparent linear units from this transformation provided foundational insights into measurement properties of a social-humanistic caregiving construct, which clarified physical and social caregiver implications. Parameterized items and ratings were also subjected to multivariate hierarchical analysis, then decomposed to demonstrate theoretical coherence (R2 > .50), which provided further support for convergence of mathematical parameterization, physical expectations, and a social-humanistic construct. These results present substantial support for improving integration of social sciences with contemporary scientific research programs by emphasizing construction of common variables with objective, linear units.
Object-oriented Approach to High-level Network Monitoring and Management
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2000-01-01
An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing, investigating methods to build high-level monitoring systems that are built on top of existing monitoring tools. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we take an object-oriented approach to the design: first, we use UML (Unified Modeling Language) to model users' requirements; second, we identify the existing capabilities of the underlying monitoring system; third, we try to map the former with the latter. The system is built using the APIs offered by the HP OpenView system.
Clinical simulation as a boundary object in design of health IT-systems.
Rasmussen, Stine Loft; Jensen, Sanne; Lyng, Karen Marie
2013-01-01
Healthcare organizations are very complex, holding numerous stakeholders with various approaches and goals towards the design of health IT-systems. Some of these differences may be approached by applying the concept of boundary objects in a participatory IT-design process. Traditionally clinical simulation provides the opportunity to evaluate the design and the usage of clinical IT-systems without endangering the patients and interrupting clinical work. In this paper we present how clinical simulation additionally holds the potential to function as a boundary object in the design process. The case points out that clinical simulation provides an opportunity for discussions and mutual learning among the various stakeholders involved in design of standardized electronic clinical documentation templates. The paper presents and discusses the use of clinical simulation in the translation, transfer and transformation of knowledge between various stakeholders in a large healthcare organization.
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Expert reasoning within an object-oriented framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohn, S.J.; Pennock, K.A.
1991-10-01
A large number of contaminated waste sites across the United States await site remediation efforts. These sites can be physically complex, composed of multiple, possibly interacting, contaminants distributed throughout one or more media. The Remedial Action Assessment System (RAAS) is being designed and developed to support decisions concerning the selection of remediation alternatives. The goal of this system is to broaden the consideration of remediation alternatives, while reducing the time and cost of making these considerations. The Remedial Action Assessment System was designed and constructed using object-oriented techniques. It is a hybrid system which uses a combination of quantitative and qualitative reasoning to consider and suggest remediation alternatives. The reasoning process that drives this application is centered around an object-oriented organization of remediation technology information. This paper briefly describes the waste remediation problem and then discusses the information structure and organization RAAS utilizes to address it. 4 refs., 4 figs.
Scalable Machine Learning for Massive Astronomical Datasets
NASA Astrophysics Data System (ADS)
Ball, Nicholas M.; Astronomy Data Centre, Canadian
2014-01-01
We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors, and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
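Skytree Server is proprietary, so the sketch below uses scikit-learn as a stand-in for the nearest-neighbor outlier idea, on an invented two-feature 'catalog'; it illustrates the method, not the pipeline used in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy "catalog" of objects with two invented photometric features.
rng = np.random.default_rng(0)
catalog = rng.normal(0.0, 1.0, size=(10000, 2))   # bulk population
catalog[:5] += 8.0                                # five planted outliers

# kNN distances: outliers sit far from their nearest neighbors.
nn = NearestNeighbors(n_neighbors=6).fit(catalog)  # 6 = self + 5 neighbors
dist, _ = nn.kneighbors(catalog)
score = dist[:, 1:].mean(axis=1)                   # mean distance, self excluded

top = np.argsort(score)[-5:]                       # highest scores = outliers
print(sorted(top))                                 # likely recovers indices 0..4
```

The scaling claims in the abstract are about making exactly this kind of query tractable on half a billion rows; brute-force search as written here would not survive at that size.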
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph are simulated, consistently with measured values.
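The step-wise logic can be sketched generically: calibrate one submodel's parameters against its own objective, freeze them, then calibrate the next. The sketch below substitutes SciPy's differential evolution for the Shuffled Complex Evolution algorithm; the 'models' and observations are invented placeholders.

```python
from scipy.optimize import differential_evolution

obs_sr, obs_q = 250.0, 1.8            # "observed" solar radiation and runoff

def sim_sr(p):                        # toy solar-radiation submodel
    return 300.0 * p[0]

def sim_q(p_sr, p):                   # toy runoff submodel, uses frozen p_sr
    return sim_sr(p_sr) * 0.01 * p[0]

# Step 1: calibrate the solar-radiation parameter against its own objective.
step1 = differential_evolution(lambda p: (sim_sr(p) - obs_sr) ** 2,
                               bounds=[(0.1, 2.0)], seed=1)

# Step 2: freeze the SR parameter, calibrate runoff against runoff error.
step2 = differential_evolution(lambda p: (sim_q(step1.x, p) - obs_q) ** 2,
                               bounds=[(0.1, 2.0)], seed=1)

print(step1.x, step2.x)               # sequential, multiple-objective fit
```

Freezing each step's parameters before moving on is what keeps the intermediate model states (radiation, evapotranspiration, water balance) physically consistent with measurements rather than letting the runoff fit absorb their errors.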
The evolution of hydrocarbons past the asymptotic giant branch: the case of MSX SMC 029
NASA Astrophysics Data System (ADS)
Pauly, Tyler; Sloan, Gregory C.; Kraemer, Kathleen E.; Bernard-Salas, Jeronimo; Lebouteiller, Vianney; Goes, Christopher; Barry, Donald
2015-01-01
We present an optimally extracted high-resolution spectrum of MSX SMC 029 obtained by the Infrared Spectrograph on the Spitzer Space Telescope. MSX SMC 029 is a carbon-rich object in the Small Magellanic Cloud that has evolved past the asymptotic giant branch (AGB). The spectrum reveals a cool carbon-rich dust continuum with emission from polycyclic aromatic hydrocarbons (PAHs) and absorption from simpler hydrocarbons, both aliphatic and aromatic, including acetylene and benzene. The spectrum shows many similarities to the carbon-rich post-AGB objects SMP LMC 011 in the Large Magellanic Cloud and AFGL 618 in the Galaxy. Both of these objects also show infrared absorption features from simple hydrocarbons. All three spectra lack strong atomic emission lines in the infrared, indicating that we are observing the evolution of carbon-rich dust and free hydrocarbons in objects between the AGB and planetary nebulae. These three objects give us a unique view of the elusive phase when hydrocarbons exist both as relatively simple molecules and the much more complex and ubiquitous PAHs. We may be witnessing the assembly of amorphous carbon into PAHs.
Marangon, Mattia; Kubiak, Agnieszka; Króliczak, Gregory
2016-01-01
The neural bases of haptically-guided grasp planning and execution are largely unknown, especially for stimuli having no visual representations. Therefore, we used functional magnetic resonance imaging (fMRI) to monitor brain activity during haptic exploration of novel 3D complex objects, subsequent grasp planning, and the execution of the pre-planned grasps. Haptic object exploration, involving extraction of shape, orientation, and length of the to-be-grasped targets, was associated with the fronto-parietal, temporo-occipital, and insular cortex activity. Yet, only the anterior divisions of the posterior parietal cortex (PPC) of the right hemisphere were significantly more engaged in exploration of complex objects (vs. simple control disks). None of these regions were re-recruited during the planning phase. Even more surprisingly, the left-hemisphere intraparietal, temporal, and occipital areas that were significantly invoked for grasp planning did not show sensitivity to object features. Finally, grasp execution, involving the re-recruitment of the critical right-hemisphere PPC clusters, was also significantly associated with two kinds of bilateral parieto-frontal processes. The first represents transformations of grasp-relevant target features and is linked to the dorso-dorsal (lateral and medial) parieto-frontal networks. The second monitors grasp kinematics and belongs to the ventro-dorsal networks. Indeed, signal modulations associated with these distinct functions follow dorso-ventral gradients, with left aIPS showing significant sensitivity to both target features and the characteristics of the required grasp. Thus, our results from the haptic domain are consistent with the notion that the parietal processing for action guidance reflects primarily transformations from object-related to effector-related coding, and these mechanisms are rather independent of sensory input modality. PMID:26779002
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
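For readers unfamiliar with LSH, the sketch below shows one classic variant (random-hyperplane hashing for cosine similarity) on a single machine; the paper's four Hadoop-distributed variants and their optimizations are not reproduced here.

```python
# Minimal random-hyperplane LSH: similar vectors tend to share bucket keys.
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 64, 16
planes = rng.normal(size=(n_bits, dim))   # random hyperplanes

def bucket_key(v):
    return ((planes @ v) > 0).tobytes()   # one sign bit per hyperplane

db = rng.normal(size=(10_000, dim))
buckets = {}
for i, v in enumerate(db):
    buckets.setdefault(bucket_key(v), []).append(i)

query = db[42] + 0.01 * rng.normal(size=dim)   # a lightly perturbed item
print(buckets.get(bucket_key(query), []))      # candidate set; likely holds 42
```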
Development of Three-Dimensional Completion of Complex Objects
ERIC Educational Resources Information Center
Soska, Kasey C.; Johnson, Scott P.
2013-01-01
Three-dimensional (3D) object completion, the ability to perceive the backs of objects seen from a single viewpoint, emerges at around 6 months of age. Yet, only relatively simple 3D objects have been used in assessing its development. This study examined infants' 3D object completion when presented with more complex stimuli. Infants…
Direct-to-digital holography reduction of reference hologram noise and Fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem. The reason for this is that for S = 2 the problem is of polynomial complexity. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g., satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations. This means that the algorithm is restricted to orbits that are described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.
ERIC Educational Resources Information Center
Brady, Timothy F.; Alvarez, George A.
2015-01-01
A central question for models of visual working memory is whether the number of objects people can remember depends on object complexity. Some influential "slot" models of working memory capacity suggest that people always represent 3-4 objects and that only the fidelity with which these objects are represented is affected by object…
A geometric method for nipple localization
Khan, Humayun Ayub; Bayat, Ardeshir
2008-01-01
BACKGROUND: An important part of preoperative assessment in breast reduction surgery is to locate the site of the nipple-areola complex for the newly structured breast. Inappropriate location is difficult to correct secondarily. Traditional methods of nipple localization taught and practiced suggest the nipple to be located anterior to the inframammary fold. Trying to project this point on the anterior surface of the breast requires either large calipers or feeling the posteriorly placed finger on the anterior surface of a large breast. This certainly introduces some subjectivity to the calculation. OBJECTIVES: To introduce an easy and accurate method of nipple localization to reduce the learning curve for trainee surgeons. METHODS: Aesthetic placement of the nipples is at the lower angles of an equilateral or a short isosceles triangle on the chest with its apex at the sternal angle. This triangle can be thought of as two right-angled triangles with their Y-axis on the median plane. The base and vertical limb are measured, and the hypotenuse is calculated. The location of the lower angle is marked on the anterior surface of the breast and represents the new position of the nipple. RESULTS: Forty patients had nipple localization performed in the above-described manner, with satisfactory placement of the nipple-areola complex. CONCLUSIONS: The above technique introduces some objective measurements to the localization of the nipple in breast reduction surgery. It is easy to practice, and infuses confidence in trainees marking their initial breast reductions. PMID:19554165
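A worked example of the calculation (with hypothetical measurements; the paper does not prescribe these numbers): the base and vertical limb of each right-angled triangle are measured on the chest, and the hypotenuse, the distance from the sternal angle to the new nipple position, follows from Pythagoras.

```python
import math

base = 9.5        # cm, base of the right-angled triangle on the chest
vertical = 18.0   # cm, vertical limb along the median plane
hypotenuse = math.hypot(base, vertical)   # sqrt(9.5**2 + 18.0**2)
print(f"sternal angle to new nipple site: {hypotenuse:.1f} cm")  # ~20.4 cm
```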
NASA Astrophysics Data System (ADS)
Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.
2017-12-01
Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching by weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
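A minimal sketch of the weighted crowding distance idea behind WNSGA II, under the assumption that per-objective crowding contributions are scaled by weights before summation (the paper's exact weighting scheme may differ):

```python
import numpy as np

def weighted_crowding_distance(F, w):
    """F: (n_solutions, n_objectives) objective values; w: objective weights."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j] or 1.0
        d[order[0]] = d[order[-1]] = np.inf          # keep boundary points
        # interior points: normalized gap between neighbors, scaled by w[j]
        d[order[1:-1]] += w[j] * (F[order[2:], j] - F[order[:-2], j]) / span
    return d

F = np.array([[1.0, 9.0], [2.0, 6.0], [4.0, 4.0], [8.0, 1.0]])
print(weighted_crowding_distance(F, w=np.array([0.7, 0.3])))
```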
Poston, Lucilla; Briley, Annette L; Barr, Suzanne; Bell, Ruth; Croker, Helen; Coxon, Kirstie; Essex, Holly N; Hunt, Claire; Hayes, Louise; Howard, Louise M; Khazaezadeh, Nina; Kinnunen, Tarja; Nelson, Scott M; Oteng-Ntim, Eugene; Robson, Stephen C; Sattar, Naveed; Seed, Paul T; Wardle, Jane; Sanders, Thomas A B; Sandall, Jane
2013-07-15
Complex interventions in obese pregnant women should be theoretically based, feasible and shown to demonstrate anticipated behavioural change prior to inception of large randomised controlled trials (RCTs). The aims were to determine a) whether a complex intervention in obese pregnant women leads to anticipated changes in diet and physical activity behaviours, and b) to refine the intervention protocol through process evaluation of intervention fidelity. We undertook a pilot RCT of a complex intervention in obese pregnant women, comparing routine antenatal care with an intervention to reduce dietary glycaemic load and saturated fat intake, and increase physical activity. Subjects included 183 obese pregnant women (mean BMI 36.3 kg/m2). Compared to women in the control arm, women in the intervention arm had a significant reduction in dietary glycaemic load (33 points, 95% CI -47 to -20; p < 0.001) and saturated fat intake (-1.6% energy, 95% CI -2.8 to -0.3) at 28 weeks' gestation. Objectively measured physical activity did not change. Physical discomfort and sustained barriers to physical activity were common at 28 weeks' gestation. Process evaluation identified barriers to recruitment, group attendance and compliance, leading to modification of intervention delivery. This pilot trial of a complex intervention in obese pregnant women suggests greater potential for change in dietary intake than for change in physical activity, and through process evaluation illustrates the considerable advantage of performing an exploratory trial of a complex intervention in obese pregnant women before undertaking a large RCT. ISRCTN89971375.
The challenges associated with developing science-based landscape scale management plans
Szaro, Robert C.; Boyce, D.A.; Puchlerz, T.
2005-01-01
Planning activities over large landscapes poses a complex set of challenges when trying to balance the implementation of a conservation strategy while still allowing for a variety of consumptive and nonconsumptive uses. We examine a case in southeast Alaska to illustrate the breadth of these challenges and an approach to developing a science-based resource plan. Not only was the planning area, the Tongass National Forest, USA, exceptionally large (approximately 17 million acres or 6.9 million ha), but it is also primarily an island archipelago environment. The water system surrounding and running through much of the forest provides access that facilitates the movement of people, animals, and plants, but at the same time functions as a barrier to others. The largest temperate rainforest in the world, the Tongass is an exceptional example of the complexity of managing at such a scale, and it also illustrates the role of science in the planning process. As we enter the 21st century, the list of questions needing scientific investigation has not only changed dramatically, but the character of the questions also has changed. Questions are contentious, cover broad scales in space and time, and are highly complex and interdependent. The provision of unbiased and objective information to all stakeholders is an important step in informed decision-making.
Robust selectivity to two-object images in human visual cortex
Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18] but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24], suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
Design and Use of a Learning Object for Finding Complex Polynomial Roots
ERIC Educational Resources Information Center
Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime
2013-01-01
Complex numbers are essential in many fields of engineering, but students often fail to have a natural insight of them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomials has a root and, furthermore, is useful to find the approximate roots of a complex polynomial. Moreover, we…
Modeling the behaviour of shape memory materials under large deformations
NASA Astrophysics Data System (ADS)
Rogovoy, A. A.; Stolbova, O. S.
2017-06-01
In this study, the models describing the behavior of shape memory alloys, ferromagnetic materials and polymers have been constructed, using a formalized approach to develop the constitutive equations for complex media under large deformations. The kinematic and constitutive equations, satisfying the principles of thermodynamics and objectivity, have been derived. The application of the Galerkin procedure to the systems of equations of solid mechanics allowed us to obtain the Lagrange variational equation and variational formulation of the magnetostatics problems. These relations have been tested in the context of the problems of finite deformation in shape memory alloys and ferromagnetic materials during forward and reverse martensitic transformations and in shape memory polymers during forward and reverse relaxation transitions from a highly elastic to a glassy state.
NASA Astrophysics Data System (ADS)
Cowan, James J.
1984-05-01
A unique type of holographic imagery and its large scale replication are described. The "Newport Button", which was designed as an advertising premium item for the Newport Corporation, incorporates a complex overlay of holographic diffraction gratings surrounding a three-dimensional holographic image of a real object. The combined pattern is recorded onto a photosensitive medium from which a metal master is made. The master is subsequently used to repeatedly emboss the pattern into a thin plastic sheet. Individual patterns are then die cut from the metallized plastic and mounted onto buttons. A discussion is given of the diffraction efficiencies of holograms made in this particular fashion and of the special requirements of the replication process.
Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network
NASA Technical Reports Server (NTRS)
Navarro, Robert
2006-01-01
The Deep Space Network Large Array will replace/augment 34 and 70 meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes at western U.S., Australian, and European longitudes, each with 400 12 m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitor and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.
Photogrammetry of a Hypersonic Inflatable Aerodynamic Decelerator
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Littell, Justin D.; Cassell, Alan M.
2013-01-01
In 2012, two large-scale models of a Hypersonic Inflatable Aerodynamic Decelerator were tested in the National Full-Scale Aerodynamic Complex at NASA Ames Research Center. One of the objectives of this test was to measure model deflections under aerodynamic loading that approximated expected flight conditions. The measurements were acquired using stereo photogrammetry. Four pairs of stereo cameras were mounted inside the NFAC test section, each imaging a particular section of the HIAD. The views were then stitched together post-test to create a surface deformation profile. The data from the photogrammetry system will largely be used for comparisons to and refinement of Fluid Structure Interaction models. This paper describes how a commercial photogrammetry system was adapted to make the measurements and presents some preliminary results.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method as well as the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine, designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; owing to the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for embattling optimization runs detection simulations of all possible stations against cataloged data, makes a comprehensive comparative analysis of the simulation results by combinatorial methods, and then selects an optimal result as the station layout scheme. A single simulation is time consuming and the combinatorial analysis is computationally complex; as the number of stations grows, the complexity of the optimization problem increases exponentially and cannot be handled by the traditional method. No better way to solve this problem has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of a ground-based radar is simplified and a space-coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space-object orbital motion. After these two simplification steps the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified model. In addition, the detection areas of a ground-based radar network can be easily computed with the simplified model, and the embattling of the network can then be optimized with an artificial-intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
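An illustrative sketch of the simplified optimization, with hypothetical data: each candidate radar site is reduced to the set of orbital "cells" its coverage projection intersects, and a greedy set-cover pass selects stations. The paper uses an artificial-intelligence (e.g. genetic) search; greedy selection is shown here only to make the simplified coverage model concrete.

```python
import random

random.seed(0)
n_cells, n_sites = 500, 40
# Hypothetical precomputed coverage: cells seen by each candidate site.
coverage = [set(random.sample(range(n_cells), 60)) for _ in range(n_sites)]

uncovered, chosen = set(range(n_cells)), []
for _ in range(8):                       # budget: eight stations
    best = max(range(n_sites), key=lambda s: len(coverage[s] & uncovered))
    chosen.append(best)
    uncovered -= coverage[best]
print(chosen, f"{1 - len(uncovered) / n_cells:.0%} of cells covered")
```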
NASA Astrophysics Data System (ADS)
Song, Z. N.; Sui, H. G.
2018-04-01
High-resolution remote sensing images carry important strategic information, especially for finding time-sensitive targets quickly, such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present in a large, arbitrary remote sensing image, rather than detecting it in a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false-alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks but to quickly find specific targets in a huge image. Taking airplanes as an example, this paper presents an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model that combines salience detection with a line segment detector is used to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, without a region-proposal step, a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation is adopted to search for small airplane objects. Unlike sliding-window and region-proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show the proposed method quickly identifies airplanes in large-scale images.
Collective Behavior of Camphor Floats Migrating on the Water Surface
NASA Astrophysics Data System (ADS)
Nishimori, Hiraku; Suematsu, Nobuhiko J.; Nakata, Satoshi
2017-10-01
As simple and easily controllable objects among various self-propelled particles, camphor floats on the water surface have been widely recognized. In this paper, we introduce characteristic behaviors and discuss the background mechanism of camphor floats on water, both in isolated and non-isolated conditions. In particular, we focus on: (i) the transition of dynamical characters through bifurcations exhibited by systems with a small number of camphor floats and (ii) the emergence of a rich variety of complex dynamics observed in systems with a large number of camphor floats, and attempt to elucidate these phenomena through mathematical modeling as well as experimental analysis. Finally, we discuss the connection of the dynamics of camphor floats to that of a wider class of complex and sophisticated dynamics exhibited by various types of self-propelled particles.
NASA Astrophysics Data System (ADS)
Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.
2014-12-01
Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi 3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, a joint effort between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel runtime system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.
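As a loose, single-process analogy (in Python, not Charm++) of the location-transparent message passing described above: elements address each other by id, and a router delivers each message to wherever the element object currently lives, which is the property that lets a runtime migrate objects for load balancing.

```python
class MeshElement:
    def __init__(self, eid, neighbors):
        self.eid, self.neighbors = eid, neighbors
        self.head = float(eid)            # toy state, e.g. a water head

    def on_message(self, head):
        self.head = 0.5 * (self.head + head)   # relax toward neighbor value

    def step(self, send):
        for n in self.neighbors:
            send(n, self.head)            # address by id, not by location

elements = {0: MeshElement(0, [1]), 1: MeshElement(1, [0])}

def send(dest, head):                     # the "router": id -> current object
    elements[dest].on_message(head)

for _ in range(5):
    for e in list(elements.values()):
        e.step(send)
print({i: round(e.head, 3) for i, e in elements.items()})
```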
Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph
2012-06-22
Filamentous fungi are versatile cell factories and widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. In fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive incoming-component inspection and quality control, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media, based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.
Fast reconstruction of optical properties for complex segmentations in near infrared imaging
NASA Astrophysics Data System (ADS)
Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador
2017-04-01
The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurements are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized in a multicore computer and a GPU respectively.
Big data analytics as a service infrastructure: challenges, desired properties and solutions
NASA Astrophysics Data System (ADS)
Martín-Márquez, Manuel
2015-12-01
CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has minimised the necessary collaboration and, more relevantly, the cross data analytics over different domains. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs for CERN's research and engineering community; (3) deliver real-time, batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various and mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges (technological, collaborative and educational); and potential solutions.
Embedding Agile Practices within a Plan-Driven Hierarchical Project Life Cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millard, W. David; Johnson, Daniel M.; Henderson, John M.
2014-07-28
Organizations use structured, plan-driven approaches to provide continuity, direction, and control to large, multi-year programs. Projects within these programs vary greatly in size, complexity, level of maturity, technical risk, and clarity of the development objectives. Organizations that perform exploratory research, evolutionary development, and other R&D activities can obtain the benefits of Agile practices without losing the benefits of their program’s overarching plan-driven structure. This paper describes application of Agile development methods on a large plan-driven sensor integration program. While the client employed plan-driven, requirements flow-down methodologies, tight project schedules and complex interfaces called for frequent end-to-end demonstrations to provide feedback during system development. The development process maintained the many benefits of plan-driven project execution with the rapid prototyping, integration, demonstration, and client feedback possible through Agile development methods. This paper also describes some of the tools and implementing mechanisms used to transition between and take advantage of each methodology, and presents lessons learned from the project management, system engineering, and developer’s perspectives.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning in a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the target function. Since the problem is NP-hard (non-deterministic polynomial-time hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. At last, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for multi-constraint and multi-objective fault diagnosis and reasoning in complex systems.
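The node-selection step can be sketched as below, assuming a reachability matrix R (R[i][j] = 1 when j is a feasible successor of i) and the standard pseudo-random-proportional rule from Ant Colony System; the paper's modified rule and pheromone adjustment mechanism are not reproduced exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
R = np.triu(np.ones((n, n)), k=1)        # toy reachability: only "forward" moves
tau = np.ones((n, n))                    # pheromone levels
eta = rng.random((n, n)) + 0.1           # heuristic desirability
q0, beta = 0.9, 2.0

def next_node(i):
    feasible = np.flatnonzero(R[i])      # reachability matrix masks the choice
    attract = tau[i, feasible] * eta[i, feasible] ** beta
    if rng.random() < q0:                # exploit: take the best move
        return feasible[np.argmax(attract)]
    p = attract / attract.sum()          # explore: proportional selection
    return rng.choice(feasible, p=p)

print(next_node(0))
```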
Object-oriented Persistent Homology
Wang, Bao; Wei, Guo-Wei
2015-01-01
Persistent homology provides a new approach for the topological simplification of big data via measuring the lifetime of intrinsic topological features in a filtration process and has found its success in scientific and engineering applications. However, such a success is essentially limited to qualitative data classification and analysis. Indeed, persistent homology has rarely been employed for quantitative modeling and prediction. Additionally, the present persistent homology is a passive tool, rather than a proactive technique, for classification and analysis. In this work, we outline a general protocol to construct object-oriented persistent homology methods. By means of differential geometry theory of surfaces, we construct an objective functional, namely, a surface free energy defined on the data of interest. The minimization of the objective functional leads to a Laplace-Beltrami operator which generates a multiscale representation of the initial data and offers an objective-oriented filtration process. The resulting differential geometry based object-oriented persistent homology is able to preserve desirable geometric features in the evolutionary filtration and enhances the corresponding topological persistence. The cubical complex based homology algorithm is employed in the present work to be compatible with the Cartesian representation of the Laplace-Beltrami flow. The proposed Laplace-Beltrami flow based persistent homology method is extensively validated. The consistency between Laplace-Beltrami flow based filtration and Euclidean distance based filtration is confirmed on the Vietoris-Rips complex in a large number of numerical tests. The convergence and reliability of the present Laplace-Beltrami flow based cubical complex filtration approach are analyzed over various spatial and temporal mesh sizes. The Laplace-Beltrami flow based persistent homology approach is utilized to study the intrinsic topology of proteins and fullerene molecules. Based on a quantitative model which correlates the topological persistence of the fullerene central cavity with the total curvature energy of the fullerene structure, the proposed method is used for the prediction of fullerene isomer stability. The efficiency and robustness of the present method are verified on more than 500 fullerene molecules. It is shown that the proposed persistent homology based quantitative model offers good predictions of total curvature energies for ten types of fullerene isomers. The present work offers the first example to design object-oriented persistent homology to enhance or preserve desirable features in the original data during the filtration process and then automatically detect or extract the corresponding topological traits from the data. PMID:26705370
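For the distance-based filtration itself, the simplest case (0-dimensional persistence, i.e. the merging of connected components as the scale grows) can be computed with a few lines of union-find, as sketched below; the cubical and Vietoris-Rips complexes used in the paper require a dedicated library.

```python
import numpy as np

def persistence_h0(points):
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    deaths = []                          # every component is born at scale 0
    for eps, i, j in edges:              # grow balls; merge components
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(eps)           # one component dies at this scale
    return [(0.0, eps) for eps in deaths]   # (birth, death); one class survives

pts = np.array([[0, 0], [0.1, 0], [5, 5], [5.2, 5]])
print(persistence_h0(pts))               # two short bars, one long merge
```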
Gaze control for an active camera system by modeling human pursuit eye movements
NASA Astrophysics Data System (ADS)
Toelg, Sebastian
1992-11-01
The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., moving mainly fronto- parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention to regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition it exhibits several interesting abilities well-known from psychophysics like: catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
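A toy sketch of the two-pathway idea, with illustrative gains and threshold (not taken from the paper): large position errors trigger a catch-up saccade, while the velocity pathway drives smooth pursuit.

```python
def gaze_step(gaze, gaze_vel, target, target_vel,
              saccade_thresh=2.0, k_pos=0.1, k_vel=0.8):
    err = target - gaze
    if abs(err) > saccade_thresh:            # spatial pathway: catch-up saccade
        return target, target_vel
    # velocity pathway: smooth pursuit plus a small positional correction
    new_vel = gaze_vel + k_vel * (target_vel - gaze_vel) + k_pos * err
    return gaze + new_vel, new_vel

gaze, vel = 0.0, 0.0
for t in range(20):                          # target moving at constant speed
    target = 5.0 + 0.3 * t
    gaze, vel = gaze_step(gaze, vel, target, 0.3)
print(round(gaze, 2), round(vel, 2))         # gaze locks onto the target
```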
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
Direct heuristic dynamic programming for damping oscillations in a large power system.
Lu, Chao; Si, Jennie; Xie, Xiaorong
2008-08-01
This paper applies a neural-network-based approximate dynamic programming method, namely, the direct heuristic dynamic programming (direct HDP), to a large power system stability control problem. The direct HDP is a learning- and approximation-based approach to addressing nonlinear coordinated control under uncertainty. One of the major design parameters, the controller learning objective function, is formulated to directly account for network-wide low-frequency oscillation with the presence of nonlinearity, uncertainty, and coupling effect among system components. Results include a novel learning control structure based on the direct HDP with applications to two power system problems. The first case involves static var compensator supplementary damping control, which is used to provide a comprehensive evaluation of the learning control performance. The second case aims at addressing a difficult complex system challenge by providing a new solution to a large interconnected power network oscillation damping control problem that frequently occurs in the China Southern Power Grid.
Classifying Structures in the ISM with Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Beaumont, Christopher; Goodman, A. A.; Williams, J. P.
2011-01-01
The processes which govern molecular cloud evolution and star formation often sculpt structures in the ISM: filaments, pillars, shells, outflows, etc. Because of their morphological complexity, these objects are often identified manually. Manual classification has several disadvantages; the process is subjective, not easily reproducible, and does not scale well to handle increasingly large datasets. We have explored to what extent machine learning algorithms can be trained to autonomously identify specific morphological features in molecular cloud datasets. We show that the Support Vector Machine algorithm can successfully locate filaments and outflows blended with other emission structures. When the objects of interest are morphologically distinct from the surrounding emission, this autonomous classification achieves >90% accuracy. We have developed a set of IDL-based tools to apply this technique to other datasets.
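A minimal sketch of that workflow, with hypothetical morphological features and stand-in labels (the features and training data of the ISM study are not reproduced):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-structure features, e.g. elongation, curvature, brightness.
X = rng.normal(size=(600, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in labels: filament or not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)  # train on labeled examples
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```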
Laser jetting of femto-liter metal droplets for high resolution 3D printed structures
NASA Astrophysics Data System (ADS)
Zenou, M.; Sa'Ar, A.; Kotler, Z.
2015-11-01
Laser induced forward transfer (LIFT) is employed in a special, high accuracy jetting regime, by adequately matching the sub-nanosecond pulse duration to the metal donor layer thickness. Under such conditions, an effective solid nozzle is formed, providing stability and directionality to the femto-liter droplets, which are printed across a large gap in excess of 400 μm. We illustrate the wide applicability of this method by printing several 3D metal objects: first, very high aspect ratio (A/R > 20), micron-scale copper pillars in various configurations, upright and arbitrarily bent; then a micron-scale 3D object composed of gold and copper. Such a digital printing method could serve the generation of complex, multi-material, micron-scale 3D materials and novel structures.
The Gould’s Belt Very Large Array Survey. V. The Perseus Region
NASA Astrophysics Data System (ADS)
Pech, Gerardo; Loinard, Laurent; Dzib, Sergio A.; Mioduszewski, Amy J.; Rodríguez, Luis F.; Ortiz-León, Gisela N.; Rivera, Juana L.; Torres, Rosa M.; Boden, Andrew F.; Hartman, Lee; Kounkel, Marina A.; Evans, Neal J., II; Briceño, Cesar; Tobin, John; Zapata, Luis A.
2016-02-01
We present multiepoch, large-scale (˜2000 arcmin2), fairly deep (˜16 μJy), high-resolution (˜1″) radio observations of the Perseus star-forming complex obtained with the Karl G. Jansky Very Large Array at frequencies of 4.5 and 7.5 GHz. These observations were mainly focused on the clouds NGC 1333 and IC 348, although we also observed several fields in other parts of the Perseus complex. We detect a total of 206 sources, 42 of which are associated with young stellar objects (YSOs). The radio properties of about 60% of the YSOs are compatible with a nonthermal radio emission origin. Based on our sample, we find a fairly clear relation between the prevalence of nonthermal radio emission and evolutionary status of the YSOs. By comparing our results with previously reported X-ray observations, we show that YSOs in Perseus follow a Güdel-Benz relation with κ = 0.03, consistent with other regions of star formation. We argue that most of the sources detected in our observations but not associated with known YSOs are extragalactic, but provide a list of 20 unidentified radio sources whose radio properties are consistent with being YSO candidates. Finally, we also detect five sources with extended emission features that can clearly be associated with radio galaxies.
Large-eddy simulation of flow past a circular cylinder
NASA Technical Reports Server (NTRS)
Mittal, R.
1995-01-01
Some of the most challenging applications of large-eddy simulation are those in complex geometries where spectral methods are of limited use. For such applications more conventional methods such as finite difference or finite element have to be used. However, it has become clear in recent years that dissipative numerical schemes which are routinely used in viscous flow simulations are not good candidates for use in LES of turbulent flows. Except in cases where the flow is extremely well resolved, it has been found that upwind schemes tend to damp out a significant portion of the small scales that can be resolved on the grid. Furthermore, it has been found that even specially designed higher-order upwind schemes that have been used successfully in the direct numerical simulation of turbulent flows produce too much dissipation when used in conjunction with large-eddy simulation. The objective of the current study is to perform a LES of incompressible flow past a circular cylinder at a Reynolds number of 3900 using a solver which employs an energy-conservative second-order central difference scheme for spatial discretization and compare the results obtained with those of Beaudan & Moin (1994) and with the experiments in order to assess the performance of the central scheme for this relatively complex geometry.
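For reference, a second-order central difference of the kind referred to above approximates the first derivative on a uniform grid as

```latex
\left.\frac{\partial u}{\partial x}\right|_{i} \approx \frac{u_{i+1}-u_{i-1}}{2\,\Delta x},
```

whose leading truncation error is dispersive rather than dissipative; this is why such energy-conservative schemes avoid the artificial damping of small scales that upwind schemes introduce (the details of the cited solver's discretization may differ).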
The neural basis of precise visual short-term memory for complex recognisable objects.
Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri
2017-10-01
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects.
O'Neill, M A; Hilgetag, C C
2001-08-29
Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
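To make the cost-function-driven restructuring concrete, here is a minimal Python sketch (not PUPS/CANTOR's actual code) of stochastic optimization over a candidate arrangement of objects, assuming a user-supplied `cost` function, simple swap moves, and simulated-annealing acceptance:

```python
import math
import random

def optimize_arrangement(objects, cost, steps=10000, t0=1.0, cooling=0.999):
    """Rearrange `objects` to minimize the user-defined `cost` function,
    in the spirit of CANTOR's stochastic restructuring of database content."""
    current = list(objects)
    cur_cost = cost(current)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(steps):
        # Propose a local move: swap two randomly chosen objects.
        i, j = random.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]
        new_cost = cost(current)
        accept = (new_cost < cur_cost or
                  random.random() < math.exp(-(new_cost - cur_cost) / max(t, 1e-12)))
        if accept:
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = list(current), new_cost
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
        t *= cooling  # geometric cooling schedule
    return best
```

A cost function counting, for example, violated adjacency constraints in a neuroanatomical wiring arrangement can be plugged in directly; the stochastic acceptance step is what lets such a search tolerate incomplete or inconsistent data.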
An applet for the Gabor similarity scaling of the differences between complex stimuli.
Margalit, Eshed; Biederman, Irving; Herald, Sarah B; Yue, Xiaomin; von der Malsburg, Christoph
2016-11-01
It is widely accepted that after the first cortical visual area, V1, a series of stages achieves a representation of complex shapes, such as faces and objects, so that they can be understood and recognized. A major challenge for the study of complex shape perception has been the lack of a principled basis for scaling of the physical differences between stimuli so that their similarity can be specified, unconfounded by early-stage differences. Without the specification of such similarities, it is difficult to make sound inferences about the contributions of later stages to neural activity or psychophysical performance. A Web-based app is described that is based on the Malsburg Gabor-jet model (Lades et al., 1993), which allows easy specification of the V1 similarity of pairs of stimuli, no matter how intricate. The model predicts the psychophysical discriminability of metrically varying faces and complex blobs almost perfectly (Yue, Biederman, Mangini, von der Malsburg, & Amir, 2012), and serves as the input stage of a large family of contemporary neurocomputational models of vision.
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
Non-symbolic halving in an Amazonian indigene group
McCrink, Koleen; Spelke, Elizabeth S.; Dehaene, Stanislas; Pica, Pierre
2014-01-01
Much research supports the existence of an Approximate Number System (ANS) that is recruited by infants, children, adults, and non-human animals to generate coarse, non-symbolic representations of number. This system supports simple arithmetic operations such as addition, subtraction, and ordering of amounts. The current study tests whether an intuition of a more complex calculation, division, exists in an indigene group in the Amazon, the Mundurucu, whose language includes no words for large numbers. Mundurucu children were presented with a video event depicting a division transformation of halving, in which pairs of objects turned into single objects, reducing the array's numerical magnitude. Then they were tested on their ability to calculate the outcome of this division transformation with other large-number arrays. The Mundurucu children effected this transformation even when non-numerical variables were controlled, performed above chance levels on the very first set of test trials, and exhibited performance similar to urban children who had access to precise number words and a surrounding symbolic culture. We conclude that a halving calculation is part of the suite of intuitive operations supported by the ANS.
Automatic QRS complex detection using two-level convolutional neural network.
Xiang, Yande; Lin, Zhitao; Meng, Jianyi
2018-01-29
The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. The existing detection methods largely depend on hand-crafted features and parameters, which may introduce significant computational complexity, especially in the transform domains. In addition, fixed features and parameters are not suitable for detecting various kinds of QRS complexes under different circumstances. In this study, an accurate method for QRS complex detection based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features at different granularities. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique, which contains only a difference operation in the temporal domain, is adopted. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. Performance is also evaluated at different signal-to-noise ratio (SNR) values. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
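As an illustration of the two-level idea, here is a hedged PyTorch sketch; the window length, channel counts, kernel sizes, and the placement of the difference preprocessing are our illustrative assumptions, not the authors' published architecture:

```python
import torch
import torch.nn as nn

class TwoLevelQRSNet(nn.Module):
    """Sketch of a two-level 1-D CNN: an object-level branch with a wide
    receptive field and a part-level branch with a narrow one; their
    features are concatenated and classified by an MLP."""
    def __init__(self, win_len=64):
        super().__init__()
        # Object-level branch: coarse morphology of the whole window.
        self.object_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=11, padding=5), nn.ReLU(),
            nn.MaxPool1d(4), nn.Flatten())
        # Part-level branch: fine-grained local waveform detail.
        self.part_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2), nn.Flatten())
        feat = 8 * (win_len // 4) + 8 * (win_len // 2)
        self.mlp = nn.Sequential(
            nn.Linear(feat, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):               # x: (batch, 1, win_len)
        x = x[:, :, 1:] - x[:, :, :-1]  # simple temporal difference preprocessing
        x = nn.functional.pad(x, (1, 0))
        f = torch.cat([self.object_branch(x), self.part_branch(x)], dim=1)
        return self.mlp(f)              # probability that the window holds a QRS complex
```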
Filtered Mass Density Function for Design Simulation of High Speed Airbreathing Propulsion Systems
NASA Technical Reports Server (NTRS)
Givi, P.; Madnia, C. K.; Gicquel, L. Y. M.; Sheikhi, M. R. H.; Drozda, T. G.
2002-01-01
The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high speed reacting turbulent flows. NASA is interested in the design of various components involved in air breathing propulsion systems such as the scramjet. There is a demand for development of robust tools that can aid in the design procedure. The physics of high speed reactive flows is rich with many complexities. LES is regarded as one of the most promising means of simulating turbulent reacting flows.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have performed on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
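A minimal sketch of the replicated-reconstruction-object idea follows; the `backproject` callable and the static partitioning of projection rows are hypothetical stand-ins for Trace's actual kernels:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reconstruct_iteration(sinogram, backproject, n_workers=4, shape=(512, 512)):
    """One pass over projection rows using replicated reconstruction objects:
    each worker accumulates into its private replica, so no locks are needed
    on a shared image; the replicas are reduced (summed) at the end."""
    replicas = [np.zeros(shape, dtype=np.float32) for _ in range(n_workers)]

    def work(worker_id):
        # Static partitioning: worker k handles rows k, k + n_workers, ...
        for row in range(worker_id, sinogram.shape[0], n_workers):
            replicas[worker_id] += backproject(sinogram[row], row)

    with ThreadPoolExecutor(n_workers) as pool:
        list(pool.map(work, range(n_workers)))
    return np.sum(replicas, axis=0)  # reduction step combines the replicas
```

The trade-off is memory for synchronization: one image-sized replica per worker buys lock-free accumulation, which is the essence of the optimization evaluated in the paper.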
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. Consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
Young stellar population and star formation history of W4 HII region/Cluster Complex
NASA Astrophysics Data System (ADS)
Panwar, Neelam
2018-04-01
The HII region/cluster complex has been the subject of numerous investigations of the feedback effect of massive stars on their surroundings. Massive stars not only alter the morphology of their parental molecular clouds, but also influence star formation, circumstellar disks and the mass function of low-mass stars in their vicinity. However, most studies of the low-mass stellar content of HII regions are limited to nearby regions. We study star formation in the W4 HII region using deep optical observations together with archival data from the Canada-France-Hawaii Telescope, the Two-Micron All Sky Survey, Spitzer, Herschel and Chandra. We investigate the spatial distribution of young stellar objects in the region and their association with the remnant molecular clouds, and search for clustering to establish the sites of recent star formation. Our analysis suggests that the influence of massive stars on circumstellar disks is significant only in their immediate neighborhood. The spatial correlation of the young stars with the distribution of gas and dust of the complex indicates that the clusters would have formed in a large filamentary cloud. The observing facilities at the 3.6-m Devasthal Optical Telescope (DOT), providing high-resolution spectral and imaging capabilities, will fulfill the major objectives in the study of HII regions.
Visual Short-Term Memory Capacity for Simple and Complex Objects
ERIC Educational Resources Information Center
Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto
2010-01-01
Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) that is not…
A simple, low-cost conductive composite material for 3D printing of electronic sensors.
Leigh, Simon J; Bradley, Robert J; Purssell, Christopher P; Billson, Duncan R; Hutchins, David A
2012-01-01
3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to access desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices, along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes.
A SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES
Evaluation and analysis of multiple objectives are very important in designing environmentally benign processes. They require a systematic procedure for solving multi-objective decision-making problems due to the complex nature of the problems and the need for complex assessment....
Environmental hazards and stress: evidence from the Texas City Stress and Health Study.
Peek, M K; Cutchin, M P; Freeman, D; Stowe, R P; Goodwin, J S
2009-10-01
Substantial research has suggested that exposure to environmental health hazards, such as polluting industrial activity, has deleterious effects on psychological and physiological well-being. However, one gap in the existing literature is comparative analysis of the relative associations of objective and subjective exposure with measurable outcomes. These relationships were explored within a community sample of 2604 respondents living near a large petrochemical complex in Texas City, Texas, USA. Objective exposure was investigated using distance of residence from a cluster of petrochemical plants, and subjective exposure using residents' concern about potential health effects from those plants. Regression models were then used to examine how each type of exposure predicts perceived stress, physiological markers of stress and perceived health. Results suggest that objective exposure was associated primarily with markers of physiological stress (interleukin-6 and viral reactivation), while subjective exposure (concern about petrochemical health risk) was associated with variables assessing perceived health. From the analysis, it can be inferred that, in the context of an environmental hazard of this type, subjective exposure may be at least as important a predictor of poor health outcomes as objective exposure.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Voting based object boundary reconstruction
NASA Astrophysics Data System (ADS)
Tian, Qi; Zhang, Like; Ma, Jingsheng
2005-07-01
A voting-based object boundary reconstruction approach is proposed in this paper. Morphological techniques have been adopted in many video object extraction applications to reconstruct missing pixels. However, when the missing areas become large, morphological processing cannot produce good results. Recently, tensor voting has attracted attention, and it can be used for boundary estimation on curves or irregular trajectories. However, the complexity of saliency tensor creation limits its applications in real-time systems. An alternative approach based on tensor voting is introduced in this paper. Rather than creating saliency tensors, we use a "2-pass" method for orientation estimation. In the first pass, a Sobel detector is applied to a coarse boundary image to get the gradient map. In the second pass, each pixel casts decreasing weights based on its gradient information, and the direction with the maximum weight sum is selected as the correct orientation of the pixel. After the orientation map is obtained, pixels begin linking edges or intersections along their direction. The approach is applied to various video surveillance clips under different conditions, and the experimental results demonstrate significant improvement in the accuracy of the final extracted objects.
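The following numpy/scipy sketch shows one plausible reading of the 2-pass scheme; the bin count, window size, and uniform local weighting are our assumptions, not the authors' exact weighting function:

```python
import numpy as np
from scipy import ndimage

def dominant_orientation(edge_img, n_bins=8, window=9):
    """Pass 1: Sobel gradient map of a coarse boundary image. Pass 2: each
    pixel's gradient magnitude votes for its orientation bin within a local
    window; the bin with the largest summed weight gives the orientation."""
    gx = ndimage.sobel(edge_img.astype(float), axis=1)
    gy = ndimage.sobel(edge_img.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    # Edge direction is perpendicular to the gradient; fold into [0, pi).
    theta = (np.arctan2(gy, gx) + np.pi / 2) % np.pi
    bins = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
    votes = np.zeros((n_bins,) + edge_img.shape)
    for b in range(n_bins):
        # Sum each bin's magnitude votes over the local window.
        votes[b] = ndimage.uniform_filter(np.where(bins == b, mag, 0.0), window)
    return np.argmax(votes, axis=0) * (np.pi / n_bins)  # radians per pixel
```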
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology, largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. The performance of the color segmentation algorithm, evaluated on 26 complex micrographs at single-pixel resolution, had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/ . This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.
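As a toy version of color classification with overlapping ranges, the sketch below labels pixels by a k-nearest-neighbour vote over user-selected training colors; it captures only the color cue, whereas the system described above also exploits spatial relationships:

```python
import numpy as np

def segment_by_color(image, fg_samples, bg_samples, k=5):
    """Label each pixel foreground/background by majority vote of its k
    nearest user-selected training colors in RGB space."""
    samples = np.vstack([fg_samples, bg_samples]).astype(float)   # (n, 3)
    labels = np.array([1] * len(fg_samples) + [0] * len(bg_samples))
    pixels = image.reshape(-1, 3).astype(float)                   # (m, 3)
    # Squared Euclidean distance from every pixel to every training color.
    d2 = ((pixels[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
    knn = np.argsort(d2, axis=1)[:, :k]
    fg_votes = labels[knn].sum(axis=1)
    return (fg_votes > k // 2).reshape(image.shape[:2])           # boolean mask
```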
NASA Astrophysics Data System (ADS)
Pantale, O.; Caperaa, S.; Rakotomalala, R.
2004-07-01
During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. At the same time, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high strain and high-strain-rate problems. In this field, an accurate analysis of large deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of the high amount of plastic flow. In this presentation, the object-oriented implementation, using the C++ language, of an explicit finite element code called DynELA is presented. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates the development, maintainability and expandability of such codes. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, the operator overload procedure and the use of template classes, are detailed. Then we present the approach used for the development of our finite element code through the presentation of the kinematics, conservative and constitutive laws and their respective implementations in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, will show characteristics that distinguish it from other objects in the image. The characteristics used in object recognition can be color, shape, pattern, texture and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction based on the analysis of curve characteristics (simple curves) and a search over the object's chain code. This study develops an algorithm for the analysis and recognition of curve types as the basis of object recognition in images, proposing the addition of complex-curve characteristics with at most four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to perform the analysis and recognition of complex curve shapes well.
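A small Python sketch of the underlying machinery; the branch-point criterion of three or more 8-connected skeleton neighbours is our reading of the 'point of intersection' definition:

```python
import numpy as np

# Freeman 8-direction chain-code offsets (dx, dy), indexed 0..7.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(points):
    """Chain code of an ordered list of 8-connected curve pixels (x, y)."""
    return [DIRS.index((x2 - x1, y2 - y1))
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def is_complex_curve(skeleton):
    """A curve is 'complex' if it has an intersection point, i.e. a skeleton
    pixel with three or more 8-connected neighbours; simple open or closed
    curves have at most two neighbours everywhere."""
    padded = np.pad(skeleton.astype(int), 1)
    for y, x in zip(*np.nonzero(padded)):
        if padded[y - 1:y + 2, x - 1:x + 2].sum() - 1 >= 3:
            return True
    return False
```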
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Rui; Luo, Ali; Liu, Jiaming
2016-06-01
The crystalline silicate features are mainly reflected in infrared bands. The Spitzer Infrared Spectrograph (IRS) collected numerous spectra of various objects and provides a large database in which to investigate crystalline silicates in a wide range of astronomical environments. We apply the manifold ranking algorithm to perform a systematic search for spectra with crystalline silicate features in the available Spitzer IRS Enhanced Products. In total, 868 spectra of 790 sources are found to show the features of crystalline silicates. These objects are cross-matched with the SIMBAD database as well as with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) DR2. The average spectrum of young stellar objects shows a variety of features dominated either by forsterite or enstatite or neither, while the average spectrum of evolved objects consistently presents dominant features of forsterite in AGB, OH/IR, post-AGB, and planetary nebulae. They are identified optically as early-type stars, evolved stars, galaxies and so on. In addition, the strength of spectral features in typical silicate complexes is calculated. The results are available through CDS for the astronomical community to further study crystalline silicates.
NASA Astrophysics Data System (ADS)
Mériaudeau, Fabrice; Rantoson, Rindra; Fofi, David; Stolz, Christophe
2012-04-01
Fashion and design greatly influence the conception of manufactured products, which now exhibit complex forms and shapes. Two-dimensional quality control procedures (e.g., shape, textures, colors, and 2D geometry) are progressively being replaced by 3D inspection methods (e.g., 3D geometry, colors, and texture on the 3D shape), therefore requiring a digitization of the object surface. Three-dimensional surface acquisition is a topic which has been studied to a large extent, and a significant number of techniques for acquiring 3D shapes have been proposed, leading to a wide range of commercial solutions available on the market. These systems cover a wide range of object scales, from micro-scale objects (shape-from-focus and shape-from-defocus techniques) to objects several meters in size (time-of-flight techniques). Nevertheless, the use of such systems still encounters difficulties when dealing with non-diffuse (non-Lambertian) surfaces, as is the case for transparent, semi-transparent, or highly reflective materials (e.g., glass, crystals, plastics, and shiny metals). We review and compare various systems and approaches which were recently developed for 3D digitization of transparent objects.
Investigation of the Iterative Phase Retrieval Algorithm for Interferometric Applications
NASA Astrophysics Data System (ADS)
Gombkötő, Balázs; Kornis, János
2010-04-01
Sequentially recorded intensity patterns reflected from a coherently illuminated diffuse object can be used to reconstruct the complex amplitude of the scattered beam. Several iterative phase retrieval algorithms are known in the literature for obtaining the initially unknown phase from these longitudinally displaced intensity patterns. When two sequences are recorded in two different states of a centimeter-sized object, in optical setups that are similar to digital holographic interferometry but omit the reference wave, displacement, deformation, or shape measurement is theoretically possible. To do this, the retrieved phase pattern should contain information not only about the intensities and locations of the point sources on the object surface, but their relative phase as well. Not only do experiments require strict mechanical precision to record useful data, but even in simulations several parameters influence the capabilities of iterative phase retrieval, such as the object-to-camera distance range, uniform or varying camera step sequence, speckle field characteristics, and sampling. Experiments were done to demonstrate this principle with a deformable object as large as 5×5 cm as well. Good initial results were obtained in an imaging setup, where the intensity pattern sequences were recorded near the image plane.
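For illustration, here is a compact numpy sketch of multi-plane iterative phase retrieval, assuming square fields, angular-spectrum propagation, and amplitude replacement at each plane (parameter names are ours; evanescent components are simply clipped):

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a complex field by distance dz (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    f2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength ** 2 - f2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def retrieve_phase(intensities, z_positions, wavelength, dx, n_iter=50):
    """Cycle the field estimate through the measurement planes; at each
    plane replace the amplitude with the measured one, keep the phase."""
    field = np.sqrt(intensities[0]).astype(complex)  # flat initial phase
    for _ in range(n_iter):
        for i in range(len(intensities)):
            j = (i + 1) % len(intensities)
            field = angular_spectrum(field, z_positions[j] - z_positions[i],
                                     wavelength, dx)
            # Enforce the measured intensity, retain the retrieved phase.
            field = np.sqrt(intensities[j]) * np.exp(1j * np.angle(field))
    return field
```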
Finding equilibrium in the spatiotemporal chaos of the complex Ginzburg-Landau equation
NASA Astrophysics Data System (ADS)
Ballard, Christopher C.; Esty, C. Clark; Egolf, David A.
2016-11-01
Equilibrium statistical mechanics allows the prediction of collective behaviors of large numbers of interacting objects from just a few system-wide properties; however, a similar theory does not exist for far-from-equilibrium systems exhibiting complex spatial and temporal behavior. We propose a method for predicting behaviors in a broad class of such systems and apply these ideas to an archetypal example, the spatiotemporal chaotic 1D complex Ginzburg-Landau equation in the defect chaos regime. Building on the ideas of Ruelle and of Cross and Hohenberg that a spatiotemporal chaotic system can be considered a collection of weakly interacting dynamical units of a characteristic size, the chaotic length scale, we identify underlying, mesoscale, chaotic units and effective interaction potentials between them. We find that the resulting equilibrium Takahashi model accurately predicts distributions of particle numbers. These results suggest the intriguing possibility that a class of far-from-equilibrium systems may be well described at coarse-grained scales by the well-established theory of equilibrium statistical mechanics.
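For reference, the 1D complex Ginzburg-Landau equation studied here can be written, in one common normalization (our choice of symbols: the real parameters b and c control linear and nonlinear dispersion, and defect chaos occupies part of the (b, c) plane):

\[
\partial_t A = A + (1 + i b)\,\partial_x^2 A - (1 + i c)\,|A|^2 A ,
\]

where A(x, t) is a complex amplitude field.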
Climate Modeling with a Million CPUs
NASA Astrophysics Data System (ADS)
Tobis, M.; Jackson, C. S.
2010-12-01
Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations. We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may be easily contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies that invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.
Object Recognition and Localization: The Role of Tactile Sensors
Aggarwal, Achint; Kirchner, Frank
2014-01-01
Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF) is based on an innovative combination of particle filters, Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D-objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge following exploration strategy is developed that receives feedback from the current state of recognition. A recognition by parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments.
Modelling and Order of Acoustic Transfer Functions Due to Reflections from Augmented Objects
NASA Astrophysics Data System (ADS)
Kuster, Martin; de Vries, Diemer
2006-12-01
It is commonly accepted that the sound reflections from real physical objects are much more complicated than what usually is, or even can be, modelled by room acoustics modelling software. The main reason for this limitation is the level of detail inherent in the physical object in terms of its geometrical and acoustic properties. In the present paper, the complexity of the sound reflections from a corridor wall is investigated by modelling the corresponding acoustic transfer functions at several receiver positions in front of the wall. The complexity for different wall configurations has been examined, with the changes achieved by altering the wall's acoustic image. The results show that for a homogeneous flat wall the complexity is significant, and that for a wall including various smaller objects the complexity is highly dependent on the position of the receiver with respect to the objects.
A novel surface registration algorithm with biomedical modeling applications.
Huang, Heng; Shen, Li; Zhang, Rong; Makedon, Fillia; Saykin, Andrew; Pearlman, Justin
2007-07-01
In this paper, we propose a novel surface matching algorithm for arbitrarily shaped but simply connected 3-D objects. The spherical harmonic (SPHARM) method is used to describe these 3-D objects, and a novel surface registration approach is presented. The proposed technique is applied to various applications of medical image analysis. The results are compared with those using the traditional method, in which the first-order ellipsoid is used for establishing surface correspondence and aligning objects. In these applications, our surface alignment method is demonstrated to be more accurate and flexible than the traditional approach. This is due in large part to the fact that a new surface parameterization is generated by a shortcut that employs a useful rotational property of spherical harmonic basis functions for a fast implementation. In order to achieve a suitable computational speed for practical applications, we propose a fast alignment algorithm that improves computational complexity of the new surface registration method from O(n³) to O(n²).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pagliarulo, V., E-mail: v.pagliarulo@isasi.cnr.it; Ferraro, P.; Lopresto, V.
2016-06-28
The aim of this paper is to investigate the ability of two different interferometric NDT techniques to detect and evaluate barely visible impact damage on composite laminates. The interferometric techniques make it possible to investigate large and complex structures. Electronic Speckle Pattern Interferometry (ESPI) works through real-time surface illumination by a visible laser (i.e., 532 nm), and the range and accuracy are related to the wavelength. While ESPI works with the "classic" holographic configuration, that is, a reference beam and an object beam, shearography uses the object image itself as reference: two object images are overlapped, creating a shear image. This makes the method much less sensitive to external vibrations and noise, but with one difference: it measures the first derivative of the displacement. In this work, different specimens impacted at different energies have been investigated by means of both methods. The delaminated areas have been estimated and compared.
Variability Analysis: Detection and Classification
NASA Astrophysics Data System (ADS)
Eyer, L.
2005-01-01
The Gaia mission will offer an exceptional opportunity to perform variability studies. The data homogeneity, its optimised photometric systems, composed of 11 medium and 4-5 broad bands, the high photometric precision in G band of one milli-mag for V = 13-15, the radial velocity measurements and the exquisite astrometric precision for one billion stars will permit a detailed description of variable objects like stars, quasars and asteroids. However the time sampling and the total number of measurements change from one object to another because of the satellite scanning law. The data analysis is a challenge because of the huge amount of data, the complexity of the observed objects and the peculiarities of the satellite, and needs thorough preparation. Experience can be gained by the study of past and present survey analyses and results, and Gaia should be put in perspective with the future large scale surveys, like PanSTARRS or LSST. We present the activities of the Variable Star Working Group and a general plan to digest this unprecedented data set, focusing here on the photometry.
Hilbig, Benjamin E; Erdfelder, Edgar; Pohl, Rüdiger F
2011-07-01
A new process model of the interplay between memory and judgment processes was recently suggested, assuming that retrieval fluency-that is, the speed with which objects are recognized-will determine inferences concerning such objects in a single-cue fashion. This aspect of the fluency heuristic, an extension of the recognition heuristic, has remained largely untested due to methodological difficulties. To overcome the latter, we propose a measurement model from the class of multinomial processing tree models that can estimate true single-cue reliance on recognition and retrieval fluency. We applied this model to aggregate and individual data from a probabilistic inference experiment and considered both goodness of fit and model complexity to evaluate different hypotheses. The results were relatively clear-cut, revealing that the fluency heuristic is an unlikely candidate for describing comparative judgments concerning recognized objects. These findings are discussed in light of a broader theoretical view on the interplay of memory and judgment processes.
Better Spectrometers, Beautiful Spectra and Confusion for All
NASA Technical Reports Server (NTRS)
Pearson, J. C.; Brauer, C. S.; Drouin, B. J.; Yu, S.
2009-01-01
The confluence of enormous improvements in submillimeter receivers and the development of powerful large scale observatories is about to force astrophysics and the sciences that support it to develop novel approaches for interpretation of data. The historical method of observing one or two lines and carefully analyzing them in the context of a simple model is now only applicable for distant objects where only a few lines are strong enough to be observable. Modern observatories collect many GHz of high signal-to-noise spectra in a single observation and, in many cases, at sufficiently high spatial resolution to start resolving chemically distinct regions. The observatories planned for the near future and the inevitable upgrades of existing facilities will make large spectral data sets the rule rather than the exception in many areas of molecular astrophysics. The methodology and organization required to fully extract the available information and interpret these beautiful spectra represent a challenge to submillimeter astrophysics similar in magnitude to the last few decades of effort in improving receivers. The quality and abundance of spectra effectively prevent line-by-line analysis from being a time-efficient proposition; however, global analysis of complex spectra is a science in its infancy. Spectroscopy at several other wavelengths has developed a number of techniques to analyze complex spectra, which can provide a great deal of guidance to the molecular astrophysics community on how to attack the complex spectrum problem. Ultimately, the challenge is one of organization, similar to building observatories, requiring teams of specialists combining their knowledge of dynamical, structural, chemical and radiative models with detailed knowledge of molecular physics and gas and grain surface chemistry to extract and exploit the enormous information content of complex spectra. This paper presents a spectroscopist's view of the necessary elements in a tool for complex spectral analysis.
Knowledge base rule partitioning design for CLIPS
NASA Technical Reports Server (NTRS)
Mainardi, Joseph D.; Szatkowski, G. P.
1990-01-01
This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The Expert System advanced development project (ADP-2302) main objective is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance, but without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces overall rule interaction complexity. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
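A minimal sketch of the block bookkeeping follows; the `engine` object with `add_rule`/`remove_rule`/`assert_fact`/`retract_fact` calls is a hypothetical wrapper, not CLIPS's actual interface:

```python
class PartitionedKB:
    """Group rules and facts into named blocks that are loaded when a phase
    begins and stripped out when it ends, keeping the active Rete net small."""
    def __init__(self, engine):
        self.engine = engine
        self.blocks = {}      # block name -> (rules, facts)
        self.active = set()

    def define_block(self, name, rules, facts):
        self.blocks[name] = (list(rules), list(facts))

    def enter_phase(self, needed):
        # Unload blocks no longer needed, then load the newly required ones.
        for name in self.active - set(needed):
            rules, facts = self.blocks[name]
            for r in rules:
                self.engine.remove_rule(r)
            for f in facts:
                self.engine.retract_fact(f)
        for name in set(needed) - self.active:
            rules, facts = self.blocks[name]
            for r in rules:
                self.engine.add_rule(r)
            for f in facts:
                self.engine.assert_fact(f)
        self.active = set(needed)
```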
NASA Astrophysics Data System (ADS)
Barnuevo, Abner; Asaeda, Takashi; Sanjaya, Kelum; Kanesaka, Yoshikazu; Fortes, Miguel
2017-11-01
Mangrove rehabilitation programs received much attention in the past decades as a response to widespread global degradation. While the documented successes and failures of mangrove rehabilitation vary, the objectives and schemes are common, mainly focused on planting and creating monospecific plantations. This study assessed the structural development and complexity of the large-scale plantations in the central part of the Philippines and compared them with the adjacent natural stand as reference. Our study showed that the planted forests in both sites had lower structural complexity than the reference natural forest. Between sites, secondary succession in the monospecific plantation on Banacon Island was inhibited, as reflected by low regeneration potential, whereas recruitment and colonization of non-planted species was promoted on Olango Island. Even 60 years after the forest was created on Banacon Island, it still lacked the understory of young cohorts that together comprise the regeneration potential which can add to the structural complexity. Although a potential seed source from the adjacent natural forest is available, recruitment and colonization of non-planted species did not progress. MDS analysis of tree density data showed clustering of the planted forest apart from the natural stand. The average SIMPER dissimilarity was 79.9%, and the species with the highest contributions were R. stylosa (74.6%), S. alba (11.1%) and A. marina (10.6%). Within the natural forest, the same species had the highest dissimilarity contributions, whereas in the planted forest, only R. stylosa contributed the highest dissimilarity. The same trend was also revealed in the MDS ordination analysis of diameter at breast height (DBH). A one-way ANOSIM permutation test of the density and DBH showed a significant difference between the planted and natural forests. Thus, as part of silviculture management intervention, the current practices of mangrove reforestation need to be reviewed and evaluated to determine the trajectories of their conservation objectives, to achieve the best outcome and functionality of the restored habitat.
Similarity, not complexity, determines visual working memory performance.
Jackson, Margaret C; Linden, David E J; Roberts, Mark V; Kriegeskorte, Nikolaus; Haenschel, Corinna
2015-11-01
A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased perceptual similarity between complex items as a result of a large amount of overlapping information. Increased similarity is thought to lead to greater comparison errors between items encoded into WM and the test item(s) presented at retrieval. However, previous studies have used different object categories to manipulate complexity and similarity, raising questions as to whether these effects are simply due to cross-category differences. Here, for the first time, the relationship between complexity and similarity in WM is investigated using a single stimulus category (abstract polygons). The authors used a delayed discrimination task to measure WM for 1-4 complex versus simple simultaneously presented items and manipulated the similarity between the single test item at retrieval and the sample items at encoding. WM was poorer for complex than simple items only when the test item was similar to 1 of the encoding items, and not when it was dissimilar or identical. The results provide clear support for reinterpretation of the complexity effect in WM as a similarity effect and highlight the importance of the retrieval stage in governing WM performance. The authors discuss how these findings can be reconciled with current models of WM capacity limits. (c) 2015 APA, all rights reserved.
Bateni, Hamid; Zecevic, Aleksandra; McIlroy, William E; Maki, Brian E
2004-07-01
The ability to reach and "grasp" (grip or touch) structures for support in reaction to instability is an important element of the postural repertoire. It is unclear, however, how the central nervous system (CNS) resolves the potential conflict between holding an object and the need to release the held object and grasp alternative support, particularly if the held object is perceived to be relevant to the task of stabilizing the body, e.g. an assistive device. This study examined whether compensatory grasping is inhibited when holding an object, and whether the influence differs when holding an assistive device (cane) versus a task-irrelevant object (top handle portion of a cane). We also investigated the influence of preloading the assistive device, to determine whether conflicting demands for arm-muscle activation (requiring disengagement of ongoing agonist or antagonist activity) would influence the inhibition of compensatory grasping. Unpredictable forward and backward platform translations were used to evoke the balancing reactions in 16 healthy young adults. A handrail was mounted to the right and foot motion was constrained by barriers, with the intent that successful balance recovery would (in large-perturbation trials) require subjects to release the held object and contact the rail with the right hand. Results showed that grasping reactions were commonly used to recover equilibrium when the hand was free (rail contact in 71% of large-perturbation trials). However, holding either the cane or canetop had a potent modulating effect: although early biceps activation was almost never inhibited completely (significant activity within 200 ms in 98% of trials), the average activation amplitude was attenuated by 30-64% and the average frequency of handrail contact was reduced by a factor of two or more. This reduced use of the rail occurred even though the consequence often involved falling against a safety harness or barriers. Handrail contact occurred least frequently when holding the cane during forward loss of balance: subjects persisted in pushing on the cane (failing to use the rail) in 93% of trials, even when the perturbations were too large to allow this strategy to be successful. Prior contraction (preloading the cane) did not influence any of these findings. Complex strategies (e.g. partial release of object) were often adopted to allow balance to be recovered without dropping the held object. Remarkably, it appears that the CNS may give priority to the ongoing task of holding an object, even when it has no stabilizing value (cane during backward falls) or any intrinsic value whatsoever (canetop).
Learning viewpoint invariant object representations using a temporal coherence principle.
Einhäuser, Wolfgang; Hipp, Jörg; Eggert, Julian; Körner, Edgar; König, Peter
2005-07-01
Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge of the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles can explain many aspects of neuronal response properties. How can such schemes be transferred to system-level performance? In the present study we train cells on a particular variant of the general principle of temporal coherence, the "stability" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is viewed, including generalization to previously unseen viewpoints. The achieved representation is better suited for viewpoint-invariant object classification than the cells' input patterns. This facilitation of viewpoint-invariant classification is maintained even if training and classification take place in the presence of a distractor object (also unlabeled). In summary, we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects that are not segmented from the background and undergo complex, non-isomorphic transformations.
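As a rough illustration of a temporal-coherence ("stability") objective, the sketch below trains a single linear unit to minimize the mean squared temporal difference of its output on a toy, slowly varying input sequence, under a unit-variance constraint. The data, unit, and optimizer are assumptions for illustration; the paper's networks, stimuli, and training procedure are more elaborate.

```python
# Minimal temporal-coherence ("slowness") sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(500, 20)), axis=0)   # slowly varying inputs
X -= X.mean(axis=0)

w = rng.normal(size=20)
for _ in range(2000):
    y = X @ w
    dy = np.diff(y)
    # Gradient of the mean squared temporal difference (the "instability")
    grad = 2 * (np.diff(X, axis=0).T @ dy) / len(dy)
    w -= 0.01 * grad
    # Rescale so the output keeps unit variance (avoids the trivial w = 0)
    w /= np.linalg.norm(X @ w) / np.sqrt(len(X))

print("output instability:", np.mean(np.diff(X @ w) ** 2))
```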
A rodent model for the study of invariant visual object recognition
Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.
2009-01-01
The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704
Colloidal assembly directed by virtual magnetic moulds
NASA Astrophysics Data System (ADS)
Demirörs, Ahmet F.; Pillai, Pramod P.; Kowalczyk, Bartlomiej; Grzybowski, Bartosz A.
2013-11-01
Interest in assemblies of colloidal particles has long been motivated by their applications in photonics, electronics, sensors and microlenses. Existing assembly schemes can position colloids of one type relatively flexibly into a range of desired structures, but it remains challenging to produce multicomponent lattices, clusters with precisely controlled symmetries and three-dimensional assemblies. A few schemes can efficiently produce complex colloidal structures, but they require system-specific procedures. Here we show that magnetic field microgradients established in a paramagnetic fluid can serve as 'virtual moulds' to act as templates for the assembly of large numbers (~10^8) of both non-magnetic and magnetic colloidal particles with micrometre precision and typical yields of 80 to 90 per cent. We illustrate the versatility of this approach by producing single-component and multicomponent colloidal arrays, complex three-dimensional structures and a variety of colloidal molecules from polymeric particles, silica particles and live bacteria, and by showing that all of these structures can be made permanent. In addition, although our magnetic moulds currently resemble optical traps in that they are limited to the manipulation of micrometre-sized objects, they are massively parallel and can manipulate non-magnetic and magnetic objects simultaneously in two and three dimensions.
Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A
2013-11-01
Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
Information Technology in Complex Health Services
Southon, Frank Charles Gray; Sauer, Chris; Dampney, Christopher Noel Grant (Kit)
1997-01-01
Objective: To identify impediments to the successful transfer and implementation of packaged information systems through large, divisionalized health services. Design: A case analysis of the failure of an implementation of a critical application in the Public Health System of the State of New South Wales, Australia, was carried out. This application had been proven in the United States environment. Measurements: Interviews involving over 60 staff at all levels of the service were undertaken by a team of three. The interviews were recorded and analyzed for key themes, and the results were shared and compared to enable a continuing critical assessment. Results: Two components of the transfer of the system were considered: the transfer from a different environment, and the diffusion throughout a large, divisionalized organization. The analyses were based on the Scott-Morton organizational fit framework. In relation to the first, it was found that there was a lack of fit in the business environments and strategies, organizational structures and strategy-structure pairing as well as the management process-roles pairing. The diffusion process experienced problems because of the lack of fit in the strategy-structure, strategy-structure-management processes, and strategy-structure-role relationships. Conclusion: The large-scale developments of integrated health services present great challenges to the efficient and reliable implementation of information technology, especially in large, divisionalized organizations. There is a need to take a more sophisticated approach to understanding the complexities of organizational factors than has traditionally been the case. PMID:9067877
Optimizing liquid effluent monitoring at a large nuclear complex.
Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M
2003-12-01
Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US $223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
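For intuition, an exceedance probability of the kind reported can be estimated from baseline data under a distributional assumption. The sketch below, with invented concentrations and a hypothetical permit limit, assumes an approximately lognormal analyte distribution, a common assumption for effluent data; it is not the paper's actual statistical procedure.

```python
# Hedged illustration: probability that an analyte exceeds a permit limit,
# estimated from baseline samples under a lognormal assumption.
import numpy as np
from scipy import stats

baseline = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0])  # mg/L, invented
limit = 5.0                                                     # mg/L, invented

logs = np.log(baseline)
mu, sigma = logs.mean(), logs.std(ddof=1)
p_exceed = 1 - stats.norm.cdf(np.log(limit), mu, sigma)
print(f"P(exceed {limit} mg/L) = {p_exceed:.2e}")
```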
Photoinduced energy transfer in transition metal complex oligomers
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-04-01
The work we have done over the past three years has been directed toward the preparation, characterization and photophysical examination of mono- and bimetallic diimine complexes. The work is part of a broader project directed toward the development of stable, efficient, light-harvesting arrays of transition metal complex chromophores. One focus has been the synthesis of rigid bis-bidentate and bis-tridentate bridging ligands. We have managed to make the ligand bphb in multigram quantities from inexpensive starting materials. The synthetic approach used has allowed us to prepare a variety of other ligands which may have unique applications (vide infra). We have prepared, characterized and examined the photophysical behavior of Ru(II) and Re(I) complexes of the ligands. Energy donor/acceptor complexes of bphb have been prepared which exhibit nearly activationless energy transfer. Complexes of Ru(II) and Re(I) have also been prepared with other polyunsaturated ligands in which two different long-lived (>50 ns) excited states exist; results of luminescence and transient absorbance measurements suggest the two states are metal-to-ligand charge transfer and ligand-localized π→π* triplets. Finally, we have developed methods to prepare polymetallic complexes which are covalently bound to various surfaces. The long-term objective of this work is to make light-harvesting arrays for the sensitization of large band gap semiconductors. Details of this work are provided in the body of the report.
Nguyen, Jillian; Majmudar, Ushma V; Ravaliya, Jay H; Papathomas, Thomas V; Torres, Elizabeth B
2015-01-01
Recently, movement variability has been of great interest to motor control physiologists as it constitutes a physical, quantifiable form of sensory feedback to aid in planning, updating, and executing complex actions. In marked contrast, the psychological and psychiatric arenas mainly rely on verbal descriptions and interpretations of behavior via observation. Consequently, a large gap exists between the body's manifestations of mental states and their descriptions, creating a disembodied approach in the psychological and neural sciences: contributions of the peripheral nervous system to central control, executive functions, and decision-making processes are poorly understood. How do we shift from a psychological, theorizing approach to characterize complex behaviors more objectively? We introduce a novel, objective, statistical framework, and visuomotor control paradigm to help characterize the stochastic signatures of minute fluctuations in overt movements during a visuomotor task. We also quantify a new class of covert movements that spontaneously occur without instruction. These are largely beneath awareness, but inevitably present in all behaviors. The inclusion of these motions in our analyses introduces a new paradigm in sensory-motor integration. As it turns out, these movements, often overlooked as motor noise, contain valuable information that contributes to the emergence of different kinesthetic percepts. We apply these new methods to help better understand perception-action loops. To investigate how perceptual inputs affect reach behavior, we use a depth inversion illusion (DII): the same physical stimulus produces two distinct depth percepts that are nearly orthogonal, enabling a robust comparison of competing percepts. We find that the moment-by-moment empirically estimated motor output variability can inform us of the participants' perceptual states, detecting physiologically relevant signals from the peripheral nervous system that reveal internal mental states evoked by the bi-stable illusion. Our work proposes a new statistical platform to objectively separate changes in visual perception by quantifying the unfolding of movement, emphasizing the importance of including in the motion analyses all overt and covert aspects of motor behavior.
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
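A rough sketch of the latent-category idea follows: apply latent semantic analysis (truncated SVD) to bag-of-visual-words image descriptors, then pick the latent topic that best discriminates the target class. The feature extraction and the paper's full selection strategy are omitted, and the data here are random stand-ins.

```python
# LSA over bag-of-visual-words counts, then a simple discrimination score.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(1)
bovw = rng.poisson(1.0, size=(200, 500))      # images x visual words (toy)
labels = rng.integers(0, 2, size=200)         # image-level class labels only

svd = TruncatedSVD(n_components=10, random_state=0)
topics = svd.fit_transform(bovw)              # latent "categories"

# Score each latent category by how well it separates positives from negatives.
scores = [abs(topics[labels == 1, k].mean() - topics[labels == 0, k].mean())
          for k in range(topics.shape[1])]
print("most discriminative latent category:", int(np.argmax(scores)))
```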
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
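The pixel-level half of such a scheme can be imitated on a modern machine in a few lines: the frame is split into bands that worker processes shade independently before the results are gathered. This toy, with a stand-in shading function, is only a schematic; the paper's distributed-memory message-passing design for the iPSC/860 is more involved.

```python
# Band-parallel "rendering" with worker processes and a dummy shader.
import numpy as np
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade_band(rows):
    y0, y1 = rows
    band = np.zeros((y1 - y0, WIDTH))
    for y in range(y0, y1):
        for x in range(WIDTH):
            band[y - y0, x] = (x + y) % 256   # stand-in for real shading
    return y0, band

if __name__ == "__main__":
    bands = [(i, min(i + 60, HEIGHT)) for i in range(0, HEIGHT, 60)]
    frame = np.zeros((HEIGHT, WIDTH))
    with Pool() as pool:
        for y0, band in pool.map(shade_band, bands):
            frame[y0:y0 + band.shape[0]] = band
```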
NASA Astrophysics Data System (ADS)
Kounkel, Marina; Hartmann, Lee; Loinard, Laurent; Ortiz-León, Gisela N.; Mioduszewski, Amy J.; Rodríguez, Luis F.; Dzib, Sergio A.; Torres, Rosa M.; Pech, Gerardo; Galli, Phillip A. B.; Rivera, Juana L.; Boden, Andrew F.; Evans, Neal J., II; Briceño, Cesar; Tobin, John J.
2017-01-01
We present the results of the Gould's Belt Distances Survey of young star-forming regions toward the Orion Molecular Cloud Complex. We detected 36 young stellar objects (YSOs) with the Very Long Baseline Array, 27 of which have been observed in at least three epochs over the course of two years. At least half of these YSOs belong to multiple systems. We obtained parallaxes and proper motions toward these stars to study the structure and kinematics of the Complex. We measured a distance of 388 ± 5 pc toward the Orion Nebula Cluster, 428 ± 10 pc toward the southern portion of L1641, 388 ± 10 pc toward NGC 2068, and roughly 420 pc toward NGC 2024. Finally, we observed a strong degree of plasma radio scattering toward λ Ori.
Topicality and Complexity in the Acquisition of Norwegian Object Shift
ERIC Educational Resources Information Center
Anderssen, Merete; Bentzen, Kristine; Rodina, Yulia
2012-01-01
This article investigates the acquisition of object shift in Norwegian child language. We show that object shift is complex derivationally, distributionally, and referentially, and propose a new analysis in terms of IP-internal topicalization. The results of an elicited production study with 27 monolingual Norwegian-speaking children (ages…
Translational Systems Biology and Voice Pathophysiology
Li, Nicole Y. K.; Abbott, Katherine Verdolini; Rosen, Clark; An, Gary; Hebda, Patricia A.; Vodovotz, Yoram
2011-01-01
Objectives/Hypothesis: Personalized medicine has been called upon to tailor healthcare to an individual's needs. Evidence-based medicine (EBM) has advocated using randomized clinical trials with large populations to evaluate treatment effects. However, due to large variations across patients, the results are likely not to apply to an individual patient. We suggest that a complementary, systems biology approach using computational modeling may help tackle biological complexity in order to improve ultimate patient care. The purpose of the article is: 1) to review the pros and cons of EBM, and 2) to discuss the alternative systems biology method and present its utility in clinical voice research. Study Design: Tutorial. Methods: Literature review and discussion. Results: We propose that translational systems biology can address many of the limitations of EBM pertinent to voice and other health care domains, and thus complement current health research models. In particular, recent work using mathematical modeling suggests that systems biology has the ability to quantify the highly complex biologic processes underlying voice pathophysiology. Recent data support the premise that this approach can be applied specifically in the case of phonotrauma and surgically induced vocal fold trauma, and may have particular power to address personalized medicine. Conclusions: We propose that evidence around vocal health and disease be expanded beyond a population-based method to consider more fully issues of complexity and systems interactions, especially in implementing personalized medicine in voice care and beyond. PMID:20025041
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
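The replicated-reconstruction-object idea can be shown schematically: each worker accumulates updates into a private copy of the grid, and the copies are summed once per iteration, trading memory for lock-free writes. The toy below uses Python threads and an invented update rule; Trace itself is a compiled, cluster-scale engine, so treat this only as a sketch of the data structure.

```python
# Per-worker replicas of the reconstruction grid, reduced once per iteration.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

GRID = (256, 256)
N_WORKERS = 4
replicas = [np.zeros(GRID) for _ in range(N_WORKERS)]

def backproject(worker_id, rows):
    # Each worker writes only to its private replica: no synchronization.
    for r in rows:
        replicas[worker_id][r % GRID[0], :] += 1.0   # stand-in for real updates

rows = np.array_split(np.arange(10_000), N_WORKERS)
with ThreadPoolExecutor(N_WORKERS) as ex:
    list(ex.map(backproject, range(N_WORKERS), rows))

reconstruction = np.sum(replicas, axis=0)   # one reduction per iteration
```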
[Influence of mental rotation of objects on psychophysiological functions of women].
Chikina, L V; Fedorchuk, S V; Trushina, V A; Ianchuk, P I; Makarchuk, M Iu
2012-01-01
Work with computer systems is an integral part of modern life and produces nervous-emotional tension. Monitoring the psychophysiological state of workers, with the aim of preserving health and work performance, and applying rehabilitative measures are therefore pressing problems. It is known that the efficiency of rehabilitation procedures rises when a complex of restorative programs is applied. Our previous investigation showed that mental rotation can compensate for the consequences of nervous-emotional tension. In the present work we therefore investigated how a complex of spatial tasks we developed influences the psychophysiological performance of female subjects, for whom the psycho-emotional tension associated with the use of computer technologies is more pronounced, and for whom the mental-rotation procedure is a more complex task, than for men. The complex of spatial tasks included: mental rotation of simple objects (letters and digits), mental rotation of complex objects (geometric figures), and mental rotation of complex objects involving short-term memory. Performing the complex of spatial tasks reduced the time of simple and complex sensorimotor responses, improved short-term memory and mental working capacity, and improved the balance of nervous processes. Collectively, mental rotation of objects can be recommended as a rehabilitative resource for compensating the consequences of psycho-emotional strain, for both men and women.
Lensfree Computational Microscopy Tools and their Biomedical Applications
NASA Astrophysics Data System (ADS)
Sencan, Ikbal
Conventional microscopy has been a revolutionary tool for biomedical applications since its invention several centuries ago. The ability to non-destructively observe very fine details of biological objects in real time has made it possible to answer many important questions about their structures and functions. Unfortunately, most of these advanced microscopes are complex, bulky, expensive, and/or hard to operate, so they have not reached beyond the walls of well-equipped laboratories. Recent improvements in optoelectronic components and computational methods allow the creation of imaging systems that better fulfill the specific needs of clinics or research-related biomedical applications. In this respect, lensfree computational microscopy aims to replace bulky and expensive optical components with compact and cost-effective alternatives through the use of computation, which can be particularly useful for lab-on-a-chip platforms as well as imaging applications in low-resource settings. Several high-throughput on-chip platforms have been built with this approach for applications including, but not limited to, cytometry, micro-array imaging, rare cell analysis, telemedicine, and water quality screening. The lack of optical complexity in these lensfree on-chip imaging platforms is compensated by computational techniques. These methods are used for various purposes in coherent, incoherent and fluorescent on-chip imaging platforms, e.g., improving the spatial resolution, undoing light diffraction without using lenses, localizing objects in a large volume, and retrieving the phase or the color/spectral content of objects. For instance, pixel super-resolution approaches based on source shifting are used in lensfree imaging platforms to prevent undersampling, Bayer-pattern and aliasing artifacts. Another method, iterative phase retrieval, is used to compensate for the lack of lenses by undoing the diffraction and removing the twin-image noise of in-line holograms. This technique enables recovering the complex optical field from its intensity measurement(s) by using additional constraints in the iterations, such as spatial boundaries and other known properties of the objects. Another computational tool employed in lensfree imaging is compressive sensing (or decoding), a novel method that takes advantage of the fact that natural signals/objects are mostly sparse or compressible in known bases. This inherent property of objects enables better signal recovery when the number of measurements is low, even below the Nyquist rate, and increases the additive-noise immunity of the system.
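A bare-bones version of such an iterative phase-retrieval loop is sketched below in the Gerchberg-Saxton style, alternating between the measured amplitude and a known object support. Real lensfree pipelines use angular-spectrum propagation between the sensor and object planes rather than a plain FFT, so this is a schematic of the constraint-projection idea only, on toy data.

```python
# Iterative phase retrieval with amplitude and support constraints (toy).
import numpy as np

def retrieve_phase(measured_amplitude, support, n_iter=100):
    field = measured_amplitude.astype(complex)        # start with zero phase
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)
        obj *= support                                # enforce spatial support
        field = np.fft.fft2(obj)
        field = measured_amplitude * np.exp(1j * np.angle(field))  # keep phase
    return obj

amp = np.abs(np.fft.fft2(np.random.rand(64, 64)))     # toy "measurement"
support = np.zeros((64, 64)); support[16:48, 16:48] = 1
recovered = retrieve_phase(amp, support)
```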
Near-infrared colors of minor planets recovered from VISTA-VHS survey (MOVIS)
NASA Astrophysics Data System (ADS)
Popescu, M.; Licandro, J.; Morate, D.; de León, J.; Nedelcu, D. A.; Rebolo, R.; McMahon, R. G.; Gonzalez-Solares, E.; Irwin, M.
2016-06-01
Context. The Sloan Digital Sky Survey (SDSS) and Wide-field Infrared Survey Explorer (WISE) provide information about the surface composition of about 100 000 minor planets. The resulting visible colors and albedos enabled us to group them into several major classes, which are a simplified view of the diversity shown by the few existing spectra. A large set of data in the 0.8-2.5 μm range, where wide spectral features are expected, is required to refine and complement the global picture of these small bodies of the solar system. Aims: We aim to obtain the near-infrared colors for a large sample of solar system objects using the observations made during the VISTA-VHS survey. Methods: We performed a serendipitous search in VISTA-VHS observations using a pipeline developed to retrieve and process the data that correspond to solar system objects (SSOs). The resulting photometric data are analyzed using color-color plots and by comparison with the known spectral properties of asteroids. Results: The colors and the magnitudes of the minor planets observed by the VISTA survey are compiled into three catalogs that are available online: the detections catalog (MOVIS-D), the magnitudes catalog (MOVIS-M), and the colors catalog (MOVIS-C). They were built using the third data release of the survey (VISTA VHS-DR3). A total of 39 947 objects were detected, including 52 NEAs, 325 Mars Crossers, 515 Hungaria asteroids, 38 428 main-belt asteroids, 146 Cybele asteroids, 147 Hilda asteroids, 270 Trojans, 13 comets, 12 Kuiper Belt objects and Neptune with its four satellites. The colors found for asteroids with known spectral properties reveal well-defined patterns corresponding to different mineralogies. The distributions of MOVIS-C data in color-color plots show clusters identified with different taxonomic types. All the diagrams that use (Y - J) color separate the spectral classes more effectively than the (J - H) and (H - Ks) plots used until now: even for large color errors (<0.1), the plots (Y - J) vs. (Y - Ks) and (Y - J) vs. (J - Ks) provide the separation between the S-complex and the C-complex. The end members A, D, R, and V-types occupy well-defined regions. The catalogs are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/591/A115
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naumann, Axel; /CERN; Canal, Philippe
2008-01-01
High performance computing with a large code base and C++ has proved to be a good combination. But when it comes to storing data, C++ is a problematic choice: it offers no support for serialization, type definitions are amazingly complex to parse, and the dependency analysis (what does object A need to be stored?) is incredibly difficult. Nevertheless, the LHC data consists of C++ objects that are serialized with help from ROOT's reflection database and interpreter CINT. The fact that we can do it on that scale, and the performance with which we do it, makes this approach unique and stirs interest even outside HEP. I will show how CINT collects and stores information about C++ types, what the current major challenges are (dictionary size), and what CINT and ROOT have done and plan to do about it.
Josef, Noam; Amodio, Piero; Fiorito, Graziano; Shashar, Nadav
2012-01-01
Living under intense predation pressure, octopuses evolved an effective and impressive camouflaging ability that exploits features of their surroundings to enable them to “blend in.” To achieve such background matching, an animal may use general resemblance and reproduce characteristics of its entire surroundings, or it may imitate a specific object in its immediate environment. Using image analysis algorithms, we examined correlations between octopuses and their backgrounds. Field experiments show that when camouflaging, Octopus cyanea and O. vulgaris base their body patterns on selected features of nearby objects rather than attempting to match a large field of view. Such an approach enables the octopus to camouflage in partly occluded environments and to solve the problem of differences in appearance as a function of the viewing inclination of the observer. PMID:22649542
Experience moderates overlap between object and face recognition, suggesting a common ability
Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.
2014-01-01
Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021
NASA Astrophysics Data System (ADS)
Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.
2018-04-01
The main points in the development of numerical tools for simulating the deformation and failure of complex technical objects under nonstationary, extreme loading conditions are presented. The dynamic method for constructing difference grids is shown to extend to the 3D case. A 3D realization of the discrete-continuum approach to the deformation and failure of complex technical objects is carried out, and the efficiency of the existing software package for 3D modelling is demonstrated.
Stereotyped behavior of severely disabled children in classroom and free-play settings.
Thompson, T J; Berkson, G
1985-05-01
The relationships between stereotyped behavior, object manipulation, self-manipulation, teacher attention, and various developmental measures were examined in 101 severely developmentally disabled children in their classrooms and a free-play setting. Stereotyped behavior without objects was positively correlated with self-manipulation and CA and was negatively correlated with complex object manipulation, developmental age, developmental quotient, and teacher attention. Stereotyped behavior with objects was negatively correlated with complex object manipulation. Partial correlations showed that age, self-manipulation, and developmental age shared unique variance with stereotyped behavior without objects.
Clustering analysis of line indices for LAMOST spectra with AstroStat
NASA Astrophysics Data System (ADS)
Chen, Shu-Xin; Sun, Wei-Min; Yan, Qi
2018-06-01
The application of data mining to astronomical surveys, such as the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) survey, provides an effective approach to automatically analyzing large amounts of complex survey data. Unsupervised clustering can help astronomers find associations and outliers in a big data set. In this paper, we employ the k-means method to cluster the line indices of LAMOST spectra with the powerful software AstroStat. The line-index approach is an effective way to extract spectral features from low-resolution spectra, as the indices represent the main spectral characteristics of stars. A total of 144 340 line indices for A-type stars are analyzed by calculating intra-class and inter-class distances between pairs of stars. For intra-class distance, we use the Mahalanobis distance to explore the degree of clustering of each class, while for outlier detection we define a local outlier factor for each spectrum. AstroStat furnishes a set of visualization tools for illustrating the analysis results. Checking the spectra detected as outliers, we find that most of them are problematic data and only a few correspond to rare astronomical objects. We show two examples of these outliers: a spectrum with an abnormal continuum and a spectrum with emission lines. Our work demonstrates that line-index clustering is a good method for examining data quality and identifying rare objects.
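In scikit-learn terms (not the AstroStat implementation used in the paper), the two steps can be sketched on stand-in data as follows; the array shapes, cluster count, and neighborhood size are illustrative assumptions only.

```python
# k-means clustering of line-index vectors plus a local-outlier-factor flag.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(2)
indices = rng.normal(size=(1000, 8))          # stars x line indices (toy)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(indices)
lof = LocalOutlierFactor(n_neighbors=20).fit_predict(indices)  # -1 = outlier

print("cluster sizes:", np.bincount(labels))
print("flagged outliers:", int((lof == -1).sum()))
```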
Jorm, Christine; Nisbet, Gillian; Roberts, Chris; Gordon, Christopher; Gentilcore, Stacey; Chen, Timothy F
2016-08-08
More and better interprofessional practice is predicted to be necessary to deliver good care to the patients of the future. However, universities struggle to create authentic learning activities that enable students to experience the dynamic interprofessional interactions common in healthcare and that can accommodate large interprofessional student cohorts. We investigated a large-scale mandatory interprofessional learning (IPL) activity for health professional students designed to promote social learning. A mixed methods research approach determined feasibility, acceptability and the extent to which student IPL outcomes were met. We developed an IPL activity founded in complexity theory to prepare students for future practice by engaging them in a self-directed (self-organised) learning activity with a diverse team, whose assessable products would be emergent creations. Complicated but authentic clinical cases (n = 12) were developed to challenge student teams (n = 5 or 6). Assessment consisted of a written management plan (academically marked) and a five-minute video (peer marked) designed to assess creative collaboration as well as provide evidence of integrated collective knowledge: the cohesive patient-centred management plan. All students (including the disciplines of diagnostic radiology, exercise physiology, medicine, nursing, occupational therapy, pharmacy, physiotherapy and speech pathology) completed all tasks successfully. Of the 26 % of students who completed the evaluation survey, 70 % agreed or strongly agreed that the IPL activity was worthwhile, and 87 % agreed or strongly agreed that their case study was relevant. Thematic analysis found overarching themes of engagement and collaboration-in-action, suggesting that the IPL activity enabled students to achieve the intended learning objectives. Students recognised the contribution of others and described negotiation, collaboration and creation of new collective knowledge after working together on the complicated patient case studies. The novel video assessment was challenging to many students, and contextual issues limited engagement for some disciplines. We demonstrated the feasibility and acceptability of a large-scale IPL activity whose design of cases, format and assessment tasks was founded in complexity theory. This theoretically based design enabled students to achieve complex IPL outcomes relevant to future practice. Future research could establish the psychometric properties of assessments of student performance in large-scale IPL events.
It's the Physics: Organized Complexity in the Arctic/Midlatitude Weather Controversy
NASA Astrophysics Data System (ADS)
Overland, J. E.; Francis, J. A.; Wang, M.
2017-12-01
There is intense scientific and public interest in whether major Arctic changes can and will impact mid-latitude weather. Despite numerous workshops and a growing literature, convergence of understanding is lacking, with major objections within the scientific community to claims of large impacts. Yet research on the Arctic as a potential new driver for improving subseasonal forecasting at midlatitudes remains a priority. A recent review attributed part of the controversy to shortcomings in experimental design and ill-suited metrics, such as examining the influence of only sea-ice loss rather than overall Arctic temperature amplification, and/or calculating averages over large regions, long time periods, or many ensemble members that would tend to obscure event-like Arctic connections. The present analysis places the difficulty at a deeper level: the inherently complex physics. Jet-stream dynamics and weather linkages on the scale of a week to months have the characteristics of an organized complex system, with large-scale processes that operate in patterned, quasi-geostrophic ways but whose component feedbacks are continually changing. Arctic linkages may be state dependent, i.e., relationships may be more robust in one atmospheric wave pattern than another, generating intermittency. The observational network is insufficient to fully initialize such a system, and the inherent noise obscures linkage signals, leading to an underdetermined problem; often more than one explanation can fit the data. Further, the problem may be computationally irreducible: the only way to know the result of these interactions is to trace out their path over time. Modeling is a suggested approach, but at present it is unclear whether previous model studies fully resolve the anticipated complexity. The jet stream from autumn to early winter is characterized by non-linear interactions among enhanced atmospheric planetary waves, irregular transitions between zonal and meridional flows, and the maintenance of atmospheric blocks (near-stationary large-amplitude atmospheric waves). For weather forecast improvement, though not necessarily to elucidate mechanisms of linkage, a Numerical Weather Prediction (NWP) approach is appropriate; such is the plan for the upcoming Year of Polar Prediction (YOPP).
Round, Jeff; Drake, Robyn; Kendall, Edward; Addicott, Rachael; Agelopoulos, Nicky; Jones, Louise
2015-01-01
Objectives: We report the use of difference in differences (DiD) methodology to evaluate a complex, system-wide healthcare intervention, using the worked example of evaluating the Marie Curie Delivering Choice Programme (DCP) for advanced illness in a large urban healthcare economy. Methods: DiD was selected because a randomised controlled trial was not feasible. The method allows for before-and-after comparison of changes that occur in an intervention site with a matched control site. This enables analysts to estimate the effect of the intervention in the absence of a local control. Any policy, seasonal or other confounding effects over the test period are assumed to have occurred in a balanced way at both sites. Data were obtained from primary care trusts. Outcomes were place of death, inpatient admissions, length of stay and costs. Results: Small changes were identified between pre- and post-DCP outputs in the intervention site. The proportion of home deaths and the median cost increased slightly, while the number of admissions per patient and the average length of stay per admission decreased slightly. None of these changes was statistically significant. Conclusions: Effect estimates were limited by small numbers accessing new services and by selection bias in the sample population and comparator site. In evaluating the effect of a complex healthcare intervention, the choice of analysis method and output measures is crucial. Alternatives to randomised controlled trials may be required for evaluating large-scale complex interventions, and the DiD approach is suitable, subject to careful selection of measured outputs and control population. PMID:24644163
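The core DiD calculation is a double difference of group means. The snippet below shows it with invented numbers (the study's actual outcomes were place of death, admissions, length of stay and costs): the control site's pre-post change is subtracted from the intervention site's, netting out trends shared by both sites.

```python
# Difference-in-differences point estimate from four group means (invented).
means = {
    ("intervention", "pre"): 0.42, ("intervention", "post"): 0.47,
    ("control", "pre"): 0.40,      ("control", "post"): 0.44,
}
did = ((means[("intervention", "post")] - means[("intervention", "pre")])
       - (means[("control", "post")] - means[("control", "pre")]))
print(f"DiD estimate: {did:+.3f}")   # intervention effect net of shared trends
```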
Simulating complex intracellular processes using object-oriented computational modelling.
Johnson, Colin G; Goldman, Jacki P; Gullick, William J
2004-11-01
The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.
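In this spirit, a toy object-oriented model might represent receptors and ligands as interacting objects and let system-level behaviour emerge from their encounters. The classes, binding probability, and step counts below are purely illustrative, not taken from the paper.

```python
# Toy object-oriented cell model: receptor and ligand objects interact,
# and the bound fraction emerges from repeated stochastic encounters.
import random

class Receptor:
    def __init__(self):
        self.bound = False
    def encounter(self, ligand, p_bind=0.3):
        if not self.bound and random.random() < p_bind:
            self.bound = True
            ligand.free = False

class Ligand:
    def __init__(self):
        self.free = True

receptors = [Receptor() for _ in range(100)]
ligands = [Ligand() for _ in range(60)]
for step in range(50):
    for lig in [l for l in ligands if l.free]:
        random.choice(receptors).encounter(lig)
print("bound receptors:", sum(r.bound for r in receptors))
```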
Representing and Learning Complex Object Interactions
Zhou, Yilun; Konidaris, George
2017-01-01
We present a framework for representing scenarios with complex object interactions, in which a robot cannot directly interact with the object it wishes to control, but must instead do so via intermediate objects. For example, a robot learning to drive a car can only indirectly change its pose, by rotating the steering wheel. We formalize such complex interactions as chains of Markov decision processes and show how they can be learned and used for control. We describe two systems in which a robot uses learning from demonstration to achieve indirect control: playing a computer game, and using a hot water dispenser to heat a cup of water. PMID:28593181
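A minimal data structure for such a chain, with placeholder transition functions standing in for the learned MDP models (e.g., arm, then steering wheel, then car heading), might look like the following sketch; the dynamics are invented for illustration.

```python
# A chain of MDP-like links: each link's state drives the next link.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ChainLink:
    name: str
    transition: Callable[[Any, Any], Any]   # (state, upstream signal) -> state

def step_chain(chain, states, robot_action):
    signal = robot_action
    new_states = []
    for link, state in zip(chain, states):
        state = link.transition(state, signal)
        new_states.append(state)
        signal = state          # this link's state is the next link's input
    return new_states

chain = [ChainLink("arm", lambda s, a: s + a),
         ChainLink("wheel", lambda s, a: s + 0.5 * a),
         ChainLink("car_heading", lambda s, a: s + 0.1 * a)]
print(step_chain(chain, [0.0, 0.0, 0.0], robot_action=1.0))
```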
NASA Astrophysics Data System (ADS)
Steinberg, P. D.; Bednar, J. A.; Rudiger, P.; Stevens, J. L. R.; Ball, C. E.; Christensen, S. D.; Pothina, D.
2017-12-01
The rich variety of software libraries available in the Python scientific ecosystem provides a flexible and powerful alternative to traditional integrated GIS (geographic information system) programs. Each such library focuses on doing a certain set of general-purpose tasks well, and Python makes it relatively simple to glue the libraries together to solve a wide range of complex, open-ended problems in Earth science. However, choosing an appropriate set of libraries can be challenging, and it is difficult to predict how much "glue code" will be needed for any particular combination of libraries and tasks. Here we present a set of libraries that have been designed to work well together to build interactive analyses and visualizations of large geographic datasets, in standard web browsers. The resulting workflows run on ordinary laptops even for billions of data points, and easily scale up to larger compute clusters when available. The declarative top-level interface used in these libraries means that even complex, fully interactive applications can be built and deployed as web services using only a few dozen lines of code, making it simple to create and share custom interactive applications even for datasets too large for most traditional GIS systems. The libraries we will cover include GeoViews (HoloViews extended for geographic applications) for declaring visualizable/plottable objects, Bokeh for building visual web applications from GeoViews objects, Datashader for rendering arbitrarily large datasets faithfully as fixed-size images, Param for specifying user-modifiable parameters that model your domain, Xarray for computing with n-dimensional array data, Dask for flexibly dispatching computational tasks across processors, and Numba for compiling array-based Python code down to fast machine code. We will show how to use the resulting workflow with static datasets and with simulators such as GSSHA or AdH, allowing you to deploy flexible, high-performance web-based dashboards for your GIS data or simulations without needing major investments in code development or maintenance.
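As a small taste of the declarative workflow described, the following sketch renders a large random point set with Datashader; the column names and data are stand-ins, and the GeoViews/Bokeh dashboard layers built on top of such aggregates are omitted.

```python
# Aggregate many points to a fixed-size grid, then shade it as an image.
import numpy as np
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

n = 1_000_000                                    # points, rendered server-side
df = pd.DataFrame({"lon": np.random.normal(0, 1, n),
                   "lat": np.random.normal(0, 1, n)})

canvas = ds.Canvas(plot_width=800, plot_height=400)
agg = canvas.points(df, "lon", "lat")            # per-pixel point counts
img = tf.shade(agg, how="log")                   # faithful fixed-size image
```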
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
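The Minkowski sum underlying the model is simple to state: the sum of two sets A and B is {a + b : a in A, b in B}. The sketch below composes a toy extension from two point-set "sub-objects"; the paper's sub-object extension models and measurement updates are not shown.

```python
# Minkowski sum of two planar point sets (finite approximations of shapes).
import numpy as np

def minkowski_sum(A, B):
    # {a + b : a in A, b in B}
    return (A[:, None, :] + B[None, :, :]).reshape(-1, 2)

square = np.array([[x, y] for x in (-1, 1) for y in (-1, 1)], float)
bar = np.array([[x, 0.0] for x in np.linspace(-2, 2, 9)])
composite = minkowski_sum(square, bar)
print(composite.shape)      # each sub-object contributes to the extension
```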
Mastery motivation in children with complex communication needs: longitudinal data analysis.
Medeiros, Kara F; Cress, Cynthia J; Lambert, Matthew C
2016-09-01
This study compared longitudinal changes in mastery motivation during parent-child free play for 37 children with complex communication needs. Mastery motivation manifests as a willingness to work hard at tasks that are challenging, which is an important quality to overcoming the challenges involved in successful expressive communication using AAC. Unprompted parent-child play episodes were identified in three assessment sessions over an 18-month period and coded for nine categories of mastery motivation in social and object play. All of the object-oriented mastery motivation categories and one social mastery motivation category showed an influence of motor skills after controlling for receptive language. Object play elicited significantly more of all of the object-focused mastery motivation categories than social play, and social play elicited more of one type of social-focused mastery motivation behavior than object play. Mastery motivation variables did not differ significantly over time for children. Potential physical and interpersonal influences on mastery motivation for parents and children with complex communication needs are discussed, including broadening the procedures and definitions of mastery motivation beyond object-oriented measurements for children with complex communication needs.
Complex extreme learning machine applications in terahertz pulsed signals feature sets.
Yin, X-X; Hadjiloucas, S; Zhang, Y
2014-11-01
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object as would be required within a tomographic setting and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
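The extreme learning machine core is small enough to sketch: hidden-layer weights are drawn at random (here complex-valued) and only the output weights are solved in closed form by least squares. The activation, data, and dimensions below are assumptions for illustration, not the paper's kernel choices or THz feature extraction.

```python
# Compact extreme-learning-machine sketch with complex random hidden weights.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 40)) + 1j * rng.normal(size=(300, 40))  # toy spectra
y = rng.integers(0, 2, size=300).astype(float)                    # two classes

W = rng.normal(size=(40, 100)) + 1j * rng.normal(size=(40, 100))  # fixed, random
H = np.tanh(np.abs(X @ W))                # hidden activations (one simple choice)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights

pred = (H @ beta > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```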
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M. Hope; Truex, Mike; Freshley, Mark
Complex sites are defined as those with difficult subsurface access, deep and/or thick zones of contamination, large areal extent, subsurface heterogeneities that limit the effectiveness of remediation, or where long-term remedies are needed to address contamination (e.g., because of long-term sources or large extent). The Test Area North at the Idaho National Laboratory, developed for nuclear fuel operations and heavy metal manufacturing, is used as a case study. Liquid wastes and sludge from experimental facilities were disposed in an injection well, which contaminated the subsurface aquifer located deep within fractured basalt. The wastes included organic, inorganic, and low-level radioactive constituents, with the focus of this case study on trichloroethylene. The site is used as an example of a systems-based framework that provides a structured approach to regulatory processes established for remediation under existing regulations. The framework is intended to facilitate remedy decisions and implementation at complex sites where restoration may be uncertain, require long timeframes, or involve use of adaptive management approaches. The framework facilitates site, regulator, and stakeholder interactions during the remedial planning and implementation process by using a conceptual model description as a technical foundation for decisions, identifying endpoints, which are interim remediation targets or intermediate decision points on the path to an ultimate end, and maintaining protectiveness during the remediation process. At the Test Area North, using a structured approach to implementing concepts in the endpoint framework, a three-component remedy is largely functioning as intended and is projected to meet remedial action objectives by 2095 as required. The remedy approach is being adjusted as new data become available. The framework provides a structured process for evaluating and adjusting the remediation approach, allowing site owners, regulators, and stakeholders to manage contamination at complex sites where adaptive remedies are needed.
NASA Technical Reports Server (NTRS)
Filho, Aluzio Haendehen; Caminada, Numo; Haeusler, Edward Hermann; von Staa, Arndt
2004-01-01
To support the development of flexible and reusable MAS, we have built a framework designated MAS-CF. MAS-CF is a component framework that implements a layered architecture based on contextual composition. Interaction rules, controlled by architecture mechanisms, ensure very low coupling, making it possible to share distributed services in a transparent, dynamic and independent way. These properties enable large-scale reuse, since organizational abstractions can be reused and propagated to all instances created from the framework. The objective is to reduce the complexity and development time of multi-agent systems through the reuse of generic organizational abstractions.
Long-Term Multiwavelength Studies of High-Redshift Blazar 0836+710
NASA Technical Reports Server (NTRS)
Thompson, D. J.; Akyuz, A.; Donato, D.; Perkins, J. S.; Larsson, S.; Sokolovsky, K.; Fuhrmann, L.; Kurtanidze, O.
2012-01-01
Following gamma-ray flaring activity of high-redshift (z=2.218) blazar 0836+710 in 2011, we have assembled a long-term multiwavelength study of this object. Although this source is monitored regularly by radio telescopes and the Fermi Large Area Telescope, its coverage at other wavelengths is limited. The optical flux appears generally correlated with the gamma-ray flux, while little variability has been seen at X-ray energies. The gamma-ray/radio correlation is complex compared to some other blazars. As for many blazars, the largest variability is seen at gamma-ray wavelengths.
VISIONS - Vista Star Formation Atlas
NASA Astrophysics Data System (ADS)
Meingast, Stefan; Alves, J.; Boui, H.; Ascenso, J.
2017-06-01
In this talk I will present the new ESO public survey VISIONS. Starting in early 2017, we will use the ESO VISTA survey telescope in a 550 h multi-epoch programme to map the largest molecular cloud complexes within 500 pc. The survey is optimized for measuring the proper motions of young stellar objects invisible to Gaia and for mapping the cloud structure via extinction. VISIONS will address a series of ISM topics ranging from the connection of dense cores to YSOs and the dynamical evolution of embedded clusters to variations in the reddening law on both small and large scales.
On Space Exploration and Human Error: A Paper on Reliability and Safety
NASA Technical Reports Server (NTRS)
Bell, David G.; Maluf, David A.; Gawdiak, Yuri
2005-01-01
NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk, and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. It is almost certain that space exploration will experience known or unknown risks similar to those embedded in the Apollo, Shuttle, or Station missions, unless something alters how NASA perceives and manages safety and reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ning, E-mail: coolboy006@sohu.com; Zhang, Yingying; Xie, Jun
2014-10-13
We present a method to investigate large objects by digital holography with effective spectrum multiplexing in a single-exposure approach. This method splits the original reference beam and redirects one of its branches as a second object beam. Through the modified Mach-Zehnder interferometer, the two object beams can illuminate different parts of the large object and create a spectrum-multiplexed hologram on the focal plane array of the charge-coupled device/complementary metal oxide semiconductor camera. After correct spectrum extraction and image reconstruction, the large object can be fully observed within a single snapshot. The flexibility and strong performance make our method a very attractive and promising technique for large-object investigation under common 632.8 nm illumination.
An efficient hybrid technique in RCS predictions of complex targets at high frequencies
NASA Astrophysics Data System (ADS)
Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe
2017-09-01
Most computer codes in Radar Cross Section (RCS) prediction use Physical Optics (PO) and Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but not applicable for the computation of the RCS of all surfaces of a complex object due to the presence of caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, considering the advantages and avoiding the disadvantages of each of them. A very efficient and accurate method to analyze the RCS of complex structures at high frequencies is obtained with the new combination. The proposed new method has been validated by comparing RCS results obtained for some simple cases using the proposed approach against RCS obtained with the rigorous technique of the Method of Moments (MoM). Some complex cases have been examined at high frequencies, comparing the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for the computation of the RCS of very large and complex targets at high frequencies.
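As a point of reference for the PO side of such hybrids, the textbook physical-optics result for a flat, perfectly conducting plate at normal incidence is sigma = 4*pi*A^2/lambda^2. The snippet below evaluates this classic formula as a sanity check; it is not the paper's GTD-PO hybrid.

```python
# Textbook PO RCS of a flat PEC plate at normal incidence (sanity check only).
import math

def po_flat_plate_rcs(area_m2, freq_hz):
    lam = 299_792_458.0 / freq_hz          # wavelength in meters
    return 4.0 * math.pi * area_m2**2 / lam**2

# A 1 m^2 plate at 10 GHz gives roughly 41 dBsm.
print(10.0 * math.log10(po_flat_plate_rcs(1.0, 10e9)), "dBsm")
```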
A complex ligase ribozyme evolved in vitro from a group I ribozyme domain
NASA Technical Reports Server (NTRS)
Jaeger, L.; Wright, M. C.; Joyce, G. F.; Bada, J. L. (Principal Investigator)
1999-01-01
Like most proteins, complex RNA molecules often are modular objects made up of distinct structural and functional domains. The component domains of a protein can associate in alternative combinations to form molecules with different functions. These observations raise the possibility that complex RNAs also can be assembled from preexisting structural and functional domains. To test this hypothesis, an in vitro evolution procedure was used to isolate a previously undescribed class of complex ligase ribozymes, starting from a pool of 10^16 different RNA molecules that contained a constant region derived from a large structural domain that occurs within self-splicing group I ribozymes. Attached to this constant region were three hypervariable regions, totaling 85 nucleotides, that gave rise to the catalytic motif within the evolved catalysts. The ligase ribozymes catalyze formation of a 3',5'-phosphodiester linkage between adjacent template-bound oligonucleotides, one bearing a 3' hydroxyl and the other a 5' triphosphate. Ligation occurs in the context of a Watson-Crick duplex, with a catalytic rate of 0.26 min^-1 under optimal conditions. The constant region is essential for catalytic activity and appears to retain the tertiary structure of the group I ribozyme. This work demonstrates that complex RNA molecules, like their protein counterparts, can share common structural domains while exhibiting distinct catalytic functions.
Vegetation changes associated with a population irruption by Roosevelt elk
Starns, H D; Weckerly, Floyd W.; Ricca, Mark; Duarte, Adam
2015-01-01
Interactions between large herbivores and their food supply are central to the study of population dynamics. We assessed temporal and spatial patterns in meadow plant biomass over a 23-year period for meadow complexes that were spatially linked to three distinct populations of Roosevelt elk (Cervus elaphus roosevelti) in northwestern California. Our objectives were to determine whether the plant community exhibited a tolerant or resistant response when elk population growth became irruptive. Plant biomass for the three meadow complexes inhabited by the elk populations was measured using Normalized Difference Vegetation Index (NDVI), which was derived from Landsat 5 Thematic Mapper imagery. Elk populations exhibited different patterns of growth through the time series, whereby one population underwent a complete four-stage irruptive growth pattern while the other two did not. Temporal changes in NDVI for the meadow complex used by the irruptive population suggested a decline in forage biomass during the end of the dry season and a temporal decline in spatial variation of NDVI at the peak of plant biomass in May. Conversely, no such patterns were detected in the meadow complexes inhabited by the nonirruptive populations. Our findings suggest that the meadow complex used by the irruptive elk population may have undergone changes in plant community composition favoring plants that were resistant to elk grazing.
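NDVI itself is a simple band ratio, (NIR - Red)/(NIR + Red); for Landsat 5 Thematic Mapper these are bands 4 and 3. A minimal sketch, with the input reflectance arrays assumed:

```python
# Minimal NDVI computation from Landsat 5 TM band arrays (band 3 = red, band 4 = NIR).
import numpy as np

def ndvi(red, nir):
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-9, None)  # guard against divide-by-zero
```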
A Multiscale Vision Model applied to analyze EIT images of the solar corona
NASA Astrophysics Data System (ADS)
Portier-Fozzani, F.; Vandame, B.; Bijaoui, A.; Maucherat, A. J.; EIT Team
2001-07-01
The large dynamic range provided by the SOHO/EIT CCD (1:5000) is needed to observe the wide range of EUV coronal structures, from coronal holes up to flares. Histograms show that a wide dynamic range is often present in each image. Extracting hidden structures at the background level requires specific techniques such as the Multiscale Vision Model (MVM, Bijaoui et al., 1998). This method, based on wavelet transformations, optimizes the detection of objects of various sizes, however complex they may be. Bijaoui et al. built the Multiscale Vision Model to extract small dynamical structures from noise, mainly for studying galaxies. In this paper, we describe the requirements for the use of this method with SOHO/EIT images (calibration, size of the image, dynamics of the subimage, etc.). Two different areas were studied, revealing hidden structures: (1) classical coronal mass ejection (CME) formation and (2) a complex group of active regions and its evolution. The aim of this paper is to carefully define the constraints of this new method of imaging the solar corona with SOHO/EIT. Physical analysis derived from multi-wavelength observations will later complete these first results.
Multi-view L2-SVM and its multi-view core vector machine.
Huang, Chengquan; Chung, Fu-lai; Wang, Shitong
2016-03-01
In this paper, a novel L2-SVM based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier has no bias in its objective function and hence, like ν-SVC, offers the flexibility that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version, MvCVM, which enables fast training on large-scale multi-view datasets, with asymptotic time complexity linear in the sample size and space complexity independent of the sample size. Our experimental results demonstrate the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and of the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Brian; Kaplan, Daniel I; Arai, Yuji
2016-12-29
This university-led SBR project is a collaboration led by Dr. Brian Powell (Clemson University) with co-principal investigators Dan Kaplan (Savannah River National Laboratory), Yuji Arai (presently at the University of Illinois), Udo Becker (U of Michigan) and Rod Ewing (presently at Stanford University). Hypothesis: The underlying hypothesis of this work is that strong interactions of plutonium with mineral surfaces are due to formation of inner sphere complexes with a limited number of high-energy surface sites, which results in sorption hysteresis where Pu(IV) is the predominant sorbed oxidation state. The energetic favorability of the Pu(IV) surface complex is strongly influenced by positive sorption entropies, which are mechanistically driven by displacement of solvating water molecules from the actinide and mineral surface during sorption. Objectives: The overarching objective of this work is to examine Pu(IV) and Pu(V) sorption to pure metal (oxyhydr)oxide minerals and sediments using variable temperature batch sorption, X-ray absorption spectroscopy, electron microscopy, and quantum-mechanical and empirical-potential calculations. The data will be compiled into a self-consistent surface complexation model. The novelty of this effort lies largely in the manner in which the information from these measurements and calculations will be combined into a model that will be used to evaluate the thermodynamics of plutonium sorption reactions as well as predict sorption of plutonium to sediments from DOE sites using a component additivity approach.
Conceptual Modeling in Systems Biology Fosters Empirical Findings: The mRNA Lifecycle
Dori, Dov; Choder, Mordechai
2007-01-01
One of the main obstacles to understanding complex biological systems is the extent and rapid evolution of information, far beyond the capacity of individuals to manage and comprehend. Current modeling approaches and tools lack adequate capacity to concurrently model the structure and behavior of biological systems. Here we propose Object-Process Methodology (OPM), a holistic conceptual modeling paradigm, as a means to model biological systems both diagrammatically and textually, formally and intuitively, at any desired number of levels of detail. OPM combines objects, e.g., proteins, and processes, e.g., transcription, in a way that is simple and easily comprehensible to researchers and scholars. As a case in point, we modeled the yeast mRNA lifecycle. The mRNA lifecycle involves mRNA synthesis in the nucleus, mRNA transport to the cytoplasm, and its subsequent translation and degradation therein. Recent studies have identified specific cytoplasmic foci, termed processing bodies, that contain large complexes of mRNAs and decay factors. Our OPM model of this cellular subsystem, presented here, led to the discovery of a new constituent of these complexes, the translation termination factor eRF3. Association of eRF3 with processing bodies is observed after a long-term starvation period. We suggest that OPM can eventually serve as a comprehensive evolvable model of the entire living cell system. The model would serve as a research and communication platform, highlighting unknown and uncertain aspects that can be addressed empirically and updated accordingly while maintaining consistency. PMID:17849002
Kawa, Rafał; Pisula, Ewa
2010-01-01
There have been ambiguous accounts of exploration in children with intellectual disabilities with respect to the course of that exploration, and in particular the relationship between the features of explored objects and exploratory behaviour. It is unclear whether reduced exploratory activity seen with object exploration but not with locomotor activity is autism-specific or if it is also present in children with other disabilities. The purpose of the present study was to compare preschool children with autism with their peers with Down syndrome and typical development in terms of locomotor activity and object exploration and to determine whether the complexity of explored objects affects the course of exploration activity in children with autism. In total there were 27 children in the study. The experimental room was divided into three zones equipped with experimental objects providing visual stimulation of varying levels of complexity. Our results indicate that children with autism and Down syndrome differ from children with typical development in terms of some measures of object exploration (i.e. looking at objects) and time spent in the zone with the most visually complex objects.
Krishnan, Saloni; Leech, Robert; Mercure, Evelyne; Lloyd-Fox, Sarah; Dick, Frederic
2015-01-01
In adults, patterns of neural activation associated with perhaps the most basic language skill, overt object naming, are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or do they solve this problem in a different way? Here we scanned 37 children aged 7–13 and 19 young adults who performed a well-normed picture-naming task with 3 levels of difficulty. While neural organization for naming was largely similar in childhood and adulthood, adults had greater activation in all naming conditions over inferior temporal gyri and superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults, but complexity-dependent decreases in children. These represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistics. We discuss how these neural differences might result from different cognitive strategies used by adults and children during lexical retrieval/production as well as developmental changes in brain structure and functional connectivity. PMID:24907249
Systematic analysis of barrier-forming FG hydrogels from Xenopus nuclear pore complexes
Labokha, Aksana A; Gradmann, Sabine; Frey, Steffen; Hülsmann, Bastian B; Urlaub, Henning; Baldus, Marc; Görlich, Dirk
2013-01-01
Nuclear pore complexes (NPCs) control the traffic between cell nucleus and cytoplasm. While facilitating translocation of nuclear transport receptors (NTRs) and NTR·cargo complexes, they suppress passive passage of macromolecules ⩾30 kDa. Previously, we reconstituted the NPC barrier as hydrogels comprising S. cerevisiae FG domains. We now studied FG domains from 10 Xenopus nucleoporins and found that all of them form hydrogels. Related domains with low FG motif density also substantially contribute to the NPC's hydrogel mass. We characterized all these hydrogels and observed the strictest sieving effect for the Nup98-derived hydrogel. It fully blocks entry of GFP-sized inert objects, permits facilitated entry of the small NTR NTF2, but arrests importin β-type NTRs at its surface. O-GlcNAc modification of the Nup98 FG domain prevented this arrest and allowed also large NTR·cargo complexes to enter. Solid-state NMR spectroscopy revealed that the O-GlcNAc-modified Nup98 gel lacks amyloid-like β-structures that dominate the rigid regions in the S. cerevisiae Nsp1 FG hydrogel. This suggests that FG hydrogels can assemble through different structural principles and yet acquire the same NPC-like permeability. PMID:23202855
NASA Technical Reports Server (NTRS)
Dill, Evan T.; Young, Steven D.
2015-01-01
In the constant drive to further the safety and efficiency of air travel, the complexity of avionics-related systems, and of the procedures for interacting with these systems, appears to be ever increasing. While this growing complexity often yields productive results with respect to system capabilities and flight efficiency, it can place a larger burden on pilots to manage increasing amounts of information and to understand intricate system designs. Evidence supporting this observation is becoming widespread, yet it has been largely anecdotal or the result of subjective analysis. One way to gain more insight into this issue is through experimentation using more objective measures or indicators. This study utilizes and analyzes eye-tracking data obtained during a high-fidelity flight simulation study wherein many of the complexities of current flight decks, as well as those planned for the next generation air transportation system (NextGen), were emulated. The following paper presents the findings of this study with a focus on electronic flight bag (EFB) usage, system state awareness (SSA), and events involving suspected inattentional blindness (IB).
NASA Astrophysics Data System (ADS)
Erener, A.
2013-04-01
Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide-scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, with the original data for multi-classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods, pixel-based and object-based approaches, which reflects the third aim of the study. The objective here is to demonstrate the differences in the evaluation of accuracies of classification methods. For consistency, the same set of ground truth data, produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on results. The method is applied to Quickbird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops. The complex surface type involves almost all kinds of challenges, such as highly dense built-up areas, regions with bare soil, and small and large buildings with different rooftops, such as concrete, brick, and metal. Using the pixel-based accuracy assessment it was shown that the percent building detection (PBD) and quality percent (QP) of the MLC and SVM depend on the complexity and texture variation of the region. Generally, PBD values range between 70% and 90% for the MLC and SVM, respectively. No substantial improvements were observed when the SVM and MLC classifications were developed with the addition of more variables instead of the use of only four bands. In the evaluation of object-based accuracy assessment, it was demonstrated that while MLC and SVM provide higher rates of correct detection, they also produce higher rates of false alarms.
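A minimal sketch of this kind of MLC-versus-SVM comparison is shown below, using scikit-learn. Quadratic discriminant analysis stands in for the Gaussian maximum likelihood classifier, synthetic four-band "pixels" stand in for the Quickbird data, and the gamma sweep mirrors the paper's interest in Gaussian-kernel configuration; all of these substitutions are assumptions.

```python
# MLC-versus-SVM sketch on synthetic stand-in data (see lead-in for assumptions).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

mlc = QuadraticDiscriminantAnalysis().fit(X_train, y_train)   # Gaussian MLC analogue
print("MLC accuracy:", accuracy_score(y_test, mlc.predict(X_test)))

for gamma in (0.01, 0.1, 1.0):                                # kernel-width sweep
    svm = SVC(kernel="rbf", gamma=gamma, C=10.0).fit(X_train, y_train)
    print(f"SVM (gamma={gamma}) accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```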
Komov, V T; Ivanova, E S; Poddubnaya, N Y; Gremyachikh, V A
2017-03-01
The characteristic properties of uptake and distribution of mercury in terrestrial ecosystems have received much less attention than those in aquatic ecosystems, particularly in Russia. Terrestrial ecosystems adjacent to large industrial plants, which are potential sources of mercury inflow into the environment, frequently remain unstudied. This is the first report on mercury (Hg) levels in the basic elements of terrestrial ecosystems situated close to a large metallurgical complex. Mean values of mercury concentration (mg Hg/kg dry weight) in the vicinity of the city of Cherepovets were the following: 0.056 ± 0.033 in the humus layer of soil; 0.556 ± 0.159 in earthworms; in the organs of the vole Myodes glareolus (kidneys 0.021 ± 0.001; liver 0.014 ± 0.003; muscle 0.014 ± 0.001; brain 0.008 ± 0.002); and in the organs of the shrew Sorex araneus (kidneys 0.191 ± 0.016; liver 0.124 ± 0.011; muscle 0.108 ± 0.009; brain 0.065 ± 0.000). Correlations between Hg content in soil and earthworms (r_s = 0.85, p < 0.01), as well as between soil and all studied shrew organs (r_s = 0.44-0.58; p ≤ 0.01), were found. The results obtained give evidence for a strong trophic link in the bioaccumulation of Hg in terrestrial food webs. Despite the vicinity of a large metallurgical complex, mercury content in the studied objects was significantly lower than the corresponding values in soils and biota from industrial (polluted) areas of Great Britain, the USA, and China.
Deciding Which Way to Go: How Do Insects Alter Movements to Negotiate Barriers?
Ritzmann, Roy E.; Harley, Cynthia M.; Daltorio, Kathryn A.; Tietz, Brian R.; Pollack, Alan J.; Bender, John A.; Guo, Peiyuan; Horomanski, Audra L.; Kathman, Nicholas D.; Nieuwoudt, Claudia; Brown, Amy E.; Quinn, Roger D.
2012-01-01
Animals must routinely deal with barriers as they move through their natural environment. These challenges require directed changes in leg movements and posture performed in the context of ever changing internal and external conditions. In particular, cockroaches use a combination of tactile and visual information to evaluate objects in their path in order to effectively guide their movements in complex terrain. When encountering a large block, the insect uses its antennae to evaluate the object’s height then rears upward accordingly before climbing. A shelf presents a choice between climbing and tunneling that depends on how the antennae strike the shelf; tapping from above yields climbing, while tapping from below causes tunneling. However, ambient light conditions detected by the ocelli can bias that decision. Similarly, in a T-maze turning is determined by antennal contact but influenced by visual cues. These multi-sensory behaviors led us to look at the central complex as a center for sensori-motor integration within the insect brain. Visual and antennal tactile cues are processed within the central complex and, in tethered preparations, several central complex units changed firing rates in tandem with or prior to altered step frequency or turning, while stimulation through the implanted electrodes evoked these same behavioral changes. To further test for a central complex role in these decisions, we examined behavioral effects of brain lesions. Electrolytic lesions in restricted regions of the central complex generated site specific behavioral deficits. Similar changes were also found in reversible effects of procaine injections in the brain. Finally, we are examining these kinds of decisions made in a large arena that more closely matches the conditions under which cockroaches forage. Overall, our studies suggest that CC circuits may indeed influence the descending commands associated with navigational decisions, thereby making them more context dependent. PMID:22783160
Recent progress in 3-D imaging of sea freight containers
NASA Astrophysics Data System (ADS)
Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf
2015-03-01
The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours. This is, of course, too slow to apply to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution, depending on the number of projections.
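The iterative methods alluded to here typically alternate between forward-projecting the current volume estimate and redistributing the residual back into the volume. A minimal SIRT-style update is sketched below; the dense system matrix A and measured projections b are assumptions standing in for a real projector, and the nonnegativity clamp is one simple prior that helps in the sparse-view regime.

```python
# Minimal SIRT/Landweber-style iterative reconstruction sketch.
import numpy as np

def sirt(A, b, n_iter=100, relax=1.0):
    """A: (n_rays, n_voxels) system matrix; b: (n_rays,) measured projections."""
    x = np.zeros(A.shape[1])
    row_sum, col_sum = A.sum(axis=1), A.sum(axis=0)
    R = np.where(row_sum > 0, 1.0 / row_sum, 0.0)   # per-ray normalization
    C = np.where(col_sum > 0, 1.0 / col_sum, 0.0)   # per-voxel normalization
    for _ in range(n_iter):
        x += relax * C * (A.T @ (R * (b - A @ x)))  # simultaneous update over all rays
        np.clip(x, 0.0, None, out=x)                # nonnegativity prior for few-view data
    return x
```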
Directionally Interacting Spheres and Rods Form Ordered Phases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyan; Mahynski, Nathan A.; Gang, Oleg
The structures formed by mixtures of dissimilarly shaped nanoscale objects can significantly enhance our ability to produce nanoscale architectures. However, understanding their formation is a complex problem due to the interplay of geometric effects (entropy) and energetic interactions at the nanoscale. Spheres and rods are perhaps the most basic geometrical shapes and serve as convenient models of such dissimilar objects. The ordered phases formed by each of these individual shapes have already been explored, but, when mixed, spheres and rods have demonstrated only limited structural organization to date. We show using experiments and theory that the introduction of directional attractions between rod ends and isotropically interacting spherical nanoparticles (NPs) through DNA base pairing leads to the formation of ordered three-dimensional lattices. The spheres and rods arrange themselves in a complex alternating manner, where the spheres can form either a face-centered cubic (FCC) or hexagonal close-packed (HCP) lattice, or a disordered phase, as observed by in situ X-ray scattering. Increasing NP diameter at fixed rod length yields an initial transition from a disordered phase to the HCP crystal, energetically stabilized by rod-rod attraction across alternating crystal layers, as revealed by theory. In the limit of large NPs, the FCC structure is instead stabilized over the HCP by rod entropy. Thus, we propose that directionally specific attractions in mixtures of anisotropic and isotropic objects offer insight into unexplored self-assembly behavior of noncomplementary shaped particles.
A Simple, Low-Cost Conductive Composite Material for 3D Printing of Electronic Sensors
Leigh, Simon J.; Bradley, Robert J.; Purssell, Christopher P.; Billson, Duncan R.; Hutchins, David A.
2012-01-01
3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant that a wider user base is now able to have access to desktop manufacturing platforms, enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes. PMID:23185319
Mobility Lab to Assess Balance and Gait with Synchronized Body-worn Sensors
Mancini, Martina; King, Laurie; Salarian, Arash; Holmstrom, Lars; McNames, James; Horak, Fay B
2014-01-01
This paper is a commentary to introduce how rehabilitation professionals can use a new, body-worn sensor system to obtain objective measures of balance and gait. Current assessments of balance and gait in clinical rehabilitation are largely limited to subjective scales, simple stop-watch measures, or complex, expensive machines that are neither practical nor widely available. Although accelerometers and gyroscopes have been shown to accurately quantify many aspects of gait and balance kinematics, only recently has a comprehensive, portable system become available for clinicians. By measuring body motion during tests that clinicians are already performing, such as the Timed Up and Go test (TUG) and the Clinical Test of Sensory Integration for Balance (CTSIB), the additional time for assessment is minimal. By providing instant analysis of balance and gait and comparing a patient's performance to age-matched control values, therapists receive an objective, sensitive screening profile of balance and gait strategies. This motion screening profile can be used to identify mild abnormalities not obvious with traditional clinical testing, measure small changes due to rehabilitation, and design customized rehabilitation programs for each individual's specific balance and gait deficits. PMID:24955286
NASA Astrophysics Data System (ADS)
Savorskiy, V.; Lupyan, E.; Balashov, I.; Burtsev, M.; Proshin, A.; Tolpin, V.; Ermakov, D.; Chernushich, A.; Panova, O.; Kuznetsov, O.; Vasilyev, V.
2014-04-01
Both the development and application of remote sensing involve a considerable expenditure of material and intellectual resources. It is therefore important to use high-tech means of distributing remote sensing data and processing results in order to facilitate access for as many researchers as possible. This should be accompanied by the creation of capabilities for potentially more thorough and comprehensive, i.e. ultimately deeper, acquisition and complex analysis of information about the state of Earth's natural resources. An objective need for a higher degree of Earth observation (EO) data assimilation is also set by the conditions of satellite observation, in which the observed objects are in an uncontrolled state. Progress in addressing this problem is determined to a large extent by how the distributed EO information system (IS) functions. Namely, it depends largely on reducing the cost of communication processes (data transfer) between spatially distributed IS nodes and data users. One of the most effective ways to improve the efficiency of data exchange processes is the creation of an integrated EO IS optimized for running distributed data processing procedures. An effective EO IS implementation should be based on a specific software architecture.
Classification of foods by transferring knowledge from ImageNet dataset
NASA Astrophysics Data System (ADS)
Heravi, Elnaz J.; Aghdam, Hamed H.; Puig, Domenec
2017-03-01
Automatic classification of foods is a way to control food intake and tackle obesity. However, it is a challenging problem since foods are highly deformable and complex objects. Results on the ImageNet dataset have revealed that Convolutional Neural Networks (ConvNets) have great expressive power for modeling natural objects. Nonetheless, it is not trivial to train a ConvNet from scratch for classification of foods. This is due to the fact that ConvNets require large datasets and, to our knowledge, there is no large public dataset of food for this purpose. An alternative solution is to transfer knowledge from trained ConvNets to the domain of foods. In this work, we study how transferable state-of-the-art ConvNets are to the task of food classification. We also propose a method for transferring knowledge from a bigger ConvNet to a smaller ConvNet while keeping its accuracy similar to that of the bigger ConvNet. Our experiments on the UECFood256 dataset show that GoogLeNet, VGG and residual networks produce comparable results if we start transferring knowledge from an appropriate layer. In addition, we show that our method is able to effectively transfer knowledge to the smaller ConvNet using unlabeled samples.
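In practice, this kind of knowledge transfer usually starts from an ImageNet-pretrained backbone whose classification head is replaced and retrained on the target domain. A minimal PyTorch sketch follows, assuming UECFood256's 256 classes; the ResNet-18 backbone, freezing policy, and optimizer settings are illustrative assumptions, not the paper's setup.

```python
# Transfer-learning sketch: ImageNet backbone, new food-classification head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False              # freeze the ImageNet feature extractor
model.fc = nn.Linear(model.fc.in_features, 256)  # new head for 256 food classes

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# The usual training loop over food images would follow here.
```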
Wu, Yan; Zheng, Xin; Xu, Xue-Gang; Li, Yuan-Hong; Wang, Bin; Gao, Xing-Hua; Chen, Hong-Duo; Yatskayer, Margarita; Oresajo, Christian
2013-04-01
The objective of the study was to investigate whether a topical antioxidant complex containing vitamins C and E and ferulic acid can protect against solar-simulated ultraviolet irradiation (ssUVR)-induced acute photodamage in human skin. Twelve healthy female Chinese subjects were enrolled in this study. Four unexposed sites on dorsal skin were marked for the experiment. The products containing the antioxidant complex and the vehicle were applied onto 2 sites, respectively, for 4 consecutive days. On day 4, the antioxidant complex-treated site, the vehicle-treated site, and the untreated site (positive control) received ssUVR (5 times the minimal erythema dose). The fourth site (negative control) received neither ssUVR nor treatment. Digital photographs were taken, and skin color was measured pre- and postirradiation. Skin biopsies were obtained 24 hours after exposure to ssUVR for hematoxylin and eosin and immunohistochemical staining. A single dose of ssUVR at 5 times the minimal erythema dose substantially induced sunburn cell formation, thymine dimer formation, overexpression of p53 protein, and depletion of CD1a+ Langerhans cells. The antioxidant complex containing vitamins C and E and ferulic acid conferred significant protection against these biological events compared with the other irradiated sites. A topical antioxidant complex containing vitamins C and E and ferulic acid has potential photoprotective effects against ssUVR-induced acute photodamage in human skin.
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
Line segment extraction for large scale unorganized point clouds
NASA Astrophysics Data System (ADS)
Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan
2015-04-01
Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
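The geometric core of plane-intersection line extraction reduces to a small linear-algebra step: the line direction is the cross product of the two plane normals, and a point on the line satisfies both plane equations. A minimal sketch, assuming planes already fitted and given in Hessian form n·x = d (the LSHP fitting itself is beyond this snippet):

```python
# Intersection line of two planes n1·x = d1 and n2·x = d2.
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        return None  # planes are (nearly) parallel: no unique intersection line
    # Solve the two plane equations plus the gauge constraint direction·x = 0,
    # which selects the point on the line nearest the origin.
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction / np.linalg.norm(direction)
```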
Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Reddy, C. J.
2011-01-01
This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.
Robie, Alice A.; Straw, Andrew D.; Dickinson, Michael H.
2010-01-01
Walking fruit flies, Drosophila melanogaster, use visual information to orient towards salient objects in their environment, presumably as a search strategy for finding food, shelter or other resources. Less is known, however, about the role of vision or other sensory modalities such as mechanoreception in the evaluation of objects once they have been reached. To study the role of vision and mechanoreception in exploration behavior, we developed a large arena in which we could track individual fruit flies as they walked through either simple or more topologically complex landscapes. When exploring a simple, flat environment lacking three-dimensional objects, flies used visual cues from the distant background to stabilize their walking trajectories. When exploring an arena containing an array of cones, differing in geometry, flies actively oriented towards, climbed onto, and explored the objects, spending most of their time on the tallest, steepest object. A fly's behavioral response to the geometry of an object depended upon the intrinsic properties of each object and not a relative assessment to other nearby objects. Furthermore, the preference was not due to a greater attraction towards tall, steep objects, but rather a change in locomotor behavior once a fly reached and explored the surface. Specifically, flies are much more likely to stop walking for long periods when they are perched on tall, steep objects. Both the vision system and the antennal chordotonal organs (Johnston's organs) provide sufficient information about the geometry of an object to elicit the observed change in locomotor behavior. Only when both these sensory systems were impaired did flies not show the behavioral preference for the tall, steep objects. PMID:20581279
Identification challenges for large space structures
NASA Technical Reports Server (NTRS)
Pappa, Richard S.
1990-01-01
The paper examines the on-orbit modal identification of large space structures, stressing the importance of planning and experience, in preparation for the Space Station Structural Characterization Experiment (SSSCE) for the Space Station Freedom. The necessary information to foresee and overcome practical difficulties is considered in connection with seven key factors, including test objectives, dynamic complexity of the structure, data quality, extent of exploratory studies, availability and understanding of software tools, experience with similar problems, and pretest analytical conditions. These factors affect identification success in ground tests. Comparisons with similar ground tests of assembled systems are discussed, showing that the constraints of space tests make these factors more significant. The absence of data and experiences relating to on-orbit modal identification testing is shown to make identification a uniquely mathematical problem, although all spacecraft are constructed and verified by proven engineering methods.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Visual quality and precise localization of the tracked target are highly desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise. The reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, cropping several frames or all frames. The second step tracks the result of the super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which copes with conditions in which the color of the object varies. The computational complexity and large memory requirements needed for the implementation of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and good lighting conditions.
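A minimal sketch of the Camshift stage is given below, following the standard OpenCV recipe of back-projecting a hue histogram of the target region and letting cv2.CamShift re-center the search window each frame; applying it to super-resolved frames rather than raw ones is where the proposed pipeline would differ. Function and variable names are illustrative.

```python
# Camshift tracking sketch using OpenCV's standard hue-histogram recipe.
import cv2
import numpy as np

def make_hue_model(roi_bgr):
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])  # hue histogram of target ROI
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track(frame_bgr, hist, window):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_box, window = cv2.CamShift(backproj, window, crit)
    return rot_box, window  # rotated bounding box + updated search window
```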
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to characterize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control as the simulation runs. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
Nela, Luca; Tang, Jianshi; Cao, Qing; Tulevski, George; Han, Shu-Jen
2018-03-14
Artificial "electronic skin" is of great interest for mimicking the functionality of human skin, such as tactile pressure sensing. Several important performance metrics include mechanical flexibility, operation voltage, sensitivity, and accuracy, as well as response speed. In this Letter, we demonstrate a large-area high-performance flexible pressure sensor built on an active matrix of 16 × 16 carbon nanotube thin-film transistors (CNT TFTs). Made from highly purified solution tubes, the active matrix exhibits superior flexible TFT performance with high mobility and large current density, along with a high device yield of nearly 99% over 4 inch sample area. The fully integrated flexible pressure sensor operates within a small voltage range of 3 V and shows superb performance featuring high spatial resolution of 4 mm, faster response than human skin (<30 ms), and excellent accuracy in sensing complex objects on both flat and curved surfaces. This work may pave the road for future integration of high-performance electronic skin in smart robotics and prosthetic solutions.
Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.
Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M
2015-08-01
Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.
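The modeling comparison described here, fitting outcome models with and without complexity variables and comparing c-statistics, can be sketched as follows; the synthetic data, covariates, and coefficients below are placeholders for the clinical dataset, not the study's actual variables.

```python
# Sketch: logistic mortality models with/without a complexity variable (work RVUs),
# compared by c-statistic (ROC AUC). All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3467
comorbid = rng.normal(size=(n, 5))          # stand-in patient comorbidity covariates
rvu = rng.gamma(2.0, 10.0, size=n)          # stand-in work RVUs
logit = -3 + comorbid[:, 0] + 0.02 * rvu
died = rng.random(n) < 1 / (1 + np.exp(-logit))

base = LogisticRegression().fit(comorbid, died)
full = LogisticRegression().fit(np.column_stack([comorbid, rvu]), died)
print("c-statistic, standard model:",
      roc_auc_score(died, base.predict_proba(comorbid)[:, 1]))
print("c-statistic, RVU model:",
      roc_auc_score(died, full.predict_proba(np.column_stack([comorbid, rvu]))[:, 1]))
```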
Knowledge-based simulation using object-oriented programming
NASA Technical Reports Server (NTRS)
Sidoran, Karen M.
1993-01-01
Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
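The object-oriented formulation described here is easy to make concrete: each simulation entity bundles factual knowledge (attributes) with behavioral knowledge (event-handling rules), and a scheduler dispatches events in time order. The sketch below is a generic illustration of that pattern, not RASE's actual design or API.

```python
# Generic object-oriented discrete-event simulation sketch.
import heapq

class Entity:
    def __init__(self, name, speed):
        self.name, self.speed = name, speed        # factual knowledge (attributes)

    def handle(self, event, sim):                  # behavioral knowledge (rules)
        if event == "detected":
            sim.schedule(sim.now + 10 / self.speed, self, "evade")

class Simulation:
    def __init__(self):
        self.now, self.queue = 0.0, []

    def schedule(self, t, entity, event):
        heapq.heappush(self.queue, (t, id(entity), entity, event))

    def run(self):
        while self.queue:
            self.now, _, entity, event = heapq.heappop(self.queue)
            print(f"t={self.now:.1f}: {entity.name} -> {event}")
            entity.handle(event, self)

sim = Simulation()
sim.schedule(0.0, Entity("interceptor", speed=2.0), "detected")
sim.run()
```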
Interactive High-Relief Reconstruction for Organic and Double-Sided Objects from a Photo.
Yeh, Chih-Kuo; Huang, Shi-Yang; Jayaraman, Pradeep Kumar; Fu, Chi-Wing; Lee, Tong-Yee
2017-07-01
We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. Particularly, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where front and back sides of some curvy object parts are revealed simultaneously in the image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with humans, animals, flowers, etc.
Rupp, Kyle; Roos, Matthew; Milsap, Griffin; Caceres, Carlos; Ratto, Christopher; Chevillet, Mark; Crone, Nathan E; Wolmetz, Michael
2017-03-01
Non-invasive neuroimaging studies have shown that semantic category and attribute information are encoded in neural population activity. Electrocorticography (ECoG) offers several advantages over non-invasive approaches, but the degree to which semantic attribute information is encoded in ECoG responses is not known. We recorded ECoG while patients named objects from 12 semantic categories and then trained high-dimensional encoding models to map semantic attributes to spectral-temporal features of the task-related neural responses. Using these semantic attribute encoding models, untrained objects were decoded with accuracies comparable to whole-brain functional Magnetic Resonance Imaging (fMRI), and we observed that high-gamma activity (70-110 Hz) at basal occipitotemporal electrodes was associated with specific semantic dimensions (manmade-animate, canonically large-small, and places-tools). Individual patient results were in close agreement with reports from other imaging modalities on the time course and functional organization of semantic processing along the ventral visual pathway during object recognition. The semantic attribute encoding model approach is critical for decoding objects absent from a training set, as well as for studying complex semantic encodings without artificially restricting stimuli to a small number of semantic categories.
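As a rough illustration of the encoding-model approach described above, the sketch below fits a ridge regression from semantic attribute vectors to synthetic stand-in neural features and decodes held-out objects by pattern matching; the array sizes, the choice of ridge regression, and all names are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_objects, n_attr, n_feat = 60, 20, 300           # illustrative sizes
A = rng.normal(size=(n_objects, n_attr))          # semantic attribute vectors
W = rng.normal(size=(n_attr, n_feat))             # unknown true mapping
X = A @ W + 0.5 * rng.normal(size=(n_objects, n_feat))  # "neural" features

def zero_shot_decode(test_idx):
    """Train on all but test_idx, then identify each held-out object by
    correlating its observed pattern with the model's predicted patterns."""
    train = np.setdiff1d(np.arange(n_objects), test_idx)
    model = Ridge(alpha=1.0).fit(A[train], X[train])   # encoding model
    preds = model.predict(A[test_idx])                 # predicted patterns
    hits = 0
    for i, obs in enumerate(X[test_idx]):
        r = [np.corrcoef(obs, p)[0, 1] for p in preds]
        hits += int(np.argmax(r) == i)
    return hits / len(test_idx)

print("identification accuracy:", zero_shot_decode(np.array([0, 1, 2, 3, 4])))
```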
NASA Astrophysics Data System (ADS)
Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnerer, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team
2018-01-01
The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio structures, from simple to more complex, giving rise to a variety of classification schemes. The question which still remains open, given deeper surveys revealing new populations of radio sources, is whether this plethora of radio structures can be attributed to the physical properties of the host or to the environment. Here we present an analysis of the radio structure of radio-selected AGN from the VLA-COSMOS Large Project at 3 GHz (JVLA-COSMOS; Smolčić et al.) in relation to: 1) their linear projected size, 2) the Eddington ratio, and 3) the environment their hosts lie within. We classify these as FRI (jet-like) and FRII (lobe-like) based on the FR-type classification scheme, and compare them to a sample of jet-less radio AGN in JVLA-COSMOS. We measure their linear projected sizes using a semi-automatic machine learning technique. Their Eddington ratios are calculated from X-ray data available for COSMOS. As environmental probes we take the X-ray groups (hundreds of kpc) and the density fields (~Mpc-scale) in COSMOS. We find that FRII radio sources are on average larger than FRIs, which agrees with the literature. But contrary to past studies, we find no dichotomy in FR objects in JVLA-COSMOS given their Eddington ratios, as on average they exhibit similar values. Furthermore, our results show that the large-scale environment does not explain the observed dichotomy in lobe- and jet-like FR-type objects, as both types are found in similar environments, but it does affect the shape of the radio structure, introducing bends for objects closer to the centre of an X-ray group.
Environmental hazards and stress: evidence from the Texas City Stress and Health Study
Peek, MK; Cutchin, MP; Freeman, D; Stowe, RP; Goodwin, JS
2013-01-01
Background Substantial research has suggested that exposure to environmental health hazards, such as polluting industrial activity, has deleterious effects on psychological and physiological well-being. However, one gap in the existing literature is a comparative analysis of the relative associations of objective and subjective exposure with measurable health outcomes. Methods These relationships were explored within a community sample of 2604 respondents living near a large petrochemical complex in Texas City, Texas, USA. Objective exposure was investigated using distance of residence from a cluster of petrochemical plants, and subjective exposure using residents' concern about potential health effects from those plants. Regression models were then used to examine how each type of exposure predicts perceived stress, physiological markers of stress and perceived health. Results Results suggest that objective exposure was associated primarily with markers of physiological stress (interleukin-6 and viral reactivation), whereas subjective exposure (concern about petrochemical health risk) was associated with variables assessing perceived health. Conclusions From the analysis, it can be inferred that, in the context of an environmental hazard of this type, subjective exposure may be at least as important a predictor of poor health outcomes as objective exposure. PMID:19282316
Study of Pressure Oscillations in Supersonic Parachute
NASA Astrophysics Data System (ADS)
Dahal, Nimesh; Fukiba, Katsuyoshi; Mizuta, Kazuki; Maru, Yusuke
2018-04-01
Supersonic parachutes are a critical element of planetary missions; their simple structure, light weight and high aerodynamic drag make them the most suitable aerodynamic decelerators. The use of a parachute in supersonic flow produces complex shock/shock and wake/shock interactions, giving rise to dynamic pressure oscillations. Studying supersonic parachutes is difficult because their highly flexible structure makes experimental pressure data hard to obtain. In this study, a supersonic wind tunnel test using two rigid bodies was conducted. The test was carried out at Mach number 3 while varying the distance between the front and rear objects and the position of a bundle point which divides the suspension lines and a riser. Analysis of Schlieren movies revealed a repetitive shock wave oscillation with large pressure variation. The pressure variation differed with each change in distance between the front and rear objects and between the riser and the rear object. The causes of the pressure oscillation are: interaction of the wake caused by the front object with the shock wave, fundamental harmonic vibration of the suspension lines, interference between shock waves, and the boundary layer of the suspension lines.
A resource oriented web service for environmental modeling
NASA Astrophysics Data System (ADS)
Ferencik, Ioan
2013-04-01
Environmental modeling is a widely adopted practice in the study of natural phenomena. Environmental models can be difficult to build and use, so sharing them within the community is an important aspect. The most common approach to sharing a model is to expose it as a web service. In practice the interaction with this web service is cumbersome due to the lack of a standardized contract and the complexity of the model being exposed. In this work we investigate the use of a resource oriented approach to exposing environmental models as web services. We view a model as a layered resource built atop the object concept from Object Oriented Programming, augmented with persistence capabilities provided by an embedded object database to keep track of its state, and implementing the four basic principles of resource oriented architectures: addressability, statelessness, representation and uniform interface. For implementation we use exclusively open source software: the Django framework, the dyBase object oriented database and the Python programming language. We developed a generic framework of resources structured into a hierarchy of types and then extended this typology with resources specific to the domain of environmental modeling. To test our web service we used cURL, a robust command-line web client.
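A minimal sketch of such a resource-oriented model service is shown below, using Flask for brevity rather than the authors' Django stack; the routes and field names are invented. Each model instance gets its own URI (addressability), every request carries all needed state (statelessness), and JSON serves as the representation.

```python
# Minimal sketch of a resource-oriented model service (Flask used here for
# brevity; the paper used Django). Routes and fields are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)
MODELS = {}                  # stands in for the embedded object database
NEXT_ID = {"value": 1}

@app.post("/models")         # addressable: every model gets its own URI
def create_model():
    mid = str(NEXT_ID["value"]); NEXT_ID["value"] += 1
    MODELS[mid] = {"id": mid, "params": request.get_json(silent=True) or {}}
    return jsonify(MODELS[mid]), 201

@app.get("/models/<mid>")    # uniform interface; each request is stateless
def get_model(mid):
    model = MODELS.get(mid)
    return (jsonify(model), 200) if model else ("not found", 404)

if __name__ == "__main__":
    app.run()
```

Such a service can then be exercised from the command line in the spirit of the cURL testing mentioned above, e.g. `curl http://localhost:5000/models/1`.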
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.
2010-05-01
Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to attention on the management of evacuations under such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in the case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modeling solutions are obtained by sequentially solving two sub-models corresponding to lower and upper bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.
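The two-sub-model solution procedure can be illustrated on a toy interval linear program: solving once with each extreme of the interval cost coefficients brackets the optimal objective value. All numbers below are invented for illustration and bear no relation to the case study.

```python
# Toy illustration of an interval-parameter model solved as two sub-models:
# cost coefficients lie in intervals [c_lo, c_hi]; one solve per extreme
# brackets the optimal evacuation cost. All data are invented.
from scipy.optimize import linprog

c_lo, c_hi = [2.0, 3.0], [4.0, 5.0]   # interval cost per vehicle, routes 1 and 2
A_ub = [[-1.0, -1.0]]                  # x1 + x2 >= 100 evacuee-loads, rewritten
b_ub = [-100.0]                        # as -x1 - x2 <= -100 for linprog
bounds = [(0, 80), (0, 80)]            # vehicles available per route

f_lower = linprog(c_lo, A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # lower-bound sub-model
f_upper = linprog(c_hi, A_ub=A_ub, b_ub=b_ub, bounds=bounds)  # upper-bound sub-model
print(f"optimal cost interval: [{f_lower.fun:.1f}, {f_upper.fun:.1f}]")
```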
Controlling uncertainty: a review of human behavior in complex dynamic environments.
Osman, Magda
2010-01-01
Complex dynamic control (CDC) tasks are a type of problem-solving environment used for examining many cognitive activities (e.g., attention, control, decision making, hypothesis testing, implicit learning, memory, monitoring, planning, and problem solving). Because of their popularity, there have been many findings from diverse domains of research (economics, engineering, ergonomics, human-computer interaction, management, psychology), but they remain largely disconnected from each other. The objective of this article is to review theoretical developments and empirical work on CDC tasks, and to introduce a novel framework (monitoring and control framework) as a tool for integrating theory and findings. The main thesis of the monitoring and control framework is that CDC tasks are characteristically uncertain environments, and subjective judgments of uncertainty guide the way in which monitoring and control behaviors attempt to reduce it. The article concludes by discussing new insights into continuing debates and future directions for research on CDC tasks.
Analysis of The Surface Radiative Budget Using ATLAS Data for San Juan, Puerto Rico
NASA Technical Reports Server (NTRS)
Luvall, Jeffrey C.; Rickman, D. L.; Gonzalez, J.; Comarazamy, Daniel; Picon, Ana
2007-01-01
The additional heating of the air over the city is the result of the replacement of naturally vegetated surfaces with those composed of asphalt, concrete, rooftops and other man-made materials. The temperatures of these artificial surfaces can be 20 to 40 C higher than vegetated surfaces. This produces a dome of air temperatures 5 to 8 C greater over the city than over adjacent rural areas. Urban landscapes are a complex mixture of vegetated and nonvegetated surfaces. It is difficult to take enough temperature measurements over a large city to characterize the complexity of urban radiant surface temperature variability. The NASA Airborne Thermal and Land Applications Sensor (ATLAS), which operates in the visible and IR bands, was used in February 2004 to collect data from San Juan, Puerto Rico, with the main objective of investigating the Urban Heat Island (UHI) in tropical cities.
NASA's Space Launch System (SLS) Program: Mars Program Utilization
NASA Technical Reports Server (NTRS)
May, Todd A.; Creech, Stephen D.
2012-01-01
NASA's Space Launch System is being designed for safe, affordable, and sustainable human and scientific exploration missions beyond Earth's orbit (BEO), as directed by the NASA Authorization Act of 2010 and NASA's 2011 Strategic Plan. This paper describes how the SLS can dramatically change the Mars program's science and human exploration capabilities and objectives. Specifically, through its high-velocity change (delta V) and payload capabilities, SLS enables Mars science missions of unprecedented size and scope. By providing direct trajectories to Mars, SLS eliminates the need for complicated gravity-assist missions around other bodies in the solar system, reducing mission time, complexity, and cost. SLS's large payload capacity also allows for larger, more capable spacecraft or landers with more instruments, which can eliminate the need for complex packaging or "folding" mechanisms. By offering this capability, SLS can enable more science to be done more quickly than would be possible through other delivery mechanisms using longer mission times.
Directed Evolution as a Powerful Synthetic Biology Tool
Cobb, Ryan E.; Sun, Ning; Zhao, Huimin
2012-01-01
At the heart of synthetic biology lies the goal of rationally engineering a complete biological system to achieve a specific objective, such as bioremediation and synthesis of a valuable drug, chemical, or biofuel molecule. However, the inherent complexity of natural biological systems has heretofore precluded generalized application of this approach. Directed evolution, a process which mimics Darwinian selection on a laboratory scale, has allowed significant strides to be made in the field of synthetic biology by allowing rapid identification of desired properties from large libraries of variants. Improvement in biocatalyst activity and stability, engineering of biosynthetic pathways, tuning of functional regulatory systems and logic circuits, and development of desired complex phenotypes in industrial host organisms have all been achieved by way of directed evolution. Here, we review recent contributions of directed evolution to synthetic biology at the protein, pathway, network, and whole cell levels. PMID:22465795
Xu, Shen; Rogers, Toby; Fairweather, Elliot; Glenn, Anthony; Curran, James; Curcin, Vasa
2018-01-01
Data provenance is a technique that describes the history of digital objects. In health data settings, it can be used to deliver auditability and transparency, and to achieve trust in a software system. However, implementing data provenance in analytics software at an enterprise level presents a different set of challenges from the research environments where data provenance was originally devised. In this paper, the challenges of reporting provenance information to the user are presented. Provenance captured from analytics software can be large and complex, and visualizing a series of tasks over a long period can be overwhelming even for a domain expert, requiring visual aggregation mechanisms that fit the complex human cognitive activities involved in the process. This research studied how provenance-based reporting can be integrated into health data analytics software, using the example of the Atmolytics visual reporting tool. PMID:29888084
Label propagation algorithm for community detection based on node importance and label influence
NASA Astrophysics Data System (ADS)
Zhang, Xian-Kun; Ren, Jing; Song, Chen; Jia, Jia; Zhang, Qian
2017-09-01
Recently, the detection of high-quality communities has become a hot spot in social network research. The label propagation algorithm (LPA) has attracted wide attention because it has linear time complexity and requires neither a predefined objective function nor the number of communities in advance. However, LPA suffers from uncertainty and randomness in the label propagation process, which affects the accuracy and stability of the detected communities. For large-scale social networks, this paper proposes a novel label propagation algorithm for community detection based on node importance and label influence (LPA_NI). Experiments with comparative algorithms on real-world and synthetic networks have shown that LPA_NI can significantly improve the quality of community detection and shorten the iteration period. It also has better accuracy and stability at similar complexity.
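The abstract does not give LPA_NI's exact importance and influence formulas, so the sketch below shows plain label propagation with nodes updated in descending degree order as a crude stand-in for node importance; it is illustrative, not the published algorithm.

```python
import networkx as nx
from collections import Counter

def label_propagation(G, max_iter=20):
    labels = {v: v for v in G}                      # each node starts in its own community
    order = sorted(G, key=G.degree, reverse=True)   # degree as a stand-in for importance
    for _ in range(max_iter):
        changed = False
        for v in order:
            if not G[v]:
                continue
            counts = Counter(labels[u] for u in G[v])
            best = max(counts, key=counts.get)      # adopt the dominant neighbor label
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:                             # converged: no label moved
            break
    return labels

G = nx.karate_club_graph()
labels = label_propagation(G)
print("communities found:", len(set(labels.values())))
```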
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kounkel, Marina; Hartmann, Lee; Loinard, Laurent
We present the results of the Gould’s Belt Distances Survey of young star-forming regions toward the Orion Molecular Cloud Complex. We detected 36 young stellar objects (YSOs) with the Very Long Baseline Array, 27 of which have been observed in at least three epochs over the course of two years. At least half of these YSOs belong to multiple systems. We obtained parallaxes and proper motions toward these stars to study the structure and kinematics of the Complex. We measured a distance of 388 ± 5 pc toward the Orion Nebula Cluster, 428 ± 10 pc toward the southern portion of L1641, 388 ± 10 pc toward NGC 2068, and roughly ∼420 pc toward NGC 2024. Finally, we observed a strong degree of plasma radio scattering toward λ Ori.
BioStar models of clinical and genomic data for biomedical data warehouse design
Wang, Liangjiang; Ramanathan, Murali
2008-01-01
Biomedical research is now generating large amounts of data, ranging from clinical test results to microarray gene expression profiles. The scale and complexity of these datasets give rise to substantial challenges in data management and analysis. It is highly desirable that data warehousing and online analytical processing technologies can be applied to biomedical data integration and mining. The major difficulty probably lies in the task of capturing and modelling diverse biological objects and their complex relationships. This paper describes multidimensional data modelling for biomedical data warehouse design. Since the conventional models such as star schema appear to be insufficient for modelling clinical and genomic data, we develop a new model called BioStar schema. The new model can capture the rich semantics of biomedical data and provide greater extensibility for the fast evolution of biological research methodologies. PMID:18048122
Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M A; Bertram, M; Porumbescu, S
2001-10-03
Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that these surfaces or solids be viewable at interactive rates--yet many of these surfaces/solids can contain hundreds of millions of polygons/polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology, and can be used to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation and visualization.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
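In a standard consensus form this alternating scheme can be written compactly; the notation below (component objectives phi_k, multipliers lambda_k, penalty rho) is ours and follows the usual ADMM-style presentation, which is consistent with, but not necessarily identical to, the authors' formulation.

```latex
% Decomposition in consensus form: minimize sum_k phi_k(m_k) subject to
% m_k = m for every data subset k. Augmented Lagrangian and updates:
\begin{align*}
\mathcal{L}_\rho(\{m_k\}, m, \{\lambda_k\})
  &= \sum_{k} \Big[ \phi_k(m_k) + \lambda_k^{\top}(m_k - m)
     + \tfrac{\rho}{2}\,\lVert m_k - m\rVert^2 \Big] \\
m_k^{(t+1)} &= \arg\min_{m_k}\; \phi_k(m_k)
     + \lambda_k^{(t)\top} m_k + \tfrac{\rho}{2}\,\lVert m_k - m^{(t)}\rVert^2 \\
m^{(t+1)} &= \frac{1}{K}\sum_{k} \Big( m_k^{(t+1)} + \tfrac{1}{\rho}\,\lambda_k^{(t)} \Big),
\qquad
\lambda_k^{(t+1)} = \lambda_k^{(t)} + \rho\,\big(m_k^{(t+1)} - m^{(t+1)}\big)
\end{align*}
```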
Online Sentence Reading in People With Aphasia: Evidence From Eye Tracking.
Knilans, Jessica; DeDe, Gayle
2015-11-01
There is a lot of evidence that people with aphasia have more difficulty understanding structurally complex sentences (e.g., object clefts) than simpler sentences (subject clefts). However, subject clefts also occur more frequently in English than object clefts. Thus, it is possible that both structural complexity and frequency affect how people with aphasia understand these structures. Nine people with aphasia and 8 age-matched controls participated in the study. The stimuli consisted of 24 object cleft and 24 subject cleft sentences. The task was eye tracking during reading, which permits a more fine-grained analysis of reading performance than measures such as self-paced reading. As expected, controls had longer reading times for critical regions in object cleft sentences compared with subject cleft sentences. People with aphasia showed the predicted effects of structural frequency. Effects of structural complexity in people with aphasia did not emerge on their first pass through the sentence but were observed when they were rereading critical regions of complex sentences. People with aphasia are sensitive to both structural complexity and structural frequency when reading. However, people with aphasia may use different reading strategies than controls when confronted with relatively infrequent and complex sentence structures.
Popularity and user diversity of online objects
NASA Astrophysics Data System (ADS)
Wang, Jia-Hua; Guo, Qiang; Yang, Kai; Zhang, Yi-Lu; Han, Jingti; Liu, Jian-Guo
2016-11-01
Popularity has been widely used to describe the properties of objects in online user-object bipartite networks, regardless of user characteristics. In this paper, we introduce a measure, user diversity, which uses information entropy to quantify the diversity of the users who select or rate a given type of object. We empirically calculate the user diversity of objects of specific degree for both the MovieLens and Diggs data sets. The results indicate that more types of users select normal-degree objects than select large-degree and small-degree objects. Furthermore, small-degree objects are usually selected by large-degree users, while large-degree objects are usually selected by small-degree users. Moreover, we define the 15% of objects with smallest degrees as unpopular objects and the 10% with largest degrees as popular objects. Timestamps are introduced to help further analyze the evolution of user diversity for popular and unpopular objects. The dynamic analysis shows that as objects gradually become popular, they are more likely to be accepted by small-degree users but lose attention among large-degree users.
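A minimal sketch of the user-diversity measure as described: the Shannon entropy of the degree classes of the users who selected an object. The degree binning below is an assumption for illustration; the paper's exact class scheme is not given in the abstract.

```python
import numpy as np

def user_diversity(user_degrees, bins=(1, 10, 100, 1000)):
    """Shannon entropy of the degree classes of an object's selectors.
    The binning is an illustrative assumption, not the paper's scheme."""
    classes = np.digitize(user_degrees, bins)      # assign each user a degree class
    _, counts = np.unique(classes, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# object selected by a diverse mix of small- and large-degree users
print(user_diversity([2, 3, 15, 40, 250, 800]))    # higher entropy
# object selected almost exclusively by small-degree users
print(user_diversity([1, 2, 2, 3, 4, 5]))          # entropy of 0
```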
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
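The following stripped-down SciPy example illustrates the kind of analysis PeTTSy automates, namely a temporary parameter perturbation of an oscillator and its effect on the trajectory. It is plain Python rather than PeTTSy's MATLAB interface, and the model, perturbation window, and bump size are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu_base, bump, t_on, t_off):
    # temporary perturbation: mu is raised only during [t_on, t_off]
    mu = mu_base + (bump if t_on <= t <= t_off else 0.0)
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

t_eval = np.linspace(0, 40, 2000)
base = solve_ivp(van_der_pol, (0, 40), [2.0, 0.0], t_eval=t_eval,
                 args=(1.0, 0.0, 10.0, 15.0), rtol=1e-8)   # unperturbed run
pert = solve_ivp(van_der_pol, (0, 40), [2.0, 0.0], t_eval=t_eval,
                 args=(1.0, 0.5, 10.0, 15.0), rtol=1e-8)   # bump mu for 5 time units

# crude sensitivity summary: largest deviation of the perturbed trajectory
print("max |delta x|:", np.max(np.abs(pert.y[0] - base.y[0])))
```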
Photoinduced energy transfer in transition metal complex oligomers
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-06-01
The work done over the past three years has been directed toward the preparation, characterization and photophysical examination of mono- and bimetallic diimine complexes. The work is part of a broader project directed toward the development of stable, efficient, light harvesting arrays of transition metal complex chromophores. One focus has been the synthesis of rigid bis-bidentate and bis-tridentate bridging ligands. The authors have managed to make the ligand bphb in multigram quantities from inexpensive starting materials. The synthetic approach used has allowed them to prepare a variety of other ligands which may have unique applications (vide infra). They have prepared, characterized and examined the photophysical behavior of Ru(II) and Re(I) complexes of the ligands. Energy donor/acceptor complexes of bphb have been prepared which exhibit nearly activationless energy transfer. Complexes of Ru(II) and Re(I) have also been prepared with other polyunsaturated ligands in which two different long lived (> 50 ns) excited states exist; results of luminescence and transient absorbance measurements suggest the two states are metal-to-ligand charge transfer and ligand localized π→π* triplets. Finally, the authors have developed methods to prepare polymetallic complexes which are covalently bound to various surfaces. The long term objective of this work is to make light harvesting arrays for the sensitization of large band gap semiconductors. Details of this work are provided in the body of the report.
ERIC Educational Resources Information Center
Liesefeld, Heinrich René; Fu, Xiaolan; Zimmer, Hubert D.
2015-01-01
A major debate in the mental-rotation literature concerns the question of whether objects are represented holistically during rotation. Effects of object complexity on rotational speed are considered strong evidence against such holistic representations. In Experiment 1, such an effect of object complexity was markedly present. A closer look on…
NASA Astrophysics Data System (ADS)
Collmar, M.; Cook, B. G.; Cowart, R.; Freund, D.; Gavin, J.
2015-10-01
A pool of 240 subjects was exposed to a library of waveforms consisting of example signatures of low boom aircraft. The signature library included intentional variations in both loudness and spectral content, and the signatures were auralized using the Gulfstream SASS-II sonic boom simulator. Post-processing was used to quantify the impacts of test design decisions on the "quality" of the resultant database. Specific lessons learned from this study include insight regarding the potential for bias error due to variations in loudness or peak over-pressure, the sources of uncertainty and their relative importance for objective measurements, and the robustness of individual metrics to wide variations in spectral content. The results provide clear guidance for the design of future large scale community surveys, where one must optimize the complex tradeoffs between the size of the surveyed population, the spatial footprint of those participants, and the fidelity/density of objective measurements.
Wang, Jing; Cherkassky, Vladimir L; Yang, Ying; Chang, Kai-Min Kevin; Vargas, Robert; Diana, Nicholas; Just, Marcel Adam
2016-01-01
The generativity and complexity of human thought stem in large part from the ability to represent relations among concepts and form propositions. The current study reveals how a given object such as rabbit is neurally encoded differently and identifiably depending on whether it is an agent ("the rabbit punches the monkey") or a patient ("the monkey punches the rabbit"). Machine-learning classifiers were trained on functional magnetic resonance imaging (fMRI) data evoked by a set of short videos that conveyed agent-verb-patient propositions. When tested on a held-out video, the classifiers were able to reliably identify the thematic role of an object from its associated fMRI activation pattern. Moreover, when trained on one subset of the study participants, classifiers reliably identified the thematic roles in the data of a left-out participant (mean accuracy = .66), indicating that the neural representations of thematic roles were common across individuals.
Reduction of Subjective and Objective System Complexity
NASA Technical Reports Server (NTRS)
Watson, Michael D.
2015-01-01
Occam's razor is often used in science to define the minimum criteria to establish a physical or philosophical idea or relationship. Albert Einstein is attributed the saying "everything should be made as simple as possible, but not simpler". These heuristic ideas are based on a belief that there is a minimum state or set of states for a given system or phenomena. In looking at system complexity, these heuristics point us to an idea that complexity can be reduced to a minimum. How, then, do we approach a reduction in complexity? Complexity has been described as a subjective concept and an objective measure of a system. Subjective complexity is based on human cognitive comprehension of the functions and interrelationships of a system. Subjective complexity is defined by the ability to fully comprehend the system. Simplifying complexity, in a subjective sense, is thus gaining a deeper understanding of the system. As Apple's Jonathan Ive has stated, "It's not just minimalism or the absence of clutter. It involves digging through the depth of complexity. To be truly simple, you have to go really deep". Simplicity is not the absence of complexity but a deeper understanding of complexity. Subjective complexity, based on this human comprehension, cannot then be distinguished from the sociological concept of ignorance. The inability to comprehend a system can be either a lack of knowledge, an inability to understand the intricacies of a system, or both. Reduction in this sense is based purely on a cognitive ability to understand the system, and no system then may be truly complex. From this view, education and experience seem to be the keys to reducing or eliminating complexity. Objective complexity is the measure of the system's functions and interrelationships which exist independent of human comprehension. Jonathan Ive's statement does not say that complexity is removed, only that the complexity is understood. From this standpoint, reduction of complexity can be approached by finding the optimal or 'best balance' of the system functions and interrelationships. This is achievable following von Bertalanffy's approach of describing systems as a set of equations representing both the system functions and the system interrelationships. Reduction is found based on an objective function defining the system output given variations in the system inputs and the system operating environment. By minimizing the objective function with respect to these inputs and environments, a reduced system can be found. Thus, a reduction of the system complexity is feasible.
Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries
2013-04-01
Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
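A rough sketch of the geometric first stage might look as follows: an edge map, polygon approximation, and a filter for tall, door-like quadrilaterals. The thresholds are illustrative guesses and this is not the authors' tuned edge-and-corner algorithm.

```python
import cv2

def find_door_candidates(image_path):
    """Rough sketch: edge map -> contours -> door-like quadrilaterals.
    All thresholds are illustrative, not the authors' tuned values."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    doors = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) != 4:                        # doors project to quadrilaterals
            continue
        x, y, w, h = cv2.boundingRect(approx)
        if h > 2 * w and h > 0.3 * gray.shape[0]:   # tall shape filling much of the frame
            doors.append((x, y, w, h))
    return doors
```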
A modeling process to understand complex system architectures
NASA Astrophysics Data System (ADS)
Robinson, Santiago Balestrini
2009-12-01
In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, often characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the agent-based model), DiMA is able to identify which of any two architectures is better more than 98% of the time. The second objective led to Hypothesis B, the second of the main hypotheses. This hypothesis stated that by studying the functional relations, the most critical entities composing the architecture could be identified. The critical entities are those that, when their behavior varies slightly, the behavior of the overall architecture varies greatly. These are the entities that must be modeled more carefully and where modeling effort should be expended. This hypothesis was tested by simplifying agent-based models to the non-trivial minimum, and executing a large number of different simulations in order to obtain statistically significant results. The tests were conducted by evolving the complex model without any error induced, and then evolving the model once again for each ranking and assigning error to any of the nodes with a probability inversely proportional to the ranking. The results from this hypothesis test indicate that, depending on the structural characteristics of the functional relations, it is useful to use one of two of the intelligent rankings tested, or it is best to expend effort equally amongst all the entities. Random ranking always performed worse than uniform ranking, indicating that if modeling effort is to be prioritized amongst the entities composing the large-scale system architecture, it should be prioritized intelligently. The benefit threshold between intelligent prioritization and no prioritization lies on the large-scale system's chaotic boundary.
If the large-scale system behaves chaotically, small variations in any of the entities tends to have a great impact on the behavior of the entire system. Therefore, even low ranking entities can still affect the behavior of the model greatly, and error should not be concentrated in any one entity. It was discovered that the threshold can be identified from studying the structure of the networks, in particular the cyclicity, the Off-diagonal Complexity, and the Digraph Algebraic Connectivity. (Abstract shortened by UMI.)
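A toy version of the random-functional-network idea described above: sample digraphs whose edge probabilities depend on entity-type pairs, and score an architecture by how often a functional cycle exists. The entity types and probabilities below are invented for illustration and are not DiMA's actual inputs.

```python
import numpy as np
import networkx as nx

def capability_score(types, p_edge, trials=2000, seed=0):
    """Fraction of sampled functional networks containing a cycle.
    p_edge[(a, b)] is the probability a type-a entity serves a type-b one;
    all numbers here are invented for illustration."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        G = nx.DiGraph()
        G.add_nodes_from(range(len(types)))
        for i, a in enumerate(types):
            for j, b in enumerate(types):
                if i != j and rng.random() < p_edge.get((a, b), 0.0):
                    G.add_edge(i, j)
        hits += int(not nx.is_directed_acyclic_graph(G))  # some functional cycle exists
    return hits / trials

# capability as a cycle of functions: sensor -> c2 -> shooter -> sensor
p = {("sensor", "c2"): 0.6, ("c2", "shooter"): 0.5, ("shooter", "sensor"): 0.4}
arch_a = ["sensor", "sensor", "c2", "shooter"]
arch_b = ["sensor", "c2", "c2", "shooter"]
print(capability_score(arch_a, p), capability_score(arch_b, p))
```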
SYSTEMATIC PROCEDURE FOR DESIGNING PROCESSES WITH MULTIPLE ENVIRONMENTAL OBJECTIVES
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems, due to the complex nature of the problems, the need for complex assessments, and complicated ...
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods, such as time-series analysis, decision trees, and various supervised approaches, have been proposed for land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high resolution data, due to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively high performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% overall accuracy), but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfying accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
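The comparison protocol (same data, several classifiers, accuracy and wall-clock time) can be sketched as below, with synthetic data standing in for the stacked multisource imagery and scikit-learn's QDA standing in for the Gaussian Maximum Likelihood classifier; all sizes and hyperparameters are illustrative.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in for stacked multiseasonal bands / indices / topography
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "ML (Gaussian, ~QDA)": QuadraticDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", C=10.0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in classifiers.items():
    t0 = time.perf_counter()
    acc = clf.fit(Xtr, ytr).score(Xte, yte)    # accuracy vs. wall-clock tradeoff
    print(f"{name:22s} accuracy={acc:.3f}  time={time.perf_counter() - t0:.2f}s")
```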
Numerical propulsion system simulation
NASA Technical Reports Server (NTRS)
Lytle, John K.; Remaklus, David A.; Nichols, Lester D.
1990-01-01
The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept described here will provide that framework.
Hansen, Matthew; O’Brien, Kerth; Meckler, Garth; Chang, Anna Marie; Guise, Jeanne-Marie
2016-01-01
Mixed methods research has significant potential to broaden the scope of emergency care and specifically emergency medical services investigation. Mixed methods studies involve the coordinated use of qualitative and quantitative research approaches to gain a fuller understanding of practice. By combining what is learnt from multiple methods, these approaches can help to characterise complex healthcare systems, identify the mechanisms of complex problems such as medical errors and understand aspects of human interaction such as communication, behaviour and team performance. Mixed methods approaches may be particularly useful for out-of-hospital care researchers because care is provided in complex systems where equipment, interpersonal interactions, societal norms, environment and other factors influence patient outcomes. The overall objectives of this paper are to (1) introduce the fundamental concepts and approaches of mixed methods research and (2) describe the interrelation and complementary features of the quantitative and qualitative components of mixed methods studies using specific examples from the Children’s Safety Initiative-Emergency Medical Services (CSI-EMS), a large National Institutes of Health-funded research project conducted in the USA. PMID:26949970
NASA/University Joint Venture in Space Science (JOVE)
NASA Technical Reports Server (NTRS)
Gottesman, Stephen T.
1997-01-01
This system has an immense complex of optical knots that extend several galactic diameters to the north and south of the main optical object. These are star forming regions, some of which are the size of small irregular galaxies. It has a nearby companion called the 'seashell' owing to its disturbed appearance. The data had been reduced and images formed; a figure is attached. The high resolution observations show that the atomic hydrogen (HI) encompasses not only the N-S complex of optical knots but forms an incomplete ring or tail that extends approximately 3 arcmin to the west. The seashell was not detected, and the HI associated with NGC 5291 itself shows a very large velocity range. The formation mechanism for this disturbed and distorted complex is unclear. X-ray emission suggesting ram sweeping is also observed. This author favors an explanation involving an interaction between the two components, NGC 5291 and the seashell. We are witnessing the formation of tidal tails and bridges between the galaxies and the associated ejecta. Ram sweeping occurs as the system moves bodily through the medium of the cluster of galaxies, Abell 3574, to which NGC 5291 et al. belong. There are numerous concentrations of HI, mostly along the N-S star forming complexes, which generally coincide with the optical knots; the larger features contain several ×10^9 solar masses, again the mass of a small irregular galaxy. Each knot was compared to a set of criteria designed to test whether the feature was stable against its own internal kinetic energy, and stable against the tidal forces of the host galaxy. At least one of the objects (Knot B) appears to be a bound system, suggesting that it is a genuinely young dwarf irregular galaxy that has evolved from the material associated with this interacting complex. We conclude that we are witnessing the early evolution of young galaxies and that NGC 5291 and the seashell are a nursery.
Halas, Nancy J.; Nordlander, Peter; Neumann, Oara
2017-01-17
A system including a steam generation system and a chamber. The steam generation system includes a complex and the steam generation system is configured to receive water, concentrate electromagnetic (EM) radiation received from an EM radiation source, apply the EM radiation to the complex, where the complex absorbs the EM radiation to generate heat, and transform, using the heat generated by the complex, the water to steam. The chamber is configured to receive the steam and an object, wherein the object is of medical waste, medical equipment, fabric, and fecal matter.
Optimal reentry prediction of space objects from LEO using RSM and GA
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
2012-07-01
The accurate estimation of the orbital lifetime (OLT) of decaying near-Earth objects is of considerable importance for the prediction of risk object re-entry time and hazard assessment, as well as for mitigation strategies. Recently, due to the re-entries of a large number of risk objects, which pose a threat to human life and property, great concern has developed in the space scientific community all over the world. The evolution of objects in Low Earth Orbit (LEO) is determined by a complex interplay of perturbing forces, mainly atmospheric drag and Earth gravity. These orbits are mostly of low eccentricity (eccentricity < 0.2) and have variations in perigee and apogee altitudes due to perturbations during a revolution. The changes in the perigee and apogee altitudes of these orbits are mainly due to the gravitational perturbations of the Earth and the atmospheric density. It has become necessary to use extremely complex force models to match present operational requirements and observational techniques. Furthermore, the re-entry time of objects in such orbits is sensitive to the initial conditions. In this paper the problem of predicting re-entry time is treated as an optimal estimation problem. It is known that errors are larger in eccentricity for observations based on two-line elements (TLEs). Thus two parameters, initial eccentricity and ballistic coefficient, are chosen for optimal estimation. These two parameters are computed with the response surface method (RSM) using a genetic algorithm (GA) for selected time zones, based on the roughly linear variation of the response parameter, the mean semi-major axis, during orbit evolution. Error minimization between the observed and predicted mean semi-major axis is achieved by the application of an optimization algorithm such as a genetic algorithm. The basic feature of the present approach is that model and measurement errors are accounted for by adjusting the ballistic coefficient and eccentricity. The methodology is tested with the recently re-entered ROSAT and PHOBOS-GRUNT satellites. The study reveals good agreement with the actual re-entry times of these objects. It is also observed that the absolute percentage error in re-entry prediction time for both objects is very small. Keywords: low eccentricity, response surface method, genetic algorithm, apogee altitude, ballistic coefficient
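As a toy illustration of the estimation idea, the sketch below uses a genetic algorithm to recover an eccentricity and ballistic coefficient by matching a deliberately crude semi-major-axis decay model to noisy observations; the decay model, parameter bounds, and GA settings are all invented, and the RSM stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
t_obs = np.linspace(0, 30, 31)                     # days since epoch

def sma_model(e0, bc, t):
    # deliberately crude decay model standing in for full orbit propagation:
    # larger ballistic coefficient and eccentricity -> faster decay (invented)
    return 6778.0 - (40.0 * bc + 200.0 * e0) * (1 - np.exp(-t / 20.0))

truth = sma_model(0.05, 0.8, t_obs) + rng.normal(0, 0.5, t_obs.size)  # "observed" SMA

def fitness(pop):
    # mean-squared error between observed and predicted mean semi-major axis
    return np.array([np.mean((sma_model(e, b, t_obs) - truth) ** 2)
                     for e, b in pop])

pop = np.column_stack([rng.uniform(0.0, 0.2, 50),   # eccentricity guesses
                       rng.uniform(0.1, 2.0, 50)])  # ballistic coefficient guesses
for _ in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:10]]               # truncation selection
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.01, (40, 2))
    pop = np.vstack([parents, children])            # next generation

e_best, b_best = pop[np.argmin(fitness(pop))]
print(f"estimated e0={e_best:.3f}, BC={b_best:.2f}")
```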
Formation and Recondensation of Complex Organic Molecules during Protostellar Luminosity Outbursts
NASA Astrophysics Data System (ADS)
Taquet, Vianney; Wirström, Eva S.; Charnley, Steven B.
2016-04-01
During the formation of stars, the accretion of surrounding material toward the central object is thought to undergo strong luminosity outbursts followed by long periods of relative quiescence, even at the early stages of star formation when the protostar is still embedded in a large envelope. We investigated the gas-phase formation and recondensation of the complex organic molecules (COMs) di-methyl ether and methyl formate, induced by sudden ice evaporation processes occurring during luminosity outbursts of different amplitudes in protostellar envelopes. For this purpose, we updated a gas-phase chemical network forming COMs in which ammonia plays a key role. The model calculations presented here demonstrate that ion-molecule reactions alone could account for the observed presence of di-methyl ether and methyl formate in a large fraction of protostellar cores without recourse to grain-surface chemistry, although they depend on uncertain ice abundances and gas-phase reaction branching ratios. In spite of the short outburst timescales of about 100 years, abundance ratios of the considered species higher than 10% with respect to methanol are predicted during outbursts due to their low binding energies relative to water and methanol which delay their recondensation during cooling. Although the current luminosity of most embedded protostars would be too low to produce complex organics in the hot-core regions that are observable with current sub-millimetric interferometers, previous luminosity outburst events would induce the formation of COMs in extended regions of protostellar envelopes with sizes increasing by up to one order of magnitude.
Large eddy simulation of flows in industrial compressors: a path from 2015 to 2035
Gourdain, N.; Sicot, F.; Duchaine, F.; Gicquel, L.
2014-01-01
A better understanding of turbulent unsteady flows is a necessary step towards a breakthrough in the design of modern compressors. Owing to high Reynolds numbers and very complex geometry, the flow that develops in such industrial machines is extremely hard to predict. At this time, the most popular method to simulate these flows is still based on a Reynolds-averaged Navier–Stokes approach. However, there is some evidence that this formalism is not accurate for these components, especially when a description of time-dependent turbulent flows is desired. With the increase in computing power, large eddy simulation (LES) emerges as a promising technique to improve both knowledge of complex physics and reliability of flow solver predictions. The objective of the paper is thus to give an overview of the current status of LES for industrial compressor flows as well as to propose future research axes regarding the use of LES for compressor design. While the use of wall-resolved LES for industrial multistage compressors at realistic Reynolds number should not be ready before 2035, some possibilities exist to reduce the cost of LES, such as wall modelling and the adaptation of the phase-lag condition. This paper also points out the necessity of combining LES with techniques able to tackle complex geometries. Indeed LES alone, i.e. without prior knowledge of such flows for grid construction or the prohibitive yet ideal use of fully homogeneous meshes to predict compressor flows, is quite limited today. PMID:25024422
Project management for complex ground-based instruments: MEGARA plan
NASA Astrophysics Data System (ADS)
García-Vargas, María. Luisa; Pérez-Calpena, Ana; Gil de Paz, Armando; Gallego, Jesús; Carrasco, Esperanza; Cedazo, Raquel; Iglesias, Jorge
2014-08-01
The project management of complex instruments for ground-based large telescopes is a challenge in itself. Good management is key to project success in terms of performance, schedule and budget. Being on time has become a strict requirement for two reasons: to assure timely arrival at the telescope, given the pressure of demand for new instrumentation at these first world-class telescopes, and to avoid cost overruns. The budget and cash flow are not always as expected and have to be properly handled across the different administrative departments of funding centers distributed worldwide. The complexity of the organizations, the technological and scientific return to the Consortium partners and the participation in the project of all kinds of professional centers working in astronomical instrumentation (universities, research centers, small and large private companies, workshops and providers, etc.) make the project management strategy, and the tools and procedures tuned to the project needs, crucial for success. MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) is a facility instrument of the 10.4m GTC (La Palma, Spain) working at optical wavelengths that provides both Integral-Field Unit (IFU) and Multi-Object Spectrograph (MOS) capabilities at resolutions in the range R=6,000-20,000. The project is an initiative led by Universidad Complutense de Madrid (Spain) in collaboration with INAOE (Mexico), IAA-CSIC (Spain) and Universidad Politécnica de Madrid (Spain). MEGARA is being developed under contract with GRANTECAN.
Current audiological diagnostics
Hoth, Sebastian; Baljić, Izet
2017-01-01
Today's audiological functional diagnostics is based on a variety of hearing tests, whose large number reflects the variety of malfunctions of a complex sensory organ system and the necessity of examining it in a differentiated manner and at any age of life. The objective is to identify the nature and origin of the hearing loss and to quantify its extent as far as necessary to obtain the information needed to initiate the adequate medical (conservative or operative) treatment or the provision of technical hearing aids or prostheses. Moreover, audiometry provides the basis for the assessment of impairment and handicap as well as for the calculation of the degree of disability. In the present overview, the current state of the method inventory available for practical use is described, from basic diagnostics through to complex special techniques. The presentation is systematically grouped into subjective procedures, based on psychoacoustic exploration, and objective methods, based on physical measurements: preliminary hearing tests, pure tone threshold, suprathreshold processing of sound intensity, directional hearing, speech understanding in quiet and in noise, dichotic hearing, tympanogram, acoustic reflex, otoacoustic emissions and auditory evoked potentials. Apart from a few still existing gaps, this method inventory covers the whole spectrum of clinically relevant functional deficits of the auditory system. PMID:29279727
NASA Astrophysics Data System (ADS)
Simmons, Daniel; Cools, Kristof; Sewell, Phillip
2016-11-01
Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.
α7nAchR/NMDAR coupling affects NMDAR function and object recognition.
Li, Shupeng; Nai, Qiang; Lipina, Tatiana V; Roder, John C; Liu, Fang
2013-12-20
The α7 nicotinic acetylcholine receptor (nAchR) and NMDA glutamate receptor (NMDAR) are both ligand-gated ion channels permeable to Ca2+ and Na+. Previous studies have demonstrated functional modulation of NMDARs by nAchRs, although the molecular mechanism remains largely unknown. We have previously reported that α7nAchR forms a protein complex with the NMDAR through a protein-protein interaction. We also developed an interfering peptide that is able to disrupt the α7nAchR-NMDAR complex and blocks cue-induced reinstatement of nicotine-seeking in rat models of relapse. In the present study, we investigated whether the α7nAchR-NMDAR interaction is responsible for the functional modulation of NMDAR by α7nAchR using both electrophysiological and behavioral tests. We have found that activation of α7nAchR upregulates NMDAR-mediated whole cell currents and LTP of mEPSC in cultured hippocampal neurons, which can be abolished by the interfering peptide that disrupts the α7nAchR-NMDAR interaction. Moreover, administration of the interfering peptide in mice impairs novel object recognition but not Morris water maze performance. Our results suggest that α7nAchR/NMDAR coupling may selectively affect some aspects of learning and memory.
Tschentscher, Nadja; Hauk, Olaf
2015-01-01
Mental arithmetic is a powerful paradigm to study problem solving using neuroimaging methods. However, the evaluation of task complexity varies significantly across neuroimaging studies. Most studies have parameterized task complexity by objective features such as the number size. Only a few studies used subjective rating procedures. In fMRI, we provided evidence that strategy self-reports control better for task complexity across arithmetic conditions than objective features (Tschentscher and Hauk, 2014). Here, we analyzed the relative predictive value of self-reported strategies and objective features for performance in addition and multiplication tasks, by using a paradigm designed for neuroimaging research. We found a superiority of strategy ratings as predictor of performance above objective features. In a Principal Component Analysis on reaction times, the first component explained over 90 percent of variance and factor loadings reflected percentages of self-reported strategies well. In multiple regression analyses on reaction times, self-reported strategies performed equally well or better than objective features, depending on the operation type. A Receiver Operating Characteristic (ROC) analysis confirmed this result. Reaction times classified task complexity better when defined by individual ratings. This suggests that participants’ strategy ratings are reliable predictors of arithmetic complexity and should be taken into account in neuroimaging research. PMID:26321997
Statistical Field Estimation for Complex Coastal Regions and Archipelagos (PREPRINT)
2011-04-09
We extend a multiscale Objective Analysis (OA) approach to complex coastal regions and archipelagos and study the computational properties of these schemes. The multiscale free-surface code builds on the primitive-equation model of the Harvard Ocean Prediction System (HOPS, Haley et al. (2009)).
Unrewarded Object Combinations in Captive Parrots
Auersperg, Alice Marie Isabel; Oswald, Natalie; Domanegg, Markus; Gajdon, Gyula Koppany; Bugnyar, Thomas
2015-01-01
In primates, complex object combinations during play are often regarded as precursors of functional behavior. Here we investigate combinatory behaviors during unrewarded object manipulation in seven parrot species, including kea, African grey parrots and Goffin cockatoos, three species previously used as model species for technical problem solving. We further examine a habitually tool using species, the black palm cockatoo. Moreover, we incorporate three neotropical species, the yellow- and the black-billed Amazon and the burrowing parakeet. Paralleling previous studies on primates and corvids, free object-object combinations and complex object-substrate combinations such as inserting objects into tubes/holes or stacking rings onto poles prevailed in the species previously linked to advanced physical cognition and tool use. In addition, free object-object combinations were intrinsically structured in Goffin cockatoos and in kea. PMID:25984564
Visualization index for image-enabled medical records
NASA Astrophysics Data System (ADS)
Dong, Wenjie; Zheng, Weilin; Sun, Jianyong; Zhang, Jianguo
2011-03-01
With the widespread use of healthcare information technology in hospitals, patients' medical records have become more and more complex. To transform text- or image-based medical information into an easily understandable and acceptable form for humans, we designed and developed an innovative indexing method which assigns an anatomical 3D structure object to every patient, visually storing indexes of the patient's basic information, historical examined image information and RIS report information. When a doctor wants to review patient historical records, he or she can first load the anatomical structure object and then view the 3D index of this object using a digital human model toolkit. This prototype system helps doctors to easily and visually obtain the complete historical healthcare status of patients, including large amounts of medical data, and quickly locate detailed information, including both reports and images, from medical information systems. In this way, doctors can save time that may be better used to understand information, obtain a more comprehensive understanding of their patients' situations, and provide better healthcare services to patients.
Stein, Michelle B; Slavin-Mulford, Jenelle; Sinclair, S Justin; Siefert, Caleb J; Blais, Mark A
2012-01-01
The Social Cognition and Object Relations Scale-Global rating method (SCORS-G; Stein, Hilsenroth, Slavin-Mulford, & Pinsker, 2011; Westen, 1995) measures the quality of object relations in narrative material. This study employed a multimethod approach to explore the structure and construct validity of the SCORS-G. The Thematic Apperception Test (TAT; Murray, 1943) was administered to 59 patients referred for psychological assessment at a large Northeastern U.S. hospital. The resulting 301 TAT narratives were rated using the SCORS-G method. The 8 SCORS variables were found to have high interrater reliability and good internal consistency. Principal components analysis revealed a 3-component solution with components tapping emotions/affect regulation in relationships, self-image, and aspects of cognition. Next, the construct validity of the SCORS-G components was explored using measures of intellectual and executive functioning, psychopathology, and normal personality. The 3 SCORS-G components showed unique and theoretically meaningful relationships across these broad and diverse psychological measures. This study demonstrates the value of using a standardized scoring method, like the SCORS-G, to reveal the rich and complex nature of narrative material.
Disrupting frontal eye-field activity impairs memory recall.
Wantz, Andrea L; Martarelli, Corinna S; Cazzoli, Dario; Kalla, Roger; Müri, René; Mast, Fred W
2016-04-13
A large body of research has demonstrated that participants preferably look back to the encoding location when retrieving visual information from memory. However, the role of this 'looking back to nothing' is still debated. The goal of the present study was to extend this line of research by examining whether an important area in the cortical representation of the oculomotor system, the frontal eye field (FEF), is involved in memory retrieval. To interfere with the activity of the FEF, we used inhibitory continuous theta burst stimulation (cTBS). Participants encoded a complex scene before stimulation was applied, and then performed a short-term (immediately after encoding) or long-term (after 24 h) recall task just after cTBS over the right FEF or sham stimulation. cTBS did not affect overall performance, but stimulation and statement type (object vs. location) interacted: cTBS over the right FEF tended to impair object recall sensitivity, whereas there was no effect on location recall sensitivity. These findings suggest that the FEF is involved in retrieving object information from scene memory, supporting the hypothesis that the oculomotor system contributes to memory recall.
Object reasoning for waste remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pennock, K.A.; Bohn, S.J.; Franklin, A.L.
1991-08-01
A large number of contaminated waste sites across the United States await site remediation efforts. These sites can be physically complex, composed of multiple, possibly interacting, contaminants distributed throughout one or more media. The Remedial Action Assessment System (RAAS) is being designed and developed to support decisions concerning the selection of remediation alternatives. The goal of this system is to broaden the consideration of remediation alternatives, while reducing the time and cost of making these considerations. The Remedial Action Assessment System is a hybrid system, designed and constructed using object-oriented, knowledge-based systems, and structured programming techniques. RAAS uses a combination of quantitative and qualitative reasoning to consider and suggest remediation alternatives. The reasoning process that drives this application is centered around an object-oriented organization of remediation technology information. This paper describes the information structure and organization used to support this reasoning process. In addition, the paper describes the level of detail of the technology related information used in RAAS, discusses required assumptions and procedural implications of these assumptions, and provides rationale for structuring RAAS in this manner. 3 refs., 3 figs.
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has been under constant, intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only for low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David
1989-01-01
Interaction with tumbling objects will become more common as human activities in space expand. Attempting to interact with a large complex object translating and rotating in space, a human operator using only his visual and mental capacities may not be able to estimate the object motion, plan actions or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using dynamic interpolations between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
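The Hough-transform pose accumulation described above can be illustrated schematically. The following Python sketch is a 2D analogue only (pairs of points and a rigid 2D pose instead of RAMBO's feature triples and perspective decomposition, and no Connection Machine parallelism); the bin sizes and the matching scheme are assumptions for illustration.

import math
from collections import Counter
from itertools import permutations

def pose_from_pairs(m1, m2, i1, i2):
    """Rigid 2D pose (angle, tx, ty) that maps model pair (m1, m2) onto image pair (i1, i2)."""
    ang = (math.atan2(i2[1] - i1[1], i2[0] - i1[0])
           - math.atan2(m2[1] - m1[1], m2[0] - m1[0]))
    c, s = math.cos(ang), math.sin(ang)
    tx = i1[0] - (c * m1[0] - s * m1[1])
    ty = i1[1] - (s * m1[0] + c * m1[1])
    return ang, tx, ty

def hough_pose(model_pts, image_pts, ang_bin=0.05, t_bin=0.5):
    """Accumulate pose hypotheses from all pair matches; return the most-voted pose."""
    votes = Counter()
    for m1, m2 in permutations(model_pts, 2):
        for i1, i2 in permutations(image_pts, 2):
            ang, tx, ty = pose_from_pairs(m1, m2, i1, i2)
            votes[(round(ang / ang_bin), round(tx / t_bin), round(ty / t_bin))] += 1
    (ka, kx, ky), _ = votes.most_common(1)[0]
    return ka * ang_bin, kx * t_bin, ky * t_bin

# Corners of an irregular model shape, then the same shape rotated 90 deg
# and translated by (5, 1); correct matches outvote the accidental ones.
model = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 3.0)]
image = [(5.0 - y, 1.0 + x) for x, y in model]
print(hough_pose(model, image))  # expect the true pose up to bin resolution: ~ (1.57, 5.0, 1.0)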
Zhang, Xiulan; Bloom, Gerald; Xu, Xiaoxin; Chen, Lin; Liang, Xiaoyun; Wolcott, Sara J
2014-08-26
This paper explores the evolution of schemes for rural finance in China as a case study of the long and complex process of health system development. It argues that the evolution of these schemes has been the outcome of the response of a large number of agents to a rapidly changing context and of efforts by the government to influence this adaptation process and achieve public health goals. The study draws on several sources of data including a review of official policy documents and academic papers and in-depth interviews with key policy actors at national level and at a sample of localities. The study identifies three major transition points associated with changes in broad development strategy and demonstrates how the adaptation of large numbers of actors to these contextual changes had a major impact on the performance of the health system. Further, it documents how the Ministry of Health viewed its role as both an advocate for the interests of health facilities and health workers and as the agency responsible for ensuring that government health system objectives were met. It is argued that a major reason for the resilience of the health system and its ability to adapt to rapid economic and institutional change was the ability of the Ministry to provide overall strategy leadership. Additionally, it postulates that a number of interest groups have emerged, which now also seek to influence the pathway of health system development. This history illustrates the complex and political nature of the management of health system development and reform. The paper concludes that governments will need to increase their capacity to analyze the health sector as a complex system and to manage change processes.
Describing a Robot's Workspace Using a Sequence of Views from a Moving Camera.
Hong, T H; Shneier, M O
1985-06-01
This correspondence describes a method of building and maintaining a spatial representation for the workspace of a robot, using a sensor that moves about in the world. From the known camera position at which an image is obtained, and two-dimensional silhouettes of the image, a series of cones is projected to describe the possible positions of the objects in the space. When an object is seen from several viewpoints, the intersections of the cones constrain the position and size of the object. After several views have been processed, the representation of the object begins to resemble its true shape. At all times, the spatial representation contains the best guess at the true situation in the world with uncertainties in position and shape explicitly represented. An octree is used as the data structure for the representation. It not only provides a relatively compact representation, but also allows fast access to information and enables large parts of the workspace to be ignored. The purpose of constructing this representation is not so much to recognize objects as to describe the volumes in the workspace that are occupied and those that are empty. This enables trajectory planning to be carried out, and also provides a means of spatially indexing objects without needing to represent the objects at an extremely fine resolution. The spatial representation is one part of a more complex representation of the workspace used by the sensory system of a robot manipulator in understanding its environment.
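The cone-intersection idea lends itself to a compact sketch. Below is a simplified Python illustration using a dense voxel grid rather than the octree of the paper: for each view, only voxels whose projection falls inside the object's 2D silhouette are kept, so successive views carve away the empty space. The orthographic projection and the toy silhouette are assumptions for brevity.

import numpy as np

def carve(occupied, views):
    """Intersect silhouette cones. occupied: boolean (n, n, n) grid, initially
    all True. views: (project, silhouette) pairs, where project maps voxel
    centers to 2D pixel coordinates and silhouette is a boolean image mask."""
    n = occupied.shape[0]
    centers = np.stack(np.meshgrid(*([np.arange(n)] * 3), indexing="ij"), -1).reshape(-1, 3)
    for project, silhouette in views:
        px = project(centers)  # (n**3, 2) integer pixel coordinates
        rows = px[:, 0].clip(0, silhouette.shape[0] - 1)
        cols = px[:, 1].clip(0, silhouette.shape[1] - 1)
        keep = silhouette[rows, cols]
        occupied &= keep.reshape(occupied.shape)  # carve voxels outside this cone
    return occupied

# Toy example: a top-down orthographic camera whose silhouette is a central disc.
sil = np.zeros((32, 32), dtype=bool)
yy, xx = np.mgrid[:32, :32]
sil[(yy - 16) ** 2 + (xx - 16) ** 2 < 100] = True
grid = carve(np.ones((32, 32, 32), dtype=bool), [(lambda c: c[:, :2], sil)])
print("occupied voxels:", int(grid.sum()))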
A Survey of Complex Object Technologies for Digital Libraries
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Argue, Brad; Efron, Miles; Denn, Sheila; Pattuelli, Maria Cristina
2001-01-01
Many early web-based digital libraries (DLs) had implicit assumptions reflected in their architecture that the unit of focus in the DL (frequently "reports" or "e-prints") would only be manifested in a single, or at most a few, common file formats such as PDF or PostScript. DLs have now matured to the point where their contents are commonly no longer simple files. Complex objects in DLs have emerged in response to various requirements, including: simple aggregation of formats and supporting files, bundling additional information to aid digital preservation, creating opaque digital objects for e-commerce applications, and the incorporation of dynamic services with the traditional data files. We examine a representative (but not necessarily exhaustive) number of current and recent historical web-based complex object technologies and projects that are applicable to DLs: Aurora, Buckets, ComMentor, Cryptolopes, Digibox, Document Management Alliance, FEDORA, Kahn-Wilensky Framework Digital Objects, Metadata Encoding & Transmission Standard, Multivalent Documents, Open eBooks, VERS Encapsulated Objects, and the Warwick Framework.
Numerical study on 3D composite morphing actuators
NASA Astrophysics Data System (ADS)
Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru
2015-04-01
A number of actuators use the deformation of electroactive polymers (EAPs), but fewer papers seem to have focused on the performance of 3D morphing actuators based on an analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis of the large-scale deformation and motion of a 3D half-dome-shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the location of the active EAP strips is a key parameter. The Simulia/Abaqus static and implicit analysis codes, whose main feature is high-precision contact analysis capability among structures, are used, focusing on the whole process of the membrane touching and wrapping around the object. The unidirectional properties of the EAP coupon actuator are used as the input data set for the material properties in the simulation and for the verification of our numerical model, where the verification is made against the existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smoothly shaped objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with the modification of the dome shape to induce the relevant large-scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.
Hardman, Kyle; Cowan, Nelson
2014-01-01
Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli which possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results, but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PMID:25089739
Reducing the complexity of the software design process with object-oriented design
NASA Technical Reports Server (NTRS)
Schuler, M. P.
1991-01-01
Designing software is a complex process. How object-oriented design (OOD), coupled with formalized documentation and tailored object diagramming techniques, can reduce the complexity of the software design process is described and illustrated. The described OOD methodology uses a hierarchical decomposition approach in which parent objects are decomposed into layers of lower level child objects. A method of tracking the assignment of requirements to design components is also included. Increases in the reusability, portability, and maintainability of the resulting products are also discussed. This method was built on a combination of existing technology, teaching experience, consulting experience, and feedback from design method users. The discussed concepts are applicable to hierarchical OOD processes in general. Emphasis is placed on improving the design process by documenting the details of the procedures involved and incorporating improvements into those procedures as they are developed.
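As a small illustration of the hierarchical decomposition and requirement tracking described above, consider the following Python sketch (the class names and requirement identifiers are invented for illustration; the paper does not prescribe a particular language).

class DesignObject:
    """A design component that records the requirements it satisfies and the
    lower-level child objects it decomposes into."""
    def __init__(self, name, requirements=()):
        self.name = name
        self.requirements = set(requirements)
        self.children = []

    def decompose(self, child):
        """Attach a lower-level child object and return it for chaining."""
        self.children.append(child)
        return child

    def covered_requirements(self):
        """Requirements satisfied by this object or any of its descendants."""
        covered = set(self.requirements)
        for child in self.children:
            covered |= child.covered_requirements()
        return covered

# A parent object decomposed into a layer of lower-level child objects:
flight_sw = DesignObject("FlightSoftware", {"REQ-1"})
nav = flight_sw.decompose(DesignObject("Navigation", {"REQ-2", "REQ-3"}))
nav.decompose(DesignObject("SensorFusion", {"REQ-4"}))

# Tracking the assignment of requirements to design components:
assert flight_sw.covered_requirements() == {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}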
NASA Astrophysics Data System (ADS)
Rodriguez Gonzalez, Beatriz
2008-04-01
Much of the homotopical and homological structure of the categories of chain complexes and topological spaces can be deduced from the existence and properties of the 'simple' functors Tot : {double chain complexes} -> {chain complexes} and geometric realization : {sSets} -> {Top}, or similarly, Tot : {simplicial chain complexes} -> {chain complexes} and | | : {sTop} -> {Top}. The purpose of this thesis is to abstract this situation, and to this end we introduce the notion of '(co)simplicial descent category'. It is inspired by Guillén-Navarro's '(cubical) descent categories'. The key ingredients in a (co)simplicial descent category D are a class E of morphisms in D, called equivalences, and a 'simple' functor s : {(co)simplicial objects in D} -> D. They must satisfy axioms like 'Eilenberg-Zilber', 'exactness' and 'acyclicity'. This notion covers a wide class of examples, such as chain complexes, sSets, topological spaces, filtered cochain complexes (where E = filtered quasi-isomorphisms or E = E_2-isomorphisms), commutative differential graded algebras (with s = Navarro's Thom-Whitney simple), DG-modules over a DG-category and mixed Hodge complexes, where s = Deligne's simple. From the simplicial descent structure we obtain homotopical structure on D, such as cone and cylinder objects. We use them to i) explicitly describe the morphisms of HoD = D[E^{-1}] similarly to the case of calculus of fractions; ii) endow HoD with a non-additive pre-triangulated structure, which becomes triangulated in the stable additive case. These results use the properties of a 'total functor', which associates to any biaugmented bisimplicial object a simplicial object. It is the simplicial analogue of the total chain complex of a double complex, and it is left adjoint to Illusie's 'décalage' functor.
Predictability, Force and (Anti-)Resonance in Complex Object Control.
Maurice, Pauline; Hogan, Neville; Sternad, Dagmar
2018-04-18
Manipulation of complex objects as in tool use is ubiquitous and has given humans an evolutionary advantage. This study examined the strategies humans choose when manipulating an object with underactuated internal dynamics, such as a cup of coffee. The object's dynamics renders the temporal evolution complex, possibly even chaotic, and difficult to predict. A cart-and-pendulum model, loosely mimicking coffee sloshing in a cup, was implemented in a virtual environment with a haptic interface. Participants rhythmically manipulated the virtual cup containing a rolling ball; they could choose the oscillation frequency, while the amplitude was prescribed. Three hypotheses were tested: 1) humans decrease interaction forces between hand and object; 2) humans increase the predictability of the object dynamics; 3) humans exploit the resonances of the coupled object-hand system. Analysis revealed that humans chose either a high-frequency strategy with anti-phase cup-and-ball movements or a low-frequency strategy with in-phase cup-and-ball movements. Counter to Hypothesis 1, they did not decrease interaction force; instead, they increased the predictability of the interaction dynamics, quantified by mutual information, supporting Hypothesis 2. To address Hypothesis 3, frequency analysis of the coupled hand-object system revealed two resonance frequencies separated by an anti-resonance frequency. The low-frequency strategy exploited one resonance, while the high-frequency strategy afforded more choice, consistent with the frequency response of the coupled system; both strategies avoided the anti-resonance. Hence, humans did not prioritize interaction force, but rather strategies that rendered interactions predictable. These findings highlight that physical interactions with complex objects pose control challenges not present in unconstrained movements.
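The cart-and-pendulum abstraction used in the study can be sketched in a few lines. The following Python fragment simulates a pendulum (the ball) suspended in a cart (the cup) whose horizontal position is prescribed sinusoidally; the parameter values and the prescribed forcing are assumptions, not the study's virtual-environment implementation.

import math

def simulate(freq_hz, amp=0.05, length=0.2, g=9.81, t_end=10.0, dt=1e-3):
    """Pendulum (the 'ball') suspended in a cart (the 'cup') whose horizontal
    position is prescribed as x(t) = amp * sin(2*pi*freq_hz*t)."""
    w = 2.0 * math.pi * freq_hz
    theta, theta_dot, angles = 0.0, 0.0, []
    for k in range(int(t_end / dt)):
        x_ddot = -amp * w * w * math.sin(w * k * dt)  # prescribed cart acceleration
        # Pendulum driven by the cart's motion (semi-implicit Euler step):
        theta_ddot = -(g / length) * math.sin(theta) - (x_ddot / length) * math.cos(theta)
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
        angles.append(theta)
    return angles

# Sweep the oscillation frequency, which participants were free to choose;
# the response peaks near the pendulum resonance, about 1.1 Hz for these values.
for f in (0.5, 1.0, 1.5, 2.0):
    peak = max(abs(a) for a in simulate(f))
    print(f"{f:.1f} Hz -> peak ball angle {peak:.2f} rad")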
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.
2014-08-26
Understanding the interactions of structured communities known as "biofilms" and other complex matrices is possible through X-ray micro-tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels. Thus new software is required for the effective interpretation and analysis of the data. This work describes the development and application of the capability to analyze and visualize high-resolution X-ray micro-tomography datasets.
Composition analysis by scanning femtosecond laser ultraprobing (CASFLU).
Ishikawa, Muriel Y.; Wood, Lowell L.; Campbell, E. Michael; Stuart, Brent C.; Perry, Michael D.
2002-01-01
The composition analysis by scanning femtosecond ultraprobing (CASFLU) technology scans a focused train of extremely short-duration, very intense laser pulses across a sample. The partially-ionized plasma ablated by each pulse is spectrometrically analyzed in real time, determining the ablated material's composition. The steering of the scanned beam thus is computer directed to either continue ablative material-removal at the same site or to successively remove nearby material for the same type of composition analysis. This invention has utility in high-speed chemical-elemental, molecular-fragment and isotopic analyses of the microstructure composition of complex objects, e.g., the oxygen isotopic compositions of large populations of single osteons in bone.
[Robotic general surgery: where do we stand in 2013?].
Buchs, Nicolas C; Pugin, François; Ris, Frédéric; Jung, Minoa; Hagen, Monika E; Volonté, Francesco; Azagury, Dan; Morel, Philippe
2013-06-19
While the number of publications concerning robotic surgery is increasing, the level of evidence remains to be improved. The safety of the robotic approach has been largely demonstrated, even for complex procedures. Yet, the objective advantages of this technology are still lacking in several fields, notably in comparison to laparoscopy. On the other hand, the development of robotic surgery is well under way, as the enthusiasm of the public and the surgical community testifies. Still, clear clinical indications remain to be determined in the field of general surgery. The aim of this study is to review the current literature on robotic general surgery and to give the reader an overview in 2013.
3-D interactive visualisation tools for HI spectral line imaging
NASA Astrophysics Data System (ADS)
van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.
2017-06-01
Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.
Soft Robotic Manipulation and Locomotion with a 3D Printed Electroactive Hydrogel.
Han, Daehoon; Farino, Cindy; Yang, Chen; Scott, Tracy; Browe, Daniel; Choi, Wonjoon; Freeman, Joseph W; Lee, Howon
2018-05-30
Electroactive hydrogels (EAH) that exhibit large deformation in response to an electric field have received great attention as a potential actuating material for soft robots and artificial muscle. However, their application has been limited due to the use of traditional two-dimensional (2D) fabrication methods. Here we present soft robotic manipulation and locomotion with 3D printed EAH microstructures. Through 3D design and precise dimensional control enabled by a digital light processing (DLP) based micro 3D printing technique, complex 3D actuations of EAH are achieved. We demonstrate soft robotic actuations including gripping and transporting an object and a bidirectional locomotion.
Software Techniques for Non-Von Neumann Architectures
1990-01-01
Comm. topology: programmable Benes network; hypercubic lattice for QCD. Control: centralized. Assignment: static. Memory: shared. Synchronization: universal. Max. CPUs: 566. Processor boards: each comprising 4 floating-point units and 2 multipliers. CPU size: 32-bit floating-point chips. Performance: 11.4 Gflops. Market: quantum chromodynamics (QCD). ... functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects.
Detecting Multi-scale Structures in Chandra Images of Centaurus A
NASA Astrophysics Data System (ADS)
Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.
1999-12-01
Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al. 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the North-West.
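The principle behind the adaptive smoothing used here can be sketched compactly: grow the smoothing kernel at each pixel until it collects enough counts, so bright compact structures retain arcsecond resolution while faint extended emission is smoothed over large scales. The Python fragment below is a simplified illustration with a square kernel and a crude counts threshold, not the Ebeling et al. (1999) algorithm as implemented in the CXC software.

import numpy as np

def adaptive_smooth(img, min_counts=25, max_radius=32):
    """Per-pixel square-kernel smoothing whose half-width grows until the
    kernel contains at least min_counts counts (a crude significance test)."""
    out = np.zeros(img.shape, dtype=float)
    ny, nx = img.shape
    for y in range(ny):
        for x in range(nx):
            for r in range(1, max_radius + 1):
                box = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                if box.sum() >= min_counts or r == max_radius:
                    out[y, x] = box.mean()  # faint pixels end up with large kernels
                    break
    return out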
3-D flow and scour near a submerged wing dike: ADCP measurements on the Missouri River
Jamieson, E.C.; Rennie, C.D.; Jacobson, R.B.; Townsend, R.D.
2011-01-01
Detailed mapping of bathymetry and three-dimensional water velocities using a boat-mounted single-beam sonar and acoustic Doppler current profiler (ADCP) was carried out in the vicinity of two submerged wing dikes located in the Lower Missouri River near Columbia, Missouri. During high spring flows the wing dikes become submerged, creating a unique combination of vertical flow separation and overtopping (plunging) flow conditions, causing large-scale three-dimensional turbulent flow structures to form. On three different days and for a range of discharges, sampling transects at 5 and 20 m spacing were completed, covering the area adjacent to and upstream and downstream from two different wing dikes. The objectives of this research are to evaluate whether an ADCP can identify and measure large-scale flow features such as recirculating flow and vortex shedding that develop in the vicinity of a submerged wing dike; and whether or not moving-boat (single-transect) data are sufficient for resolving complex three-dimensional flow fields. Results indicate that spatial averaging from multiple nearby single transects may be more representative of an inherently complex (temporally and spatially variable) three-dimensional flow field than repeated single transects. Results also indicate a correspondence between the location of calculated vortex cores (resolved from the interpolated three-dimensional flow field) and the nearby scour holes, providing new insight into the connections between vertically oriented coherent structures and local scour, with the unique perspective of flow and morphology in a large river.
Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.
Malach, R; Reppas, J B; Benson, R R; Kwong, K K; Jiang, H; Kennedy, W A; Ledden, P J; Brady, T J; Rosen, B R; Tootell, R B
1995-01-01
The stages of integration leading from local feature analysis to object recognition were explored in human visual cortex by using the technique of functional magnetic resonance imaging. Here we report evidence for object-related activation. Such activation was located at the lateral-posterior aspect of the occipital lobe, just abutting the posterior aspect of the motion-sensitive area MT/V5, in a region termed the lateral occipital complex (LO). LO showed preferential activation to images of objects, compared to a wide range of texture patterns. This activation was not caused by a global difference in the Fourier spatial frequency content of objects versus texture images, since object images produced enhanced LO activation compared to textures matched in power spectra but randomized in phase. The preferential activation to objects also could not be explained by different patterns of eye movements: similar levels of activation were observed when subjects fixated on the objects and when they scanned the objects with their eyes. Additional manipulations such as spatial frequency filtering and a 4-fold change in visual size did not affect LO activation. These results suggest that the enhanced responses to objects were not a manifestation of low-level visual processing. A striking demonstration that activity in LO is uniquely correlated to object detectability was produced by the "Lincoln" illusion, in which blurring of objects digitized into large blocks paradoxically increases their recognizability. Such blurring led to significant enhancement of LO activation. Despite the preferential activation to objects, LO did not seem to be involved in the final, "semantic," stages of the recognition process. Thus, objects varying widely in their recognizability (e.g., famous faces, common objects, and unfamiliar three-dimensional abstract sculptures) activated it to a similar degree. These results are thus evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex. PMID:7667258
Autonomous Space Object Catalogue Construction and Upkeep Using Sensor Control Theory
NASA Astrophysics Data System (ADS)
Moretti, N.; Rutten, M.; Bessell, T.; Morreale, B.
The capability to track objects in space is critical to safeguard domestic and international space assets. Infrequent measurement opportunities, complex dynamics and partial observability of orbital state make the tracking of resident space objects nontrivial. It is not uncommon for human operators to intervene with space tracking systems, particularly in scheduling sensors. This paper details the development of a system that maintains a catalogue of geostationary objects by dynamically tasking sensors in real time and managing the uncertainty of object states. As the number of objects in space grows, the potential for collision grows exponentially. Being able to provide accurate assessments to operators regarding costly collision avoidance manoeuvres is paramount, and their accuracy is highly dependent on how object states are estimated. The system represents object state and uncertainty using particles and utilises a particle filter for state estimation. Particle filters capture the model and measurement uncertainty accurately, allowing for a more comprehensive representation of the state’s probability density function. Additionally, the number of objects in space is growing disproportionately to the number of sensors used to track them. Maintaining precise positions for all objects places large loads on sensors, limiting the time available to search for new objects or track high priority objects. Rather than precisely tracking all objects, our system manages the uncertainty in orbital state for each object independently. The uncertainty is allowed to grow, and sensor data is only requested when the uncertainty must be reduced: for example, when object uncertainties overlap, leading to data association issues, or if the uncertainty grows beyond a field of view. These control laws are formulated into a cost function, which is optimised in real time to task sensors. By controlling an optical telescope, the system has been able to construct and maintain a catalogue of approximately 100 geostationary objects.
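A schematic Python sketch of this uncertainty-managed tasking loop follows. The one-dimensional state, the diffusion dynamics, the Gaussian measurement model, and the uncertainty threshold are all placeholder assumptions, not the operational system's orbit models.

import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, dt):
    """Placeholder dynamics: diffuse the particle cloud with process noise
    (a real system would propagate orbital states here)."""
    return particles + rng.normal(0.0, 0.002 * dt, particles.shape)

def update(particles, z, sigma=0.005):
    """Particle-filter measurement update: weight by likelihood, then resample."""
    w = np.exp(-0.5 * ((particles - z) / sigma) ** 2) + 1e-12
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

def catalogue_step(objects, dt, uncertainty_limit=0.015, sensor=None):
    """Propagate every catalogued object; request sensor data only for
    objects whose state spread has grown beyond the limit."""
    tasked = []
    for name, particles in objects.items():
        particles = propagate(particles, dt)
        if particles.std() > uncertainty_limit and sensor is not None:
            particles = update(particles, sensor(name))  # task the telescope
            tasked.append(name)
        objects[name] = particles
    return tasked

# Toy catalogue: 1D "orbital states" for three objects, 500 particles each.
objects = {f"GEO-{k}": rng.normal(float(k), 0.01, 500) for k in range(3)}
fake_sensor = lambda name: float(name[-1]) + rng.normal(0.0, 0.005)
for step in range(100):
    catalogue_step(objects, dt=1.0, sensor=fake_sensor)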
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mainzer, A.; Masiero, J.; Hand, E.
The NEOWISE data set offers the opportunity to study the variations in albedo for asteroid classification schemes based on visible and near-infrared observations for a large sample of minor planets. We have determined the albedos for nearly 1900 asteroids classified by the Tholen, Bus, and Bus-DeMeo taxonomic classification schemes. We find that the S-complex spans a broad range of bright albedos, partially overlapping the low albedo C-complex at small sizes. As expected, the X-complex covers a wide range of albedos. The multiwavelength infrared coverage provided by NEOWISE allows determination of the reflectivity at 3.4 and 4.6 μm relative to the visible albedo. The direct computation of the reflectivity at 3.4 and 4.6 μm enables a new means of comparing the various taxonomic classes. Although C, B, D, and T asteroids all have similarly low visible albedos, the D and T types can be distinguished from the C and B types by examining their relative reflectance at 3.4 and 4.6 μm. All of the albedo distributions are strongly affected by selection biases against small, low albedo objects, as all objects selected for taxonomic classification were chosen according to their visible light brightness. Due to these strong selection biases, we are unable to determine whether or not there are correlations between size, albedo, and space weathering. We argue that the current set of classified asteroids makes any such correlations difficult to verify. A sample of taxonomically classified asteroids drawn without significant albedo bias is needed in order to perform such an analysis.
Medicaid's Complex Goals: Challenges for Managed Care and Behavioral Health
Gold, Marsha; Mittler, Jessica
2000-01-01
The Medicaid program has become increasingly complex as policymakers use it to address various policy objectives, leading to structural tensions that surface with Medicaid managed care. In this article, we illustrate this complexity by focusing on the experience of three States with behavioral health carveouts—Maryland, Oregon, and Tennessee. Converting to Medicaid managed care forces policymakers to confront Medicaid's competing policy objectives, multiplicity of stakeholders, and diverse patients, many with complex needs. Emerging Medicaid managed care systems typically represent compromises in which existing inequities and fragmentation are reconfigured rather than eliminated. PMID:12500322
NASA Astrophysics Data System (ADS)
Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen
2016-04-01
We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
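To give a flavor of one of the techniques the package provides, here is a generic NumPy sketch of constructing a recurrence network from a scalar time series. It illustrates the concept only and deliberately does not guess at pyunicorn's own API; the embedding parameters and threshold are arbitrary.

import numpy as np

def recurrence_network(x, dim=3, tau=5, eps=0.5):
    """Adjacency matrix of a recurrence network: delay-embed the series,
    then link every pair of state vectors closer than eps."""
    n = len(x) - (dim - 1) * tau
    states = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists < eps) & ~np.eye(n, dtype=bool)  # recurrences, minus self-loops

t = np.linspace(0.0, 20.0 * np.pi, 500)
adj = recurrence_network(np.sin(t) + 0.1 * np.random.randn(t.size))
print("mean degree:", adj.sum(axis=1).mean())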
Care coordination of multimorbidity: a scoping study
Burau, Viola
2015-01-01
Background: A key challenge in healthcare systems worldwide is the large number of patients who suffer from multimorbidity; despite this, most systems are organized within a single-disease framework. Objective: The present study addresses two issues: the characteristics and preconditions of care coordination for patients with multimorbidity; and the factors that promote or inhibit care coordination at the levels of provider organizations and healthcare professionals. Design: The analysis is based on a scoping study, which combines a systematic literature search with a qualitative thematic analysis. The search was conducted in November 2013 and included the PubMed, CINAHL, and Web of Science databases, as well as the Cochrane Library, websites of relevant organizations and a hand-search of reference lists. The analysis included studies with a wide range of designs, from industrialized countries, in English, German and the Scandinavian languages, which focused on both multimorbidity/comorbidity and coordination of integrated care. Results: The analysis included 47 of the 226 identified studies. The central theme emerging was complexity. This related to both specific medical conditions of patients with multimorbidity (case complexity) and the organization of care delivery at the levels of provider organizations and healthcare professionals (care complexity). Conclusions: In terms of how to approach care coordination, one approach is to reduce complexity and the other is to embrace complexity. Either way, future research must take a more explicit stance on complexity and also gain a better understanding of the role of professionals as a prerequisite for the development of new care coordination interventions. PMID:29090157
Engineering the object-relation database model in O-Raid
NASA Technical Reports Server (NTRS)
Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat
1989-01-01
Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.
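The notion of heterogeneous, reclassifiable objects inside a relation can be illustrated conceptually; the Python sketch below is purely illustrative (all class and field names are hypothetical) and is not O-Raid's actual C++ or query interface.

    # Conceptual illustration (not O-Raid code): a relation whose "shape"
    # column holds heterogeneous objects queried through a predicate.
    class Shape:
        def area(self): raise NotImplementedError

    class Circle(Shape):
        def __init__(self, r): self.r = r
        def area(self): return 3.14159 * self.r ** 2

    class Square(Shape):
        def __init__(self, s): self.s = s
        def area(self): return self.s ** 2

    # A relation as a list of rows; the 'shape' column mixes classes.
    relation = [{"id": 1, "shape": Circle(2.0)},
                {"id": 2, "shape": Square(3.0)}]

    # Predicate-based query over heterogeneous objects.
    big = [row["id"] for row in relation if row["shape"].area() > 10]
    print(big)  # -> [1]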
NASA Astrophysics Data System (ADS)
Dal Moro, Giancarlo; Moustafa, Sayed S. R.; Al-Arifi, Nassir S.
2018-01-01
Rayleigh waves often propagate according to complex mode excitation, so that the proper identification and separation of specific modes can be quite difficult or, in some cases, just impossible. Furthermore, the analysis of a single component (i.e., an inversion procedure based on just one objective function) necessarily prevents solving the problems related to the non-uniqueness of the solution. To overcome these issues and define a holistic analysis of Rayleigh waves, we implemented a procedure to acquire the data needed to define, and then efficiently invert, three objective functions built from the following three "objects": the velocity spectra of the vertical and radial components, and the Rayleigh-wave particle motion (RPM) frequency-offset data. Two implementations are presented: in the first we consider classical multi-offset (and multi-component) data, while in the second we exploit the data recorded by a single three-component geophone at a fixed offset from the source. Given the simple field procedures, the method could be particularly useful for the unambiguous geotechnical exploration of large areas, where more complex acquisition procedures, based on the joint acquisition of Rayleigh and Love waves, would not be economically viable. After illustrating the different kinds of data acquisition and the data processing, the results of the proposed methodology are illustrated in a case study. Finally, a series of theoretical and practical aspects are discussed to clarify some issues involved in the overall procedure (data acquisition and processing).
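One way to see how three objective functions can drive a single inversion is a weighted joint misfit. The sketch below is a hypothetical illustration: the forward_* functions are dummy stand-ins for real Rayleigh-wave forward models, and the weights are arbitrary, not the authors' implementation.

    import numpy as np

    # Dummy forward models standing in for real Rayleigh-wave modeling.
    def forward_vertical_spectrum(m): return m[0] * np.ones(50)
    def forward_radial_spectrum(m):   return m[1] * np.ones(50)
    def forward_rpm_curve(m):         return (m[0] / m[1]) * np.ones(50)

    def misfit(pred, obs): return float(np.mean((pred - obs) ** 2))

    # Joint objective: weighted sum of the three single-object misfits.
    def joint_objective(m, obs_Z, obs_R, obs_RPM, w=(1.0, 1.0, 1.0)):
        return (w[0] * misfit(forward_vertical_spectrum(m), obs_Z) +
                w[1] * misfit(forward_radial_spectrum(m), obs_R) +
                w[2] * misfit(forward_rpm_curve(m), obs_RPM))

    obs_Z, obs_R, obs_RPM = np.full(50, 2.0), np.full(50, 3.0), np.full(50, 0.66)
    print(joint_objective((2.0, 3.0), obs_Z, obs_R, obs_RPM))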
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design is becoming more multifaceted, integrated, and complex, traditional single-objective approaches to optimal design are becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the likelihood of reaching a globally optimal solution. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Randomized decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and adds population diversity, helping to accomplish the pre-defined goals of the proposed scheme.
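The core hyper-heuristic mechanism, a stochastic manager that selects among low-level heuristics and rewards success, can be sketched as follows; toy one-dimensional moves stand in for the GA, SA, and swarm operators, and the reward rule is an assumption for illustration only.

    import random

    # Toy objective (minimize); the real scheme optimizes multiobjective
    # engineering designs, this only illustrates the selection mechanism.
    def f(x): return (x - 3.0) ** 2

    # Three toy "low-level heuristics" standing in for GA/SA/swarm moves.
    def move_small(x): return x + random.uniform(-0.1, 0.1)
    def move_large(x): return x + random.uniform(-1.0, 1.0)
    def restart(x):    return random.uniform(-10.0, 10.0)

    heuristics = [move_small, move_large, restart]
    weights    = [1.0, 1.0, 1.0]            # adapted from past success

    x, best = 0.0, f(0.0)
    for _ in range(2000):
        i = random.choices(range(3), weights=weights)[0]
        cand = heuristics[i](x)
        if f(cand) < best:                  # accept improvement
            x, best = cand, f(cand)
            weights[i] += 0.5               # reward successful heuristic
    print(round(x, 3), round(best, 6))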
Collisional family structure within the Nysa-Polana complex
NASA Astrophysics Data System (ADS)
Dykhuis, Melissa J.; Greenberg, Richard
2015-05-01
The Nysa-Polana complex is a group of low-inclination asteroid families in the inner main belt, bounded in semimajor axis by the Mars-crossing region and the Jupiter 3:1 mean-motion resonance. This group is important as the most likely source region for the target of the OSIRIS-REx mission, (101955) Bennu; however, family membership in the region is complicated by the presence of several dynamically overlapping families with a range of surface reflectance properties. The large S-type structure in the region appears to be associated with the parent body (135) Hertha, and displays an (e_P, a_P) correlation consistent with a collision event near true anomaly ∼180° with ejecta velocity v_ej ∼ 285 m/s. The ejecta distribution from a collision with these orbital properties is predicted to have a maximum semimajor axis dispersion of δa_ej = 0.005 ± 0.008 AU, which constitutes only a small fraction (7%) of the observed semimajor axis dispersion, the rest of which is attributed to the Yarkovsky effect. The age of the family is inferred from the Yarkovsky dispersion to be 300 (+60/−50) My. Objects in a smaller cluster that overlaps the large Hertha family in proper orbital element space have reflectance properties more consistent with the X-type (135) Hertha than the surrounding S-type family. These objects form a distinct Yarkovsky "V" signature in (a_P, H) space, consistent with a more recent collision, which appears to also be dynamically connected to (135) Hertha. Production of two families with different reflectance properties from a single parent could result from the partial differentiation of the parent, shock darkening effects, or other causes. The Nysa-Polana complex also contains a low-albedo family associated with (142) Polana (called "New Polana" by Walsh et al. (Walsh, K.J. et al. [2013]. Icarus 225, 283-297)), and two other low-albedo families associated with (495) Eulalia. The second Eulalia family may be a high-a_P, low-e_P, low-i_P component of the first Eulalia family-forming collision, possibly explained by an anisotropic ejection field.
The prevalence of resonances among large-a transneptunian objects
NASA Astrophysics Data System (ADS)
Gladman, Brett; Volk, Kathryn; Van Laerhoven, Christa
2018-04-01
The detached population consists of transneptunian objects (TNOs) with large semi-major axes and sufficiently high perihelia (roughly q>38 au, but there is no simple cut). However, what constitutes 'large semi-major axis' has been, and continues to be, unclear. Once beyond the aphelia of the classical Kuiper Belt (which extends out to about 60 au), objects with semimajor axes from a=60-150 au can be detached, but a reasonable number of objects in this range are known to be in mean-motion resonances with Neptune. Beyond a=150 au, however, it is a widely-held belief that resonances become 'unimportant', and that a q>38 au cut (or sometimes q>50 au) with a>150 au isolates a set of large semimajor axis detached objects. However, once semimajor axes become this large, orbit determination for an object discovered near perihelion becomes a much harder task than for low-a TNOs. Because small velocity differences near the perihelion of large-a orbits cause large changes in the fitted semimajor axis, extremely good and long-baseline astrometry is required to reduce the semimajor axis uncertainty below the few tenths of an astronomical unit widths of mean-motion resonances. By carefully analyzing the astrometric data of all known large semimajor axis objects, we show that a very large fraction of the objects are in fact likely in high-order mean-motion resonances with Neptune. This prevalence of resonance with Neptune would imply that hypothesized distant planets are problematic, as they would remove the detached objects from these resonances. Instead, we favor a view in which the large-a population is the surviving remnant of a massive early scattering disk, whose surviving members were sculpted mostly by diffusive gravitational interactions with the four giant planets over the last four gigayears, but whose initial emplacement mechanism (in particular, the perihelion-lifting mechanism) is still unclear and of critical importance to the early Solar System's evolution.
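For reference, the nominal locations of Neptune's exterior p:q mean-motion resonances follow directly from Kepler's third law, a_res = a_N (p/q)^(2/3); the short check below uses a_N ≈ 30.07 au and an arbitrary selection of resonances.

    # Locations of Neptune's p:q exterior mean-motion resonances from
    # Kepler's third law: a_res = a_N * (p/q)**(2/3), with a_N ~ 30.07 au.
    a_N = 30.07
    for p, q in [(3, 1), (4, 1), (5, 1), (9, 2), (11, 2)]:
        a_res = a_N * (p / q) ** (2.0 / 3.0)
        print(f"{p}:{q} resonance near a = {a_res:.1f} au")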
Recent progress in 3-D imaging of sea freight containers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchs, Theobald, E-mail: theobold.fuchs@iis.fraunhofer.de; Schön, Tobias, E-mail: theobold.fuchs@iis.fraunhofer.de; Sukowski, Frank
The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress was made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is the high cost of numerical processing. But as computational power steadily becomes cheaper, practical applications of these complex algorithms are foreseeable. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects, scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
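A generic iterative reconstruction of this kind can be sketched with a plain Landweber iteration on a toy system matrix; this is a textbook illustration under stated assumptions (random dense matrix, noiseless data), not the authors' algorithm.

    import numpy as np

    # Generic Landweber-style iterative reconstruction sketch:
    # x_{k+1} = x_k + lam * A^T (b - A x_k), with A a toy system matrix.
    rng = np.random.default_rng(0)
    A = rng.random((40, 20))          # 40 ray sums, 20 voxels (toy sizes)
    x_true = rng.random(20)
    b = A @ x_true                    # noiseless projections

    x = np.zeros(20)
    lam = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
    for _ in range(5000):
        x += lam * A.T @ (b - A @ x)

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))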
Structured decision making as a framework for large-scale wildlife harvest management decisions
Robinson, Kelly F.; Fuller, Angela K.; Hurst, Jeremy E.; Swift, Bryan L.; Kirsch, Arthur; Farquhar, James F.; Decker, Daniel J.; Siemer, William F.
2016-01-01
Fish and wildlife harvest management at large spatial scales often involves making complex decisions with multiple objectives and difficult tradeoffs, population demographics that vary spatially, competing stakeholder values, and uncertainties that might affect management decisions. Structured decision making (SDM) provides a formal decision analytic framework for evaluating difficult decisions by breaking decisions into component parts and separating the values of stakeholders from the scientific evaluation of management actions and uncertainty. The result is a rigorous, transparent, and values-driven process. This decision-aiding process provides the decision maker with a more complete understanding of the problem and the effects of potential management actions on stakeholder values, as well as how key uncertainties can affect the decision. We use a case study to illustrate how SDM can be used as a decision-aiding tool for management decision making at large scales. We evaluated alternative white-tailed deer (Odocoileus virginianus) buck-harvest regulations in New York designed to reduce harvest of yearling bucks, taking into consideration the values of the state wildlife agency responsible for managing deer, as well as deer hunters. We incorporated tradeoffs about social, ecological, and economic management concerns throughout the state. Based on the outcomes of predictive models, expert elicitation, and hunter surveys, the SDM process identified management alternatives that optimized competing objectives. The SDM process provided biologists and managers insight about aspects of the buck-harvest decision that helped them adopt a management strategy most compatible with diverse hunter values and management concerns.
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters, such as winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies, and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
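The underlying differential evolution loop (classic DE/rand/1/bin on a toy objective) can be sketched as follows; in the dissertation such a loop would be coupled to the finite element evaluator rather than the simple sphere function used here.

    import numpy as np

    # Classic DE/rand/1/bin sketch on a toy objective.
    def sphere(x): return float(np.sum(x ** 2))

    rng = np.random.default_rng(1)
    NP, D, F, CR = 20, 5, 0.8, 0.9          # population, dims, DE constants
    pop = rng.uniform(-5, 5, size=(NP, D))
    fit = np.array([sphere(p) for p in pop])

    for _ in range(300):
        for i in range(NP):
            idx = rng.choice([j for j in range(NP) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)             # mutation
            cross = rng.random(D) < CR           # binomial crossover mask
            cross[rng.integers(D)] = True        # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])
            if sphere(trial) < fit[i]:           # greedy selection
                pop[i], fit[i] = trial, sphere(trial)

    print("best objective:", fit.min())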
Laboratory Needs for Interstellar Ice Studies
NASA Astrophysics Data System (ADS)
Boogert, Abraham C. A.
2012-05-01
A large fraction of the molecules in dense interstellar and circumstellar environments is stored in icy grain mantles. The mantles are formed by a complex interplay between chemical and physical processes. Key questions on the accretion and desorption processes and the chemistry on the grain surfaces and within the icy mantles can only be answered by laboratory experiments. Recent infrared (2-30 micron) spectroscopic surveys of large samples of Young Stellar Objects (YSOs) and background stars tracing quiescent cloud material have shown that the ice band profiles and depths vary considerably as a function of environment. Using laboratory spectra in the identification process, it is clear that a rather complex mixture of simple species (CH3OH, CO2, H2O, CO) exists even in the quiescent cloud phase. Variations of the local physical conditions (CO freeze out) and time scales (CH3OH formation) appear to be key factors in the observed variations. Sublimation and thermal processing dominate as YSOs heat their environments. The identification of several ice absorption features is still disputed. I will outline laboratory work (e.g., on salts, PAHs, and aliphatic hydrocarbons) needed to further constrain the ice band identification as well as the thermal and chemical history of the carriers. Such experiments will also be essential to interpret future high spectral resolution SOFIA and JWST observations.
Cultural macroevolution matters
Gray, Russell D.
2017-01-01
Evolutionary thinking can be applied to both cultural microevolution and macroevolution. However, much of the current literature focuses on cultural microevolution. In this article, we argue that the growing availability of large cross-cultural datasets facilitates the use of computational methods derived from evolutionary biology to answer broad-scale questions about the major transitions in human social organization. Biological methods can be extended to human cultural evolution. We illustrate this argument with examples drawn from our recent work on the roles of Big Gods and ritual human sacrifice in the evolution of large, stratified societies. These analyses show that, although the presence of Big Gods is correlated with the evolution of political complexity, in Austronesian cultures at least, they do not play a causal role in ratcheting up political complexity. In contrast, ritual human sacrifice does play a causal role in promoting and sustaining the evolution of stratified societies by maintaining and legitimizing the power of elites. We briefly discuss some common objections to the application of phylogenetic modeling to cultural evolution and argue that the use of these methods does not require a commitment to either gene-like cultural inheritance or to the view that cultures are like vertebrate species. We conclude that the careful application of these methods can substantially enhance the prospects of an evolutionary science of human history. PMID:28739960
Design and Implementation of a Metadata-rich File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ames, S; Gokhale, M B; Maltzahn, C
2010-01-19
Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
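The graph data model, with files, user-defined attributes, and relationships as first-class objects, can be illustrated conceptually in Python; this is not QFS or Quasar code, and all file names and edge labels are hypothetical.

    # Conceptual sketch (not QFS code): files with user-defined
    # attributes, plus directed, labeled relationships between files.
    files = {
        "raw/shot_042.dat": {"type": "raw", "sensor": "ccd3"},
        "meta/shot_042.xml": {"type": "metadata"},
    }
    links = [("meta/shot_042.xml", "describes", "raw/shot_042.dat")]

    def related(name, label):
        """Follow 'label' edges out of file 'name' in the graph."""
        return [dst for src, lab, dst in links if src == name and lab == label]

    print(related("meta/shot_042.xml", "describes"))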
INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?
Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P
2015-01-01
Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality. Sorting the population once reduced simulation time by a factor of two. Storing person attributes separately instead of using person objects also seemed more efficient. Next, we improved model performance by up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter space, we observed an inverse relationship between vaccination coverage and the clinical attack rate reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.
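The reported gain from storing person attributes separately instead of using person objects corresponds to the array-of-structs versus struct-of-arrays layout choice; a hedged NumPy micro-example of the two layouts (attribute names are hypothetical):

    import numpy as np

    # Array-of-objects vs separate attribute arrays ("struct of arrays"),
    # the layout the study found more efficient for large populations.
    class Person:
        def __init__(self, age, immune):
            self.age, self.immune = age, immune

    n = 100_000
    rng = np.random.default_rng(2)
    ages, immune = rng.integers(0, 90, n), rng.random(n) < 0.3

    people = [Person(a, i) for a, i in zip(ages, immune)]

    # Object layout: a Python-level loop over scattered objects.
    s1 = sum(1 for p in people if not p.immune and p.age < 18)

    # Attribute arrays: one vectorized, cache-friendly pass.
    s2 = int(np.sum(~immune & (ages < 18)))
    assert s1 == s2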
Neighborhood Contributions to Racial and Ethnic Disparities in Obesity Among New York City Adults.
Lim, Sungwoo; Harris, Tiffany G
2015-01-01
Objectives. We assessed neighborhood confounding of racial/ethnic obesity disparities among adults in New York City after accounting for complex sampling, and how much neighborhood factors (walkability, percentage Black or Hispanic, poverty) contributed to this effect. Methods. We combined New York City Community Health Survey 2002-2004 data with Census 2000 zip code-level data. We estimated odds ratios (ORs) for obesity with 2 sets of regression analyses. First, we used a method that incorporates the conditional pseudolikelihood into the complex-sample adjustment. Second, we compared ORs for race/ethnicity from a conventional multilevel model for each neighborhood factor with those from a hybrid fixed-effect model. Results. The weighted estimate for obesity for Blacks versus Whites (OR = 1.8; 95% confidence interval = 1.6, 2.0) was attenuated when we controlled for neighborhood confounding (OR = 1.4; 95% confidence interval = 1.2, 1.6; first analysis). The percentage of Blacks in the neighborhood made a large contribution, whereas the walkability contribution was minimal (second analysis). Conclusions. The percentage of Blacks in New York City neighborhoods explained a large portion of the disparity in obesity between Blacks and Whites. The study highlights the importance of estimating valid neighborhood effects for public health surveillance and intervention.
Dynamic modeling of spacecraft in a collisionless plasma
NASA Technical Reports Server (NTRS)
Katz, I.; Parks, D. E.; Wang, S. S.; Wilson, A.
1977-01-01
A new computational model is described which can simulate the charging of complex geometrical objects in three dimensions. Two sample calculations are presented. In the first problem, the capacitance to infinity of a complex object similar to a satellite with solar array paddles is calculated. The second problem concerns the dynamical charging of a conducting cube partially covered with a thin dielectric film. In this calculation, the photoemission results in differential charging of the object.
Prasad, Raghu; Muniyandi, Manivannan; Manoharan, Govindan; Chandramohan, Servarayan M
2018-05-01
The purpose of this study was to examine the face and construct validity of a custom-developed bimanual laparoscopic force-skills trainer with haptic feedback. The study also examined the effect of handedness on fundamental and complex tasks. Residents (n = 25) and surgeons (n = 25) performed virtual reality-based bimanual fundamental and complex tasks. Tool-tissue reaction forces were summed, recorded, and analysed. Seven different force-based measures and one time-based measure were used as metrics. Subsequently, participants filled out face validity and demographic questionnaires. Residents and surgeons were positive about the design, workspace, and usefulness of the simulator. Construct validity results showed significant differences between residents and experts during the execution of fundamental and complex tasks. In both tasks, residents applied larger forces with a higher coefficient of variation and larger force jerks (P < .001). Experts, with their dominant hand, applied lower forces in complex tasks and higher forces in fundamental tasks (P < .001). The coefficients of force variation (CoV) of residents and experts were higher in complex tasks (P < .001). Strong correlations were observed between CoV and task time for fundamental (r = 0.70) and complex tasks (r = 0.85). Range of smoothness of force was higher for the non-dominant hand in both fundamental and complex tasks. The simulator was able to differentiate the force-skills of residents and surgeons and to objectively evaluate the effects of handedness on laparoscopic force-skills. Competency-based laparoscopic skills assessment curricula should be updated to meet the requirements of bimanual force-based training.
Optimizing communication satellites payload configuration with exact approaches
NASA Astrophysics Data System (ADS)
Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi
2015-12-01
The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
Towards a framework for agent-based image analysis of remote-sensing data.
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-04-03
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear high potential for transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which has been extensively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adapt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).
Planning the Recreational-Educational Complex of the Alabama Space and Rocket Center.
ERIC Educational Resources Information Center
Burkhalter, Bettye B.; Kartis, Alexia M.
1983-01-01
Planning for the Alabama Space and Rocket Center's new recreational-educational complex included (1) goal establishment, (2) needs assessment (including accessibility for the disabled), (3) environmental impact analysis, (4) formulation of objectives and priorities, and (5) strategy development to meet objectives, as well as preparation of a…
Axelrod, Noel; Radko, Anna; Lewis, Aaron; Ben-Yosef, Nissim
2004-04-10
A methodology is described for phase restoration of an object function from differential interference contrast (DIC) images. The methodology involves collecting a set of DIC images in the same plane with different bias retardation between the two illuminating light components produced by a Wollaston prism. These images, together with one conventional bright-field image, allow for reduction of the phase deconvolution restoration problem from a highly complex nonlinear mathematical formulation to a set of linear equations that can be applied to resolve the phase for images with a relatively large number of pixels. Additionally, under certain conditions, an on-line atomic force imaging system that does not interfere with the standard DIC illumination modes resolves uncertainties in large topographical variations that generally lead to a basic problem in DIC imaging, i.e., phase unwrapping. Furthermore, the availability of confocal detection allows for a three-dimensional reconstruction with high accuracy of the refractive-index measurement of the object that is to be imaged. This has been applied to reconstruction of the refractive index of an arrayed waveguide in a region in which a defect in the sample is present. The results of this paper highlight the synergism of far-field microscopies integrated with scanned probe microscopies and restoration algorithms for phase reconstruction.
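Once the restoration reduces to a set of linear equations, a least-squares solve is the natural tool; the sketch below uses random stand-ins for the actual DIC operator and image data, so it only illustrates the final linear-algebra step.

    import numpy as np

    # After reduction to linear equations A x = b (x = unknown phase
    # samples), a least-squares solve recovers the phase; A and b here
    # are random stand-ins for the real DIC operator and data.
    rng = np.random.default_rng(3)
    A = rng.standard_normal((200, 120))     # equations x unknowns (toy)
    x_true = rng.standard_normal(120)
    b = A @ x_true

    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("max error:", np.abs(x_hat - x_true).max())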
Single-frequency 3D synthetic aperture imaging with dynamic metasurface antennas.
Boyarsky, Michael; Sleasman, Timothy; Pulido-Mancera, Laura; Diebold, Aaron V; Imani, Mohammadreza F; Smith, David R
2018-05-20
Through aperture synthesis, an electrically small antenna can be used to form a high-resolution imaging system capable of reconstructing three-dimensional (3D) scenes. However, the large spectral bandwidth typically required in synthetic aperture radar systems to resolve objects in range often requires costly and complex RF components. We present here an alternative approach based on a hybrid imaging system that combines a dynamically reconfigurable aperture with synthetic aperture techniques, demonstrating the capability to resolve objects in three dimensions (3D), with measurements taken at a single frequency. At the core of our imaging system are two metasurface apertures, both of which consist of a linear array of metamaterial irises that couple to a common waveguide feed. Each metamaterial iris has integrated within it a diode that can be biased so as to switch the element on (radiating) or off (non-radiating), such that the metasurface antenna can produce distinct radiation profiles corresponding to different on/off patterns of the metamaterial element array. The electrically large size of the metasurface apertures enables resolution in range and one cross-range dimension, while aperture synthesis provides resolution in the other cross-range dimension. The demonstrated imaging capabilities of this system represent a step forward in the development of low-cost, high-performance 3D microwave imaging systems.
NASA Astrophysics Data System (ADS)
Bittner, K.; d'Angelo, P.; Körner, M.; Reinartz, P.
2018-05-01
Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. The main data sources providing a digital representation of the Earth's surface and of related natural, cultural, and man-made objects of urban areas in remote sensing are digital surface models (DSMs). DSMs can be obtained either by light detection and ranging (LIDAR), SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image-matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but, at the same time, enhances the shapes of 3D objects, buildings in our case. Specifically, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from the noisy stereo DSM input. The obtained results demonstrate the strong potential of creating large-area remote sensing depth images in which the buildings exhibit better-quality shapes and roof forms.
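For reference, the standard conditional-GAN objective that pix2pix-style image-to-image translation typically builds on is shown below; the paper's exact loss and weighting may differ.

    \min_G \max_D \; \mathcal{L}_{\mathrm{cGAN}}(G,D)
      = \mathbb{E}_{x,y}\big[\log D(x,y)\big]
      + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x,z))\big)\big],

often combined with a reconstruction term,

    G^{\ast} = \arg\min_G \max_D \; \mathcal{L}_{\mathrm{cGAN}}(G,D)
      + \lambda\, \mathbb{E}_{x,y,z}\big[\lVert y - G(x,z)\rVert_1\big],

where x is the noisy stereo DSM input, y the LIDAR-like target DSM, and z a noise vector.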
Skivington, Kathryn; Lifshen, Marni; Mustard, Cameron
2016-01-01
BACKGROUND: Comprehensive workplace return-to-work policies, applied with consistency, can reduce length of time out of work and the risk of long-term disability. This paper reports on the findings from a qualitative study exploring managers’ and return-to-work-coordinators’ views on the implementation of their organization’s new return-to-work program. OBJECTIVES: To provide practical guidance to organizations in designing and implementing return-to-work programs for their employees. METHODS: Semi-structured qualitative interviews were undertaken with 20 managers and 10 return-to-work co-ordinators to describe participants’ perspectives on the progress of program implementation in the first 18 months of adoption. The study was based in a large healthcare organization in Ontario, Canada. Thematic analysis of the data was conducted. RESULTS: We identified tensions evident in the early implementation phase of the organization’s return-to-work program. These tensions were attributed to uncertainties concerning roles and responsibilities and to circumstances where objectives or principles appeared to be in conflict. CONCLUSIONS: The implementation of a comprehensive and collaborative return-to-work program is a complex challenge. The findings described in this paper may provide helpful guidance for organizations embarking on the development and implementation of a return-to-work program. PMID:27792035
NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.
1998-01-01
Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces very well the large-scale motion of the magnetospheric boundary. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary for periods of northward IMF with strong B_x and B_y components. The second comparison reviewed combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R_E from the Earth.
NASA Astrophysics Data System (ADS)
Alexandersen, Mike; Benecchi, Susan D.; Chen, Ying-Tung; Schwamb, Megan Elizabeth; Wang, Shiang-Yu; Lehner, Matthew; Gladman, Brett; Kavelaars, JJ; Petit, Jean-Marc; Bannister, Michele T.; Gwyn, Stephen; Volk, Kathryn
2016-10-01
Lightcurves can reveal information about the gravitational processes that have acted on small bodies since their formation and/or their gravitational history. At the extremes, lightcurves can provide constraints on the material properties and interior structure of individual objects. In large sets, lightcurves can possibly shed light on the source of small body populations that did not form in place (such as the dynamically excited trans-Neptunian Objects (TNOs)). We have used the sparsely sampled photometry from the well characterized Outer Solar System Origins Survey (OSSOS) discovery and recovery observations to identify TNOs with potentially large amplitude lightcurves. Large lightcurve amplitudes would indicate that the objects are likely elongated or in potentially interesting spin states; however, this would need to be confirmed with further follow-up observations. We here present the results of a 6-hour pilot study of a subset of 17 OSSOS objects using Hyper Suprime-Cam (HSC) on the Subaru Telescope. Subaru's large aperture and HSC's large field of view allow us to obtain measurements of multiple objects with a range of magnitudes in each telescope pointing. Photometry was carefully measured using an elongated-aperture method to account for the motion of the objects, producing the short but precise lightcurves that we present here. The OSSOS objects span a large range of sizes, from several hundred kilometres down to a few tens of kilometres in diameter. We are thus investigating smaller objects than previous lightcurve projects have typically studied.
Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis
2009-12-01
We examined the role of temporal synchrony (the simultaneous appearance of visual features) in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.
Schendan, Haline E.; Ganis, Giorgio
2015-01-01
People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. PMID:26441701
Singh, Prafull Kumar; Roukounakis, Aristomenis; Frank, Daniel O.; Kirschnek, Susanne; Das, Kushal Kumar; Neumann, Simon; Madl, Josef; Römer, Winfried; Zorzin, Carina; Borner, Christoph; Haimovici, Aladin; Garcia-Saez, Ana; Weber, Arnim; Häcker, Georg
2017-01-01
The Bcl-2 family protein Bim triggers mitochondrial apoptosis. Bim is expressed in nonapoptotic cells at the mitochondrial outer membrane, where it is activated by largely unknown mechanisms. We found that Bim is regulated by formation of large protein complexes containing dynein light chain 1 (DLC1). Bim rapidly inserted into cardiolipin-containing membranes in vitro and recruited DLC1 to the membrane. Bim binding to DLC1 induced the formation of large Bim complexes on lipid vesicles, on isolated mitochondria, and in intact cells. Native gel electrophoresis and gel filtration showed Bim-containing mitochondrial complexes of several hundred kilodaltons in all cells tested. Bim unable to form complexes was consistently more active than complexed Bim, which correlated with its substantially reduced binding to anti-apoptotic Bcl-2 proteins. At endogenous levels, Bim surprisingly bound only anti-apoptotic Mcl-1 but not Bcl-2 or Bcl-XL, recruiting only Mcl-1 into large complexes. Targeting of DLC1 by RNAi in human cell lines induced disassembly of Bim–Mcl-1 complexes and the proteasomal degradation of Mcl-1 and sensitized the cells to the Bcl-2/Bcl-XL inhibitor ABT-737. Regulation of apoptosis at mitochondria thus extends beyond the interaction of monomers of proapoptotic and anti-apoptotic Bcl-2 family members but involves more complex structures of proteins at the mitochondrial outer membrane, and targeting complexes may be a novel therapeutic strategy. PMID:28982759
Different Evolutionary Paths to Complexity for Small and Large Populations of Digital Organisms
2016-01-01
A major aim of evolutionary biology is to explain the respective roles of adaptive versus non-adaptive changes in the evolution of complexity. While selection is certainly responsible for the spread and maintenance of complex phenotypes, this does not automatically imply that strong selection enhances the chance for the emergence of novel traits, that is, the origination of complexity. Population size is one parameter that alters the relative importance of adaptive and non-adaptive processes: as population size decreases, selection weakens and genetic drift grows in importance. Because of this relationship, many theories invoke a role for population size in the evolution of complexity. Such theories are difficult to test empirically because of the time required for the evolution of complexity in biological populations. Here, we used digital experimental evolution to test whether large or small asexual populations tend to evolve greater complexity. We find that both small and large—but not intermediate-sized—populations are favored to evolve larger genomes, which provides the opportunity for subsequent increases in phenotypic complexity. However, small and large populations followed different evolutionary paths towards these novel traits. Small populations evolved larger genomes by fixing slightly deleterious insertions, while large populations fixed rare beneficial insertions that increased genome size. These results demonstrate that genetic drift can lead to the evolution of complexity in small populations and that purifying selection is not powerful enough to prevent the evolution of complexity in large populations. PMID:27923053
Santangelo, Valerio; Di Francesco, Simona Arianna; Mastroberardino, Serena; Macaluso, Emiliano
2015-12-01
Brief presentation of a complex scene entails that only a few objects can be selected, processed in depth, and stored in memory. Both low-level sensory salience and high-level context-related factors (e.g., the conceptual match/mismatch between objects and scene context) contribute to this selection process, but how the interplay between these factors affects memory encoding is largely unexplored. Here, during fMRI we presented participants with pictures of everyday scenes. After a short retention interval, participants judged the position of a target object extracted from the initial scene. The target object could be either congruent or incongruent with the context of the scene, and could be located in a region of the image with maximal or minimal salience. Behaviourally, we found a reduced impact of saliency on visuospatial working memory performance when the target was out-of-context. Encoding-related fMRI results showed that context-congruent targets activated dorsoparietal regions, while context-incongruent targets de-activated the ventroparietal cortex. Saliency modulated activity both in dorsal and ventral regions, with larger context-related effects for salient targets. These findings demonstrate the joint contribution of knowledge-based and saliency-driven attention for memory encoding, highlighting a dissociation between dorsal and ventral parietal regions. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ksanfomality, L. V.
2012-09-01
Results of an analysis of the re-processed panorama from the VENERA-9 lander are presented. The panorama, transmitted in 1975, was historically the first from the surface of Venus. The low noise of the VENERA-9 data made it possible to distinguish a large object of unusually regular structure. Earlier, its fuzzy image was repeatedly cited in the literature and interpreted as a "strange stone". The complex shape and other features suggest that the object may be a real inhabitant of the planet. It cannot be excluded that another, similar object observed was damaged during the VENERA-9 landing. From the evidence of its movement and the positions of some other similar objects, it is concluded that, because of limited energy capacity, the physical activity of a Venusian fauna would be much slower than that of the Earth's fauna. Another question considered is what sources of energy could be used by life in the conditions of the planet's high-temperature, oxygen-free atmosphere. It is natural to assume that, as on Earth, a Venusian fauna would be heterotrophic and would have to be based on a hypothetical flora using photosynthesis (based on an unknown high-temperature biophysical mechanism).
NASA Astrophysics Data System (ADS)
Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.
2016-11-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
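An elitist GA in its generic form can be sketched as follows; a toy one-dimensional minimization stands in for the measurement-to-object association problem, and the operator choices (truncation selection, midpoint crossover, Gaussian mutation) are illustrative assumptions, not the authors' exact configuration.

    import random

    # Toy elitist GA sketch: elitism copies the best individuals
    # unchanged into the next generation before breeding the rest.
    def f(x): return (x - 1.5) ** 2          # objective to minimize

    POP, ELITE, GENS = 30, 2, 200
    pop = [random.uniform(-10, 10) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=f)
        nxt = pop[:ELITE]                    # elitism: keep the best
        while len(nxt) < POP:
            p1, p2 = random.sample(pop[:10], 2)  # truncation selection
            child = 0.5 * (p1 + p2)              # midpoint crossover
            child += random.gauss(0, 0.3)        # Gaussian mutation
            nxt.append(child)
        pop = nxt
    print(round(min(pop, key=f), 3))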
Diwadkar, V A; Carpenter, P A; Just, M A
2000-07-01
Functional MRI was used to determine how the constituents of the cortical network subserving dynamic spatial working memory respond to two types of increases in task complexity. Participants mentally maintained the most recent location of either one or three objects as the three objects moved discretely in either a two- or three-dimensional array. Cortical activation in the dorsolateral prefrontal (DLPFC) and the parietal cortex increased as a function of the number of object locations to be maintained and the dimensionality of the display. An analysis of the response characteristics of the individual voxels showed that a large proportion were activated only when both the variables imposed the higher level of demand. A smaller proportion were activated specifically in response to increases in task demand associated with each of the independent variables. A second experiment revealed the same effect of dimensionality in the parietal cortex when the movement of objects was signaled auditorily rather than visually, indicating that the additional representational demands induced by 3-D space are independent of input modality. The comodulation of activation in the prefrontal and parietal areas by the amount of computational demand suggests that the collaboration between areas is a basic feature underlying much of the functionality of spatial working memory. Copyright 2000 Academic Press.
An efficient non-dominated sorting method for evolutionary algorithms.
Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F
2008-01-01
We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN^2) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
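For contrast with the paper's dominance-tree method (not reproduced here), the standard O(MN^2) fast non-dominated sort used by NSGA-II is sketched below, under a minimization convention.

    # Standard NSGA-II fast non-dominated sort (the O(MN^2) baseline
    # the paper improves on); minimization convention throughout.
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q)) and
                any(a < b for a, b in zip(p, q)))

    def fast_nondominated_sort(pts):
        n = len(pts)
        S = [[] for _ in range(n)]      # solutions each point dominates
        cnt = [0] * n                   # how many points dominate i
        fronts, front = [], []
        for i in range(n):
            for j in range(n):
                if dominates(pts[i], pts[j]): S[i].append(j)
                elif dominates(pts[j], pts[i]): cnt[i] += 1
            if cnt[i] == 0: front.append(i)
        while front:
            fronts.append(front)
            nxt = []
            for i in front:
                for j in S[i]:
                    cnt[j] -= 1
                    if cnt[j] == 0: nxt.append(j)
            front = nxt
        return fronts

    pts = [(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)]
    print(fast_nondominated_sort(pts))  # -> [[0, 1, 2], [3], [4]]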
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2016-01-01
During inactive phases of the Madden-Julian oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, the increased occurrence of large and deep cloud clusters results from the amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES observations for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The size distributions, defined as the footprint numbers as a function of cloud object diameters, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups of a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for cirrostratus. The range of the variation between two extreme phases (typically, the most active and depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation directions/speeds.
Thrust Direction Optimization: Satisfying Dawn's Attitude Agility Constraints
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2013-01-01
The science objective of NASA's Dawn Discovery mission is to explore the two largest members of the main asteroid belt, the giant asteroid Vesta and the dwarf planet Ceres. Dawn successfully completed its orbital mission at Vesta. The Dawn spacecraft has complex, difficult to quantify, and in some cases severe limitations on its attitude agility. The low-thrust transfers between science orbits at Vesta required very complex time-varying thrust directions due to the strong and complex gravity and various science objectives. Traditional thrust design objectives (like minimum ΔV or minimum transfer time) often result in thrust direction time evolutions that cannot be accommodated with the attitude control system available on Dawn. This paper presents several new optimal control objectives, collectively called thrust direction optimization, that were developed and were necessary to successfully navigate Dawn through all orbital transfers at Vesta.
Perspectives on object manipulation and action grammar for percussive actions in primates
Hayashi, Misato
2015-01-01
The skill of object manipulation is a common feature of primates including humans, although there are species-typical patterns of manipulation. Object manipulation can be used as a comparative scale of cognitive development, focusing on its complexity. Nut cracking in chimpanzees has the highest hierarchical complexity of tool use reported in non-human primates. An analysis of the patterns of object manipulation in naive chimpanzees after nut-cracking demonstrations revealed the cause of difficulties in learning nut-cracking behaviour. Various types of behaviours exhibited within a nut-cracking context can be examined in terms of the application of problem-solving strategies, focusing on their basis in causal understanding or insightful intentionality. Captive chimpanzees also exhibit complex forms of combinatory manipulation, which is the precursor of tool use. A new notation system of object manipulation was invented to assess grammatical rules in manipulative actions. The notation system of action grammar enabled direct comparisons to be made between primates including humans in a variety of object-manipulation tasks, including percussive-tool use. PMID:26483528
Cultural differences in the lateral occipital complex while viewing incongruent scenes
Yang, Yung-Jui; Goh, Joshua; Hong, Ying-Yi; Park, Denise C.
2010-01-01
Converging behavioral and neuroimaging evidence indicates that culture influences the processing of complex visual scenes. Whereas Westerners focus on central objects and tend to ignore context, East Asians process scenes more holistically, attending to the context in which objects are embedded. We investigated cultural differences in contextual processing by manipulating the congruence of visual scenes presented in an fMR-adaptation paradigm. We hypothesized that East Asians would show greater adaptation to incongruent scenes, consistent with their tendency to process contextual relationships more extensively than Westerners. Sixteen Americans and 16 native Chinese were scanned while viewing sets of pictures consisting of a focal object superimposed upon a background scene. In half of the pictures objects were paired with congruent backgrounds, and in the other half objects were paired with incongruent backgrounds. We found that within both the right and left lateral occipital complexes, Chinese participants showed significantly greater adaptation to incongruent scenes than to congruent scenes relative to American participants. These results suggest that Chinese were more sensitive to contextual incongruity than were Americans and that they reacted to incongruent object/background pairings by focusing greater attention on the object. PMID:20083532
Novel methods for matter interferometry with nanosized objects
NASA Astrophysics Data System (ADS)
Arndt, Markus
2005-05-01
We discuss the current status and prospects for novel experimental methods for coherence^1,2 and decoherence^3 experiments with large molecules. Quantum interferometry with nanosized objects is interesting for the exploration of the quantum-classical transition. The same experimental setup is also promising for metrology applications and molecular nanolithography. Our coherence experiments with macromolecules employ a Talbot-Lau interferometer. We discuss some modifications to this scheme which are required to extend it to particles with masses in excess of several thousand mass units. In particular, the detection in all previous interference experiments with large clusters and molecules was based on either laser ionization^1 (e.g. fullerenes) or electron impact ionization^2 (e.g. porphyrins). However, most ionization schemes run into efficiency limits as the mass and complexity of the target particle increase. Here we present experimental results for an interference detector which is truly scalable, i.e. one which will even improve with increasing particle size and complexity. "Mechanically magnified fluorescence imaging" (MMFI) combines the high spatial resolution intrinsic to Talbot-Lau interferometry with the high detection efficiency of fluorophores adsorbed onto a substrate. In the Talbot-Lau setup, a molecular interference pattern is revealed by scanning the third grating across the molecular beam^1. The number of transmitted molecules is a function of the relative position between the mask and the molecular density pattern. Both the particle interference pattern and the mechanical mask structure may be far smaller than any optical resolution limit. After mechanical magnification by an arbitrary factor, in our case a factor of 5000, the interference pattern can still be inspected by fluorescence microscopy. The fluorescent molecules are collected on a surface which is scanned collinearly and synchronously behind the third grating. The resulting image of the interference pattern is easily large enough to be seen by the unaided eye. High-contrast interference fringes could be recorded with dye molecules. ^1B. Brezger et al., Phys. Rev. Lett. 88, 100404 (2002). ^2L. Hackermüller et al., Phys. Rev. Lett. 91, 90408 (2003). ^3L. Hackermüller et al., Nature 427, 711 (2004).
Cryptic Speciation Patterns in Iranian Rock Lizards Uncovered by Integrative Taxonomy
Ahmadzadeh, Faraham; Flecks, Morris; Carretero, Miguel A.; Mozaffari, Omid; Böhme, Wolfgang; Harris, D. James; Freitas, Susana; Rödder, Dennis
2013-01-01
While traditionally species recognition has been based solely on morphological differences, either typological or quantitative, several newly developed methods can be used for a more objective and integrative approach to species delimitation. This may be especially relevant when dealing with cryptic species or species complexes, where high overall resemblance between species is coupled with comparatively high morphological variation within populations. Rock lizards, genus Darevskia, are such an example, as many of its members offer few diagnostic morphological features. Herein, we use a combination of genetic, morphological and ecological criteria to delimit cryptic species within two species complexes, D. chlorogaster and D. defilippii, both distributed in northern Iran. Our analyses are based on molecular information from two nuclear and two mitochondrial genes, morphological data (15 morphometric, 16 meristic and four categorical characters) and eleven newly calculated spatial environmental predictors. The phylogeny inferred for Darevskia confirmed the monophyly of each species complex, with each of them comprising several highly divergent clades, especially when compared to other congeners. We identified seven candidate species within each complex, of which three and four species were supported by Bayesian species delimitation within D. chlorogaster and D. defilippii, respectively. Trained with genetically determined clades, Ecological Niche Modeling provided additional support for these cryptic species; especially those within the D. defilippii-complex exhibit well-differentiated niches. Due to overall morphological resemblance, a first-pass PCA with mixed variables showed only the separation between the two complexes. However, MANCOVA and subsequent Discriminant Analysis performed separately for both complexes allowed distinction of the species when sample size was large enough, namely within the D. chlorogaster-complex. In conclusion, the results support four new species, which are described herein. PMID:24324611
Improvement of the F-Perceptory Approach Through Management of Fuzzy Complex Geographic Objects
NASA Astrophysics Data System (ADS)
Khalfi, B.; de Runz, C.; Faiz, S.; Akdag, H.
2015-08-01
In the real world, data is imperfect in various ways, such as imprecision, vagueness, uncertainty, ambiguity and inconsistency. For geographic data, the fuzzy aspect is mainly manifested in the time, space and function of objects and is due to a lack of precision. Researchers in the domain therefore emphasize the importance of modeling data structures in GIS, but also their lack of adaptation to fuzzy data. The F-Perceptory approach manages the modeling of imperfect geographic information with UML. This management is essential to remain faithful to reality and to better guide the user in decision-making. However, this approach does not manage fuzzy complex geographic objects; such an object is a multiple object whose components may have similar or different geographic shapes. In this paper, we therefore propose to improve the F-Perceptory approach so that it handles the modeling of fuzzy complex geographic objects. As a second step, we propose its transformation into UML modeling.
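To make the notion concrete, the sketch below shows one plausible in-memory representation of a fuzzy complex geographic object, using the common kernel/support ("egg-yolk") encoding of fuzzy regions; all names are illustrative and do not reflect F-Perceptory's actual metamodel.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FuzzyGeometry:
    """A fuzzy region as a certain kernel plus a broader possible support;
    membership falls from 1 (kernel) to 0 (outside support)."""
    kernel_wkt: str   # geometry certainly inside the object (WKT string)
    support_wkt: str  # geometry possibly inside the object (WKT string)

@dataclass
class FuzzyComplexGeoObject:
    """A complex geographic object aggregating several fuzzy parts,
    possibly of different geometry types (point/line/polygon)."""
    name: str
    parts: List[FuzzyGeometry] = field(default_factory=list)

# e.g. a flood zone composed of two fuzzy polygons (placeholder WKT)
flood = FuzzyComplexGeoObject("flood_zone", [
    FuzzyGeometry("POLYGON((0 0, 1 0, 1 1, 0 0))",
                  "POLYGON((-1 -1, 2 -1, 2 2, -1 -1))"),
    FuzzyGeometry("POLYGON((3 3, 4 3, 4 4, 3 3))",
                  "POLYGON((2 2, 5 2, 5 5, 2 2))"),
])
print(len(flood.parts))  # -> 2
```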
Hardman, Kyle O; Cowan, Nelson
2015-03-01
Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli that possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PsycINFO Database Record (c) 2015 APA, all rights reserved.
GBT Reveals Satellite of Milky Way in Retrograde Orbit
NASA Astrophysics Data System (ADS)
2003-05-01
New observations with the National Science Foundation's Robert C. Byrd Green Bank Telescope (GBT) suggest that what was once believed to be an intergalactic cloud of unknown distance and significance is actually a previously unrecognized satellite galaxy of the Milky Way orbiting backward around the Galactic center. [Figure: artist's rendition of the path of satellite galaxy Complex H (in red) in relation to the orbit of the Sun (in yellow) about the center of the Milky Way Galaxy; the outer layers of Complex H are being stripped away by its interaction with the Milky Way, and the hydrogen atmosphere (in blue) is shown surrounding the visible portion (in white) of the Galaxy. Credit: Lockman, Smiley, Saxton; NRAO/AUI] Jay Lockman of the National Radio Astronomy Observatory (NRAO) in Green Bank, West Virginia, discovered that this object, known as "Complex H," is crashing through the outermost parts of the Milky Way from an inclined, retrograde orbit. Lockman's findings will be published in the July 1 issue of the Astrophysical Journal, Letters. "Many astronomers assumed that Complex H was probably a distant neighbor of the Milky Way with some unusual velocity that defied explanation," said Lockman. "Since its motion appeared completely unrelated to Galactic rotation, astronomers simply lumped it in with other high velocity clouds that had strange and unpredictable trajectories." High velocity clouds are essentially what their name implies: fast-moving clouds of predominantly neutral atomic hydrogen. They are often found at great distances from the disk of the Milky Way, and may be left-over material from the formation of our Galaxy and other galaxies in our Local Group. Over time, these objects can become incorporated into larger galaxies, just as small asteroids left over from the formation of the solar system sometimes collide with the Earth. Earlier studies of Complex H were hindered because the cloud currently is passing almost exactly behind the outer disk of the Galaxy. The intervening dust and gas that reside within the sweeping spiral arms of the Milky Way block any visible light from this object from reaching the Earth. Radio waves, however, which have a much longer wavelength than visible light, are able to pass through the intervening dust and gas. The extreme sensitivity of the recently commissioned GBT allowed Lockman to clearly map the structure of Complex H, revealing a dense core moving on an orbit at a 45-degree angle to the plane of the Milky Way. Additionally, the scientist detected a more diffuse region surrounding the central core. This comparatively rarefied region looks like a tail that is trailing behind the central mass, and is being decelerated by its interaction with the Milky Way. "The GBT was able to show that this object had a diffuse 'tail' trailing behind, with properties quite different from its main body," said Lockman. "The new data are consistent with a model in which this object is a satellite of the Milky Way in an inclined, retrograde orbit, whose outermost layers are currently being stripped away in its encounter with the Galaxy." These results place Complex H in a small club of Galactic satellites whose orbits do not follow the rotation of the rest of the Milky Way. Among the most prominent of these objects are the Magellanic Clouds, which also are being affected by their interaction with the Milky Way, and are shedding their gas in a long stream.
Since large galaxies, like the Milky Way, form by devouring smaller galaxies, clusters of stars, and massive clouds of hydrogen, it is not unusual for objects to be pulled into orbit around the Galaxy from directions other than that of Galactic rotation. "Astronomers have seen evidence that this accreting material can come in from wild orbits," said Butler Burton, an astronomer with the NRAO in Charlottesville, Virginia. "The Magellanic Clouds are being torn apart from their interaction with the Milky Way, and there are globular clusters rotating the wrong way. There is evidence that stuff was going every which way at the beginning of the Galaxy, and Complex H is probably left over from that chaotic period." The new observations place Complex H at approximately 108,000 light-years from the Galactic center, and indicate that it is nearly 33,000 light-years across, containing approximately 6 million solar masses of hydrogen. Radio telescopes, like the GBT, are able to observe these cold, dark clouds of hydrogen because of the natural electromagnetic radiation emitted by neutral atomic hydrogen at radio wavelengths (21 centimeters). Globular clusters, and certain other objects in the extended Galactic halo, can be studied with optical telescopes because the material in them has collapsed to form hot, bright stars. The GBT is the world's largest fully steerable radio telescope. It was commissioned in August of 2000, and continues to be outfitted with the sensitive receivers and components that will allow it to make observations at much higher frequencies. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
Rodo, Christophe; Sargolini, Francesca; Save, Etienne
2017-03-01
The entorhinal-hippocampal circuitry has been suggested to play an important role in episodic memory, but the contribution of the entorhinal cortex remains elusive. Predominant theories propose that the medial entorhinal cortex (MEC) processes spatial information whereas the lateral entorhinal cortex (LEC) processes non-spatial information. A recent study using an object exploration task has suggested that the involvement of the MEC and LEC in spatial and non-spatial information processing could be modulated by the amount of information to be processed, i.e. environmental complexity. To address this hypothesis, we used an object exploration task in which rats with excitotoxic lesions of the MEC and LEC had to detect spatial and non-spatial novelty among a set of objects, and we varied environmental complexity by decreasing the number of objects or the amount of object diversity. Reducing diversity restored the ability to process spatial and non-spatial information in the MEC and LEC groups, respectively. Reducing the number of objects restored the ability to process non-spatial information in the LEC group but not the ability to process spatial information in the MEC group. The findings indicate that the MEC and LEC are not strictly necessary for spatial and non-spatial processing, but that their involvement depends on the complexity of the information to be processed. Copyright © 2016 Elsevier B.V. All rights reserved.
Factors Influencing Pharmacy Students' Attendance Decisions in Large Lectures
Helms, Kristen L.; McDonough, Sharon K.; Breland, Michelle L.
2009-01-01
Objectives To identify reasons for pharmacy student attendance and absenteeism in large lectures and to determine whether certain student characteristics affect student absenteeism. Methods Pharmacy students' reasons to attend and not attend 3 large lecture courses were identified. Using a Web-based survey instrument, second-year pharmacy students were asked to rate to what degree various reasons affected their decision to attend or not attend classes for 3 courses. Bivariate analyses were used to assess the relationships between student characteristics and degree of absenteeism. Results Ninety-eight students (75%) completed the survey instrument. The degree of student absenteeism differed among the 3 courses. Most student demographic characteristics examined were not related to the degree of absenteeism. Different reasons to attend and not to attend class were identified for each of the 3 courses, suggesting that attendance decisions were complex. Conclusions Wanting to take their own notes and the instructor highlighting what was important to know were the top 2 common reasons for pharmacy students to attend classes. Better understanding of the factors influencing student absenteeism may help pharmacy educators design effective interventions to facilitate student attendance. PMID:19777098
Deconstructing Visual Scenes in Cortex: Gradients of Object and Spatial Layout Information
Kravitz, Dwight J.; Baker, Chris I.
2013-01-01
Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity. PMID:22473894
Deep Neural Networks as a Computational Model for Human Shape Sensitivity
Op de Beeck, Hans P.
2016-01-01
Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development. PMID:27124699
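One common way such model-to-human comparisons are made is representational similarity analysis: correlate the pairwise dissimilarity structure of DNN features with human shape-dissimilarity judgments over the same stimuli. The sketch below illustrates this under stated assumptions (a pretrained torchvision ResNet-50 as the feature extractor, random tensors standing in for real stimuli); it is not the paper's exact analysis.

```python
import numpy as np
import torch
import torchvision.models as models
from scipy.stats import spearmanr

# Pretrained CNN used as a feature extractor (penultimate layer).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

def feature_rdm(batch):
    """Representational dissimilarity matrix (1 - Pearson r) of DNN
    features for a batch of preprocessed images, shape (n, 3, 224, 224)."""
    with torch.no_grad():
        feats = model(batch).numpy()
    return 1.0 - np.corrcoef(feats)

# Toy stand-in for real stimuli; replace with preprocessed shape images.
batch = torch.randn(8, 3, 224, 224)
model_rdm = feature_rdm(batch)

# Given human_rdm, a human shape-dissimilarity matrix over the same 8 stimuli:
# iu = np.triu_indices(8, k=1)
# rho = spearmanr(model_rdm[iu], human_rdm[iu]).correlation
```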
Combined mining: discovering informative knowledge in complex data.
Cao, Longbing; Zhang, Huaifeng; Zhao, Yanchang; Luo, Dan; Zhang, Chengqi
2011-06-01
Enterprise data mining applications often involve complex data such as multiple large heterogeneous data sources, user preferences, and business impact. In such situations, a single method or one-step mining is often limited in discovering informative knowledge. It would also be very time- and space-consuming, if not impossible, to join relevant large data sources for mining patterns consisting of multiple aspects of information. It is crucial to develop effective approaches for mining patterns that combine the necessary information from multiple relevant business lines, catering for real business settings and decision-making actions rather than just providing a single line of patterns. Recent years have seen increasing efforts on mining more informative patterns, e.g., integrating frequent pattern mining with classification to generate frequent-pattern-based classifiers. Rather than presenting a specific algorithm, this paper builds on our existing works and proposes combined mining as a general approach to mining informative patterns combining components from multiple data sets, multiple features, or multiple methods on demand. We summarize general frameworks, paradigms, and basic processes for multifeature combined mining, multisource combined mining, and multimethod combined mining. Novel types of combined patterns, such as incremental cluster patterns, can result from such frameworks and cannot be directly produced by existing methods. A set of real-world case studies has been conducted to test the frameworks, with some of them briefed in this paper. They identify combined patterns for informing government debt prevention and improving government service objectives, which shows the flexibility and instantiation capability of combined mining in discovering informative knowledge in complex data.
Worthington, Thomas A; Brewer, Shannon K; Farless, Nicole; Grabowski, Timothy B; Gregory, Mark S
2014-01-01
Habitat fragmentation and flow regulation are significant factors related to the decline and extinction of freshwater biota. Pelagic-broadcast spawning cyprinids require moving water and some length of unfragmented stream to complete their life cycle. However, it is unknown how discharge and habitat features interact at multiple spatial scales to alter the transport of semi-buoyant fish eggs. Our objective was to assess the relationship between downstream drift of semi-buoyant egg surrogates (gellan beads) and discharge and habitat complexity. We quantified transport time of a known quantity of beads using 2-3 sampling devices at each of seven locations on the North Canadian and Canadian rivers. Transport time was assessed based on median capture time (time at which 50% of beads were captured) and sampling period (time period when 2.5% and 97.5% of beads were captured). Habitat complexity was assessed by calculating width:depth ratios at each site and from several habitat metrics determined using analyses of aerial photographs. Median time of egg capture was negatively correlated to site discharge. The temporal extent of the sampling period at each site was negatively correlated to both site discharge and habitat-patch dispersion. Our results highlight the role of discharge in driving transport times, but also indicate that higher dispersion of habitat patches relates to increased retention of beads within the river. These results could be used to target restoration activities or prioritize water use to create and maintain habitat complexity within large, fragmented river systems.
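The two transport metrics defined above are simple percentile statistics of the capture-time distribution. A minimal sketch (hypothetical function name, toy data; not the study's code) of how they can be computed:

```python
import numpy as np

def capture_metrics(times):
    """Transport metrics from bead capture times at one site:
    median capture time and the 2.5%-97.5% sampling-period length."""
    t = np.asarray(times, dtype=float)
    t50 = np.percentile(t, 50)                  # median capture time
    t_lo, t_hi = np.percentile(t, [2.5, 97.5])  # sampling-period bounds
    return t50, t_hi - t_lo

# Toy data: capture times (hours) of beads at a downstream sampler
times = np.random.default_rng(1).gamma(shape=2.0, scale=3.0, size=500)
print(capture_metrics(times))  # (median time, sampling-period length)
```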
FORMATION AND RECONDENSATION OF COMPLEX ORGANIC MOLECULES DURING PROTOSTELLAR LUMINOSITY OUTBURSTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taquet, Vianney; Wirström, Eva S.; Charnley, Steven B.
2016-04-10
During the formation of stars, the accretion of surrounding material toward the central object is thought to undergo strong luminosity outbursts followed by long periods of relative quiescence, even at the early stages of star formation when the protostar is still embedded in a large envelope. We investigated the gas-phase formation and recondensation of the complex organic molecules (COMs) di-methyl ether and methyl formate, induced by sudden ice evaporation processes occurring during luminosity outbursts of different amplitudes in protostellar envelopes. For this purpose, we updated a gas-phase chemical network forming COMs in which ammonia plays a key role. The model calculations presented here demonstrate that ion–molecule reactions alone could account for the observed presence of di-methyl ether and methyl formate in a large fraction of protostellar cores without recourse to grain-surface chemistry, although they depend on uncertain ice abundances and gas-phase reaction branching ratios. In spite of the short outburst timescales of about 100 years, abundance ratios of the considered species higher than 10% with respect to methanol are predicted during outbursts due to their low binding energies relative to water and methanol which delay their recondensation during cooling. Although the current luminosity of most embedded protostars would be too low to produce complex organics in the hot-core regions that are observable with current sub-millimetric interferometers, previous luminosity outburst events would induce the formation of COMs in extended regions of protostellar envelopes with sizes increasing by up to one order of magnitude.
WE-D-303-00: Computational Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient’s anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: Understand the need and requirements of computational phantoms in medical physics research Discuss the developments and applications of computational phantoms Know the promises and limitations of computational phantoms in solving complex problems.
Quantum communication complexity advantage implies violation of a Bell inequality
Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii
2016-01-01
We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600
Heavy ligand atom induced large magnetic anisotropy in Mn(ii) complexes.
Chowdhury, Sabyasachi Roy; Mishra, Sabyashachi
2017-06-28
In the search for single molecule magnets, metal ions are considered pivotal towards achieving large magnetic anisotropy barriers. In this context, the influence of ligands with heavy elements, showing large spin-orbit coupling, on magnetic anisotropy barriers was investigated using a series of Mn(ii)-based complexes, in which the metal ion did not have any orbital contribution. The mixing of metal and ligand orbitals was achieved by explicitly correlating the metal and ligand valence electrons with CASSCF calculations. The CASSCF wave functions were further used for evaluating spin-orbit coupling and zero-field splitting parameters for these complexes. For Mn(ii) complexes with heavy ligand atoms, such as Br and I, several interesting inter-state mixings occur via the spin-orbit operator, resulting in large magnetic anisotropy in these Mn(ii) complexes.
Equal-magnitude size-weight illusions experienced within and between object categories.
Buckingham, Gavin; Goodale, Melvyn A; White, Justin A; Westwood, David A
2016-01-01
In the size-weight illusion (SWI), small objects feel heavier than larger objects of the same mass. This effect is typically thought to be a consequence of the lifter's expectation that the large object will outweigh the small object, because objects of the same type typically get heavier as they get larger. Here, we show that this perceptual effect can occur across object category, where there are no strong expectations about the correspondence between size and mass. One group of participants lifted same-colored large and small cubes with the same mass as one another, while another group lifted differently-colored large and small cubes with the same mass as one another. The group who lifted the same-colored cubes experienced a robust SWI and initially lifted the large object with more force than the small object. By contrast, the group who lifted the different-colored objects did so with equal initial forces on the first trial, but experienced just as strong an illusion as those who lifted the same-colored objects. These results demonstrate that color cues can selectively influence the application of fingertip force rates while not impacting at all upon the lifter's perception of object weight, highlighting a stark dissociation in how prior information affects perception and action.
Structural Element Testing in Support of the Design of the NASA Composite Crew Module
NASA Technical Reports Server (NTRS)
Kellas, Sotiris; Jackson, Wade C.; Thesken, John C.; Schleicher, Eric; Wagner, Perry; Kirsch, Michael T.
2012-01-01
In January 2007, the NASA Administrator and Associate Administrator for the Exploration Systems Mission Directorate chartered the NASA Engineering and Safety Center (NESC) to design, build, and test a full-scale Composite Crew Module (CCM). For the design and manufacturing of the CCM, the team adopted the building block approach, where design and manufacturing risks were mitigated through manufacturing trials and structural testing at various levels of complexity. Following NASA's Structural Design Verification Requirements, a further objective was the verification of design analysis methods and the provision of design data for critical structural features. Test articles increasing in complexity, from basic material characterization coupons through structural feature elements and large structural components to full-scale structures, were evaluated. This paper discusses only four element tests, three of which include joints and one of which includes a tapering honeycomb core detail. For each test series, specimen details, instrumentation, test results, a brief analysis description, test-analysis correlation, and conclusions are included.
Real-time prediction of hand trajectory by ensembles of cortical neurons in primates
NASA Astrophysics Data System (ADS)
Wessberg, Johan; Stambaugh, Christopher R.; Kralik, Jerald D.; Beck, Pamela D.; Laubach, Mark; Chapin, John K.; Kim, Jung; Biggs, S. James; Srinivasan, Mandayam A.; Nicolelis, Miguel A. L.
2000-11-01
Signals derived from the rat motor cortex can be used for controlling one-dimensional movements of a robot arm. It remains unknown, however, whether real-time processing of cortical signals can be employed to reproduce, in a robotic device, the kind of complex arm movements used by primates to reach objects in space. Here we recorded the simultaneous activity of large populations of neurons, distributed in the premotor, primary motor and posterior parietal cortical areas, as non-human primates performed two distinct motor tasks. Accurate real-time predictions of one- and three-dimensional arm movement trajectories were obtained by applying both linear and nonlinear algorithms to cortical neuronal ensemble activity recorded from each animal. In addition, cortically derived signals were successfully used for real-time control of robotic devices, both locally and through the Internet. These results suggest that long-term control of complex prosthetic robot arm movements can be achieved by simple real-time transformations of neuronal population signals derived from multiple cortical areas in primates.
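As an illustration of the kind of linear mapping used for such predictions, here is a minimal sketch (illustrative names and toy data, not the study's algorithm): a lagged ridge regression, in the spirit of a Wiener filter, from population firing rates to hand position.

```python
import numpy as np

def fit_linear_decoder(rates, hand_pos, lags=10, ridge=1e-2):
    """Fit a linear map from the preceding `lags` bins of population
    firing rates (T x n) to hand position (T x d) via ridge regression."""
    T, _ = rates.shape
    # Row t stacks rates over the `lags` bins ending at time t+lags-1.
    X = np.hstack([rates[k:T - lags + k + 1] for k in range(lags)])
    y = hand_pos[lags - 1:]
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)     # weights, (n*lags, d)

# Toy data: 1000 time bins, 50 neurons, 3-D hand position
rng = np.random.default_rng(0)
rates = rng.poisson(5, size=(1000, 50)).astype(float)
hand = rng.standard_normal((1000, 3))
W = fit_linear_decoder(rates, hand)
print(W.shape)  # -> (500, 3)
```

A fitted decoder of this form can be applied bin by bin, which is what makes real-time control of a robotic device feasible from streaming neural data.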
4-aminoquinoline analogues and its platinum (II) complexes as antimalarial agents.
de Souza, Nicolli Bellotti; Carmo, Arturene M L; Lagatta, Davi C; Alves, Márcio José Martins; Fontes, Ana Paula Soares; Coimbra, Elaine Soares; da Silva, Adilson David; Abramo, Clarice
2011-07-01
The high incidence of malaria and drug-resistant strains of Plasmodium have turned this disease into a problem of major health importance. One of the approaches used to control it is to search for new antimalarial agents, such as quinoline derivatives. This class of compounds composes a broad group of antimalarial agents, which are largely employed, and inhibits the formation of β-haematin (malaria pigment), which is lethal to the parasite. More specifically, 4-aminoquinoline derivatives represent potential sources of antimalarials, as in the example of chloroquine, the most used antimalarial worldwide. In order to assess antimalarial activity, 12 4-aminoquinoline-derived drugs were obtained, and some of these derivatives were used to obtain platinum(II) complexes. These compounds were tested in vivo in a murine model and revealed remarkable inhibition of parasite multiplication, with most values ranging from 50 to 80%. In addition, they were not cytotoxic. Thus, they may be the object of further research into new antimalarial agents. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Obracaj, Piotr; Fabianowski, Dariusz
2017-10-01
Projects adapting historic facilities into public utility buildings require solving many complex, often conflicting expectations of future users. This mainly concerns the function, which encompasses construction, technology, and aesthetic issues. The list of issues is completed by the proper protection of historic values, which differ in each case. The procedure leading to the expected solution is a multicriteria one, usually difficult to define precisely and demanding considerable experience from the designer. An innovative approach was used for the analysis: a modified version of Chang's EA FAHP (Extent Analysis Fuzzy Analytic Hierarchy Process) method of multicriteria analysis for assessing complex functional and spatial issues. The selection of the optimal spatial form of a historic building adapted into a multi-functional public utility facility was analysed. The assumed functional flexibility covered education, conferences, and chamber performances, such as drama and concerts, in different stage-audience layouts.
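For readers unfamiliar with the method, below is a sketch of plain (unmodified) Chang extent analysis, which the paper's modified EA FAHP builds on: triangular fuzzy pairwise comparisons are reduced to fuzzy synthetic extents, compared by degree of possibility, and normalized into crisp criterion weights. Names and the toy matrix are illustrative, not taken from the paper.

```python
import numpy as np

def ea_fahp_weights(F):
    """Chang's extent analysis on a matrix F of triangular fuzzy
    comparisons, F[i][j] = (l, m, u). Returns crisp criterion weights."""
    F = np.asarray(F, dtype=float)   # shape (n, n, 3)
    n = F.shape[0]
    row = F.sum(axis=1)              # fuzzy row sums, shape (n, 3)
    tot = row.sum(axis=0)            # overall fuzzy sum, shape (3,)
    # Fuzzy synthetic extent: row_sum (x) (1/u_tot, 1/m_tot, 1/l_tot)
    S = row * np.array([1 / tot[2], 1 / tot[1], 1 / tot[0]])

    def V(a, b):                     # degree of possibility of a >= b
        if a[1] >= b[1]:
            return 1.0
        if b[0] >= a[2]:
            return 0.0
        return (b[0] - a[2]) / ((a[1] - a[2]) - (b[1] - b[0]))

    # Minimum possibility of each extent exceeding all the others
    d = np.array([min(V(S[i], S[k]) for k in range(n) if k != i)
                  for i in range(n)])
    return d / d.sum()

# Toy example: three criteria compared with triangular fuzzy judgments
F = [[(1, 1, 1),       (1, 2, 3),     (2, 3, 4)],
     [(1/3, 1/2, 1),   (1, 1, 1),     (1, 2, 3)],
     [(1/4, 1/3, 1/2), (1/3, 1/2, 1), (1, 1, 1)]]
print(ea_fahp_weights(F))  # crisp weights summing to 1
```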
Position Paper: Designing Complex Systems to Support Interdisciplinary Cognitive Work
NASA Technical Reports Server (NTRS)
Greene, Melissa T.; Papalambros, Panos Y.; Mcgowan, Anna-Maria R.
2016-01-01
The paper argues that the field we can call the cognitive science of interdisciplinary collaboration is an important area of study for improving the design of Large-Scale Complex Systems (LaCES) and supporting cognitive work. The paper mostly raises questions that have been documented in earlier qualitative analysis studies and provides possible avenues of exploration for addressing them. There are likely further contributions from additional disciplines beyond those mentioned in this paper that should be considered and integrated into such a cognitive science framework. Knowledge and awareness of various perspectives will help to inform the types of interventions available for improving LaCES design and functionality. For example, a cognitive interpretation of interdisciplinary collaborations in LaCES elucidates the need for a "translator" or "mediator" to help subject matter experts transcend language boundaries, mitigate single-discipline bias, support integrative activities, and correct misaligned objectives. Additional research in this direction is likely to uncover similar gaps and opportunities for improvements in practice.
Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)
1999-01-01
A method and apparatus for supervised neural learning of time-dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve the two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, and demonstrated that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure-eight trajectory was achieved in under 500 iterations compared to the 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.
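For orientation, the standard continuous-time adjoint construction on which such methods build is sketched below; this is the generic textbook formulation (conventionally integrated backward in time), not the patent's specific scheme, whose contribution is rearranging the computation so both systems run forward in time.

```latex
% Dynamics and objective functional over parameters p
\dot{x} = f(x, p, t), \qquad J(p) = \int_0^T g\bigl(x(t), t\bigr)\,dt
% Adjoint system, with terminal condition \lambda(T) = 0
\dot{\lambda} = -\Bigl(\frac{\partial f}{\partial x}\Bigr)^{\!\top}\lambda
                - \Bigl(\frac{\partial g}{\partial x}\Bigr)^{\!\top}
% Gradient of the objective with respect to the parameters
\frac{dJ}{dp} = \int_0^T \Bigl(\frac{\partial f}{\partial p}\Bigr)^{\!\top}\lambda\,dt
```

The efficiency gain comes from the adjoint variable λ summarizing the sensitivity of the objective to the state, so one extra integration yields the gradient with respect to all parameters at once, instead of one sensitivity integration per parameter.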