Sample records for objects including input

  1. Dynamic Organization of Hierarchical Memories

    PubMed Central

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2016-01-01

    In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential for understanding how the neural system responds to inputs. Indeed, structured spontaneous neural activity is known to exist in the absence of external inputs, and its relationship with evoked activity is under discussion. How categorical representation is embedded in the spontaneous and evoked activities therefore remains to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization": without input, neural activity wanders globally over the state space that includes all memories. Then, with increasing input strength, the diffuse representation of a higher category transitions to focused representations specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space including a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy during the transient process, as well as in the spontaneous activity. PMID:27618549

  2. Method and apparatus for automatic control of a humanoid robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.

  3. XBox Input - Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-10-03

    Contains a class for connecting to the Xbox 360 controller, displaying the user inputs (buttons, triggers, analog sticks), and controlling the rumble motors. Also contains classes for converting the raw Xbox 360 controller inputs into meaningful commands for the following objects: • Robot arms - Provides joint control and several tool control schemes • UGVs - Provides translational and rotational commands for "skid-steer" vehicles • Pan-tilt units - Provides several modes of control including velocity, position, and point-tracking • Head-mounted displays (HMD) - Controls the viewpoint of an HMD • Umbra frames - Controls the position and orientation of an Umbra posrot object • Umbra graphics window - Provides several modes of control for the Umbra OSG window viewpoint including free-fly, cursor-focused, and object following.
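    The skid-steer conversion mentioned for UGVs can be sketched as follows. This is a generic illustration, not the OSTI package's actual code; the `track_width` parameter and default value are assumptions:

```python
def skid_steer(v, w, track_width=0.6):
    """Map a translational command v (m/s) and a rotational command w (rad/s)
    to (left, right) wheel speeds for a skid-steer vehicle.

    track_width is the lateral wheel separation in meters (assumed value).
    A positive w turns the vehicle left: the right side speeds up and the
    left side slows down.
    """
    left = v - w * track_width / 2.0
    right = v + w * track_width / 2.0
    return left, right
```

    In a controller mapping, the analog-stick axes would typically feed v and w directly.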

  4. Input Range Testing for the General Mission Analysis Tool (GMAT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

    This document contains a test plan for testing input values to the General Mission Analysis Tool (GMAT). The plan includes four primary types of information, which rigorously define all tests that should be performed to validate that GMAT will accept allowable inputs and deny disallowed inputs. The first is a complete list of all allowed object fields in GMAT. The second type of information is the test input to be attempted for each field. The third type of information is the allowable input values for all object fields in GMAT. The final piece of information is how GMAT should respond to both valid and invalid information. It is VERY important to note that the tests below must be performed for both the Graphical User Interface and the script! The examples are illustrated from a scripting perspective, because it is simpler to write up. However, the tests must be performed for both interfaces to GMAT.

  5. Hierarchical Robot Control System and Method for Controlling Select Degrees of Freedom of an Object Using Multiple Manipulators

    NASA Technical Reports Server (NTRS)

    Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor)

    2013-01-01

    A robotic system includes a robot having manipulators for grasping an object using one of a plurality of grasp types during a primary task, and a controller. The controller controls the manipulators during the primary task using a multiple-task control hierarchy, and automatically parameterizes the internal forces of the system for each grasp type in response to an input signal. The primary task is defined at an object-level of control, e.g., using a closed-chain transformation, such that only select degrees of freedom are commanded for the object. A control system for the robotic system has a host machine and algorithm for controlling the manipulators using the above hierarchy. A method for controlling the system includes receiving and processing the input signal using the host machine, including defining the primary task at the object-level of control, e.g., using a closed-chain definition, and parameterizing the internal forces for each grasp type.

  6. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift-invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter on the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. In a single pass over the input data, the filter is shown to exhibit out-of-plane rotation invariance, shift invariance, and good clutter tolerance simultaneously. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.

  7. Sound effects: Multimodal input helps infants find displaced objects.

    PubMed

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition.

    Statement of contribution

    What is already known on this subject: Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input.

    What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.

  8. Universal modal radiation laws for all thermal emitters

    PubMed Central

    Zhu, Linxiao; Fan, Shanhui

    2017-01-01

    We derive four laws relating the absorptivity and emissivity of thermal emitters. Unlike the original Kirchhoff radiation law derivations, these derivations include diffraction, and so are valid also for small objects, and can also cover nonreciprocal objects. The proofs exploit two recent approaches. First, we express all fields in terms of the mode-converter basis sets of beams; these sets, which can be uniquely established for any linear optical object, give orthogonal input beams that are coupled one-by-one to orthogonal output beams. Second, we consider thought experiments using universal linear optical machines, which allow us to couple appropriate beams and black bodies. Two of these laws can be regarded as rigorous extensions of previously known laws: One gives a modal version of a radiation law for reciprocal objects—the absorptivity of any input beam equals the emissivity into the “backward” (i.e., phase-conjugated) version of that beam; another gives the overall equality of the sums of the emissivities and the absorptivities for any object, including nonreciprocal ones. The other two laws, valid for reciprocal and nonreciprocal objects, are quite different from previous relations. One shows universal equivalence of the absorptivity of each mode-converter input beam and the emissivity into its corresponding scattered output beam. The other gives unexpected equivalences of absorptivity and emissivity for broad classes of beams. Additionally, we prove these orthogonal mode-converter sets of input and output beams are the ones that maximize absorptivities and emissivities, respectively, giving these beams surprising additional physical meaning. PMID:28396436
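    In compact notation (symbols assumed, transcribing the abstract's statements), three of the four laws can be summarized as follows; the fourth, the equivalences for broad classes of beams, does not reduce to a single line:

```latex
% \alpha(\beta): absorptivity of input beam \beta
% \epsilon(\beta): emissivity into beam \beta
% \beta^{*}: phase-conjugated ("backward") version of beam \beta
% (m_i, s_i): i-th mode-converter input beam and its scattered output beam
\begin{align}
  \alpha(\beta)      &= \epsilon(\beta^{*}) && \text{(reciprocal objects)} \\
  \sum_i \alpha(m_i) &= \sum_i \epsilon(s_i) && \text{(any object, including nonreciprocal)} \\
  \alpha(m_i)        &= \epsilon(s_i)        && \text{(each mode-converter beam pair)}
\end{align}
```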

  9. High force vibration testing with wide frequency range

    DOEpatents

    Romero, Edward F.; Jepsen, Richard A.; Gregory, Danny Lynn

    2013-04-02

    A shaker assembly for vibration testing includes first and second shakers, where the first shaker includes a piezo-electric material for generating vibration. A support structure permits a test object to be supported for vibration of the test object by both shakers. An input permits an external vibration controller to control vibration of the shakers.

  10. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  11. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
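    The Monte Carlo component of the collision-probability computation described above can be sketched as follows. This is a minimal illustration assuming Gaussian, independent position uncertainties and a combined hard-body radius, not the Earth Science Mission Operations project's actual implementation:

```python
import numpy as np

def collision_probability(r1, P1, r2, P2, hard_body_radius, n=100_000, seed=0):
    """Monte Carlo collision probability from two position vectors and their
    3x3 position covariance matrices.

    Assuming independent Gaussian errors, the relative position is Gaussian
    with mean r1 - r2 and covariance P1 + P2; we count the fraction of
    samples whose miss distance falls below the combined hard-body radius.
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(r1 - r2, P1 + P2, size=n)
    return float(np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius))
```

    As the paper stresses, the result is only as good as the input covariances: an unrealistically small P1 + P2 drives the estimate toward 0 or 1 regardless of the true risk.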

  12. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    DTIC Science & Technology

    2015-07-01

    exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear... Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs

  13. Objective Classification of Radar Profile Types, and Their Relationship to Lightning Occurrence

    NASA Technical Reports Server (NTRS)

    Boccippio, Dennis

    2003-01-01

    A cluster analysis technique is used to identify 16 "archetypal" vertical radar profile types from a large, globally representative sample of profiles from the TRMM Precipitation Radar. These include nine convective types (7 of these deep convective) and seven stratiform types (5 of these clearly glaciated). Radar profile classification provides an alternative to conventional deep convective storm metrics, such as 30 dBZ echo height, maximum reflectivity or VIL. As expected, the global frequency of occurrence of deep convective profile types matches satellite-observed total lightning production, down to very small-scale local features. Each location's "mix" of profile types provides an objective description of the local convective spectrum and, in turn, is a first step in objectively classifying convective regimes. These classifiers are tested as inputs to a neural network that attempts to predict lightning occurrence based on radar-only storm observations, and performance is compared with networks using traditional radar metrics as inputs.

  14. Slow feature analysis: unsupervised learning of invariances.

    PubMed

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
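    The core SFA step can be sketched in a few lines of NumPy. This is a linear illustration only; the paper applies a nonlinear expansion of the input before this step, which is omitted here:

```python
import numpy as np

def sfa(x):
    """Linear slow feature analysis on a (T, n) signal.

    Whiten the input (PCA sphering), then diagonalize the covariance of the
    time derivative; eigenvectors with the smallest eigenvalues vary most
    slowly. Returns features ordered slowest first.
    """
    x = x - x.mean(axis=0)
    d, U = np.linalg.eigh(np.cov(x, rowvar=False))
    keep = d > 1e-10                             # drop degenerate directions
    z = x @ (U[:, keep] / np.sqrt(d[keep]))      # whitened signal
    _, V = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ V                                 # np.linalg.eigh sorts ascending

# Mixing a slow and a fast sinusoid: the first output recovers the slow source.
t = np.linspace(0, 2 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(11 * t)
Y = sfa(np.column_stack([slow + 0.6 * fast, 0.7 * slow - fast]))
```

    The hierarchical networks described in the abstract stack this operation, feeding each layer's slow outputs (after nonlinear expansion) into the next.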

  15. HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.

    1994-01-01

    Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on only one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited by the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30 MB). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network.
The second program contains the same training and testing procedures as the first, but it also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in C, and was originally developed for Sun3 and Sun4 series computers. Both graphic and command line versions of the program are provided. The command line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computers running VMS. The graphic version requires the SunTools windowing environment, and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1 MB of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
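    The coarse-coding idea (an image represented as a set of overlapping but offset coarser images) can be sketched as follows. This is a simplified illustration using offset subsampling, not HONTIOR's exact encoding:

```python
import numpy as np

def coarse_code(img, factor):
    """Represent an image as factor*factor offset coarse fields, each taken
    by sampling every `factor`-th pixel from a different starting offset.

    Together the fields cover every pixel of the original, but each field is
    factor^2 times smaller, shrinking the input field over which a fully
    connected higher-order network must form interconnections.
    """
    return [img[dy::factor, dx::factor]
            for dy in range(factor) for dx in range(factor)]
```

    The memory saving compounds quickly for a third-order network, whose interconnection count grows combinatorially with the number of input pixels.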

  16. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  17. Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1993-01-01

    The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple-input design capability, with optional inclusion of a constraint allowing only one control to move at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications that demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.

  18. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145

  19. Motor–sensory convergence in object localization: a comparative study in rats and humans

    PubMed Central

    Horev, Guy; Saig, Avraham; Knutsen, Per Magne; Pietr, Maciej; Yu, Chunxiu; Ahissar, Ehud

    2011-01-01

    In order to identify basic aspects in the process of tactile perception, we trained rats and humans in similar object localization tasks and compared the strategies used by the two species. We found that rats integrated temporally related sensory inputs (‘temporal inputs’) from early whisk cycles with spatially related inputs (‘spatial inputs’) to align their whiskers with the objects; their perceptual reports appeared to be based primarily on this spatial alignment. In a similar manner, human subjects also integrated temporal and spatial inputs, but relied mainly on temporal inputs for object localization. These results suggest that during tactile object localization, an iterative motor–sensory process gradually converges on a stable percept of object location in both species. PMID:21969688

  20. Feature space trajectory for distorted-object classification and pose estimation in synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Shenoy, Rajesh

    1997-10-01

    Classification and pose estimation of distorted input objects are considered. The feature space trajectory representation of distorted views of an object is used with a new eigenfeature space. For a distorted input object, the closest trajectory denotes the class of the input, and the closest line segment on it denotes its pose. If an input point is too far from a trajectory, it is rejected as clutter. New methods are presented for selecting Fukunaga-Koontz discriminant vectors and the number of dominant eigenvectors per class, and for determining training- and test-set compatibility.
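    The nearest-segment rule described above can be sketched directly. The feature vectors here are hypothetical stand-ins for eigenfeature projections, and `reject_dist` stands in for the clutter-rejection threshold:

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab, plus the projection parameter t."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab)), t

def classify_fst(p, trajectories, reject_dist):
    """Classify feature vector p by its closest trajectory.

    Each trajectory is an ordered array of training-view feature vectors;
    the fractional segment index of the closest point approximates pose.
    Points farther than reject_dist from every trajectory are treated as
    clutter and rejected, returning (None, None).
    """
    best = (np.inf, None, None)
    for label, views in trajectories.items():
        for i in range(len(views) - 1):
            d, t = point_segment_dist(p, views[i], views[i + 1])
            if d < best[0]:
                best = (d, label, i + t)
    if best[0] > reject_dist:
        return None, None
    return best[1], best[2]
```

    Interpolating pose along a segment is what lets the trajectory represent a continuum of views from a finite set of training samples.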

  1. Design of a data-driven predictive controller for start-up process of AMT vehicles.

    PubMed

    Lu, Xiaohui; Chen, Hong; Wang, Ping; Gao, Bingzhao

    2011-12-01

    In this paper, a data-driven predictive controller is designed for the start-up process of vehicles with automated manual transmissions (AMTs). It is obtained directly from the input-output data of a driveline simulation model constructed in the commercial software AMESim. In order to obtain offset-free control for the reference input, the predictor equation is derived with incremental inputs and outputs. Because of the physical characteristics, the input and output constraints are considered explicitly in the problem formulation. The conflicting requirements of low friction losses and low driveline shock are included in the objective function. The designed controller is tested under nominal and changed conditions. The simulation results show that, during the start-up process, the AMT clutch with the proposed controller works very well, and the process meets the control objectives: fast clutch lockup time, small friction losses, and the preservation of driver comfort, i.e., smooth acceleration of the vehicle. At the same time, the closed-loop system is able to reject uncertainties such as the vehicle mass and road grade.
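    The incremental (velocity-form) predictor idea, differencing inputs and outputs so that a constant offset drops out of the identification, can be illustrated on a toy first-order stand-in for the driveline. The plant model and coefficients are assumptions for illustration, not the paper's AMESim setup:

```python
import numpy as np

def fit_incremental_predictor(u, y):
    """Fit dy[k] ~= a*dy[k-1] + b*du[k-1] by least squares on differenced
    input-output data. Differencing removes any constant disturbance from
    the regression, which is what yields offset-free prediction.
    """
    du, dy = np.diff(u), np.diff(y)
    X = np.column_stack([dy[:-1], du[:-1]])
    theta, *_ = np.linalg.lstsq(X, dy[1:], rcond=None)
    return theta

# Toy plant with an unknown constant disturbance d = 3.0:
#   y[k] = 0.8*y[k-1] + 0.5*u[k-1] + d
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 3.0

theta = fit_incremental_predictor(u, y)  # recovers (0.8, 0.5) despite d
```

    A predictive controller built on this predictor then optimizes a sequence of future input increments subject to the input and output constraints.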

  2. Self-amplified optical pattern recognition system

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1994-01-01

    A self-amplifying optical pattern recognizer includes a geometric system configuration similar to that of a Vander Lugt holographic matched filter configuration, with a photorefractive crystal specifically oriented with respect to the input beams. An extraordinarily polarized, spherically converging object image beam is formed by laser illumination of an input object image and applied through a photorefractive crystal, such as a barium titanate (BaTiO.sub.3) crystal. A volume or thin-film dif... ORIGIN OF THE INVENTION: The invention described herein was made in the performance of work under a NASA contract, and is subject to the provisions of Public Law 96-517 (35 USC 202), in which the Contractor has elected to retain title.

  3. The American public's objectives and beliefs regarding forests and grasslands: 2004 survey results

    Treesearch

    Lori B. Shelby; Deborah J. Shields; Donna L. Lybecker; Michael D. Miller; Brian M. Kent; Vesna Bashovska

    2008-01-01

    The USDA Forest Service revises its Strategic Plan according to the 1993 Government Performance and Results Act (Public Law 103-62). The goals and objectives included in the Strategic Plan are developed from natural resource trend data (Forest and Rangeland Renewable Resources Planning Act) and public input such as the results from this survey. The purpose of this...

  4. Development of a database for Louisiana highway bridge scour data : technical summary.

    DOT National Transportation Integrated Search

    1999-10-01

    The objectives of the project included: 1) developing a database with manipulation capabilities such as data retrieval, visualization, and update; and 2) inputting the existing scour data from DOTD files into the database.

  5. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  6. Survey results of the American public's values, objectives, beliefs, and attitudes regarding forests and grasslands: A technical document supporting the 2000 USDA Forest Service RPA Assessment

    Treesearch

    Deborah J. Shields; Ingrid M. Martin; Wade E. Martin; Michelle A. Haefele

    2002-01-01

    The USDA Forest Service completed its Strategic Plan (2000 Revision) in October 2000. The goals and objectives included in the Plan were developed with input from the public, some of which was obtained through a telephone survey. We report results of the survey. Members of the American public were asked about their values with respect to public lands, objectives for...

  7. Neural-adaptive control of single-master-multiple-slaves teleoperation for coordinated multiple mobile manipulators with time-varying communication delays and input uncertainties.

    PubMed

    Li, Zhijun; Su, Chun-Yi

    2013-09-01

    In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.

  8. The Northeastern area's objectives and beliefs responses regarding forests and grasslands: 2004 survey results

    Treesearch

    Lori B. Shelby; Deborah J. Shields; Michael D. Miller; Donna L. Lybecker; Brian M. Kent; Vesna Bashovska

    2009-01-01

    The USDA Forest Service revises its Strategic Plan according to the 1993 Government Performance and Results Act. The goals and objectives included in the Strategic Plan are developed from natural resource trend data (Forest and Rangeland Renewable Planning Act) and from public input such as the results from this telephone survey. The purpose of this report is to...

  9. Pure JavaScript Storyline Layout Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This is a JavaScript library implementing a storyline layout algorithm. Storylines are adept at communicating complex change by encoding time on the x-axis and using the proximity of lines in the y direction to represent interaction between entities. The library takes as input a list of objects, each containing an id, time, and state. The output is a data structure that can be used to conveniently render a storyline visualization. Most importantly, the library computes the y-coordinates of the entities over time so as to reduce layout artifacts, including crossings, wiggles, and whitespace. This is accomplished through a multi-objective, multi-stage optimization problem, where the output of one stage produces the input and constraints for the next stage.
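
    The core of any storyline layout, assigning each entity a y slot per timestep so that entities sharing a state sit adjacent, can be sketched with a greedy heuristic. The library itself is JavaScript and uses multi-stage optimization; for consistency with the other sketches in this collection, this illustrative stand-in is in Python and is not the library's actual algorithm.

```python
from collections import defaultdict

def layout_storyline(events):
    """Greedy storyline layout sketch. `events` is a list of dicts with keys
    'id', 'time', and 'state'. Entities sharing a state at a timestep get
    adjacent y slots; within a group, the previous timestep's vertical order
    is preserved (stable sort) to reduce crossings and wiggles."""
    times = sorted({e['time'] for e in events})
    prev_y = {}
    coords = defaultdict(list)
    for t in times:
        groups = defaultdict(list)
        for e in events:
            if e['time'] == t:
                groups[e['state']].append(e['id'])
        y = 0
        for state in sorted(groups):
            # keep previous vertical order inside the group (stable sort)
            for eid in sorted(groups[state], key=lambda i: prev_y.get(i, 0)):
                coords[eid].append((t, y))
                prev_y[eid] = y
                y += 1
            y += 1  # whitespace gap between state groups
    return dict(coords)
```

    A real implementation would then run further stages (crossing minimization, wiggle reduction, whitespace compaction) over this initial assignment.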

  10. Simulation of Physical Experiments in Immersive Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Wasfy, Tamer M.

    2001-01-01

    An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures-testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.

  11. Method and apparatus for smart battery charging including a plurality of controllers each monitoring input variables

    DOEpatents

    Hammerstrom, Donald J.

    2013-10-15

    A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger and the battery charger is connected to a power supply. A plurality of controllers in communication with one another are provided, each monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of its subset of input variables. A set of objectives may also be generated for each controller. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both. An actual charge rate is then determined using an algorithm that accounts for each controller's preferred charge rate and does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the actual charge rate.
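
    One simple way to combine per-controller preferences and constraints, shown here as an illustrative sketch rather than the patent's actual algorithm, is to intersect all constraint intervals and clip a combined preference into the feasible range:

```python
def coordinate_charge_rate(controllers):
    """Each controller reports (preferred_rate, (lo, hi)) derived from its own
    input variables. The actual rate honours every constraint by intersecting
    the intervals, then picks the feasible rate closest to the mean preference."""
    lo = max(c[1][0] for c in controllers)
    hi = min(c[1][1] for c in controllers)
    if lo > hi:
        raise ValueError("charging constraints are infeasible")
    preferred = sum(c[0] for c in controllers) / len(controllers)
    return min(max(preferred, lo), hi)
```

    The intersection step guarantees that no controller's constraint is violated, while the clipped mean is one of many possible ways to account for all preferred rates.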

  12. Exploiting core knowledge for visual object recognition.

    PubMed

    Schurgin, Mark W; Flombaum, Jonathan I

    2017-03-01

    Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition, specifically the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as their source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included two manipulations of motion kinematics. Across all experiments, memory performance was better for images that had previously been encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints, often characterized as 'Core Knowledge', are known to support perception and cognition broadly, even in young infants, but they have never been considered as a mechanism for memory with respect to recognition.

  13. Cerebellar input configuration toward object model abstraction in manipulation tasks.

    PubMed

    Luque, Niceto R; Garrido, Jesus A; Carrillo, Richard R; Coenen, Olivier J-M D; Ros, Eduardo

    2011-08-01

    It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.

  14. Coarse-coded higher-order neural networks for PSRI object recognition. [position, scale, and rotation invariant

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1993-01-01

    A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
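
    The coarse coding idea can be sketched as follows: a point in a large input field is represented by one active cell in each of several coarse grids, each grid shifted by a different offset, so that the intersection of the active cells pins down the fine position at a fraction of the memory cost. The cell size, offsets, and function names below are illustrative assumptions, not the paper's exact scheme.

```python
def coarse_code(x, y, field, cell, offsets):
    """Encode a point in a field x field input as one active cell per coarse
    grid; each grid is shifted by a different (ox, oy) offset. Memory per grid
    is (field/cell)^2 instead of field^2."""
    return [((x + ox) // cell, (y + oy) // cell) for ox, oy in offsets]

def coarse_decode(codes, field, cell, offsets):
    """Recover the fine position by intersecting the active coarse cells."""
    candidates = None
    for (cx, cy), (ox, oy) in zip(codes, offsets):
        cells = {(x, y)
                 for x in range(cx * cell - ox, (cx + 1) * cell - ox)
                 for y in range(cy * cell - oy, (cy + 1) * cell - oy)
                 if 0 <= x < field and 0 <= y < field}
        candidates = cells if candidates is None else candidates & cells
    return candidates
```

    With offsets covering every residue modulo the cell size, the intersection collapses to a single pixel, which is what lets a coarse-coded HONN address a large input field without a full fine-grained representation.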

  15. Dual physiological rate measurement instrument

    NASA Technical Reports Server (NTRS)

    Cooper, Tommy G. (Inventor)

    1990-01-01

    The object of the invention is to provide an instrument for converting a physiological pulse rate into a corresponding linear output voltage. The instrument which accurately measures the rate of an unknown rectangular pulse wave over an extended range of values comprises a phase-locked loop including a phase comparator, a filtering network, and a voltage-controlled oscillator, arranged in cascade. The phase comparator has a first input responsive to the pulse wave and a second input responsive to the output signal of the voltage-controlled oscillator. The comparator provides a signal dependent on the difference in phase and frequency between the signals appearing on the first and second inputs. A high-input impedance amplifier accepts an output from the filtering network and provides an amplified output DC signal to a utilization device for providing a measurement of the rate of the pulse wave.

  16. Probability-based constrained MPC for structured uncertain systems with state and random input delays

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Li, Dewei; Xi, Yugeng

    2013-07-01

    This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
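
    The random input delay governed by a discrete-time finite-state Markov chain can be illustrated with a small simulation sketch. This is a hypothetical stand-in for the paper's MJLS formulation; the transition matrix and initial state below are arbitrary examples.

```python
import random

def simulate_delayed_input(u, P, seed=0):
    """Illustrative sketch: the input delay d[k] evolves as a finite-state
    Markov chain over {0, 1, ..., len(P)-1} with row-stochastic transition
    matrix P, and the plant receives the delayed input u[k - d[k]]."""
    rng = random.Random(seed)
    d = 0  # initial delay state
    applied = []
    for k in range(len(u)):
        applied.append(u[max(k - d, 0)])
        # draw the next delay state from row d of the transition matrix
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[d]):
            acc += p
            if r < acc:
                d = j
                break
    return applied
```

    Augmenting the plant state with recent inputs and this delay state is what turns the original system into a standard Markov jump linear system, as described in the abstract.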

  17. Input design for identification of aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Hall, W. E., Jr.

    1975-01-01

    An approach for designing inputs to identify stability and control derivatives from flight-test data is presented. The approach is based on finding inputs that provide the maximum possible accuracy of the derivative estimates. Two techniques of input specification are implemented for this objective: a time-domain technique and a frequency-domain technique. The time-domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency-domain technique specifies the input frequency spectrum and is best applied to tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. Constraints on aircraft response, such as structural load limits, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals that can provide more accurate parameter estimates than conventional inputs of the same total energy. The results indicate that the approach has been brought to the point where it should be used in flight tests for further evaluation.

  18. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous-system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, model component instantiation, and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and for defining model quantity types and quantity units. It supports explicit definition of model input, output, and state quantities, model components, and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored in model and model-component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed-object and object-request-broker technology. The paper includes both a language tutorial and the formal language syntax and semantic description.
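
    The component structure such a compiler targets, a self-contained object with declared inputs, outputs, and state, plus inheritance for model specialization, can be sketched in Python. The class and method names here are illustrative assumptions, not OOBSML's generated-code API.

```python
class ModelComponent:
    """Base component with encapsulated parameters and state quantities."""
    def __init__(self, **params):
        self.params = params
        self.state = {}

    def step(self, inputs, dt):
        """Advance the model one step; subclasses supply the dynamics."""
        raise NotImplementedError

class FirstOrderLag(ModelComponent):
    """dy/dt = (u - y) / tau, integrated with explicit Euler as an example
    of a concrete component inheriting the base interface."""
    def __init__(self, tau, y0=0.0):
        super().__init__(tau=tau)
        self.state['y'] = y0

    def step(self, inputs, dt):
        y = self.state['y']
        y += dt * (inputs['u'] - y) / self.params['tau']
        self.state['y'] = y
        return {'y': y}
```

    Components like these can then be nested and connected, giving the multilevel, multi-component hierarchies the abstract describes.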

  19. Model predictive control system and method for integrated gasification combined cycle power generation

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa

    2013-04-09

    Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.

  20. EChem++--an object-oriented problem solving environment for electrochemistry. 2. The kinetic facilities of Ecco--a compiler for (electro-)chemistry.

    PubMed

    Ludwig, Kai; Speiser, Bernd

    2004-01-01

    We describe a modeling software component, Ecco, implemented in the C++ programming language. It assists in the formulation of physicochemical systems including, in particular, electrochemical processes within general geometries. Ecco's kinetic part translates any user-defined reaction mechanism into an object-oriented representation and generates the corresponding mathematical model equations. The input language, its grammar, the object-oriented design of Ecco based on design patterns, and its integration into the open-source software project EChem++ are discussed. Application strategies are given.

  1. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    PubMed

    Krichenbauer, Max; Yamamoto, Goshiro; Taketomi, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively, and for most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance, measured as task completion time, on a nine-degrees-of-freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion times in AR than in VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR. Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR. Mouse and 3D input device produced similar task completion times within each condition (AR or VR). We further found no differences in reported comfort.

  2. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  3. System and Method for Modeling the Flow Performance Features of an Object

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles (Inventor); Ross, James (Inventor)

    1997-01-01

    The method and apparatus include a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading-edge flap position, trailing-edge flap position, angle of attack, other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated in real time as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features associated with the aircraft given geometric configuration and/or power setting inputs. The invention can also be applied in other static flow-modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and similar disciplines, for example the static testing of cars, sails, foils, propellers, keels, rudders, turbines, fins, and the like in a wind tunnel, water trough, or other flowing medium.

  4. Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects.

    PubMed

    Devereux, Barry J; Clarke, Alex; Marouchos, Andreas; Tyler, Lorraine K

    2013-11-27

    Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects.

  5. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the filter can accommodate the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation, and shift invariance, as well as good clutter tolerance, correctly recognizing the different objects within the cluttered scenes. We also record additional information extracted from the cluttered scenes about the objects' relative position, scale, and in-plane rotation.
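
    The invariances of the log r-θ map follow from a simple coordinate change: rotation about the centre becomes a pure shift along θ, and uniform scaling becomes a pure shift along log r. A minimal sketch of that mapping (the function name and centre convention are illustrative):

```python
import math

def log_polar(x, y, cx=0.0, cy=0.0):
    """Map image coordinates to (log r, theta) about a centre. In this space,
    rotation about the centre is a shift in theta and uniform scaling is a
    shift in log r, which is what gives the filter its invariances."""
    dx, dy = x - cx, y - cy
    return math.log(math.hypot(dx, dy)), math.atan2(dy, dx)
```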

  6. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
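
    The divisive-inhibition step can be sketched numerically: target-specific modulation is divided by pooled local activity, so a location with high bottom-up salience but no target similarity cannot dominate the map. The feature encoding and the semi-saturation constant sigma below are illustrative assumptions, not the paper's full model.

```python
import numpy as np

def priority_map(features, target, sigma=1.0):
    """Divisive normalization sketch. `features` has shape (locations,
    channels); `target` is the top-down feature template. Local drive is
    divided by pooled activity so no single salient location monopolizes
    the resulting priority map."""
    drive = features @ target              # target-specific modulation
    pool = np.abs(features).sum(axis=-1)   # local pooled activity
    return drive / (sigma + pool)
```

    In the example below, a strongly salient location with the wrong feature is fully suppressed, while a weaker location matching the target wins the map.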

  7. Cortical systems mediating visual attention to both objects and spatial locations

    PubMed Central

    Shomstein, Sarah; Behrmann, Marlene

    2006-01-01

    Natural visual scenes consist of many objects occupying a variety of spatial locations. Given that the plethora of information cannot be processed simultaneously, the multiplicity of inputs compete for representation. Using event-related functional MRI, we show that attention, the mechanism by which a subset of the input is selected, is mediated by the posterior parietal cortex (PPC). Of particular interest is that PPC activity is differentially sensitive to the object-based properties of the input, with enhanced activation for those locations bound by an attended object. Of great interest too is the ensuing modulation of activation in early cortical regions, reflected as differences in the temporal profile of the blood oxygenation level-dependent (BOLD) response for within-object versus between-object locations. These findings indicate that object-based selection results from an object-sensitive reorienting signal issued by the PPC. The dynamic circuit between the PPC and earlier sensory regions then enables observers to attend preferentially to objects of interest in complex scenes. PMID:16840559

  8. The Snow Data System at NASA JPL

    NASA Astrophysics Data System (ADS)

    Horn, J.; Painter, T. H.; Bormann, K. J.; Rittger, K.; Brodzik, M. J.; Skiles, M.; Burgess, A. B.; Mattmann, C. A.; Ramirez, P.; Joyce, M.; Goodale, C. E.; McGibbney, L. J.; Zimdars, P.; Yaghoobi, R.

    2017-12-01

    The Snow Data System at NASA JPL includes data processing pipelines built with open source software, Apache 'Object Oriented Data Technology' (OODT). Processing is carried out in parallel across a high-powered computing cluster. The pipelines use input data from satellites such as MODIS, VIIRS and Landsat. They apply algorithms to the input data to produce a variety of outputs in GeoTIFF format. These outputs include daily data for SCAG (Snow Cover And Grain size) and DRFS (Dust Radiative Forcing in Snow), along with 8-day composites and MODICE annual minimum snow and ice calculations. This poster will describe the Snow Data System, its outputs and their uses and applications. It will also highlight recent advancements to the system and plans for the future.

  9. The Snow Data System at NASA JPL

    NASA Astrophysics Data System (ADS)

    Joyce, M.; Laidlaw, R.; Painter, T. H.; Bormann, K. J.; Rittger, K.; Brodzik, M. J.; Skiles, M.; Burgess, A. B.; Mattmann, C. A.; Ramirez, P.; Goodale, C. E.; McGibbney, L. J.; Zimdars, P.; Yaghoobi, R.

    2016-12-01

    The Snow Data System at NASA JPL includes data processing pipelines built with open source software, Apache 'Object Oriented Data Technology' (OODT). Processing is carried out in parallel across a high-powered computing cluster. The pipelines use input data from satellites such as MODIS, VIIRS and Landsat. They apply algorithms to the input data to produce a variety of outputs in GeoTIFF format. These outputs include daily data for SCAG (Snow Cover And Grain size) and DRFS (Dust Radiative Forcing in Snow), along with 8-day composites and MODICE annual minimum snow and ice calculations. This poster will describe the Snow Data System, its outputs and their uses and applications. It will also highlight recent advancements to the system and plans for the future.

  10. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry-standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame-grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  11. Secure content objects

    DOEpatents

    Evans, William D [Cupertino, CA

    2009-02-24

    A secure content object protects electronic documents from unauthorized use. The secure content object includes an encrypted electronic document, a multi-key encryption table having at least one multi-key component, an encrypted header and a user interface device. The encrypted document is encrypted using a document encryption key associated with a multi-key encryption method. The encrypted header includes an encryption marker formed by a random number followed by a derivable variation of the same random number. The user interface device enables a user to input a user authorization. The user authorization is combined with each of the multi-key components in the multi-key encryption key table and used to try to decrypt the encrypted header. If the encryption marker is successfully decrypted, the electronic document may be decrypted. Multiple electronic documents or a document and annotations may be protected by the secure content object.
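
    The trial-decryption loop described above can be sketched as follows. This is a toy illustration, not the patented implementation: the XOR-based "derivable variation" of the random number and the way the user authorization is combined with each multi-key component are assumptions, since the record does not specify them.

```python
import os

MARKER_LEN = 16  # bytes of random number in the marker (illustrative choice)

def make_marker() -> bytes:
    """Build an encryption marker: a random number followed by a
    derivable variation of it (here, each byte XORed with 0xA5; the
    actual derivation is not specified in the record)."""
    r = os.urandom(MARKER_LEN)
    return r + bytes(b ^ 0xA5 for b in r)

def marker_is_valid(decrypted_header: bytes) -> bool:
    """After a trial decryption of the header, check that the marker
    decrypted consistently: the second half must be the derivable
    variation of the first half."""
    r = decrypted_header[:MARKER_LEN]
    variation = decrypted_header[MARKER_LEN:2 * MARKER_LEN]
    return variation == bytes(b ^ 0xA5 for b in r)

def try_keys(user_auth: bytes, multi_key_components: list, header_ct: bytes, decrypt):
    """Combine the user authorization with each multi-key component
    (here by XOR, an assumption) and attempt to decrypt the header;
    success is detected via the marker, and the successful key can then
    unlock the document encryption key."""
    for component in multi_key_components:
        candidate_key = bytes(a ^ b for a, b in zip(user_auth, component))
        if marker_is_valid(decrypt(header_ct, candidate_key)):
            return candidate_key
    return None
```

    A wrong user authorization produces garbage on trial decryption, so the marker check fails and the document key is never exposed.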

  12. Patient and family involvement in contemporary health care.

    PubMed

    Angood, Peter; Dingman, Jennifer; Foley, Mary E; Ford, Dan; Martins, Becky; O'Regan, Patti; Salamendra, Arlene; Sheridan, Sue; Denham, Charles R

    2010-03-01

    The objective of this article was to provide a guide to health care providers on patient and family involvement in health care. This article evaluated the latest published studies for patient and family involvement and reexamined the objectives, the requirements for achieving these objectives, and the evidence of how to involve patients and families. Critical components for patient safety include changing the organizational culture; including patients and families on teams; listening to patients and families; incorporating their input into leadership structures and systems; providing full detail about treatment, procedures, and medication adverse effects; involving them on patient safety and performance improvement committees; and disclosing medical errors. The conclusion of this article is that, for the future, patient and family involvement starts with educating patients and families and ends with listening to them and taking them seriously. If patient and family input is emphatically built into systems of performance improvement, and if patients and families are taken seriously and are respected for their valuable perspectives about how care can be improved, then organizations can improve at improving. Resources in health care are in short supply, yet the resources of patient and family help and time are almost limitless, are ready to be tapped, and can have a huge impact on improving the reliability and overall success for any health care organization.

  13. 4th National Climate Assessment: Public Webinar for Air Quality Chapter

    EPA Science Inventory

    On May 8, 2017, the NCA4 Air Quality chapter team held a public engagement webinar. The objectives of the webinar were to gather input from stakeholders, including authors of the regional chapters, to help inform the writing and development of NCA4, and to raise awareness of the ...

  14. Representational Similarity Analysis Reveals Commonalities and Differences in the Semantic Processing of Words and Objects

    PubMed Central

    Devereux, Barry J.; Clarke, Alex; Marouchos, Andreas; Tyler, Lorraine K.

    2013-01-01

    Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects. PMID:24285896
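
    Representational similarity analysis, as used above, compares the geometry of responses to the same stimuli across regions and modalities. A minimal pure-Python sketch of the core computation follows; the study itself works on searchlight multivoxel patterns, and the choice of correlation (Pearson throughout) is a simplification here.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between
    the activation patterns of every pair of stimuli."""
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs; a high value means
    the two stimulus sets (e.g. words vs. pictures) are represented
    with a similar geometry."""
    n = len(rdm_a)
    tri_a = [rdm_a[i][j] for i in range(n) for j in range(i + 1, n)]
    tri_b = [rdm_b[i][j] for i in range(n) for j in range(i + 1, n)]
    return pearson(tri_a, tri_b)
```

    In this framing, a region like LIPS would yield a high `rdm_similarity` between word-evoked and picture-evoked RDMs, while a modality-sensitive region like LpMTG would not.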

  15. Optimization of autoregressive, exogenous inputs-based typhoon inundation forecasting models using a multi-objective genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ouyang, Huei-Tau

    2017-07-01

    Three types of model for forecasting inundation levels during typhoons were optimized: the linear autoregressive model with exogenous inputs (LARX), the nonlinear autoregressive model with exogenous inputs with wavelet function (NLARX-W) and the nonlinear autoregressive model with exogenous inputs with sigmoid function (NLARX-S). The forecast performance was evaluated by three indices: coefficient of efficiency, error in peak water level and relative time shift. Historical typhoon data were used to establish water-level forecasting models that satisfy all three objectives. A multi-objective genetic algorithm was employed to search for the Pareto-optimal model set that satisfies all three objectives and select the ideal models for the three indices. Findings showed that the optimized nonlinear models (NLARX-W and NLARX-S) outperformed the linear model (LARX). Among the nonlinear models, the optimized NLARX-W model achieved a more balanced performance on the three indices than the NLARX-S models and is recommended for inundation forecasting during typhoons.
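
    The Pareto-optimal model set mentioned above is the subset of candidate models not dominated on any of the three indices. A sketch of the dominance test a genetic algorithm would apply each generation follows; the assumed orientations (maximize coefficient of efficiency, minimize peak-level error and time shift) follow the usual conventions rather than settings taken from the paper.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one.  Each model is scored as a tuple
    (CE, peak_error, time_shift): CE is maximized, the others minimized."""
    ce_a, ep_a, ts_a = a
    ce_b, ep_b, ts_b = b
    no_worse = ce_a >= ce_b and ep_a <= ep_b and ts_a <= ts_b
    strictly_better = ce_a > ce_b or ep_a < ep_b or ts_a < ts_b
    return no_worse and strictly_better

def pareto_set(models):
    """Return the non-dominated (Pareto-optimal) subset of models."""
    return [m for m in models
            if not any(dominates(other, m) for other in models if other is not m)]
```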

  16. Assessing risk based on uncertain avalanche activity patterns

    NASA Astrophysics Data System (ADS)

    Zeidler, Antonia; Fromm, Reinhard

    2015-04-01

    Avalanches may affect critical infrastructure and cause great economic losses. The planning horizon of infrastructure such as hydropower generation facilities reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that the risk pattern will change in the future. Decision makers need to understand what the future might bring in order to formulate their mitigation strategies. We therefore explore a commercial risk software package for calculating risk in the coming years that might support decision processes. The software, @RISK, is known to many larger companies, so we explore its capability to include avalanche risk simulations in order to guarantee comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected cost of repairing the object and the expected cost of interruption. The crux is to find the distributions that best represent the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches, based on the analysis of past avalanche data and expert knowledge. To explore different likely outcomes, we base the analysis on three climate scenarios (likely, worst case, baseline). For some variables it is possible to fit a distribution to historical data; where the past dataset is insufficient or unavailable, the software allows selection from over 30 distribution types. 
The Monte Carlo simulation samples from the probability distributions of the uncertain variables, combining their values to simulate the range of possible outcomes. In our case the output is the expected risk (Euro/year) for each object considered (e.g. a water intake) and for the entire hydropower generation system. The output is itself a distribution to be interpreted by decision makers, as the final strategy depends on the needs and requirements of the end user, which may be driven by personal preferences. In this presentation, we show how we used this uncertain information on future avalanche activity in a commercial risk software package, thereby bringing the knowledge of natural hazard experts to decision makers.
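
    As a rough sketch of the kind of Monte Carlo risk aggregation described above: each object carries an annual hit probability, a vulnerability and cost figures, and repeated sampling yields the expected annual loss. The distributions and the cost model below are illustrative stand-ins, not the fitted or expert-elicited ones from the study.

```python
import random

def simulate_annual_risk(objects, n_trials=100_000, seed=1):
    """Monte Carlo estimate of the expected annual loss (Euro/year)
    over a set of objects (e.g. a water intake).  Each object dict
    holds an annual hit probability, a vulnerability (expected damage
    fraction), a repair cost and an interruption cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        year_loss = 0.0
        for obj in objects:
            if rng.random() < obj["p_hit"]:
                # Damage severity is sampled; uniform(0.5, 1) is an
                # illustrative stand-in for a fitted distribution.
                damage = rng.uniform(0.5, 1.0) * obj["vulnerability"]
                year_loss += damage * (obj["repair_cost"] + obj["interruption_cost"])
        total += year_loss
    return total / n_trials
```

    Running the same simulation under the likely, worst-case and baseline scenarios then amounts to swapping in the scenario-specific hit probabilities and distributions.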

  17. Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs

    NASA Astrophysics Data System (ADS)

    Jayathilake, D. I.; Smith, T. J.

    2017-12-01

    Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types and the impact of their quality on model performance, parameterization, and fidelity is critically important to improving hydrologic models. In this paper, the changes in model parameterization and performance are explored with respect to four potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varies. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrate that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.
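
    The Budyko classification used above reduces, in its simplest form, to a threshold on the aridity index PET/P; a minimal sketch (the threshold of 1 is the standard convention, not a detail from the paper):

```python
def classify_budyko(pet_annual, precip_annual):
    """Budyko-style classification from long-term annual totals in the
    same units (e.g. mm/yr): catchments with aridity index PET/P > 1
    are water-limited (evaporative demand exceeds supply); otherwise
    they are energy-limited."""
    aridity = pet_annual / precip_annual
    label = "water-limited" if aridity > 1.0 else "energy-limited"
    return label, aridity
```

    Errors in a PET product shift a catchment's position along the aridity axis, which is why the classification is a useful lens for input-quality sensitivity.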

  18. Multi Objective Controller Design for Linear System via Optimal Interpolation

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay

    1996-01-01

    We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady-state, and (3) minimization of step response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector valued optimization problem. The solution of this problem is obtained by transforming it to a scalar convex optimization problem. This solution determines Q(0), and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s))bar-Q(s) for a prescribed polynomial v(s). Bar-Q(s) is a polynomial matrix which is arbitrary except that Q(0) and the order of bar-Q(s) are fixed. Obeying these constraints, bar-Q(s) is now to be 'shaped' to minimize the step response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement, which is followed by a section including a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.

  19. Multi-criteria evaluation of wastewater treatment plant control strategies under uncertainty.

    PubMed

    Flores-Alsina, Xavier; Rodríguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2008-11-01

    The evaluation of activated sludge control strategies in wastewater treatment plants (WWTP) via mathematical modelling is a complex activity because several objectives; e.g. economic, environmental, technical and legal; must be taken into account at the same time, i.e. the evaluation of the alternatives is a multi-criteria problem. Activated sludge models are not well characterized and some of the parameters can present uncertainty, e.g. the influent fractions arriving to the facility and the effect of either temperature or toxic compounds on the kinetic parameters, having a strong influence in the model predictions used during the evaluation of the alternatives and affecting the resulting rank of preferences. Using a simplified version of the IWA Benchmark Simulation Model No. 2 as a case study, this article shows the variations in the decision making when the uncertainty in activated sludge model (ASM) parameters is either included or not during the evaluation of WWTP control strategies. This paper comprises two main sections. Firstly, there is the evaluation of six WWTP control strategies using multi-criteria decision analysis setting the ASM parameters at their default value. In the following section, the uncertainty is introduced, i.e. input uncertainty, which is characterized by probability distribution functions based on the available process knowledge. Next, Monte Carlo simulations are run to propagate input through the model and affect the different outcomes. Thus (i) the variation in the overall degree of satisfaction of the control objectives for the generated WWTP control strategies is quantified, (ii) the contributions of environmental, legal, technical and economic objectives to the existing variance are identified and finally (iii) the influence of the relative importance of the control objectives during the selection of alternatives is analyzed. 
The results show that the control strategies with an external carbon source reduce the output uncertainty in the criteria used to quantify the degree of satisfaction of environmental, technical and legal objectives, but increase the economic costs and their variability as a trade-off. It is also shown how a preliminarily selected alternative with a cascade ammonium controller becomes less desirable when input uncertainty is included, giving simpler alternatives a greater chance of success.

  20. The design of an intelligent human-computer interface for the test, control and monitor system

    NASA Technical Reports Server (NTRS)

    Shoaff, William D.

    1988-01-01

    The graphical intelligence and assistance capabilities of a human-computer interface for the Test, Control, and Monitor System at Kennedy Space Center are explored. The report focuses on how a particular commercial off-the-shelf graphical software package, Data Views, can be used to produce tools that build widgets such as menus, text panels, graphs, icons, windows, and ultimately complete interfaces for monitoring data from an application; controlling an application by providing input data to it; and testing an application by both monitoring and controlling it. A complete set of tools for building interfaces is described in a manual for the TCMS toolkit. Simple tools create primitive widgets such as lines, rectangles and text strings. Intermediate level tools create pictographs from primitive widgets, and connect processes to either text strings or pictographs. Other tools create input objects; Data Views supports output objects directly, thus output objects are not considered. Finally, a set of utilities for executing, monitoring use, editing, and displaying the content of interfaces is included in the toolkit.

  1. Universal Approximation by Using the Correntropy Objective Function.

    PubMed

    Nayyeri, Mojtaba; Sadoghi Yazdi, Hadi; Maskooki, Alaleh; Rouhani, Modjtaba

    2017-10-16

    Several objective functions have been proposed in the literature to adjust the input parameters of a node in constructive networks. Furthermore, many researchers have focused on the universal approximation capability of the network based on the existing objective functions. In this brief, we use a correntropy measure based on the sigmoid kernel in the objective function to adjust the input parameters of a newly added node in a cascade network. The proposed network is shown to be capable of approximating any continuous nonlinear mapping with probability one in a compact input sample space. Thus, the convergence is guaranteed. The performance of our method was compared with that of eight different objective functions, as well as with an existing one-hidden-layer feedforward network, on several real regression data sets with and without impulsive noise. The experimental results indicate the benefits of using a correntropy measure in reducing the root mean square error and increasing the robustness to noise.
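
    The correntropy objective referred to above averages a bounded kernel over output-target pairs, which is what yields the robustness to impulsive noise: each sample's contribution saturates instead of growing quadratically as in the MSE. A minimal sketch with a sigmoid kernel (the kernel parameters are illustrative defaults, not the paper's settings):

```python
from math import tanh

def correntropy_sigmoid(outputs, targets, a=1.0, c=0.0):
    """Empirical correntropy between outputs and targets under the
    sigmoid kernel k(x, y) = tanh(a*x*y + c).  Because tanh is bounded
    in (-1, 1), a single outlier cannot dominate the objective."""
    n = len(outputs)
    return sum(tanh(a * o * t + c) for o, t in zip(outputs, targets)) / n
```

    Training a newly added node then amounts to maximizing this quantity with respect to the node's input parameters.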

  2. The Flow Engine Framework: A Cognitive Model of Optimal Human Experience

    PubMed Central

    Šimleša, Milija; Guegan, Jérôme; Blanchard, Edouard; Tarpin-Bernard, Franck; Buisine, Stéphanie

    2018-01-01

    Flow is a well-known concept in the fields of positive and applied psychology. Examination of a large body of flow literature suggests there is a need for a conceptual model rooted in a cognitive approach to explain how this psychological phenomenon works. In this paper, we propose the Flow Engine Framework, a theoretical model explaining dynamic interactions between rearranged flow components and fundamental cognitive processes. Using an IPO framework (Inputs – Processes – Outputs) including a feedback process, we organize flow characteristics into three logically related categories: inputs (requirements for flow), mediating and moderating cognitive processes (attentional and motivational mechanisms) and outputs (subjective and objective outcomes), describing the process of flow. Comparing flow with an engine, inputs are depicted as fuel, core processes as cylinder strokes, and outputs as the power created to provide motion. PMID:29899807

  3. A Within-subjects Experimental Protocol to Assess the Effects of Social Input on Infant EEG.

    PubMed

    St John, Ashley M; Kao, Katie; Chita-Tegmark, Meia; Liederman, Jacqueline; Grieve, Philip G; Tarullo, Amanda R

    2017-05-03

    Despite the importance of social interactions for infant brain development, little research has assessed functional neural activation while infants socially interact. Electroencephalography (EEG) power is an advantageous technique to assess infant functional neural activation. However, many studies record infant EEG only during one baseline condition. This protocol describes a paradigm that is designed to comprehensively assess infant EEG activity in both social and nonsocial contexts as well as tease apart how different types of social inputs differentially relate to infant EEG. The within-subjects paradigm includes four controlled conditions. In the nonsocial condition, infants view objects on computer screens. The joint attention condition involves an experimenter directing the infant's attention to pictures. The joint attention condition includes three types of social input: language, face-to-face interaction, and the presence of joint attention. Differences in infant EEG between the nonsocial and joint attention conditions could be due to any of these three types of input. Therefore, two additional conditions (one with language input while the experimenter is hidden behind a screen and one with face-to-face interaction) were included to assess the driving contextual factors in patterns of infant neural activation. Representative results demonstrate that infant EEG power varied by condition, both overall and differentially by brain region, supporting the functional nature of infant EEG power. This technique is advantageous in that it includes conditions that are clearly social or nonsocial and allows for examination of how specific types of social input relate to EEG power. This paradigm can be used to assess how individual differences in age, affect, socioeconomic status, and parent-infant interaction quality relate to the development of the social brain. 
Based on the demonstrated functional nature of infant EEG power, future studies should consider the role of EEG recording context and design conditions that are clearly social or nonsocial.

  4. Land Application of Wastes: An Educational Program. Nitrogen Considerations - Module 15, Objectives, Script and Booklet.

    ERIC Educational Resources Information Center

    Clarkson, W. W.; And Others

    This module expands on the introductory discussion of nitrogen in other modules. The various chemical forms of nitrogen found in land treatment systems are defined. Inputs from waste application as well as natural sources are quantified for typical situations. A discussion of nitrogen transformations in the soil includes mineralization and…

  5. A 3D stand generator for central Appalachian hardwood forests

    Treesearch

    Jingxin Wang; Yaoxiang Li; Gary W. Miller

    2002-01-01

    A 3-dimensional (3D) stand generator was developed for central Appalachian hardwood forests. It was designed for a harvesting simulator to examine the interactions of stand, harvest, and machine. The Component Object Model (COM) was used to design and implement the program. Input to the generator includes species composition, stand density, and spatial pattern. Output...

  6. Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera

    NASA Astrophysics Data System (ADS)

    Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.

    2017-09-01

    Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel object detection algorithm on video sequences; the evaluation clarifies the advantages of the method in use. The proposed framework compares the percentages of correct and wrong detections made by the algorithm. The method was evaluated on data collected in the field of urban transport, which includes cars and pedestrians in a fixed-camera setting. The results show that the accuracy of the algorithm decreases as image resolution is reduced.

  7. Unsupervised segmentation with dynamical units.

    PubMed

    Rao, A Ravishankar; Cecchi, Guillermo A; Peck, Charles C; Kozloski, James R

    2008-01-01

    In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.

  8. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of this platform. The proposed web browsing system uses a method of direct indicator selection, in which indicators are categorized by function and organized hierarchically; users select the desired function by switching indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons and edit boxes, and stores these locations so that the mouse cursor skips directly to a candidate input object. This enables web browsing at a faster pace.

  9. Stochastic multi-objective auto-optimization for resource allocation decision-making in fixed-input health systems.

    PubMed

    Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C

    2017-06-01

    The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009 to 2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18% over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.

  10. Object tracking using plenoptic image sequences

    NASA Astrophysics Data System (ADS)

    Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung

    2017-05-01

    Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, partial occlusion is one of the most serious and challenging. To address this problem, we proposed novel approaches to object tracking on plenoptic image sequences, taking advantage of the refocusing capability that plenoptic images provide. Our approaches take as input sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms select, from the sequence of focal stacks, the sequence of optimal images that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated in experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches compare favorably with conventional 2D object tracking algorithms.

  11. Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe370mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.
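
    The broadband heating rates produced by BBHRP follow from the vertical divergence of the net radiative flux; a generic sketch of that standard relation (textbook form, not BBHRP code):

```python
CP = 1004.0  # J/(kg K), specific heat of dry air at constant pressure

def heating_rate(net_flux_down, z, rho):
    """Layer heating rates (K/day) from profiles of net downward flux
    F = F_down - F_up (W/m^2) on an ascending height grid z (m), with
    one air density (kg/m^3) per layer.  With z increasing upward, a
    positive dF/dz means more flux enters a layer's top than leaves
    its bottom, so the layer warms: HR = (1/(rho*cp)) * dF/dz."""
    rates = []
    for i in range(len(z) - 1):
        dfdz = (net_flux_down[i + 1] - net_flux_down[i]) / (z[i + 1] - z[i])
        rates.append(dfdz / (rho[i] * CP) * 86400.0)  # K/s -> K/day
    return rates
```

    BBHRP evaluates this kind of relation for SW and LW fluxes from RRTM at 1-min resolution on the 45 m grid described above.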

  12. Broadband Heating Rate Profile Project (BBHRP) - SGP 1bbhrpripbe1mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  13. Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe1mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  14. Fuzzy set methods for object recognition in space applications

    NASA Technical Reports Server (NTRS)

    Keller, James M.

    1991-01-01

    During the reporting period, the development of the theory and application of methodologies for decision making under uncertainty was addressed. Two subreports are included; the first on properties of general hybrid operators, while the second considers some new research on generalized threshold logic units. In the first part, the properties of the additive gamma-model, where the intersection part is first considered to be the product of the input values and the union part is obtained by an extension of De Morgan's law to fuzzy sets, is explored. Then the Yager's class of union and intersection is used in the additive gamma-model. The inputs are weighted to some power that represents their importance and thus their contribution to the compensation process. In the second part, the extension of binary logic synthesis methods to multiple valued logic synthesis methods to enable the synthesis of decision networks when the input/output variables are not binary is discussed.

  15. Terrestrial cross-calibrated assimilation of various datasources

    NASA Astrophysics Data System (ADS)

    Groß, André; Müller, Richard; Schömer, Elmar; Trentmann, Jörg

    2014-05-01

    We introduce a novel software tool, ANACLIM, for the efficient assimilation of multiple two-dimensional data sets using a variational approach. We consider a single objective function in two spatial coordinates with higher derivatives. This function measures the deviation of the input data from the target data set. By using the Euler-Lagrange formalism, the minimization of this objective function can be transformed into a sparse system of linear equations, which can be solved efficiently by a conjugate gradient solver on a desktop workstation. The objective function allows for a series of physically motivated constraints. The user can control the relative global weights, as well as the individual weight of each constraint on a per-grid-point level. The different constraints are realized as separate terms of the objective function: one similarity term for each input data set and two additional smoothness terms, penalizing high gradient and curvature values. ANACLIM is designed to combine similarity and smoothness operators easily and to switch between different solvers. We performed a series of benchmarks to calibrate and verify our solution. We use, for example, terrestrial stations of BSRN and GEBA for the solar incoming flux and AERONET stations for aerosol optical depth. First results show that, with our approach, the combination of these data sources gains a significant benefit over the individual input datasets. ANACLIM also includes a region growing algorithm for the assimilation of ground-based data. The region growing algorithm computes the maximum area around a station that represents the station data. The regions are grown under several constraints, such as the homogeneity of the area. The resulting dataset is then used within the assimilation process. Verification is performed by cross-validation. The method and validation results will be presented and discussed.
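A one-dimensional toy version of the variational step (an illustrative sketch under assumed quadratic terms, not ANACLIM itself) shows how similarity and smoothness terms reduce to a linear system:

```python
import numpy as np

# Toy 1-D analogue of the variational blend (illustrative, not ANACLIM):
# minimize J(x) = sum_k w_k ||x - d_k||^2 + lam * ||D x||^2,
# where D is the first-difference operator (a gradient smoothness penalty).
# Setting dJ/dx = 0 gives (sum_k w_k I + lam D^T D) x = sum_k w_k d_k;
# on a real 2-D grid this system is sparse and would be handed to a
# conjugate-gradient solver rather than a dense solve.

def blend(datasets, weights, lam):
    n = len(datasets[0])
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
    A = sum(weights) * np.eye(n) + lam * D.T @ D
    b = sum(w * np.asarray(d, float) for w, d in zip(weights, datasets))
    return np.linalg.solve(A, b)

d1 = [1.0, 2.0, 3.0, 4.0]                     # hypothetical input data set 1
d2 = [1.5, 1.5, 3.5, 3.5]                     # hypothetical input data set 2
x = blend([d1, d2], weights=[1.0, 1.0], lam=0.0)
print(x)  # with lam = 0 this is just the weighted mean of the inputs
```

Raising `lam` pulls the solution toward a smoother field at the cost of fidelity to the individual inputs, which is the trade-off the per-grid-point weights control.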

  16. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    PubMed Central

    Schendan, Haline E.; Ganis, Giorgio

    2015-01-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. PMID:26441701

  17. Preventing Shoulder-Surfing Attack with the Concept of Concealing the Password Objects' Information

    PubMed Central

    Ho, Peng Foong; Kam, Yvonne Hwei-Syn; Wee, Mee Chin

    2014-01-01

    Traditionally, picture-based password systems employ password objects (pictures/icons/symbols) as input during an authentication session, thus making them vulnerable to “shoulder-surfing” attack because the visual interface by function is easily observed by others. Recent software-based approaches attempt to minimize this threat by requiring users to enter their passwords indirectly by performing certain mental tasks to derive the indirect password, thus concealing the user's actual password. However, weaknesses in the positioning of distracter and password objects introduce usability and security issues. In this paper, a new method, which conceals information about the password objects as much as possible, is proposed. Besides concealing the password objects and the number of password objects, the proposed method allows both password and distracter objects to be used as the challenge set's input. The correctly entered password appears to be random and can only be derived with the knowledge of the full set of password objects. Therefore, it would be difficult for a shoulder-surfing adversary to identify the user's actual password. Simulation results indicate that the correct input object and its location are random for each challenge set, thus preventing frequency of occurrence analysis attack. User study results show that the proposed method is able to prevent shoulder-surfing attack. PMID:24991649

  18. Object knowledge changes visual appearance: semantic effects on color afterimages.

    PubMed

    Lupyan, Gary

    2015-10-01

    According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that supports both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smart phones, TabletPCs, portable gaming consoles, and PocketPCs.

  20. Direction of Magnetoencephalography Sources Associated with Feedback and Feedforward Contributions in a Visual Object Recognition Task

    PubMed Central

    Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe

    2014-01-01

    Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356

  1. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and objects containing curved features.
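The epipolar-line assistance described above rests on the epipolar constraint; a minimal sketch (the fundamental matrix below is a hypothetical rectified-stereo example, not one from the paper) checks a human-marked correspondence against it:

```python
import numpy as np

# Sketch of the consistency check behind epipolar-line assistance: a marked
# correspondence (x, x') is geometrically plausible only if x'^T F x is close
# to zero, where F is the fundamental matrix from camera calibration.

def epipolar_residual(F, x_left, x_right):
    """Algebraic epipolar error for a pixel correspondence (homogeneous coords)."""
    xl = np.array([x_left[0], x_left[1], 1.0])
    xr = np.array([x_right[0], x_right[1], 1.0])
    return float(xr @ F @ xl)

# Hypothetical rectified stereo rig: F constrains matches to the same scanline.
F_rect = np.array([[0.0, 0.0,  0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0,  0.0]])

good = epipolar_residual(F_rect, (120.0, 80.0), (100.0, 80.0))  # same scanline
bad  = epipolar_residual(F_rect, (120.0, 80.0), (100.0, 95.0))  # off the line
print(good, bad)
```

A threshold on this residual is the kind of situation-specific constraint the framework could use to flag a human-supplied correspondence as inconsistent.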

  2. The Effects of Increasing Object Pronoun Input Frequency on the Aural Comprehension of 3rd Person Object Pronouns among Second Semester Classroom Learners of French

    ERIC Educational Resources Information Center

    Barone, Olivia L.

    2017-01-01

    This semester-long study was designed to benefit developing Second Language Acquisition (SLA) instructional methods, specifically honing French language instruction and creating a foundation on which to explore the connection between input frequency during instruction and aural comprehension of difficult-to-acquire forms. Concurrently, five current…

  3. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal the novelty of this paper is first to build the Extended Structural Tensor representation from the intensity signal that conveys information on signal intensities, as well as on higher-order statistics of the input signals. This way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measurements of the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to high-dimensionality of the input data, tensor based methods require high memory and computational resources. However, recent achievements in the technology of the multi-core microprocessors and graphic cards allows real-time operation of the multidimensional methods as is shown and analyzed in this paper based on real examples of object detection in digital images.
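A heavily simplified sketch of subspace-projection recognition (per-class SVD bases stand in for the paper's Higher-Order SVD tensor subspaces; all data below are synthetic):

```python
import numpy as np

# Simplified subspace-projection recognition: each class gets a low-rank basis
# from the SVD of its flattened training patterns, standing in for the tensor
# subspaces the paper builds with Higher-Order SVD. A test pattern is assigned
# to the class whose subspace reconstructs it with the smallest residual.

def class_basis(samples, rank):
    X = np.column_stack([np.ravel(s).astype(float) for s in samples])
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank]

def classify(x, bases):
    x = np.ravel(x).astype(float)
    residuals = [np.linalg.norm(x - U @ (U.T @ x)) for U in bases]
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
proto_a, proto_b = rng.normal(size=16), rng.normal(size=16)   # synthetic prototypes
train_a = [proto_a + 0.1 * rng.normal(size=16) for _ in range(5)]
train_b = [proto_b + 0.1 * rng.normal(size=16) for _ in range(5)]
bases = [class_basis(train_a, rank=2), class_basis(train_b, rank=2)]
print(classify(proto_a, bases), classify(proto_b, bases))
```

The full method decomposes higher-order pattern tensors rather than flattened vectors, which is what makes the memory and compute demands high enough to motivate the GPU/multi-core implementation discussed in the paper.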

  4. Distributed Energy Resources Customer Adoption Model - Graphical User Interface, Version 2.1.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewald, Friedrich; Stadler, Michael; Cardoso, Goncalo F

    The DER-CAM Graphical User Interface has been redesigned to consist of a dynamic tree structure on the left side of the application window to allow users to quickly navigate between different data categories and views. Views can either be tables with model parameters and input data, the optimization results, or a graphical interface to draw circuit topology and visualize investment results. The model parameters and input data consist of tables where values are assigned to specific keys. The aggregation of all model parameters and input data amounts to the data required to build a DER-CAM model, and is passed to the GAMS solver when users initiate the DER-CAM optimization process. Passing data to the GAMS solver relies on the use of a Java server that handles DER-CAM requests, queuing, and results delivery. This component of the DER-CAM GUI can be deployed either locally or remotely, and constitutes an intermediate step between the user data input and manipulation, and the execution of a DER-CAM optimization in the GAMS engine. The results view shows the results of the DER-CAM optimization and distinguishes between a single and a multi-objective process. The single optimization runs the DER-CAM optimization once and presents the results as a combination of summary charts and hourly dispatch profiles. The multi-objective optimization process consists of a sequence of runs initiated by the GUI, including: 1) CO2 minimization, 2) cost minimization, 3) a user-defined number of points in-between objectives 1) and 2). The multi-objective results view includes both access to the detailed results of each point generated by the process as well as the generation of a Pareto Frontier graph to illustrate the trade-off between objectives. DER-CAM GUI 2.1.8 also introduces the ability to graphically generate circuit topologies, enabling support to DER-CAM 5.0.0. This feature consists of: 1) the drawing area, where users can manually create nodes and define their properties (e.g., point of common coupling, slack bus, load) and connect them through edges representing either power lines, transformers, or heat pipes, all with user-defined characteristics (e.g., length, ampacity, inductance, or heat loss); 2) the tables, which display the user-defined topology in the final numerical form that will be passed to the DER-CAM optimization. Finally, the DER-CAM GUI is also deployed with a database schema that allows users to provide different energy load profiles, solar irradiance profiles, and tariff data, which can be stored locally and later used in any DER-CAM model. However, no real data will be delivered with this version.

  5. Atmospheric deposition of nitrogen and sulfur and preferential canopy consumption of nitrate in forests of the Pacific Northwest, USA

    Treesearch

    Mark E. Fenn; Christopher S. Ross; Susan L. Schilling; William D. Baccus; Michael A. Larrabee; Rebecca A. Lofgren

    2013-01-01

    Wet, dry, and throughfall deposition of N and S were measured for 2 years in three national parks in Washington State: Olympic, Mount Rainier, and North Cascades. Throughfall was measured using ion exchange resin (IER) collectors. A major objective of the study was to evaluate the effectiveness of IER throughfall measurements for monitoring deposition inputs, including...

  6. Strategic Planning for Academic Administrators; Planning in a College of Business: The Case of Nikita College of Business

    ERIC Educational Resources Information Center

    Simyar, Farhad; Osuji, Louis

    2015-01-01

    In the face of stiff competition for scarce funds to effectively navigate the affairs of business schools, college deans have to come up with strategic plans to ensure that the various opinions and inputs of stakeholders, including faculty and staff, are accommodated. Additionally, such deans are expected to come up with goals and objectives designed to…

  7. Robot Vision

    NASA Technical Reports Server (NTRS)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The operation of a system is described that was built both to model the vision of primate animals, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions on the left side of an object that can be viewed only by the left camera, and on the right side of an object that can be viewed only by the right camera.
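The random-dot-stereogram experiments can be illustrated with a toy block-matching sketch (synthetic data; the actual system's matching and ambiguity handling are far more involved):

```python
import numpy as np

# Toy random-dot stereogram: the right image is the left image shifted
# horizontally by a known disparity, and sum-of-squared-difference (SSD)
# block matching recovers that shift. Spurious matches, the central problem
# noted above, stay rare here only because the dots are dense and noise-free.

rng = np.random.default_rng(1)
left = rng.integers(0, 2, size=(32, 48)).astype(float)   # random-dot left image
true_disparity = 5
right = np.roll(left, -true_disparity, axis=1)           # shifted right image

def best_disparity(left, right, row, col, win=4, max_d=10):
    """Return the horizontal shift whose left-image window best matches right."""
    patch = right[row - win:row + win, col - win:col + win]
    ssd = [np.sum((left[row - win:row + win, col + d - win:col + d + win] - patch) ** 2)
           for d in range(max_d + 1)]
    return int(np.argmin(ssd))

print(best_disparity(left, right, row=16, col=20))  # -> 5, the true shift
```

In a real scene the SSD surface has many near-minima, which is exactly where the spurious-match elimination and ambiguous-region handling described above become necessary.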

  8. RET selection on state-of-the-art NAND flash

    NASA Astrophysics Data System (ADS)

    Lafferty, Neal V.; He, Yuan; Pei, Jinhua; Shao, Feng; Liu, QingWei; Shi, Xuelong

    2015-03-01

    We present results generated using a new gauge-based Resolution Enhancement Technique (RET) Selection flow during the technology set up phase of a 3x-node NAND Flash product. As a testcase, we consider a challenging critical level for this flash product. The RET solutions include inverse lithography technology (ILT) optimized masks with sub-resolution assist features (SRAF) and companion illumination sources developed using a new pixel based Source Mask Optimization (SMO) tool that uses measurement gauges as a primary input. The flow includes verification objectives which allow tolerancing of particular measurement gauges based on lithographic criteria. Relative importance for particular gauges may also be set, to aid in down-selection from several candidate sources. The end result is a sensitive, objective score of RET performance. Using these custom-defined importance metrics, decisions on the final RET style can be made in an objective way.
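A hypothetical sketch of a gauge-based score (the abstract does not specify the actual metric; the tolerance-normalized weighted RMS below is an assumption):

```python
# Hypothetical gauge-based RET score: each gauge's simulated-vs-target error is
# normalized by its lithographic tolerance, scaled by a user-set importance, and
# combined into one figure of merit for ranking candidate illumination sources.

def ret_score(errors_nm, tolerances_nm, importances):
    """Lower is better: importance-weighted RMS of tolerance-normalized errors."""
    terms = [(imp * (err / tol)) ** 2
             for err, tol, imp in zip(errors_nm, tolerances_nm, importances)]
    return (sum(terms) / len(terms)) ** 0.5

# Two candidate sources evaluated on three gauges (made-up numbers):
source_a = ret_score([1.0, 2.0, 0.5], [2.0, 4.0, 1.0], [1.0, 1.0, 2.0])
source_b = ret_score([0.5, 1.0, 1.5], [2.0, 4.0, 1.0], [1.0, 1.0, 2.0])
print(source_a, source_b)  # the smaller score wins the down-selection
```

Raising the importance weight of a single critical gauge, as in the third entry here, is how such a score can steer down-selection toward sources that protect the most lithographically sensitive features.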

  9. Fast Grasp Contact Computation for a Serial Robot

    NASA Technical Reports Server (NTRS)

    Hargrave, Brian (Inventor); Shi, Jianying (Inventor); Diftler, Myron A. (Inventor)

    2015-01-01

    A system includes a controller and a serial robot having links that are interconnected by a joint, wherein the robot can grasp a three-dimensional (3D) object in response to a commanded grasp pose. The controller receives input information, including the commanded grasp pose, a first set of information describing the kinematics of the robot, and a second set of information describing the position of the object to be grasped. The controller also calculates, in a two-dimensional (2D) plane, a set of contact points between the serial robot and a surface of the 3D object needed for the serial robot to achieve the commanded grasp pose. A required joint angle is then calculated in the 2D plane between the pair of links using the set of contact points. A control action is then executed with respect to the motion of the serial robot using the required joint angle.
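The joint-angle step can be illustrated in 2-D (geometry and names below are hypothetical, not the patent's formulation):

```python
import math

# Illustrative 2-D version of the joint-angle calculation: given the planar
# contact points assigned to each of the two interconnected links and the
# position of the joint between them, the required joint angle is the angle
# between the joint-to-contact vectors.

def required_joint_angle(joint, contact_a, contact_b):
    """Angle (radians) at `joint` between rays toward each link's contact point."""
    ax, ay = contact_a[0] - joint[0], contact_a[1] - joint[1]
    bx, by = contact_b[0] - joint[0], contact_b[1] - joint[1]
    angle = math.atan2(by, bx) - math.atan2(ay, ax)
    return angle % (2.0 * math.pi)

# Joint at the origin, one contact along +x, the other along +y: a right angle.
print(required_joint_angle((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # -> pi/2
```

Collapsing the grasp geometry into a 2-D plane like this is what lets the contact points and joint angle be computed quickly enough for online control.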

  10. IREP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busby, Lee

    IREP reads external program input using the Lua C library, organizes the input into native-language structures, and shares those structures among compiled program objects written in C/C++, Fortran, or both.

  11. The Snow Data System at NASA JPL

    NASA Astrophysics Data System (ADS)

    Laidlaw, R.; Painter, T. H.; Mattmann, C. A.; Ramirez, P.; Brodzik, M. J.; Rittger, K.; Bormann, K. J.; Burgess, A. B.; Zimdars, P.; McGibbney, L. J.; Goodale, C. E.; Joyce, M.

    2015-12-01

    The Snow Data System at NASA JPL includes a data processing pipeline built with open source software, Apache 'Object Oriented Data Technology' (OODT). It produces a variety of data products using inputs from satellites such as MODIS, VIIRS and Landsat. Processing is carried out in parallel across a high-powered computing cluster. Algorithms such as 'Snow Covered Area and Grain-size' (SCAG) and 'Dust Radiative Forcing in Snow' (DRFS) are applied to satellite inputs to produce output images that are used by many scientists and institutions around the world. This poster will describe the Snow Data System, its outputs and their uses and applications, along with recent advancements to the system and plans for the future. Advancements for 2015 include automated daily processing of historic MODIS data for SCAG (MODSCAG) and DRFS (MODDRFS), automation of SCAG processing for VIIRS satellite inputs (VIIRSCAG) and an updated version of SCAG for Landsat Thematic Mapper inputs (TMSCAG) that takes advantage of Graphics Processing Units (GPUs) for faster processing speeds. The pipeline has been upgraded to use the latest version of OODT and its workflows have been streamlined to enable computer operators to process data on demand. Additional products have been added, such as rolling 8-day composites of MODSCAG data, a new version of the MODSCAG 'annual minimum ice and snow extent' (MODICE) product, and recoded MODSCAG data for the 'Satellite Snow Product Intercomparison and Evaluation Experiment' (SnowPEx) project.

  12. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    PubMed

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  13. Asymmetry of Neuronal Combinatorial Codes Arises from Minimizing Synaptic Weight Change.

    PubMed

    Leibold, Christian; Monsalve-Mercado, Mauro M

    2016-08-01

    Synaptic change is a costly resource, particularly for brain structures that have a high demand of synaptic plasticity. For example, building memories of object positions requires efficient use of plasticity resources since objects can easily change their location in space and yet we can memorize object locations. But how should a neural circuit ideally be set up to integrate two input streams (object location and identity) in case the overall synaptic changes should be minimized during ongoing learning? This letter provides a theoretical framework on how the two input pathways should ideally be specified. Generally the model predicts that the information-rich pathway should be plastic and encoded sparsely, whereas the pathway conveying less information should be encoded densely and undergo learning only if a neuronal representation of a novel object has to be established. As an example, we consider hippocampal area CA1, which combines place and object information. The model thereby provides a normative account of hippocampal rate remapping, that is, modulations of place field activity by changes of local cues. It may as well be applicable to other brain areas (such as neocortical layer V) that learn combinatorial codes from multiple input streams.
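A toy calculation (not the letter's actual model) illustrates why the information-rich pathway should be sparse under one-shot Hebbian learning:

```python
import numpy as np

# Toy illustration of the plasticity-budget argument: with a one-shot Hebbian
# update (delta_w = outer(post, pre)), the total synaptic change of storing an
# association scales with the number of active presynaptic units, so a sparse
# presynaptic code spends far less plasticity than a dense one.

def total_weight_change(pre_active, n_pre=1000, n_post=100, post_active=10):
    pre = np.zeros(n_pre);   pre[:pre_active] = 1.0    # presynaptic code
    post = np.zeros(n_post); post[:post_active] = 1.0  # target output pattern
    delta_w = np.outer(post, pre)                      # Hebbian one-shot update
    return float(np.abs(delta_w).sum())

sparse_cost = total_weight_change(pre_active=20)       # sparse pathway code
dense_cost = total_weight_change(pre_active=500)       # dense pathway code
print(sparse_cost, dense_cost)  # -> 200.0 5000.0
```

This is only the cost side of the argument; the letter's full account balances it against the information each pathway must carry, which is why the dense pathway is reserved for the less informative stream.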

  14. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-02-17

    Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.

  15. Towards practical control design using neural computation

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Garg, Sanjay; Mattern, Duane; Merrill, Walter

    1991-01-01

    The objective is to develop neural network based control design techniques which address the issue of performance/control effort tradeoff. Additionally, the control design needs to address the important issue of achieving adequate performance in the presence of actuator nonlinearities such as position and rate limits. These issues are discussed using the example of aircraft flight control. Given a set of pilot input commands, a feedforward net is trained to control the vehicle within the constraints imposed by the actuators. This is achieved by minimizing an objective function which is the sum of the tracking errors, control input rates, and control input deflections. A tradeoff between tracking performance and control smoothness is obtained by adaptively varying the weights of the objective function. The neurocontroller performance is evaluated in the presence of actuator dynamics using a simulation of the vehicle. Appropriate selection of the different weights in the objective function resulted in good tracking of the pilot commands and smooth neurocontrol. An extension of the neurocontroller design approach is proposed to enhance its practicality.
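The objective function described above can be sketched as a weighted sum of squared terms (weights and signals below are made-up):

```python
# Sketch of the training objective: the cost the feedforward net minimizes is a
# weighted sum of squared tracking errors, squared control-input rates, and
# squared control deflections. Raising w_rate or w_defl relative to w_track
# trades tracking accuracy for smoother, smaller control activity.

def control_cost(errors, controls, w_track=1.0, w_rate=0.1, w_defl=0.01):
    rates = [u1 - u0 for u0, u1 in zip(controls, controls[1:])]
    return (w_track * sum(e * e for e in errors)
            + w_rate * sum(r * r for r in rates)
            + w_defl * sum(u * u for u in controls))

errors = [0.5, 0.2, 0.1]       # hypothetical tracking errors per time step
controls = [0.0, 0.4, 0.5]     # hypothetical control deflections per time step
print(control_cost(errors, controls))
```

Varying the three weights adaptively during training, as the paper describes, reshapes this cost surface so the network settles on commands that track well without saturating the actuators.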

  16. Information to cerebellum on spinal motor networks mediated by the dorsal spinocerebellar tract

    PubMed Central

    Stecina, Katinka; Fedirchuk, Brent; Hultborn, Hans

    2013-01-01

    The main objective of this review is to re-examine the type of information transmitted by the dorsal and ventral spinocerebellar tracts (DSCT and VSCT respectively) during rhythmic motor actions such as locomotion. Based on experiments in the 1960s and 1970s, the DSCT was viewed as a relay of peripheral sensory input to the cerebellum in general, and during rhythmic movements such as locomotion and scratch. In contrast, the VSCT was seen as conveying a copy of the output of spinal neuronal circuitry, including those circuits generating rhythmic motor activity (the spinal central pattern generator, CPG). Emerging anatomical and electrophysiological information on the putative subpopulations of DSCT and VSCT neurons suggest differentiated functions for some of the subpopulations. Multiple lines of evidence support the notion that sensory input is not the only source driving DSCT neurons and, overall, there is a greater similarity between DSCT and VSCT activity than previously acknowledged. Indeed the majority of DSCT cells can be driven by spinal CPGs for locomotion and scratch without phasic sensory input. It thus seems natural to propose the possibility that CPG input to some of these neurons may contribute to distinguishing sensory inputs that are a consequence of the active locomotion from those resulting from perturbations in the external world. PMID:23613538

  17. Kekule.js: An Open Source JavaScript Chemoinformatics Toolkit.

    PubMed

    Jiang, Chen; Jin, Xi; Dong, Ying; Chen, Ming

    2016-06-27

    Kekule.js is an open-source, object-oriented JavaScript toolkit for chemoinformatics. It provides methods for many common tasks in molecular informatics, including chemical data input/output (I/O), two- and three-dimensional (2D/3D) rendering of chemical structure, stereo identification, ring perception, structure comparison, and substructure search. Encapsulated widgets to display and edit chemical structures directly in web context are also supplied. Developed with web standards, the toolkit is ideal for building chemoinformatics applications over the Internet. Moreover, it is highly platform-independent and can also be used in desktop or mobile environments. Some initial applications, such as plugins for inputting chemical structures on the web and uses in chemistry education, have been developed based on the toolkit.

  18. Pseudolocal tomography

    DOEpatents

    Katsevich, Alexander J.; Ramm, Alexander G.

    1996-01-01

    Local tomographic data is used to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. The relative attenuation data is input to a pseudo-local tomography function, where the difference between the internal density and the pseudo-local tomography function is computed across the discontinuity. The pseudo-local tomography function outputs the location of the discontinuity and the difference in density between the first density and the second density.

  19. Pseudolocal tomography

    DOEpatents

    Katsevich, A.J.; Ramm, A.G.

    1996-07-23

    Local tomographic data is used to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. The relative attenuation data is input to a pseudo-local tomography function, where the difference between the internal density and the pseudo-local tomography function is computed across the discontinuity. The pseudo-local tomography function outputs the location of the discontinuity and the difference in density between the first density and the second density. 7 figs.

  20. Simulating the Performance of Ground-Based Optical Asteroid Surveys

    NASA Astrophysics Data System (ADS)

    Christensen, Eric J.; Shelly, Frank C.; Gibbs, Alex R.; Grauer, Albert D.; Hill, Richard E.; Johnson, Jess A.; Kowalski, Richard A.; Larson, Stephen M.

    2014-11-01

    We are developing a set of asteroid survey simulation tools in order to estimate the capability of existing and planned ground-based optical surveys, and to test a variety of possible survey cadences and strategies. The survey simulator is composed of several layers, including a model population of solar system objects and an orbital integrator, a site-specific atmospheric model (including inputs for seeing, haze and seasonal cloud cover), a model telescope (with a complete optical path to estimate throughput), a model camera (including FOV, pixel scale, and focal plane fill factor) and model source extraction and moving object detection layers with tunable detection requirements. We have also developed a flexible survey cadence planning tool to automatically generate nightly survey plans. Inputs to the cadence planner include camera properties (FOV, readout time), telescope limits (horizon, declination, hour angle, lunar and zenithal avoidance), preferred and restricted survey regions in RA/Dec, ecliptic, and Galactic coordinate systems, and recent coverage by other asteroid surveys. Simulated surveys are created for a subset of current and previous NEO surveys (LINEAR, Pan-STARRS and the three Catalina Sky Survey telescopes), and compared against the actual performance of these surveys in order to validate the model’s performance. The simulator tracks objects within the FOV of any pointing that were not discovered (e.g. too few observations, too trailed, focal plane array gaps, too fast or slow), thus dividing the population into “discoverable” and “discovered” subsets, to inform possible survey design changes. Ongoing and future work includes generating a realistic “known” subset of the model NEO population, running multiple independent simulated surveys in coordinated and uncoordinated modes, and testing various cadences to find optimal strategies for detecting NEO sub-populations. 
These tools can also assist in quantifying the efficiency of novel yet unverified survey cadences (e.g. the baseline LSST cadence) that sparsely spread the observations required for detection over several days or weeks.
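    The "discoverable" versus "discovered" bookkeeping described above can be sketched as follows. The field names, thresholds, and object designations are invented for illustration and are not taken from the simulator:

```python
# Toy sketch of the survey simulator's population split: objects that fell
# within a pointing's FOV are "discovered" only if enough clean detections
# survive the quality cuts (here: trail length and chip-gap losses).
def classify(detections, min_obs=3, max_trail_arcsec=10.0):
    """Split objects seen in the FOV into discovered / discoverable lists."""
    discovered, discoverable = [], []
    for obj, obs in detections.items():
        good = [o for o in obs
                if o["trail"] <= max_trail_arcsec and not o["in_chip_gap"]]
        (discovered if len(good) >= min_obs else discoverable).append(obj)
    return discovered, discoverable

dets = {
    "2014 AB": [{"trail": 2.0, "in_chip_gap": False}] * 4,   # clean detections
    "2014 CD": [{"trail": 15.0, "in_chip_gap": False}] * 4,  # too trailed
}
d, u = classify(dets)
```

    Inspecting the "discoverable but not discovered" list, as the abstract describes, is what points to possible survey design changes.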

  1. Hydrogen Assisted Fracture of Stainless Steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugar, Joshua Daniel; Somerday, Brian P.; Homer, Mark

    2016-02-01

    The Enhanced Surveillance Sub-program has an annual NNSA requirement to submit a comprehensive report on all our fiscal year activities right after the start of the next calendar year. As most of you know, we collate all of our PI task submissions into a single volume that we send to NNSA, our customers, and use for other programmatic purposes. The functional objective of this report is to formally document the purpose, status, and accomplishments and impacts of all our work. For your specific submission, please follow the instructions described below and use the template provided. These are essentially the same as was used last year. We recognize this report may also include information on specific age-related findings that you will provide again in a few months as input to the Stockpile Annual Assessment process (e.g., in the submittal of your Component Assessment Report). However, the related content of your ES AR input should provide an excellent foundation that can simply be updated as needed for your Annual Assessment input.

  2. Otolith and Vertical Canal Contributions to Dynamic Postural Control

    NASA Technical Reports Server (NTRS)

    Black, F. Owen

    1999-01-01

    The objective of this project is to determine: 1) how do normal subjects adjust postural movements in response to changing or altered otolith input, for example, due to aging? and 2) how do patients adapt postural control after altered unilateral or bilateral vestibular sensory inputs such as ablative inner ear surgery or ototoxicity, respectively? The following hypotheses are under investigation: 1) selective alteration of otolith input or abnormalities of otolith receptor function will result in distinctive spatial, frequency, and temporal patterns of head movements and body postural sway dynamics. 2) subjects with reduced, altered, or absent vertical semicircular canal receptor sensitivity but normal otolith receptor function or vice versa, should show predictable alterations of body and head movement strategies essential for the control of postural sway and movement. The effect of altered postural movement control upon compensation and/or adaptation will be determined. These experiments provide data for the development of computational models of postural control in normals, vestibular deficient subjects and normal humans exposed to unusual force environments, including orbital space flight.

  3. Lightweight Object Oriented Structure analysis: Tools for building Tools to Analyze Molecular Dynamics Simulations

    PubMed Central

    Romo, Tod D.; Leioatts, Nicholas; Grossfield, Alan

    2014-01-01

    LOOS (Lightweight Object-Oriented Structure-analysis) is a C++ library designed to facilitate making novel tools for analyzing molecular dynamics simulations by abstracting out the repetitive tasks, allowing developers to focus on the scientifically relevant part of the problem. LOOS supports input using the native file formats of most common biomolecular simulation packages, including CHARMM, NAMD, Amber, Tinker, and Gromacs. A dynamic atom selection language based on the C expression syntax is included and is easily accessible to the tool-writer. In addition, LOOS is bundled with over 120 pre-built tools, including suites of tools for analyzing simulation convergence, 3D histograms, and elastic network models. Through modern C++ design, LOOS is both simple to develop with (requiring knowledge of only 4 core classes and a few utility functions) and is easily extensible. A python interface to the core classes is also provided, further facilitating tool development. PMID:25327784

  4. Lightweight object oriented structure analysis: tools for building tools to analyze molecular dynamics simulations.

    PubMed

    Romo, Tod D; Leioatts, Nicholas; Grossfield, Alan

    2014-12-15

    LOOS (Lightweight Object Oriented Structure-analysis) is a C++ library designed to facilitate making novel tools for analyzing molecular dynamics simulations by abstracting out the repetitive tasks, allowing developers to focus on the scientifically relevant part of the problem. LOOS supports input using the native file formats of most common biomolecular simulation packages, including CHARMM, NAMD, Amber, Tinker, and Gromacs. A dynamic atom selection language based on the C expression syntax is included and is easily accessible to the tool-writer. In addition, LOOS is bundled with over 140 prebuilt tools, including suites of tools for analyzing simulation convergence, three-dimensional histograms, and elastic network models. Through modern C++ design, LOOS is both simple to develop with (requiring knowledge of only four core classes and a few utility functions) and is easily extensible. A python interface to the core classes is also provided, further facilitating tool development. © 2014 Wiley Periodicals, Inc.

  5. Near field optical probe for critical dimension measurements

    DOEpatents

    Stallard, Brian R.; Kaushik, Sumanth

    1999-01-01

    A resonant planar optical waveguide probe for measuring critical dimensions on an object in the range of 100 nm and below. The optical waveguide includes a central resonant cavity flanked by Bragg reflector layers with input and output means at either end. Light is supplied by a narrow bandwidth laser source. Light resonating in the cavity creates an evanescent electrical field. The object with the structures to be measured is translated past the resonant cavity. The refractive index contrasts presented by the structures perturb the field and cause variations in the intensity of the light in the cavity. The topography of the structures is determined from these variations.

  6. Crew behavior and performance in space analog environments

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.

    1992-01-01

    The objectives and the current status of the Crew Factors research program conducted at NASA-Ames Research Center are reviewed. The principal objectives of the program are to determine the effects of a broad class of input variables on crew performance and to provide guidance with respect to the design and management of crews assigned to future space missions. A wide range of research environments are utilized, including controlled experimental settings, high fidelity full mission simulator facilities, and fully operational field environments. Key group processes are identified, and preliminary data are presented on the effect of crew size, type, and structure on team performance.

  7. A computer-aided approach to nonlinear control synthesis

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Anthony, Tobin

    1988-01-01

    The major objective of this project is to develop a computer-aided approach to nonlinear stability analysis and nonlinear control system design. This goal is to be attained by refining the describing function method as a synthesis tool for nonlinear control design. The interim report outlines the approach taken in this study to meet these goals, including an introduction to the INteractive Controls Analysis (INCA) program, which was instrumental in meeting these study objectives. A single-input describing function (SIDF) design methodology was developed in this study; coupled with the software constructed in this study, the results of this project provide a comprehensive tool for design and integration of nonlinear control systems.

  8. Representation of Non-Spatial and Spatial Information in the Lateral Entorhinal Cortex

    PubMed Central

    Deshmukh, Sachin S.; Knierim, James J.

    2011-01-01

    Some theories of memory propose that the hippocampus integrates the individual items and events of experience within a contextual or spatial framework. The hippocampus receives cortical input from two major pathways: the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). During exploration in an open field, the firing fields of MEC grid cells form a periodically repeating, triangular array. In contrast, LEC neurons show little spatial selectivity, and it has been proposed that the LEC may provide non-spatial input to the hippocampus. Here, we recorded MEC and LEC neurons while rats explored an open field that contained discrete objects. LEC cells fired selectively at locations relative to the objects, whereas MEC cells were weakly influenced by the objects. These results provide the first direct demonstration of a double dissociation between LEC and MEC inputs to the hippocampus under conditions of exploration typically used to study hippocampal place cells. PMID:22065409

  9. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.

  10. Estimation of the longitudinal and lateral-directional aerodynamic parameters from flight data for the NASA F/A-18 HARV

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1996-01-01

    This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.

  11. A neuromorphic model of motor overflow in focal hand dystonia due to correlated sensory input

    NASA Astrophysics Data System (ADS)

    Sohn, Won Joon; Niu, Chuanxin M.; Sanger, Terence D.

    2016-10-01

    Objective. Motor overflow is a common and frustrating symptom of dystonia, manifested as unintentional muscle contraction that occurs during an intended voluntary movement. Although it is suspected that motor overflow is due to cortical disorganization in some types of dystonia (e.g. focal hand dystonia), it remains elusive which mechanisms could initiate and, more importantly, perpetuate motor overflow. We hypothesize that distinct motor elements have low risk of motor overflow if their sensory inputs remain statistically independent. But when provided with correlated sensory inputs, pre-existing crosstalk among sensory projections will grow under spike-timing-dependent-plasticity (STDP) and eventually produce irreversible motor overflow. Approach. We emulated a simplified neuromuscular system comprising two anatomically distinct digital muscles innervated by two layers of spiking neurons with STDP. The synaptic connections between layers included crosstalk connections. The input neurons received either independent or correlated sensory drive during 4 days of continuous excitation. The emulation is critically enabled and accelerated by our neuromorphic hardware created in previous work. Main results. When driven by correlated sensory inputs, the crosstalk synapses gained weight and produced prominent motor overflow; the growth of crosstalk synapses resulted in enlarged sensory representation reflecting cortical reorganization. The overflow failed to recede when the inputs resumed their original uncorrelated statistics. In the control group, no motor overflow was observed. Significance. Although our model is a highly simplified and limited representation of the human sensorimotor system, it allows us to explain how correlated sensory input to anatomically distinct muscles is by itself sufficient to cause persistent and irreversible motor overflow. Further studies are needed to locate the source of correlation in sensory input.
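    The pair-based STDP rule that drives the crosstalk weight growth described above can be sketched as follows. This is a minimal textbook form with invented parameter values, not the paper's neuromorphic hardware implementation:

```python
import math

# Standard pair-based STDP: pre-before-post spike pairs (dt > 0) potentiate
# a synapse, post-before-pre pairs (dt < 0) depress it, with exponential
# decay over the pairing interval. Parameter values are illustrative only.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation
    return -a_minus * math.exp(dt_ms / tau_ms)       # depression

# Correlated sensory inputs generate many small positive-dt pairings across
# crosstalk synapses, so those synapses accumulate positive weight change --
# the mechanism by which the model's motor overflow emerges and persists.
```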

  12. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
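    One of the sampling-based techniques named above, Spearman rank correlation, can be sketched in a few lines. This toy version has no tie handling, and the monotone test relation is illustrative rather than taken from SHEDS:

```python
# Spearman rank correlation as a sampling-based sensitivity screen:
# rank both the sampled input and the model output, then take the
# Pearson correlation of the ranks. Assumes no tied values.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Because only ranks matter, any monotone (even strongly non-linear)
# input-output relation scores as maximally sensitive.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
rho = spearman(x, [v ** 3 for v in x])
```

    Rank-based measures like this tolerate the non-linearity mentioned in the abstract, but unlike Sobol's method or FAST they cannot separate main effects from interaction effects.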

  13. The Influence of Objects on Place Field Expression and Size in Distal Hippocampal CA1

    PubMed Central

    Burke, S.N.; Maurer, A.P.; Nematollahi, S.; Uprety, A.R.; Wallace, J.L.; Barnes, C.A.

    2012-01-01

    The perirhinal and lateral entorhinal cortices send prominent projections to the portion of the hippocampal CA1 subfield closest to the subiculum, but relatively little is known regarding the contributions of these cortical areas to hippocampal activity patterns. The anatomical connections of the lateral entorhinal and perirhinal cortices, as well as lesion data, suggest that these brain regions may contribute to the perception of complex stimuli such as objects. The current experiments investigated the degree to which 3-dimensional objects affect place field size and activity within the distal region (closest to the subiculum) of CA1. The activity of CA1 pyramidal cells was monitored as rats traversed a circular track that contained no objects in some conditions and 3-dimensional objects in other conditions. In the area of CA1 that receives direct lateral entorhinal input, three factors differentiated the objects-on-track conditions from the no-object conditions: more pyramidal cells expressed place fields when objects were present, adding or removing objects from the environment led to partial remapping in CA1, and the size of place fields decreased when objects were present. Additionally, a proportion of place fields remapped under conditions in which the object locations were shuffled, which suggests that at least some of the CA1 neurons’ firing patterns were sensitive to a particular object in a particular location. Together, these data suggest that the activity characteristics of neurons in the areas of CA1 receiving direct input from the perirhinal and lateral entorhinal cortices are modulated by non-spatial sensory input such as 3-dimensional objects. PMID:21365714

  14. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.

    PubMed

    Ho, Kevin I-J; Leung, Chi-Sing; Sum, John

    2010-06-01

    In the last two decades, many online fault/noise injection algorithms have been developed to attain a fault tolerant neural network. However, not much theoretical works related to their convergence and objective functions have been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks, namely 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay with injecting multinode fault. Based on the Gladyshev theorem, we show that the convergence of these six online algorithms is almost sure. Moreover, their true objective functions being minimized are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is the simple mean square training error. Thus, injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of other fault/noise-injection-based online algorithms contain a mean square error term and a specialized regularization term.
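    The first result above, that training with additive input noise acts like a Tikhonov (ridge) regularizer, can be illustrated on a 1-D linear model. This is a deliberate simplification of the RBF setting; all names and values here are invented:

```python
import random

random.seed(0)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]          # noiseless targets, true slope 2
data = list(zip(xs, ys))

def train(noise_sd, steps=30000, lr=0.005):
    """SGD on squared error with additive input noise; tail-averaged."""
    w, tail = 0.0, []
    for t in range(steps):
        x, y = random.choice(data)
        xn = x + random.gauss(0.0, noise_sd)   # inject additive input noise
        w -= lr * (w * xn - y) * xn            # SGD step on (w*xn - y)^2
        if t >= steps // 2:
            tail.append(w)                     # average out SGD jitter
    return sum(tail) / len(tail)

# With noise variance s^2, the expected loss is sum((w*x - y)^2) plus an
# s^2 * w^2 ridge term, so the noisy estimate is shrunk toward zero
# (here toward 60 / (30 + 5*s^2) instead of the least-squares slope 2).
w_clean = train(0.0)
w_noisy = train(1.0)
```

    The shrinkage is the Tikhonov effect the abstract refers to; by contrast, the paper shows weight-noise injection leaves the plain mean square error unchanged and so buys no such regularization.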

  15. Biophysics of object segmentation in a collision-detecting neuron

    PubMed Central

    Dewell, Richard Burkett

    2018-01-01

    Collision avoidance is critical for survival, including in humans, and many species possess visual neurons exquisitely sensitive to objects approaching on a collision course. Here, we demonstrate that a collision-detecting neuron can detect the spatial coherence of a simulated impending object, thereby carrying out a computation akin to object segmentation critical for proper escape behavior. At the cellular level, object segmentation relies on a precise selection of the spatiotemporal pattern of synaptic inputs by dendritic membrane potential-activated channels. One channel type linked to dendritic computations in many neural systems, the hyperpolarization-activated cation channel, HCN, plays a central role in this computation. Pharmacological block of HCN channels abolishes the neuron's spatial selectivity and impairs the generation of visually guided escape behaviors, making it directly relevant to survival. Additionally, our results suggest that the interaction of HCN and inactivating K+ channels within active dendrites produces neuronal and behavioral object specificity by discriminating between complex spatiotemporal synaptic activation patterns. PMID:29667927

  16. Hand controller study of force and control mode

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1992-01-01

    The objectives are to compare and evaluate the utility and effectiveness of various input control devices, e.g., hand controllers, with respect to the relative importance of force and operation control mode (rate or position) for Space Station Freedom (SSF) related tasks. The topics are presented in viewgraph form and include: the Intelligent Systems Research Lab (ISRL) experimental design; the Telerobotic Systems Research Laboratory (TSRL) final experimental design; and a factor analysis summary of results.

  17. Dialect Distance Assessment Based on 2-Dimensional Pitch Slope Features and Kullback Leibler Divergence

    DTIC Science & Technology

    2009-04-08

    to changes on input data is quantified. It is also shown in a perceptive evaluation that the presented objective approach of dialect distance...of Arabic dialects are discussed. We also show the repeatability of the presented measure, and its correlation with human perception. Conclusions are...in the strict sense of metric spaces. Human perception tests indicate that prosodic cues, including pitch movements

  18. Existing pavement input information for the mechanistic-empirical pavement design guide.

    DOT National Transportation Integrated Search

    2009-02-01

    The objective of this study is to systematically evaluate the Iowa Department of Transportation's (DOT's) existing Pavement Management Information System (PMIS) with respect to the input information required for Mechanistic-Empirical Pavement Des...

  19. ITS component specification. Appendix B, Input data flows for components

    DOT National Transportation Integrated Search

    1997-11-01

    The objective of the Polaris Project is to define an Intelligent Transportation Systems (ITS) architecture for the state of Minnesota. This appendix defines the input data flows for each component of the Polaris Physical Architecture.

  20. Spatial-area selective retrieval of multiple object-place associations in a hierarchical cognitive map formed by theta phase coding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2009-06-01

    The human cognitive map is known to be hierarchically organized consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that the hierarchical cognitive maps have computational advantages on a spatial-area selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.

  1. Impact of input data (in)accuracy on overestimation of visible area in digital viewshed models

    PubMed Central

    Klouček, Tomáš; Šímová, Petra

    2018-01-01

    Viewshed analysis is a GIS tool in standard use for more than two decades to perform numerous scientific and practical tasks. The reliability of the resulting viewshed model depends on the computational algorithm and the quality of the input digital surface model (DSM). Although many studies have dealt with improving viewshed algorithms, only a few studies have focused on the effect of the spatial accuracy of input data. Here, we compare simple binary viewshed models based on DSMs having varying levels of detail with viewshed models created using LiDAR DSM. The compared DSMs were calculated as the sums of digital terrain models (DTMs) and layers of forests and buildings with expertly assigned heights. Both elevation data and the visibility obstacle layers were prepared using digital vector maps differing in scale (1:5,000, 1:25,000, and 1:500,000) as well as using a combination of a LiDAR DTM with objects vectorized on an orthophotomap. All analyses were performed for 104 sample locations of 5 km2, covering areas from lowlands to mountains and including farmlands as well as afforested landscapes. We worked with two observer point heights, the first (1.8 m) simulating observation by a person standing on the ground and the second (80 m) as observation from high structures such as wind turbines, and with five estimates of forest heights (15, 20, 25, 30, and 35 m). At all height estimations, all of the vector-based DSMs used resulted in overestimations of visible areas considerably greater than those from the LiDAR DSM. In comparison to the effect from input data scale, the effect from object height estimation was shown to be secondary. PMID:29844982
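    A binary viewshed reduces to repeated line-of-sight tests against the DSM. The following minimal 1-D sketch (invented heights, not the paper's GIS workflow) shows how the two observer heights used in the study interact with an obstacle of estimated height:

```python
# Line-of-sight test along a 1-D elevation transect: a target cell is
# visible if no intermediate DSM cell rises above the straight sight line
# from the observer's eye height to the target.
def visible(dsm, observer_idx, observer_h, target_idx):
    x0, z0 = observer_idx, dsm[observer_idx] + observer_h
    x1, z1 = target_idx, dsm[target_idx]
    step = 1 if x1 > x0 else -1
    for x in range(x0 + step, x1, step):
        line_z = z0 + (z1 - z0) * (x - x0) / (x1 - x0)  # sight line at x
        if dsm[x] > line_z:
            return False
    return True

dsm = [100, 100, 130, 100, 100]   # terrain at 100 m, a 30 m obstacle at index 2
ground = visible(dsm, 0, 1.8, 4)   # person standing on the ground
tower = visible(dsm, 0, 80.0, 4)   # observer on an 80 m structure
```

    Because an overestimated obstacle height blocks more sight lines for the 1.8 m observer than for the 80 m one, this also illustrates why the expert height estimates in the vector-based DSMs matter less than the scale of the input data itself.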

  2. Impact of input data (in)accuracy on overestimation of visible area in digital viewshed models.

    PubMed

    Lagner, Ondřej; Klouček, Tomáš; Šímová, Petra

    2018-01-01

    Viewshed analysis is a GIS tool that has been in standard use for more than two decades to perform numerous scientific and practical tasks. The reliability of the resulting viewshed model depends on the computational algorithm and the quality of the input digital surface model (DSM). Although many studies have dealt with improving viewshed algorithms, only a few have focused on the effect of the spatial accuracy of the input data. Here, we compare simple binary viewshed models based on DSMs having varying levels of detail with viewshed models created using a LiDAR DSM. The compared DSMs were calculated as the sums of digital terrain models (DTMs) and layers of forests and buildings with expertly assigned heights. Both the elevation data and the visibility obstacle layers were prepared using digital vector maps differing in scale (1:5,000, 1:25,000, and 1:500,000) as well as using a combination of a LiDAR DTM with objects vectorized on an orthophotomap. All analyses were performed for 104 sample locations of 5 km2, covering areas from lowlands to mountains and including farmlands as well as afforested landscapes. We worked with two observer point heights, the first (1.8 m) simulating observation by a person standing on the ground and the second (80 m) simulating observation from high structures such as wind turbines, and with five estimates of forest height (15, 20, 25, 30, and 35 m). For all height estimates, all of the vector-based DSMs overestimated the visible areas considerably more than the LiDAR DSM did. Compared with the effect of input data scale, the effect of object height estimation was shown to be secondary.
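
The visibility computation underlying these records can be illustrated with a minimal line-of-sight check. The sketch below is a hypothetical 1-D transect rather than a full 2-D viewshed: it builds a DSM as DTM plus assigned object heights and tests which cells an observer at one end can see, for the two observer heights used in the study.

```python
def viewshed_1d(dsm, observer_idx, observer_height):
    """Binary viewshed along a 1-D terrain transect.

    dsm: surface elevations (terrain plus object heights), one per cell.
    A cell is visible if the sightline from the observer's eye clears
    every intermediate surface point.
    """
    eye = dsm[observer_idx] + observer_height
    visible = [False] * len(dsm)
    visible[observer_idx] = True
    for target in range(len(dsm)):
        if target == observer_idx:
            continue
        step = 1 if target > observer_idx else -1
        blocked = False
        for mid in range(observer_idx + step, target, step):
            # height of the sightline where it passes over cell `mid`
            frac = abs(mid - observer_idx) / abs(target - observer_idx)
            line_z = eye + frac * (dsm[target] - eye)
            if dsm[mid] > line_z:
                blocked = True
                break
        visible[target] = not blocked
    return visible

# Flat DTM plus a 20 m forest block over cells 3-4 (illustrative values)
dtm = [100, 100, 100, 100, 100, 100, 100]
forest = [0, 0, 0, 20, 20, 0, 0]
dsm = [t + f for t, f in zip(dtm, forest)]

ground = viewshed_1d(dsm, 0, 1.8)   # person standing on the ground
tower = viewshed_1d(dsm, 0, 80.0)   # wind-turbine-height observer
```

The standing observer loses sight of everything behind the forest block, while the 80 m observer regains visibility at cell 6; raising the assumed forest height to 30 m would block the tall observer's view of cell 6 as well, which is the sensitivity to expertly assigned object heights the study quantifies.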

  3. Area collapse algorithm computing new curve of 2D geometric objects

    NASA Astrophysics Data System (ADS)

    Buczek, Michał Mateusz

    2017-06-01

    The processing of cartographic data demands human involvement. Up-to-date algorithms try to automate a part of this process. The goal is to obtain a digital model, or additional information about the shape and topology of input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents the topological and geometrical characteristics of input data, and it can be computed with algorithms such as medial axis, skeletonization, erosion, thinning, area collapse, and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects, for example a polygon with a polygonal chain, or a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon and forms a linear chain without branches. The computational process is automatic and repeatable. The requirements on input data are discussed. The author analyzes results based on the method of computing the ends of the output polygonal chains, and explores additional methods to improve the results. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing the hatching of embankments is described.
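
The dimension-change operation the abstract describes (replacing a polygon with a polygonal chain) can be sketched for the simplest case. This is not the paper's algorithm: it assumes the elongated polygon is already given as paired vertices along its two long sides, and simply averages the pairs to obtain a branchless centerline chain.

```python
def collapse_ribbon(left_side, right_side):
    """Collapse an elongated polygon, given as paired vertices along its two
    long sides, into a branchless polygonal chain (an approximate centerline)
    by averaging opposite boundary points."""
    assert len(left_side) == len(right_side)
    return [((lx + rx) / 2, (ly + ry) / 2)
            for (lx, ly), (rx, ry) in zip(left_side, right_side)]

left = [(0, 2), (2, 3), (4, 2)]    # upper boundary of the ribbon polygon
right = [(0, 0), (2, 1), (4, 0)]   # lower boundary
centerline = collapse_ribbon(left, right)  # [(0.0, 1.0), (2.0, 2.0), (4.0, 1.0)]
```

Because each chain vertex is a convex combination of two boundary points, the output stays inside the polygon for convex cross-sections, mirroring the containment property the abstract requires; handling concave outlines is what the paper's algorithm addresses.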

  4. Conditioning 3D object-based models to dense well data

    NASA Astrophysics Data System (ADS)

    Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.

    2018-06-01

    Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects is then selected with linear integer programming to generate the final realization and honor all well data, proportions, and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
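
The final selection step (choosing stored candidate objects so that all well data are honored) can be illustrated with a greedy stand-in for the paper's linear integer program. Everything here is hypothetical: each candidate object is reduced to the set of well ids it conditions to.

```python
def select_objects(candidates, wells):
    """Greedily pick objects from the candidate database until every well
    observation is covered. A simplified greedy stand-in for the linear
    integer programming selection described in the abstract.

    candidates: list of sets of well ids each stored object honors.
    wells: set of well ids that the final realization must match.
    """
    uncovered = set(wells)
    chosen = []
    while uncovered:
        # pick the object honoring the most still-uncovered wells
        best = max(range(len(candidates)),
                   key=lambda i: len(candidates[i] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("no candidate honors the remaining wells")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Hypothetical database: each entry is the well set one channel object honors
db = [{"w1", "w2"}, {"w2", "w3"}, {"w4"}, {"w3", "w4"}]
picked = select_objects(db, {"w1", "w2", "w3", "w4"})
```

A true integer program would additionally weight each object's geological realism and proportion targets in the objective; the greedy loop only shows the covering constraint.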

  5. Guidelines for PCC inputs to AASHTOWare Pavement ME.

    DOT National Transportation Integrated Search

    2014-12-01

    The objective of this research study was to develop guidelines for portland cement concrete (PCC) material inputs to the AASHTOWare Pavement ME Design program. The AASHTOWare Pavement ME Design is the software program used by the Mississippi Depa...

  6. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
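
The TRIDEC feature idea, that similar triangles are invariant to translation, scale, and in-plane rotation, can be sketched as follows. The angle-binning granularity and the toy point sets are illustrative choices, not the paper's parameters.

```python
from itertools import combinations
from math import acos, degrees, dist

def triangle_signature(p, q, r, bins=12):
    """Signature of triangle p-q-r: its two smallest angles, coarsely binned.
    Similar triangles share the signature under any translation, scaling, or
    in-plane rotation of the points."""
    a, b, c = dist(q, r), dist(p, r), dist(p, q)
    def angle(opp, s1, s2):
        # law of cosines for the angle opposite side `opp`
        cos_val = (s1 * s1 + s2 * s2 - opp * opp) / (2 * s1 * s2)
        return degrees(acos(max(-1.0, min(1.0, cos_val))))
    angles = sorted([angle(a, b, c), angle(b, a, c), angle(c, a, b)])
    return tuple(int(x // (180 / bins)) for x in angles[:2])  # third is implied

def features(points):
    """Histogram of signatures over all three-point combinations."""
    sigs = {}
    for p, q, r in combinations(points, 3):
        s = triangle_signature(p, q, r)
        sigs[s] = sigs.get(s, 0) + 1
    return sigs

shape = [(0, 0), (4, 0), (0, 3), (4, 3)]
# the same four points scaled by 2 and rotated 90 degrees
transformed = [(0, 0), (0, 8), (-6, 0), (-6, 8)]
assert features(shape) == features(transformed)
```

Collinear triples would produce degenerate triangles; a fuller version would skip them, and TRIDEC additionally coarse-codes the pixel field before forming the triples.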

  7. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure, so standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must then include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem, and a controller structure is defined on this basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, determine what changes to the process inputs or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. If no tradeoff has to be made to move to a new operating point, the process was not operating at an optimal point in any sense. This algorithm thus ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point.
The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
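
The algorithm's first question to the operator (can all quality criteria still be improved simultaneously?) is a Pareto-dominance test, which can be sketched directly; the operating points and criteria values below are hypothetical.

```python
def dominates(a, b):
    """True if operating point `a` is at least as good as `b` on every
    quality criterion (higher is better) and strictly better on one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Operating points for which no other point improves all criteria
    simultaneously; only at such points must the operator trade off."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two quality criteria measured at four candidate operating points
ops = [(0.9, 0.2), (0.7, 0.7), (0.3, 0.8), (0.5, 0.5)]
front = pareto_front(ops)  # (0.5, 0.5) is dominated by (0.7, 0.7)
```

If the current operating point is dominated, case (1) of the algorithm applies and all criteria can be improved together; once the process sits on the front, only case (2), a deliberate tradeoff, can move it.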

  8. SU-E-T-206: Improving Radiotherapy Toxicity Based On Artificial Neural Network (ANN) for Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu

    Purpose/Objective(s): The aim of this study is to build an estimator of toxicity using an artificial neural network (ANN) for head and neck cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations of variables. We constructed an ANN based on data from 73 patients with advanced H and N cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, chemotherapy status, technique of external beam radiation therapy (EBRT), length of treatment, dose of EBRT, post-operation status, length of follow-up, and the status of local recurrences and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to capture the correlations of input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set, and a test set. Finally, we built the estimator for toxicity from the ANN output. Results: We used 13 input variables, including the status of local recurrences and distant metastasis, and 20 hidden nodes for correlations. With 59 patients in the training set, 7 in the validation set, and 7 in the test set, we fed the inputs to the Matlab neural network fitting tool. We trained the data to within 15% error of outcome. The resulting toxicity estimator achieved 74% accuracy. Conclusion: We showed in principle that an ANN can be a very useful tool for predicting RT outcomes for high-risk H and N patients. Currently we are improving the results using cross-validation.
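
The fitting loop can be illustrated with a much smaller stand-in: a single sigmoid unit trained by stochastic gradient descent on synthetic records split 59/7/7 as in the study. The features and labels below are invented for illustration; the study used 13 clinical inputs and a 20-hidden-node Matlab network.

```python
import math
import random

def split(data, n_train=59, n_val=7, n_test=7):
    """Shuffle and divide the records into the study's 59/7/7 subsets."""
    random.Random(0).shuffle(data)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:n_train + n_val + n_test])

def predict(x, w, b):
    """Sigmoid unit: estimated probability of toxicity."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

def train(rows, epochs=200, lr=0.1):
    """Stochastic gradient descent on cross-entropy for one sigmoid unit."""
    w, b = [0.0] * len(rows[0][0]), 0.0
    for _ in range(epochs):
        for x, y in rows:
            g = predict(x, w, b) - y          # d(loss)/d(pre-activation)
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Synthetic stand-ins for the 73 digitized patient records: (inputs, label)
records = [([i / 73, (73 - i) / 73], 1 if i > 36 else 0) for i in range(73)]
train_set, val_set, test_set = split(records)
w, b = train(train_set)
test_acc = sum((predict(x, w, b) > 0.5) == (y == 1)
               for x, y in test_set) / len(test_set)
```

In practice the validation set would be used for early stopping, which this sketch omits.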

  9. Focal ratio degradation in lightly fused hexabundles

    NASA Astrophysics Data System (ADS)

    Bryant, J. J.; Bland-Hawthorn, J.; Fogarty, L. M. R.; Lawrence, J. S.; Croom, S. M.

    2014-02-01

    We are now moving into an era where multi-object wide-field surveys, which traditionally use single fibres to observe many targets simultaneously, can exploit compact integral field units (IFUs) in place of single fibres. Current multi-object integral field instruments such as the Sydney-AAO Multi-object Integral field spectrograph have driven the development of new imaging fibre bundles (hexabundles) for multi-object spectrographs. We have characterized the performance of hexabundles with different cladding thicknesses and compared it to that of the same type of bare fibre, across the range of fill fractions and input f-ratios likely in an IFU instrument. Hexabundles with 7 and 61 cores were tested for focal ratio degradation (FRD), throughput, and cross-talk when fed with inputs from F/3.4 to >F/8. The five 7-core bundles have cladding thicknesses ranging from 1 to 8 μm, and the 61-core bundles have 5 μm cladding. As expected, the FRD improves as the input focal ratio decreases. We find that the FRD and throughput of the cores in the hexabundles match the performance of single fibres of the same material at low input f-ratios. The performance results presented can be used to set a limit on the f-ratio of a system based on the maximum loss allowable for a planned instrument. Our results confirm that hexabundles are a successful alternative for fibre imaging devices for multi-object spectroscopy on wide-field telescopes and have prompted further development of hexabundle designs with hexagonal packing and square cores.

  10. Output Control Using Feedforward And Cascade Controllers

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Report presents theoretical study of open-loop control elements in single-input, single-output linear system. Focus on output-control (servomechanism) problem, in which objective is to find control scheme that causes output to track certain command inputs and to reject certain disturbance inputs in steady state. Report closes with brief discussion of characteristics and relative merits of feedforward, cascade, and feedback controllers and combinations thereof.

  11. The impact of attentional, linguistic, and visual features during object naming

    PubMed Central

    Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank

    2013-01-01

    Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792

  12. The Effect of Input-Based Instruction Type on the Acquisition of Spanish Accusative Clitics

    ERIC Educational Resources Information Center

    White, Justin

    2015-01-01

    The purpose of this paper is to compare structured input (SI) with other input-based instructional treatments. The input-based instructional types include: input flood (IF), text enhancement (TE), SI activities, and focused input (FI; SI without implicit negative feedback). Participants included 145 adult learners enrolled in an intermediate…

  13. Structural tailoring of engine blades (STAEBL) user's manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1985-01-01

    This User's Manual contains instructions and a demonstration case showing how to prepare input data for, run, and modify the Structural Tailoring of Engine Blades (STAEBL) computer code. STAEBL was developed to perform engine fan and compressor blade numerical optimizations. This blade optimization seeks a minimum weight or cost design that satisfies realistic blade design constraints by tuning one to twenty design variables. The STAEBL constraint analyses include blade stresses, vibratory response, flutter, and foreign object damage. Blade design variables include airfoil thickness at several locations, blade chord, and construction variables: hole size for hollow blades, and composite material layup for composite blades.

  14. Image scale measurement with correlation filters in a volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

    A search engine containing various target images or different parts of a large scene is useful for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one such search engine: it performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains some scale variation relative to the filtering template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a method in the time domain, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between the scale variation and the correlation value of two images: it sends a few artificially scaled input images to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over the interval 1~1.2. The original scale of the input image is measured by finding the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8; scale factors beyond 1.2 are measured by scaling the input image by a factor of 1/2, 1/3, or 1/4, correlating the rescaled image with the template images, and estimating the new scale factor within 0.8~1.2.
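
The time-sequential scaled idea can be sketched in one dimension: undo each candidate scale on the observed signal, correlate it with the template, and keep the candidate giving the largest normalized correlation. The Gaussian signal and candidate set below are illustrative, not from the paper.

```python
import math

def resample(signal, scale):
    """Rescale a 1-D signal about its start by linear interpolation."""
    n, out = len(signal), []
    for i in range(n):
        x = i / scale
        j = int(x)
        if j >= n - 1:
            out.append(signal[-1])     # pad with the last sample
        else:
            t = x - j
            out.append((1 - t) * signal[j] + t * signal[j + 1])
    return out

def ncc(a, b):
    """Normalized correlation of two equal-length signals."""
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def estimate_scale(template, observed, candidates):
    """Undo each candidate scale on the observed signal, correlate against
    the template, and keep the candidate with the highest correlation."""
    return max(candidates,
               key=lambda s: ncc(template, resample(observed, 1 / s)))

template = [math.exp(-((i - 32) / 6) ** 2) for i in range(64)]
observed = resample(template, 1.1)            # "input image" enlarged by 10%
best = estimate_scale(template, observed, [0.8, 0.9, 1.0, 1.1, 1.2])
```

In the optical correlator the correlation itself is performed in parallel against all stored templates; only the sequence of candidate rescalings is applied in time, which is what the 1-D loop above imitates.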

  15. Object width modulates object-based attentional selection.

    PubMed

    Nah, Joseph C; Neppi-Modona, Marco; Strother, Lars; Behrmann, Marlene; Shomstein, Sarah

    2018-04-24

    Visual input typically includes a myriad of objects, some of which are selected for further processing. While these objects vary in shape and size, most evidence supporting object-based guidance of attention is drawn from paradigms employing two identical objects. Importantly, object size is a readily perceived stimulus dimension, and whether it modulates the distribution of attention remains an open question. Across four experiments, the size of the objects in the display was manipulated in a modified version of the two-rectangle paradigm. In Experiment 1, two identical parallel rectangles of two sizes (thin or thick) were presented. Experiments 2-4 employed identical trapezoids (each having a thin and thick end), inverted in orientation. In the experiments, one end of an object was cued and participants performed either a T/L discrimination or a simple target-detection task. Combined results show that, in addition to the standard object-based attentional advantage, there was a further attentional benefit for processing information contained in the thick versus thin end of objects. Additionally, eye-tracking measures demonstrated increased saccade precision towards thick object ends, suggesting that Fitts's Law may play a role in object-based attentional shifts. Taken together, these results suggest that object-based attentional selection is modulated by object width.

  16. Systems and methods for reconfiguring input devices

    NASA Technical Reports Server (NTRS)

    Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)

    2012-01-01

    A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members are associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory coupled to the processor, and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution of the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.
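
A minimal sketch of the reconfiguration idea, with hypothetical member names and input functions (the patent's implementation details differ, and this simple version overwrites the spare member's own function rather than preserving it):

```python
class ReconfigurableInput:
    """Input functions are assignments, not hardware: when a member becomes
    inoperable, its function can be reassigned to a working member."""

    def __init__(self, assignments):
        self.assignments = dict(assignments)   # input member -> input function
        self.inoperable = set()

    def activate(self, member):
        """Return the input function an activation signal should trigger."""
        if member in self.inoperable:
            raise RuntimeError(f"{member} is inoperable")
        return self.assignments[member]

    def mark_inoperable(self, failed, spare):
        """Record the failure and move the failed member's function over.
        Note: the spare's previous function is discarded in this sketch."""
        self.inoperable.add(failed)
        self.assignments[spare] = self.assignments.pop(failed)

panel = ReconfigurableInput({"button_1": "gear_up", "button_2": "lights"})
panel.mark_inoperable("button_1", spare="button_2")
```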

  17. Near field optical probe for critical dimension measurements

    DOEpatents

    Stallard, B.R.; Kaushik, S.

    1999-05-18

    A resonant planar optical waveguide probe for measuring critical dimensions on an object in the range of 100 nm and below is disclosed. The optical waveguide includes a central resonant cavity flanked by Bragg reflector layers with input and output means at either end. Light is supplied by a narrow bandwidth laser source. Light resonating in the cavity creates an evanescent electrical field. The object with the structures to be measured is translated past the resonant cavity. The refractive index contrasts presented by the structures perturb the field and cause variations in the intensity of the light in the cavity. The topography of the structures is determined from these variations. 8 figs.

  18. Stochastic correlative firing for figure-ground segregation.

    PubMed

    Chen, Zhe

    2005-03-01

    Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.

  19. NASA C-17 Usage Overview

    NASA Technical Reports Server (NTRS)

    Miller, Christopher R.

    2008-01-01

    An overview of the usage and integrated vehicle health management of the NASA C-17. Propulsion health management flight objectives for the aircraft include mapping of the High Pressure Compressor in order to calibrate a Pratt and Whitney engine model, and the fusion of data collected from existing sensors and signals to develop models, analysis methods, and information fusion algorithms. An additional health management flight objective is to demonstrate that the Commercial Modular Aero-Propulsion Systems Simulation engine model can successfully execute in real time onboard the C-17 T-1 aircraft using engine and aircraft flight data as inputs. Future work will address aircraft durability and aging, airframe health management, and propulsion health management research in the areas of gas path and engine vibration.

  20. Effects of focus and definiteness on children's word order: evidence from German five-year-olds' reproductions of double object constructions.

    PubMed

    Höhle, Barbara; Hörnig, Robin; Weskott, Thomas; Knauf, Selene; Krüger, Agnes

    2014-07-01

    Two experiments tested how faithfully German children aged 4;5 to 5;6 reproduce ditransitive sentences that are unmarked or marked with respect to word order and focus (Exp1) or definiteness (Exp2). Adopting an optimality theory (OT) approach, it is assumed that in the German adult grammar word order is ranked lower than focus and definiteness. Faithfulness of children's reproductions decreased as markedness of inputs increased; unmarked structures were reproduced most faithfully, and unfaithful outputs most often had an unmarked form. Consistent with the OT proposal, children were more tolerant of inputs marked for word order than for focus; in conflict with the proposal, children were less tolerant of inputs marked for word order than for definiteness. Our results suggest that the linearization of objects in German double object constructions is affected by focus and definiteness, but that prosodic principles may have an impact on the position of a focused constituent.

  1. Constraints in distortion-invariant target recognition system simulation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Razzaque, Md A.

    2000-11-01

    Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for the recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and identification of the objects using the LVQNN. In the current paper, we extend the previous approach to the recognition of targets varying in rotation, translation, scale, and combinations of all three distortions. We obtain analytical results of the system-level design to show that the approach performs well under some constraints. The first constraint determines the size of the input images and input filters. The second constraint limits the amount of rotation, translation, and scale of the input objects. We present simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system-level design.

  2. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting from the set of input data a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
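
The preprocessing step described here, projecting the input vectors onto orthogonal singular vectors ordered by the spread they capture, can be sketched with power iteration on a toy data set; a real implementation would call a library SVD routine, and the data below are invented for illustration.

```python
def top_singular_vector(rows, iters=100):
    """Leading right singular vector of a data matrix by power iteration
    on A^T A (illustrative; a library SVD gives the full orthogonal set)."""
    n = len(rows[0])
    v = [1.0] * n
    for _ in range(iters):
        av = [sum(row[j] * v[j] for j in range(n)) for row in rows]   # A v
        w = [sum(rows[i][j] * av[i] for i in range(len(rows)))        # A^T(A v)
             for j in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(rows, v):
    """Coordinates of each input vector along direction v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in rows]

# Toy inputs whose spread is largest along the direction (2, 1)
data = [[2 * t + 0.1 * e, t - 0.1 * e]
        for t, e in [(-1, 1), (-0.5, -1), (0, 1), (0.5, -1), (1, 1)]]
v1 = top_singular_vector(data)
spread_top = sum(c * c for c in project(data, v1))
spread_ortho = sum(c * c for c in project(data, [-v1[1], v1[0]]))
```

Because the leading singular vector maximizes the summed squared projections, presenting the data in these coordinates concentrates the variation in the first few features, which is the property the patent exploits to shorten training.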

  3. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
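
One step of such a per-frame tracking loop can be sketched as a search of the object's neighborhood for the patch most similar to a template (here by sum of squared differences); the tiny frame, template, and search radius are illustrative, not the report's method.

```python
def track_step(frame, template, last_xy, search=2):
    """Locate the object in `frame` by scanning a small neighborhood of its
    last known top-left position for the patch most similar to `template`
    (minimum sum of squared differences)."""
    th, tw = len(template), len(template[0])
    lx, ly = last_xy
    best, best_xy = float("inf"), last_xy
    for y in range(max(0, ly - search), min(len(frame) - th, ly + search) + 1):
        for x in range(max(0, lx - search), min(len(frame[0]) - tw, lx + search) + 1):
            ssd = sum((frame[y + i][x + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy

template = [[9, 9], [9, 9]]                 # 2x2 bright patch being tracked
frame = [[0] * 6 for _ in range(6)]
frame[3][4] = frame[3][5] = frame[4][4] = frame[4][5] = 9   # object at (4, 3)
pos = track_step(frame, template, last_xy=(3, 2))
```

Repeating this step frame by frame, and appending each `pos` to a file, reproduces the grab-process-locate-store cycle the report describes.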

  4. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    PubMed

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem.

  5. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    PubMed Central

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real-world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjoint sets of objects, either semantically or visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, both of which are subcategories of the category Felidae. In recent decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from the input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations.
We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations. PMID:25538637
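
    The recruitment mechanism this abstract describes (mismatch-gated creation of new category units, plus adaptation of the winning unit toward the input) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' network: the cosine-similarity match, the mismatch threshold, and the learning rate are all assumptions.

```python
import numpy as np

def learn_categories(inputs, mismatch_thresh=0.6, lr=0.2):
    """Recruit a new prototype when the best match is too poor,
    otherwise adapt the winning prototype toward the input."""
    prototypes = [inputs[0].copy()]              # first input seeds a category
    for x in inputs[1:]:
        sims = [p @ x / (np.linalg.norm(p) * np.linalg.norm(x))
                for p in prototypes]
        best = int(np.argmax(sims))
        if 1.0 - sims[best] > mismatch_thresh:
            prototypes.append(x.copy())          # recruit a new (sub-)category
        else:
            prototypes[best] += lr * (x - prototypes[best])  # refine winner
    return prototypes

# Usage: two well-separated clusters of input patterns yield two categories.
rng = np.random.default_rng(0)
a = rng.normal([5.0, 0.0, 0.0], 0.1, size=(20, 3))
b = rng.normal([0.0, 5.0, 0.0], 0.1, size=(20, 3))
protos = learn_categories(np.vstack([a, b]))
```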

  6. Position, scale, and rotation invariant holographic associative memory

    NASA Astrophysics Data System (ADS)

    Fielding, Kenneth H.; Rogers, Steven K.; Kabrisky, Matthew; Mills, James P.

    1989-08-01

    This paper describes the development and characterization of a holographic associative memory (HAM) system that is able to recall stored objects whose inputs were changed in position, scale, and rotation. The HAM is based on the single iteration model described by Owechko et al. (1987); however, the system described uses a self-pumped BaTiO3 phase conjugate mirror, rather than the degenerate four-wave mixing scheme proposed by Owechko and his coworkers. The HAM system can store objects in a position, scale, and rotation invariant feature space. The angularly multiplexed diffuse Fourier transform holograms of the HAM feature space are characterized as the memory unit; distorted input objects are correlated with the hologram, and the nonlinear phase conjugate mirror reduces cross-correlation noise and provides object discrimination. Applications of the HAM system are presented.

  7. A Theory of How Columns in the Neocortex Enable Learning the Structure of the World

    PubMed Central

    Hawkins, Jeff; Ahmad, Subutai; Cui, Yuwei

    2017-01-01

    Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed. PMID:29118696

  8. Octopus Cells in the Posteroventral Cochlear Nucleus Provide the Main Excitatory Input to the Superior Paraolivary Nucleus

    PubMed Central

    Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.

    2017-01-01

    Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283

  9. GoPhast: a graphical user interface for PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2006-01-01

    GoPhast is a graphical user interface (GUI) for the USGS model PHAST. PHAST simulates multicomponent, reactive solute transport in three-dimensional, saturated, ground-water flow systems. PHAST can model both equilibrium and kinetic geochemical reactions. PHAST is derived from HST3D (flow and transport) and PHREEQC (geochemical calculations). The flow and transport calculations are restricted to constant fluid density and constant temperature. The complexity of the input required by PHAST makes manual construction of its input files tedious and error-prone. GoPhast streamlines the creation of the input file and helps reduce errors. GoPhast allows the user to define the spatial input for the PHAST flow and transport data file by drawing points, lines, or polygons on top, front, and side views of the model domain. These objects can have up to two associated formulas that define their extent perpendicular to the view plane, allowing the objects to be three-dimensional. Formulas are also used to specify the values of spatial data (data sets) both globally and for individual objects. Objects can be used to specify the values of data sets independent of the spatial and temporal discretization of the model. Thus, the grid and simulation periods for the model can be changed without respecifying spatial data pertaining to the hydrogeologic framework and boundary conditions. This report describes the operation of GoPhast and demonstrates its use with examples. GoPhast runs on Windows 2000, Windows XP, and Linux operating systems.

  10. A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Hwa-Soo

    1991-01-01

    A large-grain mapping method is presented for numerically oriented applications on multiprocessor systems. The method is based on the large-grain data flow representation of the input application and assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented very effectively in large-grain data flow format. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best matching' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.

  11. Exploring objective climate classification for the Himalayan arc and adjacent regions using gridded data sources

    NASA Astrophysics Data System (ADS)

    Forsythe, N.; Blenkinsop, S.; Fowler, H. J.

    2015-05-01

    A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, for reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean and range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
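
    The three-step procedure (standardize the input variables, reduce with PCA, then cluster the PCA scores with k-means) can be sketched as follows. The synthetic data, the choice of three retained components, and the plain NumPy k-means are illustrative assumptions; only the eight-zone cluster count comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # 500 grid cells x 6 climate variables

# Step 1: standardize each variable to zero mean, unit variance.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Step 2: PCA via SVD; project onto the first three components.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt[:3].T

# Step 3: k-means on the PCA scores (eight macro-climate zones).
def kmeans(data, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centers = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(scores, k=8)
```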

  12. A study of safety and tolerability of rotatory vestibular input for preschool children

    PubMed Central

    Su, Wen-Ching; Lin, Chin-Kai; Chang, Shih-Chung

    2015-01-01

    The objectives of this study were to determine a safe rotatory vestibular stimulation input for preschool children and to study the effects of grade level and sex of preschool children during active, passive, clockwise, and counterclockwise rotation vestibular input. This study adopted purposive sampling with 120 children from three kindergarten levels (K1, K2, and K3) in Taiwan. The subjects ranged in age from 46 to 79 months of age (mean: 62.1 months; standard deviation =9.60). This study included testing with four types of vestibular rotations. The number, duration, and speed of rotations were recorded. The study found that the mean number of active rotations was 10.28; the mean duration of rotation was 24.17 seconds; and the mean speed was 2.29 seconds per rotation. The mean number of passive rotations was 23.04. The differences in number of rotations in clockwise, counterclockwise, active, and passive rotations were not statistically significant. Sex and grade level were not important related factors in the speed and time of active rotation. Different sexes, rotation methods (active, passive), and grades made significant differences in the number of rotations. The safety and tolerability of rotatory vestibular stimulation input data obtained in this study can provide useful reference data for therapists using sensory integration therapy. PMID:25657579

  13. Feasibility study of parallel optical correlation-decoding analysis of lightning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Descour, M.R.; Sweatt, W.C.; Elliott, G.R.

    The optical correlator described in this report is intended to serve as an attention-focusing processor. The objective is to narrowly bracket the range of a parameter value that characterizes the correlator input. The input is a waveform collected by a satellite-borne receiver. In the correlator, this waveform is simultaneously correlated with an ensemble of ionosphere impulse-response functions, each corresponding to a different total-electron-count (TEC) value. We have found that correlation is an effective method of bracketing the range of TEC values likely to be represented by the input waveform. High accuracy in a computational sense is not required of the correlator. Binarization of the impulse-response functions and the input waveforms prior to correlation results in a lower correlation-peak-to-background-fluctuation (signal-to-noise) ratio than the peak that is obtained when all waveforms retain their grayscale values. The results presented in this report were obtained by means of an acousto-optic correlator previously developed at SNL as well as by simulation. An optical-processor architecture optimized for 1D correlation of long waveforms characteristic of this application is described. Discussions of correlator components, such as optics, acousto-optic cells, digital micromirror devices, laser diodes, and VCSELs, are included.
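
    The attention-focusing strategy (correlate the received waveform against an ensemble of impulse responses, one per candidate TEC value, and bracket the parameter by the best match) can be sketched digitally. The chirp model of ionospheric dispersion and all numeric values here are invented for illustration; the real system performs this correlation optically.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2048)

def dispersed(tec):
    """Toy ionospheric impulse response: a chirp whose sweep rate
    scales with total electron count (TEC)."""
    return np.sin(2 * np.pi * (10 + 40 * tec * t) * t)

tec_grid = np.linspace(0.2, 2.0, 19)             # candidate TEC values
templates = np.array([dispersed(tec) for tec in tec_grid])

received = dispersed(1.1)                        # waveform with unknown TEC

# Normalized zero-lag correlation against every template at once.
scores = (templates @ received
          / np.linalg.norm(templates, axis=1)
          / np.linalg.norm(received))
best = tec_grid[np.argmax(scores)]               # bracketed TEC estimate
```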

  14. Stream water chemistry in watersheds receiving different atmospheric inputs of H+, NH4+, NO3-, and SO42-

    USGS Publications Warehouse

    Stottlemyer, R.

    1997-01-01

    Weekly precipitation and stream water samples were collected from small watersheds in Denali National Park, Alaska, the Fraser Experimental Forest, Colorado, Isle Royale National Park, Michigan, and the Calumet watershed on the south shore of Lake Superior, Michigan. The objective was to determine if stream water chemistry at the mouth and upstream stations reflected precipitation chemistry across a range of atmospheric inputs of H+, NH4+, NO3-, and SO42-. Volume-weighted precipitation H+, NH4+, NO3-, and SO42- concentrations varied 4- to 8-fold, with concentrations highest at Calumet and lowest in Denali. Stream water chemistry varied among sites, but did not reflect precipitation chemistry. The Denali watershed, Rock Creek, had the lowest precipitation NO3- and SO42- concentrations, but the highest stream water NO3- and SO42- concentrations. Among sites, the ratio of mean monthly upstream NO3- concentration to precipitation NO3- concentration declined across inputs ranging from 0.12 to > 6 kg N ha-1 y-1. Factors possibly accounting for the weak or non-existent signal between stream water and precipitation ion concentrations include rapid modification of meltwater and precipitation chemistry by soil processes, and the presence of unfrozen soils, which permits winter mineralization and nitrification to occur.

  15. Operation Redwing -- Project 5. 2. In-flight participation of a B-52. Report for May-July 1956

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, F.L.

    1985-04-01

    The primary objective of this Project was to obtain measured-energy input and aircraft-response data on an instrumented B-52 aircraft when subjected to the thermal, blast, and gust effects of a nuclear explosion. To accomplish this, analysis was used in selecting the spatial location for the B-52, relative to a detonation, that would result in the desired aircraft inputs and responses. The B-52 was extensively instrumented with the major portion of the instrumentation devoted to measuring aircraft responses. The B-52 participated in nine shots, including one shot which the aircraft aborted just prior to time zero because of Bombing Navigation System difficulties. The reliability of the instrumentation system was between 95% and 100% throughout the test program.

  16. Using multi-objective robust decision making to support seasonal water management in the Chao Phraya River basin, Thailand

    NASA Astrophysics Data System (ADS)

    Riegels, Niels; Jessen, Oluf; Madsen, Henrik

    2016-04-01

    A multi-objective robust decision making approach is demonstrated that supports seasonal water management in the Chao Phraya River basin in Thailand. The approach uses multi-objective optimization to identify a Pareto-optimal set of management alternatives. Ensemble simulation is used to evaluate how each member of the Pareto set performs under a range of uncertain future conditions, and a robustness criterion is used to select a preferred alternative. Data mining tools are then used to identify ranges of uncertain factor values that lead to unacceptable performance for the preferred alternative. The approach is compared to a multi-criteria scenario analysis approach to estimate whether the introduction of additional complexity has the potential to improve decision making. Dry season irrigation in Thailand is managed through non-binding recommendations about the maximum extent of rice cultivation along with incentives for less water-intensive crops. Management authorities lack authority to prevent river withdrawals for irrigation when rice cultivation exceeds recommendations. In practice, this means that water must be provided to irrigate the actual planted area because of downstream municipal water supply requirements and water quality constraints. This results in dry season reservoir withdrawals that exceed planned withdrawals, reducing carryover storage to hedge against insufficient wet season runoff. The dry season planning problem in Thailand can therefore be framed in terms of decisions, objectives, constraints, and uncertainties. Decisions include recommendations about the maximum extent of rice cultivation and incentives for growing less water-intensive crops. Objectives are to maximize benefits to farmers, minimize the risk of inadequate carryover storage, and minimize incentives. Constraints include downstream municipal demands and water quality requirements. 
Uncertainties include the actual extent of rice cultivation, dry season precipitation, and precipitation in the following wet season. The multi-objective robust decision making approach is implemented as follows. First, three baseline simulation models are developed, including a crop water demand model, a river basin simulation model, and a model of the impact of incentives on cropping patterns. The crop water demand model estimates irrigation water demands; the river basin simulation model estimates reservoir drawdown required to meet demands given forecasts of precipitation, evaporation, and runoff; the model of incentive impacts estimates the cost of incentives as a function of marginal changes in rice yields. Optimization is used to find a set of non-dominated alternatives as a function of rice area and incentive decisions. An ensemble of uncertain model inputs is generated to represent uncertain hydrological and crop area forecasts. An ensemble of indicator values is then generated for each of the decision objectives: farmer benefits, end-of-wet-season reservoir storage, and the cost of incentives. A single alternative is selected from the Pareto set using a robustness criterion. Threshold values are defined for each of the objectives to identify ensemble members for which objective values are unacceptable, and the PRIM data mining algorithm is then used to identify input values associated with unacceptable model outcomes.

  17. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
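
    The core idea of the remapper (precomputed look-up tables mapping output coordinates to input pixels, applied once per frame) can be sketched as a NumPy gather. The polar-style transform below is an illustrative choice, not one of the patent's specific transforms, and nearest-neighbor sampling stands in for the collective/interpolative processors.

```python
import numpy as np

H = W = 64
frame = np.arange(H * W, dtype=np.float32).reshape(H, W)   # input image

# Precompute the LUT: for each output (radius, angle) bin, the source
# (row, col) in the input image.
rs, thetas = np.meshgrid(np.linspace(0, H / 2 - 1, H),
                         np.linspace(0, 2 * np.pi, W), indexing="ij")
src_row = np.clip((H / 2 + rs * np.sin(thetas)).astype(int), 0, H - 1)
src_col = np.clip((W / 2 + rs * np.cos(thetas)).astype(int), 0, W - 1)

# Per frame, remapping is then a single gather -- cheap at video rate.
remapped = frame[src_row, src_col]
```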

  18. How Methodologic Differences Affect Results of Economic Analyses: A Systematic Review of Interferon Gamma Release Assays for the Diagnosis of LTBI

    PubMed Central

    Oxlade, Olivia; Pinto, Marcia; Trajman, Anete; Menzies, Dick

    2013-01-01

    Introduction Cost effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEA that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. Methods A systematic review of studies was conducted with particular focus on study quality and the variability in inputs used in models used to assess cost-effectiveness. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategy was developed and used to quantify the impact on predicted results of observed differences of model inputs taken from the studies identified. Results Thirteen studies were ultimately included in the review. Several specific methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality Adjusted Life Year) as the effectiveness outcome, and how authors choose to present and interpret study results. When the IGRA versus TST test strategies were compared using our common decision analysis model, predicted effectiveness largely overlapped. Implications Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies that assessed the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed in order to help authors standardize modelling approaches, inputs, assumptions, and how results are presented and interpreted. PMID:23505412

  19. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud]

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  20. Tele-Autonomous control involving contact. Final Report Thesis; [object localization]

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.

    1990-01-01

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented together with the methods of extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Featured points (point to point matching) and featured unit direction vectors (vector to vector matching) can also be used as inputs, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. The algorithm uses dual number quaternions to represent the position and orientation of an object and uses the least squares optimization method to find an optimal solution for the object's location. The advantage of using this representation is that the method solves for the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors and thus has a better performance on the estimation, both in accuracy and speed, than that of other similar algorithms. The difficulties when the operator is controlling a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays on the signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques such as predictor display and time desynchronization to help overcome these difficulties is then discussed.
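
    The second algorithm's pose estimation uses dual-number quaternions, which are involved to reproduce briefly; as a hedged stand-in, this sketch solves the same point-feature localization problem (find the rotation R and translation t minimizing the summed squared error against redundant features) with the better-known SVD-based Kabsch method.

```python
import numpy as np

def locate(model_pts, sensed_pts):
    """Least-squares rigid transform (R, t) mapping model features onto
    sensed features, via the cross-covariance SVD (Kabsch method)."""
    mc, sc = model_pts.mean(axis=0), sensed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (sensed_pts - sc)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t

# Usage: recover a known pose from redundant (noiseless) point features.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 3))                         # model feature points
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true                            # sensed feature points
R_est, t_est = locate(P, Q)
```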

  1. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
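
    A one-at-a-time perturbation loop of the kind these sensitivity analyses describe can be sketched as follows. The toy rainfall-runoff model and the +/-10% perturbation size are illustrative assumptions; the actual study varied 46 inputs and parameters against five performance indices.

```python
import numpy as np

def simulate(precip_scale=1.0, infil_cap=0.4, recession=0.9):
    """Toy daily rainfall-runoff model: rainfall above an infiltration
    capacity fills a linear reservoir; returns total annual streamflow."""
    rng = np.random.default_rng(42)                  # fixed synthetic forcing
    rain = rng.exponential(0.2, size=365) * precip_scale
    storage, total = 0.0, 0.0
    for r in rain:
        storage = storage * recession + max(r - infil_cap, 0.0)
        total += storage * (1.0 - recession)         # daily outflow
    return total

base = simulate()
sensitivity = {}
for name, nominal in [("precip_scale", 1.0), ("infil_cap", 0.4),
                      ("recession", 0.9)]:
    up = simulate(**{name: nominal * 1.1})           # +10% perturbation
    down = simulate(**{name: nominal * 0.9})         # -10% perturbation
    # Normalized sensitivity: % change in streamflow per % change in input.
    sensitivity[name] = (up - down) / (2 * 0.1 * base)
```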

  2. Common input to motor units of intrinsic and extrinsic hand muscles during two-digit object hold.

    PubMed

    Winges, Sara A; Kornatz, Kurt W; Santello, Marco

    2008-03-01

    Anatomical and physiological evidence suggests that common input to motor neurons of hand muscles is an important neural mechanism for hand control. To gain insight into the synaptic input underlying the coordination of hand muscles, significant effort has been devoted to describing the distribution of common input across motor units of extrinsic muscles. Much less is known, however, about the distribution of common input to motor units belonging to different intrinsic muscles and to intrinsic-extrinsic muscle pairs. To address this void in the literature, we quantified the incidence and strength of near-simultaneous discharges of motor units residing in either the same or different intrinsic hand muscles (m. first dorsal, FDI, and m. first palmar interosseus, FPI) during two-digit object hold. To extend the characterization of common input to pairs of extrinsic muscles (previous work) and pairs of intrinsic muscles (present work), we also recorded electromyographic (EMG) activity from an extrinsic thumb muscle (m. flexor pollicis longus, FPL). Motor-unit synchrony across FDI and FPI was weak (common input strength, CIS, mean +/- SE: 0.17 +/- 0.02). Similarly, motor units from extrinsic-intrinsic muscle pairs were characterized by weak synchrony (FPL-FDI: 0.25 +/- 0.02; FPL-FPI: 0.29 +/- 0.03) although stronger than FDI-FPI. Last, CIS from within FDI and FPI was more than three times stronger (0.70 +/- 0.06 and 0.66 +/- 0.06, respectively) than across these muscles. We discuss present and previous findings within the framework of muscle-pair specific distribution of common input to hand muscles based on their functional role in grasping.
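
    The common input strength (CIS) index counts near-simultaneous motor-unit discharges in excess of chance, normalized by recording duration. This sketch implements that idea for spike-time arrays; the +/-1 ms synchrony window and the simple chance correction are simplifying assumptions rather than the exact cross-correlogram procedure used in the study.

```python
import numpy as np

def cis(train_a, train_b, duration, width=0.001):
    """Synchronous discharges per second in excess of chance, for two
    arrays of spike times (in seconds)."""
    sync = sum(bool(np.any(np.abs(train_b - t) <= width)) for t in train_a)
    # Chance synchrony expected for two independent stationary trains.
    chance = len(train_a) * len(train_b) * 2 * width / duration
    return (sync - chance) / duration

# Usage: two units each firing on half of a shared input's events.
rng = np.random.default_rng(0)
shared = np.sort(rng.uniform(0.0, 60.0, 600))    # common input events
a = shared[rng.random(600) < 0.5]                # unit A fires on half
b = shared[rng.random(600) < 0.5]                # unit B fires on half
strength = cis(a, b, duration=60.0)
```

A unit compared with itself gives the upper bound, since every spike finds a synchronous partner.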

  3. MAVEN - Mars Atmosphere and Volatile EvolutioN Mission

    NASA Technical Reports Server (NTRS)

    Grebowsky, Joseph M.; Jakosky, Bruce M.

    2011-01-01

    NASA's MAVEN mission (to be launched in late 2013) is the first mission to Mars devoted to sampling all of the upper atmosphere neutral and plasma environments, including the well-mixed atmosphere, the exosphere, ionosphere, outer magnetosphere and near-Mars solar wind. It will fill in some measurement gaps remaining from the successful Mars Global Surveyor and the on-going Mars Express missions. The primary science objectives of MAVEN are: 1. Provide a comprehensive picture of the present state of the upper atmosphere and ionosphere of Mars; 2. Understand the processes controlling the present state; and 3. Determine how loss of volatiles to outer space in the present epoch varies with changing solar conditions; EUV, solar wind, and interplanetary magnetic field measurements will provide the varying solar energy inputs into the system. Knowing how these processes respond to the Sun's energy inputs in the current epoch will provide a framework for projecting atmospheric processes back in time to profile Mars' atmospheric evolution and to explore "where the water went." A description will be given of the science objectives, the instruments, and the current status of the project, emphasizing the value of having collaborations between the MAVEN project and the Mars upper atmosphere science community.

  4. Understanding the decision-making environment for people in minimally conscious state.

    PubMed

    Yelden, Kudret; Sargent, Sarah; Samanta, Jo

    2017-04-11

    Patients in minimally conscious state (MCS) show minimal, fluctuating but definitive signs of awareness of themselves and their environments. They may exhibit behaviours ranging from the ability to track objects or people with their eyes, to the making of simple choices which requires the ability to recognise objects and follow simple commands. While patients with MCS have higher chances of further recovery than people in vegetative states, this is not guaranteed and their prognosis is fundamentally uncertain. Therefore, patients with MCS need regular input from healthcare professionals to monitor their progress (or non-progress) and to address their needs for rehabilitation, for the provision of an appropriate environment and equipment. These requirements form a backdrop to the potentially huge variety of ethical-legal dilemmas that may be faced by their families, caregivers and ultimately, the courts. This paper analyses the decision-making environment for people with MCS using data obtained through four focus groups which included the input of 29 senior decision makers in the area. The results of the focus group study are presented and further explored with attention on recurrent and strong themes such as lack of expertise, resource issues, and the influence of families and friends of people with MCS.

  5. Point Analysis in Java applied to histological images of the perforant pathway: a user's account.

    PubMed

    Scorcioni, Ruggero; Wright, Susan N; Patrick Card, J; Ascoli, Giorgio A; Barrionuevo, Germán

    2008-01-01

    The freeware Java tool Point Analysis in Java (PAJ), created to perform 3D point analysis, was tested in an independent laboratory setting. The input data consisted of images of the hippocampal perforant pathway from serial immunocytochemical localizations of the rat brain in multiple views at different resolutions. The low magnification set (x2 objective) comprised the entire perforant pathway, while the high magnification set (x100 objective) allowed the identification of individual fibers. A preliminary stereological study revealed a striking linear relationship between the fiber count at high magnification and the optical density at low magnification. PAJ enabled fast analysis for down-sampled data sets and a friendly interface with automated plot drawings. Noted strengths included the multi-platform support as well as the free availability of the source code, conducive to a broad user base and maximum flexibility for ad hoc requirements. PAJ has great potential to extend its usability by (a) improving its graphical user interface, (b) increasing its input size limit, (c) improving response time for large data sets, and (d) potentially being integrated with other Java graphical tools such as ImageJ.

  6. Multicriteria decision analysis applied to Glen Canyon Dam

    USGS Publications Warehouse

    Flug, M.; Seitz, H.L.H.; Scott, J.F.

    2000-01-01

    Conflicts in water resources exist because river-reservoir systems are managed to optimize traditional benefits (e.g., hydropower and flood control), which are historically quantified in economic terms, whereas natural and environmental resources, including in-stream and riparian resources, are more difficult or impossible to quantify in economic terms. Multicriteria decision analysis provides a quantitative approach to evaluate resources subject to river basin management alternatives. This objective quantification method includes inputs from special interest groups, the general public, and concerned individuals, as well as professionals for each resource considered in a trade-off analysis. Multicriteria decision analysis is applied to resources and flow alternatives presented in the environmental impact statement for Glen Canyon Dam on the Colorado River. A numeric rating and priority-weighting scheme is used to evaluate 29 specific natural resource attributes, grouped into seven main resource objectives, for nine flow alternatives enumerated in the environmental impact statement.
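
    The numeric rating and priority-weighting scheme described above amounts to a weighted-sum aggregation over attributes. A minimal sketch, with invented attributes, weights, and ratings (not values from the Glen Canyon study):

```python
import numpy as np

# Hypothetical example: rate 3 resource attributes (rows) for 2 flow
# alternatives (columns) on a 0-10 scale, then weight by priority.
ratings = np.array([
    [7.0, 4.0],   # hydropower value
    [3.0, 8.0],   # riparian habitat quality
    [5.0, 6.0],   # recreation access
])
weights = np.array([0.5, 0.3, 0.2])  # priorities summing to 1

scores = weights @ ratings           # weighted-sum score per alternative
best = int(np.argmax(scores))        # index of the preferred alternative
```

    Extending this to 29 attributes grouped under seven objectives and nine alternatives only changes the array shapes; the aggregation is the same.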

  7. Evolution of Scientific and Technical Information Distribution

    NASA Technical Reports Server (NTRS)

    Esler, Sandra; Nelson, Michael L.

    1998-01-01

    World Wide Web (WWW) and related information technologies are transforming the distribution of scientific and technical information (STI). We examine 11 recent, functioning digital libraries focused on the distribution of STI publications, including journal articles, conference papers, and technical reports. We introduce four main categories of digital library projects, based on the architecture (distributed vs. centralized) and the contributor (traditional publisher vs. authoring individual/organization). Many digital library prototypes merely automate existing publishing practices or focus solely on digitizing the output of the publishing cycle, without sampling and capturing elements of its input. Still others do not consider for distribution the large body of "gray literature." We address these deficiencies in the current model of STI exchange by suggesting methods for expanding the scope and target of digital libraries: focusing on a greater source of technical publications and using "buckets," an object-oriented construct for grouping logically related information objects, to include holdings other than technical publications.

  8. Implementing AORN recommended practices for product selection.

    PubMed

    Conrardy, Julie A

    2012-06-01

    This article focuses on the revised AORN "Recommended practices for product selection in perioperative practice settings." Hospitals and ambulatory surgery facilities should have protocols in place for product evaluation that includes a multidisciplinary team approach. The process for product evaluation and selection includes gathering information; establishing consistent requirements for product evaluation; performing a financial impact analysis; investigating a plan to standardize products; conducting an environmental impact analysis; determining whether to purchase single-use, reposable, or reusable products or reprocess single-use devices; developing an evaluation process based on objective criteria; and developing and implementing a comprehensive plan to introduce and use new products. Use of an evaluation tool that is based on objective criteria is one way to obtain valuable input during product evaluations. Because of varied roles and experiences, the perioperative RN is an integral member of the product selection committee. Published by Elsevier Inc.

  9. Generalized local emission tomography

    DOEpatents

    Katsevich, Alexander J.

    1998-01-01

    Emission tomography enables the locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for finding the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are input to a local tomography function f_Λ^(Φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(Φ) is determined in a neighborhood of S, and the value of the discontinuity is estimated from this asymptotic behavior, knowing pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from an object are input to a local tomography function f_Λ^(Φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.

  10. Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon

    2016-04-01

    Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite-based estimates offer independent evaluation data such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against simulated spatial patterns and combinations of both types of observations. While discharge-based model calibration typically improves the temporal dynamics of the model, it tends to yield little improvement in the simulated spatial patterns. In contrast, objective functions specifically targeting the spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterization, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows for a change in the spatial distribution of key soil parameters through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control on spatial calibration we introduced three additional parameters to the model. These new parameters are part of an empirical equation to calculate the crop coefficient (Kc) from daily LAI maps and are used to update potential evapotranspiration (PET) as model input.
This is done instead of correcting/updating PET with just a uniform (or aspect-driven) factor as used in the mHM model (version 5.3). We selected the 20 most important parameters out of 53 mHM parameters based on a comprehensive sensitivity analysis (Cuntz et al., 2015). We calibrated 1 km-daily mHM for the Skjern basin in Denmark using the Shuffled Complex Evolution (SCE) algorithm and inputs at different spatial scales, i.e. meteorological data at 10 km and morphological data at 250 m. We used correlation coefficients between monthly observed MODIS data (summer months only), calculated from cloud-free days over the calibration period from 2001 to 2008, and simulated aET from mHM over the same period. Similarly, other metrics, e.g. mapcurves and fraction skill score, are also included in our objective function to assess the co-location of the grid cells. The preliminary results show that multi-objective calibration of mHM against observed streamflow and spatial patterns together does not significantly reduce the spatial errors in aET, although it improves the streamflow simulations. This is a strong signal for further investigation of the multi-parameter regionalization affecting spatial aET patterns and of the weighting of the spatial metrics in the objective function relative to the streamflow metrics.
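
    A spatial-pattern objective of the kind described, combining a map correlation with a discharge metric, can be sketched as follows; the synthetic maps, the NSE value, and the 0.5 weighting are assumptions for illustration, not the study's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
obs_aet = rng.random((20, 20))                   # "observed" aET pattern (synthetic)
sim_aet = obs_aet + 0.1 * rng.random((20, 20))   # simulated pattern with noise

# Spatial-pattern metric: Pearson correlation of the two maps.
r = np.corrcoef(obs_aet.ravel(), sim_aet.ravel())[0, 1]

# Multi-objective function (to minimize): combine the pattern error (1 - r)
# with a streamflow error term (1 - NSE); both weight and NSE are assumed.
streamflow_nse = 0.8
w_spatial = 0.5
objective = w_spatial * (1.0 - r) + (1.0 - w_spatial) * (1.0 - streamflow_nse)
```

    In practice the correlation would be computed per summer month and the mapcurves or fraction-skill-score terms added alongside it.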

  11. Learning to Estimate Dynamical State with Probabilistic Population Codes.

    PubMed

    Makin, Joseph G; Dichter, Benjamin K; Sabes, Philip N

    2015-11-01

    Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, "probabilistic population codes." We show that a recurrent neural network (a modified form of an exponential family harmonium, or EFH) that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.

  12. Learning to Estimate Dynamical State with Probabilistic Population Codes

    PubMed Central

    Sabes, Philip N.

    2015-01-01

    Tracking moving objects, including one’s own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, “probabilistic population codes.” We show that a recurrent neural network—a modified form of an exponential family harmonium (EFH)—that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states. PMID:26540152
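
    The optimal estimator that the network learns to approximate, the Kalman filter, can be written out directly. A minimal position-velocity tracker with assumed noise parameters (not those used in the paper):

```python
import numpy as np

# Linear dynamical system: constant-velocity object, noisy position readings.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
H = np.array([[1.0, 0.0]])              # observe position only
R = np.array([[0.25]])                  # observation noise covariance (assumed)

rng = np.random.default_rng(1)
x_true = np.array([0.0, 1.0])
x_est, P = np.zeros(2), np.eye(2)

for _ in range(50):
    # Simulate the world and a noisy observation of it.
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    # Predict step.
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    # Update step: velocity is inferred although only position is observed.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
```

    The EM algorithm mentioned in the abstract would additionally learn A, Q, H, and R from the observation sequence alone.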

  13. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  14. Input-output analysis and the hospital budgeting process.

    PubMed Central

    Cleverly, W O

    1975-01-01

    Two hospital budget systems, a conventional budget and an input-output budget, are compared to determine how they affect management decisions in pricing, output, planning, and cost control. Analysis of data from a 210-bed not-for-profit hospital indicates that adoption of the input-output budget could cause substantial changes in posted hospital rates in individual departments but probably would have no impact on hospital output determination. The input-output approach promises to be a more accurate system for cost control and planning because, unlike the conventional approach, it generates objective signals for investigating variances of expenses from budgeted levels. PMID:1205865
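
    The "objective signal" idea can be illustrated with a flexible (volume-adjusted) budget: a department's variance is measured against the expense its output volume predicts, not against a fixed annual figure. The cost figures below are hypothetical:

```python
# Sketch of a volume-adjusted variance signal; all numbers are invented.
def flexible_budget_variance(fixed_cost, unit_cost, volume, actual_expense):
    # Expected expense scales with measured output volume.
    expected = fixed_cost + unit_cost * volume
    return actual_expense - expected

# Radiology: budgeted $20k fixed + $15 per procedure, 1,200 procedures done.
var = flexible_budget_variance(20_000, 15.0, 1_200, 39_500)
# A positive variance flags spending above the volume-adjusted budget
# and so warrants investigation, regardless of the original fixed budget.
```
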

  15. When a Dog Has a Pen for a Tail: The Time Course of Creative Object Processing

    ERIC Educational Resources Information Center

    Wang, Botao; Duan, Haijun; Qi, Senqing; Hu, Weiping; Zhang, Huan

    2017-01-01

    Creative objects differ from ordinary objects in that they are created by human beings to contain novel, creative information. Previous research has demonstrated that ordinary object processing involves both a perceptual process for analyzing different features of the visual input and a higher-order process for evaluating the relevance of this…

  16. Modal Logics with Counting

    NASA Astrophysics Data System (ADS)

    Areces, Carlos; Hoffmann, Guillaume; Denis, Alexandre

    We present a modal language that includes explicit operators to count the number of elements that a model might include in the extension of a formula, and we discuss how this logic has been previously investigated under different guises. We show that the language is related to graded modalities and to hybrid logics. We illustrate a possible application of the language to the treatment of plural objects and queries in natural language. We investigate the expressive power of this logic via bisimulations, discuss the complexity of its satisfiability problem, define a new reasoning task that retrieves the cardinality bound of the extension of a given input formula, and provide an algorithm to solve it.

  17. Techniques for optically compressing light intensity ranges

    DOEpatents

    Rushford, Michael C.

    1989-01-01

    A pin hole camera assembly for use in viewing an object having a relatively large light intensity range, for example a crucible containing molten uranium in an atomic vapor laser isotope separator (AVLIS) system is disclosed herein. The assembly includes means for optically compressing the light intensity range appearing at its input sufficient to make it receivable and decipherable by a standard video camera. A number of different means for compressing the intensity range are disclosed. These include the use of photogray glass, the use of a pair of interference filters, and the utilization of a new liquid crystal notch filter in combination with an interference filter.

  18. Techniques for optically compressing light intensity ranges

    DOEpatents

    Rushford, M.C.

    1989-03-28

    A pin hole camera assembly for use in viewing an object having a relatively large light intensity range, for example a crucible containing molten uranium in an atomic vapor laser isotope separator (AVLIS) system is disclosed herein. The assembly includes means for optically compressing the light intensity range appearing at its input sufficient to make it receivable and decipherable by a standard video camera. A number of different means for compressing the intensity range are disclosed. These include the use of photogray glass, the use of a pair of interference filters, and the utilization of a new liquid crystal notch filter in combination with an interference filter. 18 figs.

  19. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. The type of optimization problem changes depending on the nature of the application, the sensing scenario, and the input/output of the problem. To address the differing nature of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, a plethora of optimization solution types exists. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints that are considered when formulating optimization problems in wireless sensor networks. Given this article's multi-faceted coverage of multi-objective optimization, we expect it to open up new avenues of research in the area of multi-objective optimization for wireless sensor networks.
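
    The tradeoff solutions the decision maker chooses among form the non-dominated (Pareto) set. A minimal sketch with two conflicting objectives, say network lifetime (maximize) versus latency (minimize), using invented candidate deployments:

```python
# Hypothetical candidate deployments as (lifetime, latency) pairs.
candidates = [(10, 3), (12, 7), (9, 2), (12, 4), (8, 8)]

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and strictly
    # better on at least one (lifetime higher is better, latency lower).
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

# Keep only candidates that no other candidate dominates.
pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates)]
```

    Every member of `pareto` is a defensible choice; picking among them is the decision maker's weighting of the objectives.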

  20. A Hybrid Parachute Simulation Environment for the Orion Parachute Development Project

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    A parachute simulation environment (PSE) has been developed that aims to take advantage of legacy parachute simulation codes and modern object-oriented programming techniques. This hybrid simulation environment provides the parachute analyst with a natural and intuitive way to construct simulation tasks while preserving the pedigree and authority of established parachute simulations. NASA currently employs four simulation tools for developing and analyzing air-drop tests performed by the CEV Parachute Assembly System (CPAS) Project. These tools were developed at different times, in different languages, and with different capabilities in mind. As a result, each tool has a distinct interface and set of inputs and outputs. However, regardless of the simulation code that is most appropriate for the type of test, engineers typically perform similar tasks for each drop test such as prediction of loads, assessment of altitude, and sequencing of disreefs or cut-aways. An object-oriented approach to simulation configuration allows the analyst to choose models of real physical test articles (parachutes, vehicles, etc.) and sequence them to achieve the desired test conditions. Once configured, these objects are translated into traditional input lists and processed by the legacy simulation codes. This approach minimizes the number of simulation inputs that the engineer must track while configuring an input file. An object-oriented approach to simulation output allows a common set of post-processing functions to perform routine tasks such as plotting and timeline generation with minimal sensitivity to the simulation that generated the data. Flight test data may also be translated into the common output class to simplify test reconstruction and analysis.

  1. A cognitive approach to vision for a mobile robot

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.
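
    The error-mask step, comparing real and virtual camera images with local Gaussians, can be sketched as a smoothed squared difference; the image sizes, blur parameters, and 0.1 threshold are assumptions for illustration, not the system's actual values:

```python
import numpy as np

def blur(img, sigma=1.0, radius=2):
    # Separable Gaussian blur: convolve each row, then each column.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)

rng = np.random.default_rng(2)
real = rng.random((32, 32))          # stand-in for the real camera frame
virtual = real.copy()                # stand-in for the virtual camera render
virtual[10:14, 10:14] += 1.0         # an unmodeled object: render disagrees here

# Error mask: locally smoothed squared difference, thresholded; its peak
# marks the region most in need of the next fixation.
err = blur((real - virtual) ** 2)
mask = err > 0.1
iy, ix = np.unravel_index(np.argmax(err), err.shape)
```
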

  2. Classification Objects, Ideal Observers & Generative Models

    ERIC Educational Resources Information Center

    Olman, Cheryl; Kersten, Daniel

    2004-01-01

    A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual…

  3. Digital implementation of a neural network for imaging

    NASA Astrophysics Data System (ADS)

    Wood, Richard; McGlashan, Alex; Yatulis, Jay; Mascher, Peter; Bruce, Ian

    2012-10-01

    This paper outlines the design and testing of a digital imaging system that utilizes an artificial neural network with unsupervised and supervised learning to convert streaming input (real time) image space into parameter space. The primary objective of this work is to investigate the effectiveness of using a neural network to significantly reduce the information density of streaming images so that objects can be readily identified by a limited set of primary parameters and act as an enhanced human-machine interface (HMI). Many applications are envisioned including use in biomedical imaging, anomaly detection and as an assistive device for the visually impaired. A digital circuit was designed and tested using a Field Programmable Gate Array (FPGA) and an off-the-shelf digital camera. Our results indicate that the networks can be readily trained when subject to limited sets of objects such as the alphabet. We can also separate limited object sets with rotational and positional invariance. The results also show that limited visual fields form with only local connectivity.

  4. Multi-objective thermodynamic optimisation of supercritical CO2 Brayton cycles integrated with solar central receivers

    NASA Astrophysics Data System (ADS)

    Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes

    2018-01-01

    In this paper, optimisation of supercritical CO2 (S-CO2) Brayton cycles integrated with a solar receiver, which provides heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained by using multi-objective thermodynamic optimisation. Four different sets, each including two objective parameters, were considered individually. The individual multi-objective optimisations were performed using the Non-dominated Sorting Genetic Algorithm. The effect of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency was analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached a maximum between 700°C and 750°C and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.

  5. Development of a hydraulic model of the human systemic circulation

    NASA Technical Reports Server (NTRS)

    Sharp, M. K.; Dharmalingham, R. K.

    1999-01-01

    Physical and numeric models of the human circulation are constructed for a number of objectives, including studies and training in physiologic control, interpretation of clinical observations, and testing of prosthetic cardiovascular devices. For many of these purposes it is important to quantitatively validate the dynamic response of the models in terms of the input impedance (Z = oscillatory pressure/oscillatory flow). To address this need, the authors developed an improved physical model. Using a computer study, the authors first identified the configuration of lumped parameter elements in a model of the systemic circulation; the result was a good match with human aortic input impedance with a minimum number of elements. Design, construction, and testing of a hydraulic model analogous to the computer model followed. Numeric results showed that a three element model with two resistors and one compliance produced reasonable matching without undue complication. The subsequent analogous hydraulic model included adjustable resistors incorporating a sliding plate to vary the flow area through a porous material and an adjustable compliance consisting of a variable-volume air chamber. The response of the hydraulic model compared favorably with other circulation models.
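
    The three-element model with two resistors and one compliance is the classic windkessel, whose input impedance has a closed form. A sketch with illustrative parameter values, not those fitted in the paper:

```python
import numpy as np

# Three-element windkessel: characteristic resistance Rc in series with a
# parallel peripheral resistance Rp and compliance C (values are assumed,
# in mmHg·s/mL and mL/mmHg, chosen only for illustration).
Rc, Rp, C = 0.05, 1.0, 1.5

def input_impedance(freq_hz):
    # Z(w) = Rc + Rp / (1 + j*w*Rp*C), the oscillatory pressure/flow ratio.
    w = 2 * np.pi * freq_hz
    return Rc + Rp / (1 + 1j * w * Rp * C)

Z0 = input_impedance(0.0)    # DC limit: total resistance Rc + Rp
Zhi = input_impedance(20.0)  # high-frequency limit: approaches Rc
```

    Matching this Z(w) against measured aortic input impedance is the quantitative validation step the abstract describes.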

  6. An Intelligent Crop Planning Tool for Controlled Ecological Life Support Systems

    NASA Technical Reports Server (NTRS)

    Whitaker, Laura O.; Leon, Jorge

    1996-01-01

    This paper describes a crop planning tool developed for the Controlled Ecological Life Support Systems (CELSS) project which is in the research phases at various NASA facilities. The Crop Planning Tool was developed to assist in the understanding of the long term applications of a CELSS environment. The tool consists of a crop schedule generator as well as a crop schedule simulator. The importance of crop planning tools such as the one developed is discussed. The simulator is outlined in detail while the schedule generator is touched upon briefly. The simulator consists of data inputs, plant and human models, and various other CELSS activity models such as food consumption and waste regeneration. The program inputs such as crew data and crop states are discussed. References are included for all nominal parameters used. Activities including harvesting, planting, plant respiration, and human respiration are discussed using mathematical models. Plans provided to the simulator by the plan generator are evaluated for their 'fitness' to the CELSS environment with an objective function based upon daily reservoir levels. Sample runs of the Crop Planning Tool and future needs for the tool are detailed.

  7. Action-specific effects in aviation: what determines judged runway size?

    PubMed

    Gray, Rob; Navia, José Antonio; Allsop, Jonathan

    2014-01-01

    Several recent studies have shown that the performance of a skill that involves acting on a goal object can influence one's judgment of the size of that object. The present study investigated this effect in an aviation context. Novice pilots were asked to perform a series of visual approach and landing manoeuvres in a flight simulator. After each landing, participants then performed a task in which runway size was judged for different simulated altitudes. Gaze behaviour and control stick kinematics were also analyzed. There were significant relationships between judged runway size and multiple action-related variables including touchdown velocity, time fixating the runway, and the magnitude and frequency of control inputs. These findings suggest that the relationship between the perception of a target object and action is not solely determined by performance success or failure but rather involves multiple variables that reflect the actor's ability.

  8. CosmoQuest Transient Tracker: Opensource Photometry & Astrometry software

    NASA Astrophysics Data System (ADS)

    Myers, Joseph L.; Lehan, Cory; Gay, Pamela; Richardson, Matthew; CosmoQuest Team

    2018-01-01

    CosmoQuest is moving from online citizen science to observational astronomy with the creation of Transient Tracker. This open source software is designed to identify asteroids and other transient/variable objects in image sets. Transient Tracker's features in final form will include: astrometric and photometric solutions, identification of moving/transient objects, identification of variable objects, and lightcurve analysis. In this poster we present our initial v0.1 release and seek community input. This software builds on the existing NIH-funded ImageJ libraries. Creation of this suite of open source image-manipulation routines is led by Wayne Rasband, and it is released primarily under the MIT license. In this release, we build on these libraries to add source identification for point and point-like sources and to do astrometry. Our materials are released under the Apache 2.0 license on github (http://github.com/CosmoQuestTeam) and documentation can be found at http://cosmoquest.org/TransientTracker.
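
    Point-source identification of the kind planned for Transient Tracker can be sketched as thresholding above the background followed by keeping local maxima; the synthetic frame and 5-sigma cut below are assumptions for illustration, not the project's actual pipeline:

```python
import numpy as np

# Toy frame: flat background plus two point-like sources (Gaussian PSFs).
yy, xx = np.mgrid[0:64, 0:64]

def psf(y0, x0, amp=100.0, s=1.5):
    return amp * np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * s * s))

frame = 10.0 + psf(20, 15) + psf(45, 50)

# Detect pixels well above the background level, then keep only pixels that
# are local maxima of their 3x3 neighbourhood as source centroids.
thresh = frame > frame.mean() + 5 * frame.std()
sources = [(y, x) for y, x in zip(*np.nonzero(thresh))
           if frame[y, x] == frame[max(0, y-1):y+2, max(0, x-1):x+2].max()]
```
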

  9. Object-oriented sequence analysis: SCL--a C++ class library.

    PubMed

    Vahrson, W; Hermann, K; Kleffe, J; Wittig, B

    1996-04-01

    SCL (Sequence Class Library) is a class library written in the C++ programming language. Designed using object-oriented programming principles, SCL consists of classes of objects performing tasks typically needed for analyzing DNA or protein sequences. Among them are very flexible sequence classes, classes accessing databases in various formats, classes managing collections of sequences, as well as classes performing higher-level tasks like calculating a pairwise sequence alignment. SCL also includes classes that provide general programming support, like a dynamically growing array, sets, matrices, strings, classes performing file input/output, and utilities for error handling. By providing these components, SCL fosters an explorative programming style: experimenting with algorithms and alternative implementations is encouraged rather than punished. A description of SCL's overall structure as well as an overview of its classes is given. Important aspects of the work with SCL are discussed in the context of a sample program.
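
    As an example of the higher-level tasks mentioned, a pairwise global alignment score (Needleman-Wunsch) can be computed with a small dynamic program. This is a Python sketch of the algorithm only, not SCL's C++ API; the scoring parameters are arbitrary:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    # One row of the DP table at a time: prev[j] holds the best score for
    # aligning a[:i-1] with b[:j].
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(diag, prev[j] + gap, cur[j - 1] + gap))
        prev = cur
    return prev[-1]

score = nw_score("GATTACA", "GCATGCU")
```

    A full alignment class would also keep a traceback to recover the aligned sequences, not just the score.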

  10. Multidisciplinary conceptual design optimization of aircraft using a sound-matching-based objective function

    NASA Astrophysics Data System (ADS)

    Diez, Matteo; Iemma, Umberto

    2012-05-01

    The article presents a novel approach to include community noise considerations based on sound quality in the Multidisciplinary Conceptual Design Optimization (MCDO) of civil transportation aircraft. The novelty stems from the use of an unconventional objective function, defined as a measure of the difference between the noise emission of the aircraft under analysis and a reference 'weakly annoying' noise, the target sound. The minimization of such a merit factor yields an aircraft concept with a noise signature as close as possible to the given target. The reference sound is one of the outcomes of the European Research Project SEFA (Sound Engineering For Aircraft, VI Framework Programme, 2004-2007), and used here as an external input. The aim of the present work is to address the definition and the inclusion of the sound-matching-based objective function in the MCDO of aircraft.

  11. Implementation of the century ecosystem model for an eroding hillslope in Mississippi

    USGS Publications Warehouse

    Sharpe, Jodie; Harden, Jennifer W.; Dabney, Seth M.; Ojima, Dennis; Parton, William

    1998-01-01

    The objective of this study was to parameterize and implement the Century ecosystem model for an eroding, cultivated site near Senatobia, in Panola County, Mississippi, in order to understand the loss and replacement of soil organic carbon on an eroding cropland. The sites chosen for this study are located on highly eroded loess soils where the USDA has conducted studies on rates of soil erosion. We used USDA sediment data from the study site and historical erosion estimates from the nearby area as model input for soil loss; in addition, inputs for parameterization include particle-size data, climate data, and rainfall/runoff data that were collected and reported in companion papers. A cropping scenario was implemented to simulate a research site at USDA watershed 2 at the Nelson Farm. Model output was compiled for comparison with data collected and reported in companion reports; interpretive comparisons are reported in Harden et al., in press.

  12. Hybrid powertrain system

    DOEpatents

    Hughes, Douglas A.

    2006-08-01

    A powertrain system is provided that includes a first prime mover and change-gear transmission having a first input shaft and a second input shaft. A twin clutch is disposed between the first prime mover and the transmission. The twin clutch includes a first main clutch positioned between the first prime mover and the first input shaft and a second main clutch positioned between the first prime mover and the second input shaft. The powertrain system also includes a second prime mover operably connected to one of the first and second input shafts.

  13. Hybrid powertrain system

    DOEpatents

    Hughes, Douglas A.

    2007-09-25

    A powertrain system is provided that includes a first prime mover and change-gear transmission having a first input shaft and a second input shaft. A twin clutch is disposed between the first prime mover and the transmission. The twin clutch includes a first main clutch positioned between the first prime mover and the first input shaft and a second main clutch positioned between the first prime mover and the second input shaft. The powertrain system also includes a second prime mover operably connected to one of the first and second input shafts.

  14. Continuous-Time Bilinear System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2003-01-01

    The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.

  15. Tool actuation and force feedback on robot-assisted microsurgery system

    NASA Technical Reports Server (NTRS)

    Das, Hari (Inventor); Ohm, Tim R. (Inventor); Boswell, Curtis D. (Inventor); Steele, Robert D. (Inventor)

    2002-01-01

    An input control device with force sensors is configured to sense hand movements of a surgeon performing a robot-assisted microsurgery. The sensed hand movements actuate a mechanically decoupled robot manipulator. A microsurgical manipulator, attached to the robot manipulator, is activated to move small objects and perform microsurgical tasks. A force-feedback element coupled to the robot manipulator and the input control device provides the input control device with an amplified sense of touch in the microsurgical manipulator.

  16. Nitrogen balance in response to dryland crop rotations and cultural practices

    USDA-ARS?s Scientific Manuscript database

    Nitrogen balance provides a measure of agroecosystem performance and environmental sustainability by taking into account N inputs and outputs and N retention in the soil. The objective of this study was to evaluate N balance based on N inputs and outputs and soil N sequestration after 7 yr in re...

  17. Natural Resource Information System. Remote Sensing Studies.

    ERIC Educational Resources Information Center

    Leachtenauer, J.; And Others

    A major design objective of the Natural Resource Information System entailed the use of remote sensing data as an input to the system. Potential applications of remote sensing data were therefore reviewed and available imagery interpreted to provide input to a demonstration data base. A literature review was conducted to determine the types and…

  18. Implicit kernel sparse shape representation: a sparse-neighbors-based object segmentation framework.

    PubMed

    Yao, Jincao; Yu, Huimin; Hu, Roland

    2017-01-01

    This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.

  19. Apparatus for Direct Optical Fiber Through-Lens Illumination of Microscopy or Observational Objects

    NASA Technical Reports Server (NTRS)

    Kadogawa, Hiroshi (Inventor)

    2001-01-01

    In one embodiment of the invention, a microscope or other observational apparatus, comprises a hollow tube, a lens mounted to the tube, a light source and at least one flexible optical fiber having an input end and an output end. The input end is positioned to receive light from the light source, and the output end is positioned within the tube so as to directly project light along a straight path to the lens to illuminate an object to be viewed. The path of projected light is uninterrupted and free of light deflecting elements. By passing the light through the lens, the light can be diffused or otherwise defocused to provide more uniform illumination across the surface of the object, increasing the quality of the image of the object seen by the viewer. The direct undeflected and uninterrupted projection of light, without change of direction, eliminates the need for light-deflecting elements, such as beam-splitters, mirrors, prisms, or the like, to direct the projected light towards the object.

  20. Economic analysis of electronic waste recycling: modeling the cost and revenue of a materials recovery facility in California.

    PubMed

    Kang, Hai-Yong; Schoenung, Julie M

    2006-03-01

    The objectives of this study are to identify the various techniques used for treating electronic waste (e-waste) at material recovery facilities (MRFs) in the state of California and to investigate the costs and revenue drivers for these techniques. The economics of a representative e-waste MRF are evaluated by using technical cost modeling (TCM). MRFs are a critical element in the infrastructure being developed within the e-waste recycling industry. At an MRF, collected e-waste can become marketable output products including resalable systems/components and recyclable materials such as plastics, metals, and glass. TCM has two main constituents, inputs and outputs. Inputs are process-related and economic variables, which are directly specified in each model. Inputs can be divided into two parts: inputs for cost estimation and for revenue estimation. Outputs are the results of modeling and consist of costs and revenues, distributed by unit operation, cost element, and revenue source. The results of the present analysis indicate that the largest cost driver for the operation of the defined California e-waste MRF is the materials cost (37% of total cost), which includes the cost to outsource the recycling of the cathode ray tubes (CRTs) ($0.33/kg); the second largest cost driver is labor cost (28% of total cost without accounting for overhead). The other cost drivers are transportation, building, and equipment costs. The most costly unit operation is cathode ray tube glass recycling, and the next are sorting, collecting, and dismantling. The largest revenue source is the fee charged to the customer; metal recovery is the second largest revenue source.
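The input/output structure of a technical cost model can be sketched very simply: cost elements and revenue sources go in, and shares by element plus a net margin come out. All figures below are illustrative placeholders, not the study's actual inputs:

```python
def tcm_summary(costs, revenues):
    """Return each cost element's share of total cost and the net margin.
    Sketch of a TCM output step; the real model distributes costs by
    unit operation as well."""
    total_cost = sum(costs.values())
    total_revenue = sum(revenues.values())
    shares = {k: v / total_cost for k, v in costs.items()}
    return shares, total_revenue - total_cost

# Illustrative $/kg figures (shares chosen to echo the abstract's 37%/28%).
costs = {"materials": 0.37, "labor": 0.28, "transport": 0.15,
         "building": 0.12, "equipment": 0.08}
revenues = {"customer_fee": 0.70, "metal_recovery": 0.40}
shares, margin = tcm_summary(costs, revenues)
```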

  1. High dynamic range charge measurements

    DOEpatents

    De Geronimo, Gianluigi

    2012-09-04

    A charge amplifier for use in radiation sensing includes an amplifier, at least one switch, and at least one capacitor. The switch selectively couples the input of the switch to one of at least two voltages. The capacitor is electrically coupled in series between the input of the amplifier and the input of the switch. The capacitor is electrically coupled to the input of the amplifier without a switch coupled therebetween. A method of measuring charge in radiation sensing includes selectively diverting charge from an input of an amplifier to an input of at least one capacitor by selectively coupling an output of the at least one capacitor to one of at least two voltages. The input of the at least one capacitor is operatively coupled to the input of the amplifier without a switch coupled therebetween. The method also includes calculating a total charge based on a sum of the amplified charge and the diverted charge.

  2. System and method to determine electric motor efficiency nonintrusively

    DOEpatents

    Lu, Bin [Kenosha, WI; Habetler, Thomas G [Snellville, GA; Harley, Ronald G [Lawrenceville, GA

    2011-08-30

    A system and method for nonintrusively determining electric motor efficiency includes a processor programmed to, while the motor is in operation, determine a plurality of stator input currents, electrical input data, a rotor speed, a value of stator resistance, and an efficiency of the motor based on the determined rotor speed, the value of stator resistance, the plurality of stator input currents, and the electrical input data. The determination of the rotor speed is based on one of the input power and the plurality of stator input currents. The determination of the value of the stator resistance is based on at least one of a horsepower rating and a combination of the plurality of stator input currents and the electrical input data. The electrical input data includes at least one of an input power and a plurality of stator input voltages.

  3. 'spup' - an R package for uncertainty propagation in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2016-04-01

    Computer models have become a crucial tool in engineering and the environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including for case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, to model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy-to-apply tool that can be used in multi-disciplinary research and model-based decision support.
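The Monte Carlo propagation at the core of this approach can be sketched generically (in Python rather than R; the toy model and input distributions below are assumptions for illustration, not spup's API): sample each uncertain input from its distribution, run the model once per realization, and summarize the resulting output distribution:

```python
import random

def propagate(model, samplers, n=10000, seed=42):
    """Monte Carlo uncertainty propagation: draw uncertain inputs,
    run the model, and return the output sample."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        args = {name: draw(rng) for name, draw in samplers.items()}
        outputs.append(model(**args))
    return outputs

# Toy model: runoff = rain * coeff, with two uncertain inputs.
samplers = {
    "rain": lambda r: r.gauss(100.0, 10.0),    # mm, Gaussian
    "coeff": lambda r: r.uniform(0.2, 0.4),    # runoff coefficient
}
runoff = propagate(lambda rain, coeff: rain * coeff, samplers)
mean_runoff = sum(runoff) / len(runoff)
```

Stratified or Latin hypercube sampling (as spup offers) would replace the plain random draws to reduce the number of realizations needed for a given accuracy.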

  4. The impact of visual sequencing of graphic symbols on the sentence construction output of children who have acquired language.

    PubMed

    Alant, Erna; du Plooy, Amelia; Dada, Shakila

    2007-01-01

    Although the sequence of graphic or pictorial symbols displayed on a communication board can have an impact on the language output of children, very little research has been conducted to describe this. Research in this area is particularly relevant for prioritising the importance of specific visual and graphic features in providing more effective and user-friendly access to communication boards. This study is concerned with understanding the impact of specific sequences of graphic symbol input on the graphic and spoken output of children who have acquired language. Forty participants were divided into two comparable groups. Each group was exposed to graphic symbol input with a certain word order sequence. The structure of input was either the typical English word order sequence Subject-Verb-Object (SVO) or the word order sequence Subject-Object-Verb (SOV). Both input groups had to answer six questions using graphic output as well as speech. The findings indicated that there are significant differences in the PCS graphic output patterns of children who are exposed to graphic input in the SOV and SVO sequences. Furthermore, the output produced in the graphic mode differed considerably from the output produced in the spoken mode. Clinical implications of these findings are discussed.

  5. OpenDrift - an open source framework for ocean trajectory modeling

    NASA Astrophysics Data System (ADS)

    Dagestad, Knut-Frode; Breivik, Øyvind; Ådlandsvik, Bjørn

    2016-04-01

    We will present a new, open source tool for modeling the trajectories and fate of particles or substances (Lagrangian Elements) drifting in the ocean, or even in the atmosphere. The software is named OpenDrift, and has been developed at Norwegian Meteorological Institute in cooperation with Institute of Marine Research. OpenDrift is a generic framework written in Python, and is openly available at https://github.com/knutfrode/opendrift/. The framework is modular with respect to three aspects: (1) obtaining input data, (2) the transport/morphological processes, and (3) exporting of results to file. Modularity is achieved through well defined interfaces between components, and use of a consistent vocabulary (CF conventions) for naming of variables. Modular input implies that it is not necessary to preprocess input data (e.g. currents, wind and waves from Eulerian models) to a particular file format. Instead "reader modules" can be written/used to obtain data directly from any original source, including files or through web based protocols (e.g. OPeNDAP/Thredds). Modularity of processes implies that a model developer may focus on the geophysical processes relevant for the application of interest, without needing to consider technical tasks such as reading, reprojecting, and colocating input data, rotation and scaling of vectors and model output. We will show a few example applications of using OpenDrift for predicting drifters, oil spills, and search and rescue objects.
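The reader/process modularity described above can be sketched as follows. This is a Python illustration in OpenDrift's spirit; the class and method names are invented for the sketch and are not OpenDrift's actual API:

```python
class ConstantCurrentReader:
    """Stand-in for a reader module that would normally fetch currents
    from a file or an OPeNDAP/Thredds server (illustrative, not
    OpenDrift's reader interface)."""
    def __init__(self, u, v):
        self.u, self.v = u, v

    def get_velocity(self, lon, lat, time):
        return self.u, self.v


def advect(elements, reader, dt, steps):
    """Forward-Euler drift of (lon, lat) elements under the reader's
    velocity field. The transport process only talks to the reader's
    interface, never to the underlying data source."""
    for _ in range(steps):
        elements = [
            (lon + reader.get_velocity(lon, lat, None)[0] * dt,
             lat + reader.get_velocity(lon, lat, None)[1] * dt)
            for lon, lat in elements
        ]
    return elements
```

Because the transport step depends only on `get_velocity`, swapping a file-backed or OPeNDAP-backed reader for the constant one requires no change to the drift model, which is the design point the framework makes.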

  6. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  7. A Novel Active Imaging Model to Design Visual Systems: A Case of Inspection System for Specular Surfaces

    PubMed Central

    Azorin-Lopez, Jorge; Fuster-Guillo, Andres; Saval-Calvo, Marcelo; Mora-Mora, Higinio; Garcia-Chamizo, Juan Manuel

    2017-01-01

    Visual information is a well-known input from many kinds of sensors. However, most perception problems are modeled and tackled individually. It is necessary to provide a general imaging model that allows us to parametrize different input systems, as well as their problems and possible solutions. In this paper, we present an active vision model that considers the imaging system as a whole (including the camera, the lighting system, and the object to be perceived) in order to propose solutions for the perception problems that automated visual systems present. As a concrete case study, we instantiate the model in a real and still-challenging application: automated visual inspection. It is one of the most widely used quality control systems for detecting defects on manufactured objects, but it presents problems for specular products. We model these perception problems, taking into account the environmental conditions and camera parameters that allow a system to properly perceive the specific object characteristics needed to determine defects on surfaces. The validation of the model has been carried out using simulations, providing an efficient way to perform a large set of tests (different environmental conditions and camera parameters) as a step prior to experimentation in real manufacturing environments, which is more complex in terms of instrumentation and more expensive. Results prove the success of the model, adjusting scale, viewpoint, and lighting conditions to detect structural and color defects on specular surfaces. PMID:28640211

  8. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose input parameters that preserve the physics being simulated. To simulate real-world processes effectively, the model's output must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulated outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity input parameters. Results of parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods, such as genetic optimization and mesh-based convergence, to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
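The calibration loop that such an interface automates can be sketched as follows, with a brute-force parameter sweep standing in for Dakota's optimizers and a toy linear model standing in for the heat flow model (all names and values below are illustrative assumptions):

```python
def objective(k, model, observed):
    """Sum of squared errors between simulated and observed values."""
    return sum((model(k, x) - y) ** 2 for x, y in observed)

def grid_search(model, observed, lo, hi, n=1001):
    """Brute-force 1-D sweep: evaluate the objective on a grid and keep
    the best parameter. (Stand-in for gradient-based or genetic methods.)"""
    best_k, best_err = None, float("inf")
    for i in range(n):
        k = lo + (hi - lo) * i / (n - 1)
        err = objective(k, model, observed)
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# Toy calibration target: 'temperature' = k * depth, with true k = 2.0.
observed = [(x, 2.0 * x) for x in range(1, 6)]
k_hat = grid_search(lambda k, x: k * x, observed, 0.0, 5.0)
```

A grid evaluation like this also reveals whether the objective has a single minimum, which is the check the abstract uses to decide between gradient-based and genetic optimization.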

  9. The prioritisation of invasive alien plant control projects using a multi-criteria decision model informed by stakeholder input and spatial data.

    PubMed

    Forsyth, G G; Le Maitre, D C; O'Farrell, P J; van Wilgen, B W

    2012-07-30

    Invasions by alien plants are a significant threat to the biodiversity and functioning of ecosystems and the services they provide. The South African Working for Water program was established to address this problem. It needs to formulate objective and transparent priorities for clearing in the face of multiple and sometimes conflicting demands. This study used the analytic hierarchy process (a multi-criteria decision support technique) to develop and rank criteria for prioritising alien plant control operations in the Western Cape, South Africa. Stakeholder workshops were held to identify a goal and criteria and to conduct pair-wise comparisons to weight the criteria with respect to invasive alien plant control. The combination of stakeholder input (to develop decision models) with data-driven model solutions enabled us to include many alternatives (water catchments), that would otherwise not have been feasible. The most important criteria included the capacity to maintain gains made through control operations, the potential to enhance water resources and conserve biodiversity, and threats from priority invasive alien plant species. We selected spatial datasets and used them to generate weights that could be used to objectively compare alternatives with respect to agreed criteria. The analysis showed that there are many high priority catchments which are not receiving any funding and low priority catchments which are receiving substantial allocations. Clearly, there is a need for realigning priorities, including directing sufficient funds to the highest priority catchments to provide effective control. This approach provided a tractable, consensus-based solution that can be used to direct clearing operations. Copyright © 2012 Elsevier Ltd. All rights reserved.
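The analytic hierarchy process step, deriving criterion weights from stakeholders' pairwise comparisons via the principal eigenvector, can be sketched as follows. The comparison values below are illustrative, not the study's elicited judgments:

```python
def ahp_weights(matrix, iters=100):
    """Principal-eigenvector weights of a pairwise comparison matrix,
    computed by the power method with renormalization each step."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    return w

# Three criteria (e.g. maintain gains, enhance water, conserve biodiversity),
# with reciprocal pairwise judgments on Saaty's 1-9 style scale.
pairwise = [
    [1.0,   2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)
```

The resulting weights are then multiplied against per-catchment scores from the spatial datasets to rank the alternatives.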

  10. Hindsight Bias.

    PubMed

    Roese, Neal J; Vohs, Kathleen D

    2012-09-01

    Hindsight bias occurs when people feel that they "knew it all along," that is, when they believe that an event is more predictable after it becomes known than it was before it became known. Hindsight bias embodies any combination of three aspects: memory distortion, beliefs about events' objective likelihoods, or subjective beliefs about one's own prediction abilities. Hindsight bias stems from (a) cognitive inputs (people selectively recall information consistent with what they now know to be true and engage in sensemaking to impose meaning on their own knowledge), (b) metacognitive inputs (the ease with which a past outcome is understood may be misattributed to its assumed prior likelihood), and (c) motivational inputs (people have a need to see the world as orderly and predictable and to avoid being blamed for problems). Consequences of hindsight bias include myopic attention to a single causal understanding of the past (to the neglect of other reasonable explanations) as well as general overconfidence in the certainty of one's judgments. New technologies for visualizing and understanding data sets may have the unintended consequence of heightening hindsight bias, but an intervention that encourages people to consider alternative causal explanations for a given outcome can reduce hindsight bias. © The Author(s) 2012.

  11. Life comparative analysis of energy consumption and CO₂ emissions of different building structural frame types.

    PubMed

    Kim, Sangyong; Moon, Joon-Ho; Shin, Yoonseok; Kim, Gwang-Hee; Seo, Deok-Seok

    2013-01-01

    The objective of this research is to quantitatively measure and compare the environmental load and construction cost of different structural frame types. Construction cost also accounts for the costs of CO₂ emissions of input materials. The choice of structural frame type is a major consideration in construction, as this element represents about 33% of total building construction costs. In this research, four constructed buildings were analyzed, with these having either reinforced concrete (RC) or steel (S) structures. An input-output framework analysis was used to measure energy consumption and CO₂ emissions of input materials for each structural frame type. In addition, the CO₂ emissions cost was measured using the trading price of CO₂ emissions on the International Commodity Exchange. This research revealed that both energy consumption and CO₂ emissions were, on average, 26% lower with the RC structure than with the S structure, and the construction costs (including the CO₂ emissions cost) of the RC structure were about 9.8% lower, compared to the S structure. This research provides insights through which the construction industry will be able to respond to the carbon market, which is expected to continue to grow in the future.

  12. Automated Glacier Mapping using Object Based Image Analysis. Case Studies from Nepal, the European Alps and Norway

    NASA Astrophysics Data System (ADS)

    Vatle, S. S.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology, not only glacier area change analysis, but also for masks in volume or velocity analysis, for the estimation of water resources and as model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal. SAR Coherence data is used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example shows using a high-resolution LiDAR derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and NIR/SWIR band ratio are used to map clean ice over the entire country but the thresholds are calculated automatically based on a histogram of each image subset. This means that in theory any Landsat scene can be inputted and the clean ice can be automatically extracted. Debris-covered ice can be included semi-automatically using contextual and morphological information.
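The automatic, per-scene thresholding idea can be sketched with an NDSI computation plus a histogram-derived threshold. Otsu's method is used below as one concrete stand-in, since the abstract does not specify the exact histogram rule:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and SWIR reflectances."""
    return (green - swir) / (green + swir)

def auto_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class
    variance of the value histogram (automatic, per-scene)."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = centers[i], var
    return best_t
```

With a per-subset threshold computed this way, any Landsat scene can be fed in and clean ice extracted without hand-tuned cutoffs, which is the workflow's design goal.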

  13. Self-Motion and the Shaping of Sensory Signals

    PubMed Central

    Jenks, Robert A.; Vaziri, Ashkan; Boloori, Ali-Reza

    2010-01-01

    Sensory systems must form stable representations of the external environment in the presence of self-induced variations in sensory signals. It is also possible that the variations themselves may provide useful information about self-motion relative to the external environment. Rats have been shown to be capable of fine texture discrimination and object localization based on palpation by facial vibrissae, or whiskers, alone. During behavior, the facial vibrissae brush against objects and undergo deflection patterns that are influenced both by the surface features of the objects and by the animal's own motion. The extent to which behavioral variability shapes the sensory inputs to this pathway is unknown. Using high-resolution, high-speed videography of unconstrained rats running on a linear track, we measured several behavioral variables including running speed, distance to the track wall, and head angle, as well as the proximal vibrissa deflections while the distal portions of the vibrissae were in contact with periodic gratings. The measured deflections, which serve as the sensory input to this pathway, were strongly modulated both by the properties of the gratings and the trial-to-trial variations in head-motion and locomotion. Using presumed internal knowledge of locomotion and head-rotation, gratings were classified using short-duration trials (<150 ms) from high-frequency vibrissa motion, and the continuous trajectory of the animal's own motion through the track was decoded from the low frequency content. Together, these results suggest that rats have simultaneous access to low- and high-frequency information about their environment, which has been shown to be parsed into different processing streams that are likely important for accurate object localization and texture coding. PMID:20164407

  14. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective formulations in which multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario, and the input/output of the problem, the type of optimization problem changes. To address the different kinds of optimization problems arising in wireless sensor network design, deployment, operation, planning and placement, a plethora of optimization solution types exists. We review and analyze different desirable objectives to show whether they conflict with each other, support each other, or are design dependent. We also present a generic multi-objective optimization problem for wireless sensor networks consisting of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the constraints considered when formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article, we expect it to open up new avenues of research in multi-objective optimization for wireless sensor networks. PMID:26205271
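As a generic illustration of the tradeoff solutions mentioned above (not code from the article), the set a decision maker chooses from is the Pareto front: the solutions not dominated by any other. A minimal sketch under the standard definition of dominance for minimization:

```python
def dominates(a, b):
    """True if objective vector a dominates b under minimization:
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: the tradeoff solutions from which
    the decision maker selects one."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, with two conflicting objectives such as energy consumption and latency, `(4, 4)` is dominated by `(2, 2)` and is excluded, while `(1, 5)`, `(2, 2)` and `(5, 1)` are mutually non-dominated tradeoffs.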

  15. Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts.

    PubMed

    Lawson, Rebecca

    2014-02-01

    The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9% vs. 47% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.

  16. Object activation in semantic memory from visual multimodal feature input.

    PubMed

    Kraut, Michael A; Kremen, Sarah; Moo, Lauren R; Segal, Jessica B; Calhoun, Vincent; Hart, John

    2002-01-01

    The human brain's representation of objects has been proposed to exist as a network of coactivated neural regions present in multiple cognitive systems. However, it is not known if there is a region specific to the process of activating an integrated object representation in semantic memory from multimodal feature stimuli (e.g., picture-word). A previous study using word-word feature pairs as stimulus input showed that the left thalamus is integrally involved in object activation (Kraut, Kremen, Segal, et al., this issue). In the present study, participants were presented picture-word pairs that are features of objects, with the task being to decide if together they "activated" an object not explicitly presented (e.g., picture of a candle and the word "icing" activate the internal representation of a "cake"). For picture-word pairs that combine to elicit an object, signal change was detected in the ventral temporo-occipital regions, pre-SMA, left primary somatomotor cortex, both caudate nuclei, and the dorsal thalami bilaterally. These findings suggest that the left thalamus is engaged for either picture or word stimuli, but the right thalamus appears to be involved when picture stimuli are also presented with words in semantic object activation tasks. The somatomotor signal changes are likely secondary to activation of the semantic object representations from multimodal visual stimuli.

  17. Development of Geriatric Mental Health Learning Objectives for Medical Students: A Response to the Institute of Medicine 2012 Report.

    PubMed

    Lehmann, Susan W; Brooks, William B; Popeo, Dennis; Wilkins, Kirsten M; Blazek, Mary C

    2017-10-01

    America is aging as the population of older adults increases. The shortage of geriatric mental health specialists means that most geriatric mental healthcare will be provided by physicians who do not have specialty training in geriatrics. The Institute of Medicine Report of 2012 highlighted the urgent need for development of national competencies and curricula in geriatric mental health for all clinicians. Virtually all physicians can expect to treat older patients with mental health symptoms, yet currently there are no widely accepted learning objectives in geriatric mental health specific for medical students. The authors describe the development of a set of such learning objectives that all medical students should achieve by graduation. The iterative process included initial drafting by content experts from five medical schools with input and feedback from a wider group of geriatric psychiatrists, geriatricians, internists, and medical educators. The final document builds upon previously published work and includes specific knowledge, attitudes and skills in six key domains: Normal Aging, Mental Health Assessment of the Geriatric Patient, Psychopharmacology, Delirium, Depression, and Dementia. These objectives address a pressing need, providing a framework for national standards and curriculum development. Copyright © 2017 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient to develop, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  19. Power selective optical filter devices and optical systems using same

    DOEpatents

    Koplow, Jeffrey P

    2014-10-07

    In an embodiment, a power selective optical filter device includes an input polarizer for selectively transmitting an input signal. The device includes a wave-plate structure positioned to receive the input signal, which includes at least one substantially zero-order, zero-wave plate. The zero-order, zero-wave plate is configured to alter the polarization state of the input signal passing through it in a manner that depends on the power of the input signal. The zero-order, zero-wave plate includes an entry and an exit wave plate, each having a fast axis, with the fast axes oriented substantially perpendicular to each other. Each entry wave plate is oriented relative to a transmission axis of the input polarizer at a respective angle. An output polarizer is positioned to receive the signal output from the wave-plate structure and selectively transmits it based on its polarization state.

  20. Multiplexer and time duration measuring circuit

    DOEpatents

    Gray, Jr., James

    1980-01-01

    A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.

  1. Method and system for edge cladding of laser gain media

    DOEpatents

    Bayramian, Andrew James; Caird, John Allyn; Schaffers, Kathleen Irene

    2014-03-25

    A gain medium operable to amplify light at a gain wavelength and having reduced transverse ASE includes an input surface and an output surface opposing the input surface. The gain medium also includes a central region including gain material and extending between the input surface and the output surface along a longitudinal optical axis of the gain medium. The gain medium further includes an edge cladding region surrounding the central region and extending between the input surface and the output surface along the longitudinal optical axis of the gain medium. The edge cladding region includes the gain material and a dopant operable to absorb light at the gain wavelength.

  2. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  3. Object Correspondence across Brief Occlusion Is Established on the Basis of both Spatiotemporal and Surface Feature Cues

    ERIC Educational Resources Information Center

    Hollingworth, Andrew; Franconeri, Steven L.

    2009-01-01

    The "correspondence problem" is a classic issue in vision and cognition. Frequent perceptual disruptions, such as saccades and brief occlusion, create gaps in perceptual input. How does the visual system establish correspondence between objects visible before and after the disruption? Current theories hold that object correspondence is established…

  4. Dual side control for inductive power transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron

    An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage, and the current includes an output current and/or an input current. The output voltage and output current are measured at an output of the IPT system, and the input voltage and input current at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.

  5. Optimisation of Ferrochrome Addition Using Multi-Objective Evolutionary and Genetic Algorithms for Stainless Steel Making via AOD Converter

    NASA Astrophysics Data System (ADS)

    Behera, Kishore Kumar; Pal, Snehanshu

    2018-03-01

    This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm trained on a dataset of 100 records covering different input and output variables, such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of the Pareto fronts is observed to generate a set of feasible optimal solutions between the two conflicting objectives, providing an effective guideline for better ferrochrome utilisation. It is found that beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.

  6. Semantic Image Segmentation with Contextual Hierarchical Models.

    PubMed

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

    Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely recognized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).

  7. Spacecraft rendezvous operational considerations affecting vehicle systems design and configuration

    NASA Astrophysics Data System (ADS)

    Prust, Ellen E.

    One lesson learned from Orbiting Maneuvering Vehicle (OMV) program experience is that Design Reference Missions must include an appropriate balance of operations and performance inputs to effectively drive vehicle systems design and configuration. Rendezvous trajectory design is based on vehicle characteristics (e.g., mass, propellant tank size, and mission duration capability) and operational requirements, which have evolved through the Gemini, Apollo, and STS programs. Operational constraints affecting the rendezvous final approach are summarized. The two major objectives of operational rendezvous design are vehicle/crew safety and mission success. Operational requirements on the final approach which support these objectives include: tracking/targeting/communications; trajectory dispersion and navigation uncertainty handling; contingency protection; favorable sunlight conditions; acceptable relative state for proximity operations handover; and compliance with target vehicle constraints. A discussion of the ways each of these requirements may constrain the rendezvous trajectory follows. Although the constraints discussed apply to all rendezvous, the trajectory presented in 'Cargo Transfer Vehicle Preliminary Reference Definition' (MSFC, May 1991) was used as the basis for the comments below.

  8. How to Assess Quality of Research in Iran, From Input to Impact? Introduction of Peer-Based Research Evaluation Model in Iran.

    PubMed

    Ebadifar, Asghar; Baradaran Eftekhari, Monir; Owlia, Parviz; Habibi, Elham; Ghalenoee, Elham; Bagheri, Mohammad Reza; Falahat, Katayoun; Eltemasi, Masoumeh; Sobhani, Zahra; Akhondzadeh, Shahin

    2017-11-01

    Research evaluation is a systematic and objective process to measure the relevance, efficiency and effectiveness of research activities, and peer review is one of the most important tools for assessing quality of research. The aim of this study was to introduce research evaluation indicators based on peer review. This study was implemented in 4 stages. A list of objective-oriented evaluation indicators was designed in 4 axes, including governance and leadership, structure, knowledge production, and research impact. The top 10% of medical sciences research centers (RCs) were evaluated based on peer review. Adequate equipment and laboratory instruments, high-quality research publication, and national or international cooperation were the main strengths of the medical sciences RCs; the most important weaknesses included failure to adhere to strategic plans, parallel actions in similar fields, problems in manpower recruitment, and knowledge translation & exchange (KTE) at the service-provider and policy-maker levels. Peer review evaluation can improve the quality of research.

  9. Comparing long-term projections of the space debris environment to real world data - Looking back to 1990

    NASA Astrophysics Data System (ADS)

    Radtke, Jonas; Stoll, Enrico

    2016-10-01

    Long-term projections of the space debris environment are commonly used to assess trends within different scenarios for the assumed future development of spacefaring. Scenarios generally investigated include business-as-usual cases, in which spaceflight is performed as today, and mitigation scenarios assuming the implementation of Space Debris Mitigation Guidelines to different degrees or the effectiveness of more drastic measures, such as active debris removal. One problem that always accompanies projecting a system's behaviour into the future is that influential parameters, such as the launch rate, are unpredictable. In other fields of research it is common to look backwards and re-model the past. This is a rather difficult task for spaceflight, as it is still quite young and, moreover, strongly influenced by drastic political changes, such as the break-up of the Soviet Union at the end of the 1980s. Furthermore, one major driver of the evolution of the number of on-orbit objects turns out to be collisions between objects. As of today such collisions are, fortunately, very rare, and therefore a real-world-data modelling approach is difficult. Nevertheless, since the end of the cold war more than 20 years of a comparably stable evolution of spaceflight activities have passed. For this study, this period is used in a comparison between the real evolution of the space debris environment and the one projected using the Institute of Space Systems' in-house tool for long-term assessment, LUCA (Long-Term Utility for Collision Analysis). Four different scenarios are investigated in this comparison; all of them share the common starting point of an initial population for 1st May 1989. The first scenario, which serves as reference, is taken directly from MASTER-2009. All launch- and mission-related objects from the Two Line Elements (TLE) catalogue and other available sources are included.
All events such as explosions and collisions have been re-modelled as close to reality as possible and included in the corresponding population. They have furthermore been correlated with TLE catalogue objects. As the latest available validated population snapshot for MASTER is May 2009, this epoch is chosen as the endpoint for the simulations. The second scenario uses the knowledge of the past 25 years to perform a Monte-Carlo simulation of the evolution of the space debris environment. Necessary input parameters, such as explosions per year, launch rates, and the evolution of the solar cycle, are taken from their real evolutions. The third scenario goes a step further by extracting only mean numbers and trends from inputs such as launch and explosion rates and applying them. The final, fourth scenario disregards all knowledge of the time frame under investigation; its inputs are determined based on data available in 1989 only. Results are compared to the reference scenario of the space debris environment.

  10. An examination of water quality indicators in swim sites located in the upper Los Angeles River Watershed

    NASA Astrophysics Data System (ADS)

    Lee, C. M.; Morris, K.; Fingland, N. K.; Johnstone, K.; Pendleton, L.; Ponce, A.; Tang, C.; Griffith, J. F.; Steele, N. L.

    2013-12-01

    Multiple sites in the upper Los Angeles River watershed were sampled during summer 2012 and measured for Escherichia coli, enterococci, and Clostridium perfringens (vegetative cells and spores) using culture-based analyses, and preserved for quantitative polymerase chain reaction (qPCR) analysis. The first objective of this work was to characterize how well the indicators correlated with each other, with respect to background levels and to 'spikes' above background possibly indicative of a pollution input, with environmental/physicochemical parameters, and in the context of recreational water quality standards. The second objective was to evaluate the economic impact of implementing qPCR at our study sites for rapid water quality monitoring. None of the indicator species correlated well with each other (R2 < 0.1) across sites and dates when the sample set was examined in its entirety, though C. perfringens vegetative cells and spores were moderately correlated (R2 = 0.31, p = 0.07). Concentration 'spikes' against background levels, suggesting a potential input of contamination, were observed on holiday sampling days and will be examined further. In general, the number of swimmers present was not linked with indicator concentrations; however, water quality exceedances (for E. coli, 235 CFU or MPN/100 mL) were more likely to occur on weekends or holidays, suggesting that the presence/absence of swimmers may be an important variable at our sites. Clostridium perfringens may be a useful indicator at our study sites, as a comparison of the vegetative to endospore forms of this organism may be used to estimate how recently a contamination event or input occurred.

  11. Observation-Based Dissipation and Input Terms for Spectral Wave Models, with End-User Testing

    DTIC Science & Technology

    2014-09-30

    The long-term goal is to develop observation-based dissipation and input source functions, grounded in advanced understanding of the physics of air-sea interactions, wave breaking and swell attenuation, for spectral wave-forecast models, with end-user testing. Related refereed publications include a study of the large-scale influence of the Great Barrier Reef matrix on wave attenuation (Coral Reefs) and Ghantous, M., and A.V. Babanin, 2014.

  12. First Parity Evaluation of Body Condition, Weight, and Blood Beta-Hydroxybutyrate During Lactation of Range Cows Developed in the Same Ecophysiological System but Receiving Different Harvested Feed Inputs

    USDA-ARS?s Scientific Manuscript database

    Reduction of harvested feed inputs during heifer development could optimize range livestock production and improve economic feasibility for producers. The objective of this study was to measure body condition and weight as well as blood beta-hydroxybutyrate (BHB) concentrations for primiparous beef ...

  13. First parity evaluation of body condition, weight, and blood beta-hydroxybutyrate during lactation of range cows developed in the same ecophysiological system but receiving different harvested feed inputs

    USDA-ARS?s Scientific Manuscript database

    Reduction of harvested feed inputs during heifer development could optimize range livestock production and improve economic feasibility for producers. The objective of this study was to measure body condition and weight as well as blood beta-hydroxybutyrate (BHB) concentrations for primiparous beef ...

  14. First parity evaluation of peak milk yield for range cows developed in the same ecophysiological system but receiving different concentrations of harvested feed inputs

    USDA-ARS?s Scientific Manuscript database

    Reduction of harvested feed inputs during heifer development could optimize range livestock production and improve economic feasibility. The objective for this two year study was to measure milk production (kg/d) and milk constituent concentrations (g/d) for 16 primiparous beef cows each year that w...

  15. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse, inferring the user's intended direction by the method of frame differencing. We divide each Webcam frame into nine grid cells and use background registration to compute the moving object. In order to…
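The frame-difference-over-nine-grids idea can be sketched as follows. This is a hypothetical NumPy version for illustration only; the 3x3 grid matches the abstract, but the pixel threshold and the "most changed cell" rule are assumptions, not the authors' parameters.

```python
import numpy as np

def active_cell(prev, curr, thresh=25):
    """Compare two grayscale frames, split the difference image into a 3x3
    grid, and return the (row, col) of the cell with the most changed pixels,
    or None if nothing moved."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    h, w = diff.shape
    # Crop to a multiple of 3 and sum changed pixels per grid cell.
    counts = diff[: h - h % 3, : w - w % 3].reshape(3, h // 3, 3, w // 3).sum(axis=(1, 3))
    idx = np.unravel_index(np.argmax(counts), counts.shape)
    return idx if counts[idx] > 0 else None
```

The returned cell index would then be mapped to a direction command (e.g. top-right cell means "move up-right"), standing in for mouse input.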

  16. Direct statistical modeling and its implications for predictive mapping in mining exploration

    NASA Astrophysics Data System (ADS)

    Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila

    2010-05-01

    Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This allowed developing methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on an inverse problem theory. Numerous statistical methods (e.g. algebraic method, weight of evidence, Siris method, etc) with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In literature, such approaches are often presented through applications on natural examples and the results obtained can present specificities due to local characteristics. Moreover, though crucial for statistical computations, "minimum requirements" needed for input parameters (number of minimum data points, spatial distribution of objects, etc) are often only poorly expressed. From these, problems often arise when one has to choose between one and the other method for her/his specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach particularly focused on the analysis of spatial relationships between location of points and various objects (e.g. polygons and /or polylines) which is particularly well adapted to constrain the influence of intrusive bodies - such as a granite - and faults or ductile shear-zones on spatial location of ore deposits (point objects). The method is designed in a way to insure a-dimensionality with respect to scale. In this approach, both spatial distribution and topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). Then, the distance of points with respect to a given type of objects (polygons or polylines) is given using a probability distribution. 
The location of points is computed assuming either independency or different grades of dependency between the two probability distributions. The results show that i) the mean polygon surface area, the mean polyline length, the number of objects and their clustering are critical, and ii) the validity of the different tested inversion methods strongly depends on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point-distribution laws with respect to the quality of the input data set.

  17. Oblique reconstructions in tomosynthesis. II. Super-resolution

    PubMed Central

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes. Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system. Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. 
The dependency of the highest detectable frequency on pitch followed the same trend as the analytical model. It was demonstrated that super-resolution is not achievable if the pitch of the object approaches 90°, corresponding to the case in which the test frequency is perpendicular to the breast support. Only low frequency objects are detectable at pitches close to 90°. Conclusions: This work provides a platform for investigating super-resolution in oblique reconstructions for tomosynthesis. In breast imaging, this study should have applications in visualizing microcalcifications and other subtle signs of cancer. PMID:24320445
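The sampling principle behind this result, namely that a sine above the detector's alias frequency cannot be identified from samples at the pixel pitch but becomes recoverable once the acquisition effectively samples at subpixel offsets, can be checked numerically. The sketch below is a 1-D stand-in for the paper's divergent-beam geometry; all frequencies and array sizes are illustrative only:

```python
import math
import cmath

def dft_peak_freq(samples, dx):
    """Frequency (cycles per unit length) of the largest positive-frequency DFT bin."""
    n = len(samples)
    mags = []
    for k in range(1, n // 2 + 1):
        s = sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        mags.append((abs(s), k))
    return max(mags)[1] / (n * dx)

f = 0.75  # input frequency in cycles/pixel, above the 0.5 cycles/pixel alias limit

# Sampling at the 1-pixel detector pitch: the frequency folds to |0.75 - 1| = 0.25.
coarse = [math.cos(2 * math.pi * f * x) for x in range(16)]

# Effective 0.5-pixel sampling (as from projections at subpixel offsets):
# the Nyquist limit rises to 1.0 cycles/pixel and 0.75 is resolved.
fine = [math.cos(2 * math.pi * f * (0.5 * i)) for i in range(32)]

aliased = dft_peak_freq(coarse, 1.0)
resolved = dft_peak_freq(fine, 0.5)
```

Here the coarse sampling reports a spurious 0.25 cycles/pixel peak while the subpixel-offset sampling recovers the true 0.75 cycles/pixel input, mirroring the single-projection versus reconstruction comparison in the abstract.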

  18. Spacelab uplink/downlink data flow and formats

    NASA Technical Reports Server (NTRS)

    Kandefer, F.

    1978-01-01

    The results of an analysis of the Spacelab (SL) data uplink/downlink structure and those data system elements associated with the support of this data flow are presented. Specific objectives of this report are to present the results of the following analyses: (1) operations of the SL high rate multiplexer, including format structure, data rates, format combinations, format switching, etc.; (2) operations of SL data recorders to include the definition of modes, data rates and forms; (3) operations of the high rate demultiplexer as described above; (4) potential experiment data formats defining formatting parameters to be considered in decommutation analysis; (5) SL computer input/output (I/O) decommutation channels, including the definition of structure, quantity and use of this I/O data; (6) detailed requirements of the data quality monitoring philosophy for this function.

  19. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445-style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
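The nested-iteration idea behind Iterator::Hash, expanding a hash whose values may themselves be series into every permutation of values, can be sketched in Python (the function name and data below are illustrative, not part of the Perl API):

```python
from itertools import product

def expand(mapping):
    """Yield one dict per combination of the iterable values in `mapping`.

    Scalar values are treated as single-element series, mirroring how
    Iterator::Hash nests embedded iterators inside an otherwise plain hash.
    """
    keys = list(mapping)
    # Wrap scalars so every value is a series we can iterate over.
    series = [v if isinstance(v, (list, tuple, range)) else [v]
              for v in mapping.values()]
    for combo in product(*series):
        yield dict(zip(keys, combo))

# One fixed value and two embedded 2-element series: 1 x 2 x 2 = 4 permutations.
combos = list(expand({"host": "mars", "band": [1, 2], "pol": ["H", "V"]}))
```

Because `expand` is a generator, each permutation is produced on demand, which is the same lazy-evaluation property the Perl iterator framework provides.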

  20. "America Responds to AIDS": its content, development process, and outcome.

    PubMed

    Woods, D R; Davis, D; Westover, B J

    1991-01-01

During the 1987-90 period, five phases of new AIDS information materials were released to the general public in the "America Responds to AIDS" (ARTA) campaign, including a national mailer. The five were "General Awareness: Humanizing AIDS" in October 1987, "Understanding AIDS," the national mailout, April 1988, "Women at Risk/Multiple Partner, Sexually Active Adults," October 1988, "Parents and Youth," May 1989, and "Preventing HIV Infection and AIDS: Taking The Next Steps," July 1990. From planning to implementation to evaluation, ARTA is based on well-established theory and practice. Initially, the campaign was a response to an immediate crisis. It has evolved into the deliberate and systematic development of objectives to combat a chronic problem. ARTA represents one of the most comprehensive formative research processes in the history of public service campaigns. The dynamic process of carefully developing each new phase to include such important entities as State and local health agencies and community-based organizations is at least as important as the quality of the end materials. The objectives of each new phase are based on the needs of the public and of specific audiences. Maximum input from all relevant constituencies is obtained to ensure that they support the campaign's objectives and implementation strategy.

  1. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  2. Laparoscopic simulation interface

    DOEpatents

    Rosenberg, Louis B.

    2006-04-04

    A method and apparatus for providing high bandwidth and low noise mechanical input and output for computer systems. A gimbal mechanism provides two revolute degrees of freedom to an object about two axes of rotation. A linear axis member is coupled to the gimbal mechanism at the intersection of the two axes of rotation. The linear axis member is capable of being translated along a third axis to provide a third degree of freedom. The user object is coupled to the linear axis member and is thus translatable along the third axis so that the object can be moved along all three degrees of freedom. Transducers associated with the provided degrees of freedom include sensors and actuators and provide an electromechanical interface between the object and a digital processing system. Capstan drive mechanisms transmit forces between the transducers and the object. The linear axis member can also be rotated about its lengthwise axis to provide a fourth degree of freedom, and, optionally, a floating gimbal mechanism is coupled to the linear axis member to provide fifth and sixth degrees of freedom to an object. Transducer sensors are associated with the fourth, fifth, and sixth degrees of freedom. The interface is well suited for simulations of medical procedures and simulations in which an object such as a stylus or a joystick is moved and manipulated by the user.

  3. The Deflector Selector: A Machine Learning Framework for Prioritizing Hazardous Object Deflection Technology Development

    NASA Astrophysics Data System (ADS)

    Nesvold, Erika; Greenberg, Adam; Erasmus, Nicolas; Van Heerden, Elmarie; Galache, J. L.; Dahlstrom, Eric; Marchis, Franck

    2018-01-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We will present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We will describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.

  4. The Deflector Selector: A machine learning framework for prioritizing hazardous object deflection technology development

    NASA Astrophysics Data System (ADS)

    Nesvold, E. R.; Greenberg, A.; Erasmus, N.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.

    2018-05-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.
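The core of such a model, a classifier trained on simulated (orbit, size) features labeled with the technology that achieved a successful deflection, can be sketched with a toy nearest-centroid rule. All feature values, labels, and thresholds below are fabricated stand-ins for the N-body training data, not results from the paper:

```python
# Toy nearest-centroid classifier standing in for the trained ML model.
# Features: (semimajor axis [AU], diameter [m]); labels are deflection methods.
TRAINING = [
    ((1.1, 50.0),  "kinetic_impactor"),
    ((1.2, 80.0),  "kinetic_impactor"),
    ((2.5, 900.0), "nuclear"),
    ((2.8, 700.0), "nuclear"),
    ((1.5, 150.0), "gravity_tractor"),
    ((1.4, 200.0), "gravity_tractor"),
]

def centroids(samples):
    """Mean feature vector per label."""
    sums = {}
    for (a, d), label in samples:
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += a; s[1] += d; s[2] += 1
    return {lbl: (s[0] / s[2], s[1] / s[2]) for lbl, s in sums.items()}

def predict(features, cents):
    """Label of the nearest centroid, with diameter rescaled so both
    features contribute comparably to the distance."""
    a, d = features
    return min(cents, key=lambda l: (cents[l][0] - a) ** 2
                                    + ((cents[l][1] - d) / 100.0) ** 2)

CENTS = centroids(TRAINING)
choice = predict((1.15, 60.0), CENTS)   # a small, near-Earth-like test object
```

A real implementation would replace the centroid rule with a trained algorithm and many more orbital elements, but the input/output contract is the same: object characteristics in, recommended technology out.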

  5. A Computational Model of Spatial Development

    NASA Astrophysics Data System (ADS)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands; and a forward model of motor commands to desired sensory input (goals). The robot was tested on the `three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are egocentric (self-centered). When given the capacity for self-locomotion, the robot responds allocentrically.
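Mental tracking as "predict the object's egocentric position from the previous state and the motor command" can be illustrated with a minimal kinematic forward model. The representation and update rule here are simplified assumptions for illustration, not the paper's trained network:

```python
import math

def forward_model(obj_angle, obj_dist, turn, step):
    """Predict the object's egocentric (angle, distance) after the robot
    turns by `turn` radians and then moves `step` units forward.

    The prediction substitutes for missing visual input when the object
    leaves the field of view -- the essence of mental tracking.
    """
    # Turning the body by `turn` shifts the object the opposite way
    # in egocentric coordinates.
    angle = obj_angle - turn
    # Convert to egocentric Cartesian and subtract the robot's forward motion.
    x = obj_dist * math.cos(angle) - step
    y = obj_dist * math.sin(angle)
    return math.atan2(y, x), math.hypot(x, y)

# Object dead ahead at distance 2; the robot turns 90 degrees in place.
ang, dist = forward_model(0.0, 2.0, math.pi / 2, 0.0)
# Prediction: same distance, but now 90 degrees to one side.
```

In the full architecture this forward model is learned rather than hand-coded, and it is paired with an inverse model mapping desired sensory goals back to motor commands.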

  6. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.

Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.

  7. Objective visual assessment of antiangiogenic treatment for wet age-related macular degeneration.

    PubMed

    Baseler, Heidi A; Gouws, André; Crossland, Michael D; Leung, Carmen; Tufail, Adnan; Rubin, Gary S; Morland, Antony B

    2011-10-01

    To assess cortical responses in patients undergoing antiangiogenic treatment for wet age-related macular degeneration (AMD) using functional magnetic resonance imaging (fMRI) as an objective, fixation-independent measure of topographic visual function. A patient with bilateral neovascular AMD was scanned using fMRI before and at regular intervals while undergoing treatment with intravitreal antiangiogenic injections (ranibizumab). Blood oxygenation level-dependent signals were measured in the brain while the patient viewed a stimulus consisting of a full-field flickering (6 Hz) white light alternating with a uniform gray background (18 s on and 18 s off). Topographic distribution and magnitude of activation in visual cortex were compared longitudinally throughout the treatment period (<1 year) and with control patients not currently undergoing treatment. Clinical behavioral tests were also administered, including visual acuity, microperimetry, and reading skills. The area of visual cortex activated increased significantly after the first treatment to include more posterior cortex that normally receives inputs from lesioned parts of the retina. Subsequent treatments yielded no significant further increase in activation area. Behavioral measures all generally showed an improvement with treatment but did not always parallel one another. The untreated control patient showed a consistent lack of significant response in the cortex representing retinal lesions. Retinal treatments may not only improve vision but also result in a concomitant improvement in fixation stability. Current clinical behavioral measures (e.g., acuity and perimetry) are largely dependent on fixation stability and therefore cannot separate improvements of visual function from fixation improvements. 
fMRI, which provides an objective and sensitive measure of visual function independent of fixation, reveals a significant increase in visual cortical responses in patients with wet AMD after treatment with antiangiogenic injections. Despite recent evidence that visual cortex degenerates subsequent to retinal lesions, our results indicate that it can remain responsive as its inputs are restored.

  8. Logarithmic circuit with wide dynamic range

    NASA Technical Reports Server (NTRS)

    Wiley, P. H.; Manus, E. A. (Inventor)

    1978-01-01

    A circuit deriving an output voltage that is proportional to the logarithm of a dc input voltage susceptible to wide variations in amplitude includes a constant current source which forward biases a diode so that the diode operates in the exponential portion of its voltage versus current characteristic, above its saturation current. The constant current source includes first and second, cascaded feedback, dc operational amplifiers connected in negative feedback circuit. An input terminal of the first amplifier is responsive to the input voltage. A circuit shunting the first amplifier output terminal includes a resistor in series with the diode. The voltage across the resistor is sensed at the input of the second dc operational feedback amplifier. The current flowing through the resistor is proportional to the input voltage over the wide range of variations in amplitude of the input voltage.

  9. Planning for the Collection and Analysis of Samples of Martian Granular Materials Potentially to be Returned by Mars Sample Return

    NASA Astrophysics Data System (ADS)

    Carrier, B. L.; Beaty, D. W.

    2017-12-01

NASA's Mars 2020 rover is scheduled to land on Mars in 2021 and will be equipped with a sampling system capable of collecting rock cores, as well as a specialized drill bit for collecting unconsolidated granular material. A key mission objective is to collect a set of samples that have enough scientific merit to justify returning to Earth. In the case of granular materials, we would like to catalyze community discussion on what we would do with these samples if they arrived in our laboratories, as input to decision-making related to sampling the regolith. Numerous scientific objectives have been identified which could be achieved or significantly advanced via the analysis of martian rocks, "regolith," and gas samples. The term "regolith" has more than one definition, including one that is general and one that is much more specific. For the purpose of this analysis we use the term "granular materials" to encompass the most general meaning and restrict "regolith" to a subset of that. Our working taxonomy includes the following: 1) globally sourced airfall dust (dust); 2) saltation-sized particles (sand); 3) locally sourced decomposed rock (regolith); 4) crater ejecta (ejecta); and, 5) other. Analysis of martian granular materials could serve to advance our understanding of areas including habitability and astrobiology, surface-atmosphere interactions, chemistry, mineralogy, geology and environmental processes. Results of these analyses would also provide input into planning for future human exploration of Mars, elucidating possible health and mechanical hazards caused by the martian surface material, as well as providing valuable information regarding available resources for ISRU and civil engineering purposes. Results would also be relevant to matters of planetary protection and ground-truthing orbital observations.
We will present a preliminary analysis of the following, in order to generate community discussion and feedback on all issues relating to: What are the specific reasons (and their priorities) for collecting samples of granular materials? How do those reasons translate to sampling priorities? In what condition would these samples be expected to be received? What is our best projection of the approach by which these samples would be divided, prepared, and analyzed to achieve our objectives?

  10. Proactive pavement asset management with climate change aspects

    NASA Astrophysics Data System (ADS)

    Zofka, Adam

    2018-05-01

Pavement Asset Management System is a systematic and objective tool to manage a pavement network based on rational, engineering and economic principles. Once implemented and mature, a Pavement Asset Management System serves the entire range of users, starting with maintenance engineers and ending with decision-makers. Such a system is necessary to coordinate agency management strategy, including proactive maintenance. Basic inputs in the majority of existing Pavement Asset Management System approaches comprise the actual pavement inventory with associated construction history and condition, traffic information, as well as various economic parameters. Some Pavement Management System approaches also include weather aspects, which is of particular importance considering ongoing climate change. This paper presents challenges in implementing the Pavement Asset Management System for those National Road Administrations that manage their pavement assets using more traditional strategies, e.g. the worst-first approach. Special consideration is given to weather-related inputs and associated analysis to demonstrate the effects of climate change in the short and long term. Based on the presented examples, this paper concludes that National Road Administrations should account for weather-related factors in their Pavement Management Systems, as this has a significant impact on system outcomes from a safety and economic perspective.

  11. A modified approach to controller partitioning

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Veillette, Robert J.

    1993-01-01

The idea of computing a decentralized control law for the integrated flight/propulsion control of an aircraft by partitioning a given centralized controller is investigated. An existing controller partitioning methodology is described, and a modified approach is proposed with the objective of simplifying the associated controller approximation problem. Under the existing approach, the decentralized control structure is a variable in the partitioning process; by contrast, the modified approach assumes that the structure is fixed a priori. Hence, the centralized controller design may take the decentralized control structure into account. Specifically, the centralized controller may be designed to include all the same inputs and outputs as the decentralized controller; then, the two controllers may be compared directly, simplifying the partitioning process considerably. Following the modified approach, a centralized controller is designed for an example aircraft model. The design includes all the inputs and outputs to be used in a specified decentralized control structure. However, it is shown that the resulting centralized controller is not well suited for approximation by a decentralized controller of the given structure. The results indicate that it is not practical in general to cast the controller partitioning problem as a direct controller approximation problem.

  12. Full wave modulator-demodulator amplifier apparatus. [for generating rectified output signal

    NASA Technical Reports Server (NTRS)

    Black, J. M. (Inventor)

    1974-01-01

A full-wave modulator-demodulator apparatus is described, including an operational amplifier having a first input terminal coupled to a circuit input terminal, and a second input terminal alternately coupled to the circuit input terminal and a circuit ground by a switching circuit responsive to a phase reference signal, so that the operational amplifier is alternately switched between a non-inverting mode and an inverting mode. The switching circuit includes three field-effect transistors operatively associated to provide the desired switching function in response to an alternating reference signal of the same frequency as an AC input signal applied to the circuit input terminal.

  13. Biological Control of Appetite: A Daunting Complexity

    PubMed Central

    MacLean, Paul S.; Blundell, John E.; Mennella, Julie A.; Batterham, Rachel L.

    2017-01-01

Objective This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) entitled, "Self-Regulation of Appetite, It's Complicated," which focused on the biological aspects of appetite regulation. Methods Here we summarize the key biological inputs of appetite regulation and their implications for body weight regulation. Results These discussions offer an update of the long-held, rigid perspective of an "adipocentric" biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. We are only beginning to understand how these biological systems are integrated and how this integrated input influences appetite and eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally, we consider a common theme that pervaded the workshop discussions, which was individual variability. Conclusions It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite. PMID:28229538

  14. What the bird’s brain tells the bird’s eye: the function of descending input to the avian retina

    PubMed Central

    Wilson, Martin; Lindstrom, Sarah H.

    2012-01-01

    As Cajal discovered in the late 19th century, the bird retina receives a substantial input from the brain. Approximately 10,000 fibers originating in a small midbrain nucleus, the isthmo-optic nucleus, terminate in each retina. The input to the isthmo-optic nucleus is chiefly from the optic tectum which, in the bird, is the primary recipient of retinal input. These neural elements constitute a closed loop, the Centrifugal Visual System (CVS), beginning and ending in the retina, that delivers positive feedback to active ganglion cells. Several features of the system are puzzling. All fibers from the isthmo-optic nucleus terminate in the ventral retina and an unusual axon-bearing amacrine cell, the Target Cell, is the postsynaptic partner of these fibers. While the rest of the CVS is orderly and retinotopic, Target Cell axons project seemingly at random, mostly to distant parts of the retina. We review here the most significant features of the anatomy and physiology of the CVS with a view to understanding its function. We suggest that many of the facts about this system, including some that are otherwise difficult to explain, can be accommodated within the hypothesis that the images of shadows cast on the ground or on objects in the environment, initiate a rapid and parallel search of the sky for a possible aerial predator. If a predator is located, shadow and predator would be temporarily linked together and tracked by the CVS. PMID:21524338

  15. Wave-plate structures, power selective optical filter devices, and optical systems using same

    DOEpatents

    Koplow, Jeffrey P [San Ramon, CA]

    2012-07-03

In an embodiment, an optical filter device includes an input polarizer for selectively transmitting an input signal. The device includes a wave-plate structure positioned to receive the input signal, which includes first and second substantially zero-order, zero-wave plates arranged in series with and oriented at an angle relative to each other. The first and second zero-wave plates are configured to alter a polarization state of the input signal passing therethrough in a manner that depends on the power of the input signal. Each zero-wave plate includes an entry and exit wave plate each having a fast axis, with the fast axes oriented substantially perpendicular to each other. Each entry wave plate is oriented relative to a transmission axis of the input polarizer at a respective angle. An output polarizer is positioned to receive a signal output from the wave-plate structure and selectively transmits the signal based on the polarization state.

  16. High frequency inductive lamp and power oscillator

    DOEpatents

    Kirkpatrick, Douglas A.; Gitsevich, Aleksandr

    2005-09-27

    An oscillator includes an amplifier having an input and an output, a feedback network connected between the input of the amplifier and the output of the amplifier, the feedback network being configured to provide suitable positive feedback from the output of the amplifier to the input of the amplifier to initiate and sustain an oscillating condition, and a tuning circuit connected to the input of the amplifier, wherein the tuning circuit is continuously variable and consists of solid state electrical components with no mechanically adjustable devices including a pair of diodes connected to each other at their respective cathodes with a control voltage connected at the junction of the diodes. Another oscillator includes an amplifier having an input and an output, a feedback network connected between the input of the amplifier and the output of the amplifier, the feedback network being configured to provide suitable positive feedback from the output of the amplifier to the input of the amplifier to initiate and sustain an oscillating condition, and transmission lines connected to the input of the amplifier with an input pad and a perpendicular transmission line extending from the input pad and forming a leg of a resonant "T", and wherein the feedback network is coupled to the leg of the resonant "T".

  17. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

A motion video data system includes a compression system, including an image compressor, an image decompressor correlative to the image compressor having an input connected to an output of the image compressor, a feedback summing node having one input connected to an output of the image decompressor, a picture memory having an input connected to an output of the feedback summing node, apparatus for comparing an image stored in the picture memory with a received input image and deducing therefrom pixels having differences between the stored image and the received image and for retrieving from the picture memory a partial image including the pixels only and applying the partial image to another input of the feedback summing node, whereby to produce at the output of the feedback summing node an updated decompressed image, a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image whereby to produce a compressed difference image at the output of the image compressor.
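The feedback loop this claim describes, compressing the difference between the input frame and the stored prediction and then updating the picture memory with the *decompressed* difference, is the classic DPCM structure. A minimal sketch follows, with a crude scalar quantizer standing in for the patent's neural-network compressor and a 1-D list standing in for an image:

```python
def compress(diff, q=4):
    """Stand-in 'image compressor': coarse quantization of the difference."""
    return [round(d / q) for d in diff]

def decompress(code, q=4):
    return [c * q for c in code]

def encode_frame(frame, memory, q=4):
    """One pass of the feedback loop: subtract the prediction, compress,
    and update the picture memory with the decompressed difference so the
    encoder's memory matches what a decoder would reconstruct."""
    diff = [f - m for f, m in zip(frame, memory)]
    code = compress(diff, q)
    recon_diff = decompress(code, q)
    memory[:] = [m + rd for m, rd in zip(memory, recon_diff)]
    return code

memory = [0] * 4                       # decoded picture memory (1-D "image")
code = encode_frame([10, 20, 30, 40], memory)
# memory now holds the updated decompressed image, which tracks the input
# frame to within roughly one quantization step
```

Feeding the decompressed difference back (rather than the exact difference) is what keeps the encoder and decoder picture memories in lockstep despite lossy compression.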

  18. Dyspraxia in a patient with corticobasal degeneration: the role of visual and tactile inputs to action

    PubMed Central

    Graham, N.; Zeman, A.; Young, A.; Patterson, K.; Hodges, J.

    1999-01-01

    OBJECTIVES—To investigate the roles of visual and tactile information in a dyspraxic patient with corticobasal degeneration (CBD) who showed dramatic facilitation in miming the use of a tool or object when he was given a tool to manipulate; and to study the nature of the praxic and neuropsychological deficits in CBD.
METHODS—The subject had clinically diagnosed CBD, and exhibited alien limb behaviour and striking ideomotor dyspraxia. General neuropsychological evaluation focused on constructional and visuospatial abilities, calculation, verbal fluency, episodic and semantic memory, plus spelling and writing because impairments in this domain were presenting complaints. Four experiments assessed the roles of visual and tactile information in the facilitation of motor performance by tools. Experiment 1 evaluated the patient's performance of six limb transitive actions under six conditions: (1) after he described the relevant tool from memory, (2) after he was shown a line drawing of the tool, (3) after he was shown a real exemplar of the tool, (4) after he watched the experimenter perform the action, (5) while he was holding the tool, and (6) immediately after he had performed the action with the tool but with the tool removed from his grasp. Experiment 2 evaluated the use of the same six tools when the patient had tactile but no visual information (while he was blindfolded). Experiments 3 and 4 assessed performance of actions appropriate to the same six tools when the patient had either neutral or inappropriate tactile feedback—that is, while he was holding a non-tool object or a different tool.
RESULTS—Miming of tool use was not facilitated by visual input; moreover, lack of visual information in the blindfolded condition did not reduce performance. The principal positive finding was a dramatic facilitation of the patient's ability to demonstrate object use when he was holding either the appropriate tool or a neutral object. Tools inappropriate to the requested action produced involuntary performance of the stimulus relevant action.
CONCLUSIONS—Tactile stimulation was paramount in the facilitation of motor performance in tool use by this patient with CBD. This outcome suggests that tactile information should be included in models which hypothesise modality specific inputs to the action production system. Significant impairments in spelling and letter production that have not previously been reported in CBD have also been documented.

 PMID:10449556

  19. New control concepts for uncertain water resources systems: 1. Theory

    NASA Astrophysics Data System (ADS)

    Georgakakos, Aris P.; Yao, Huaming

    1993-06-01

    A major complicating factor in water resources systems management is handling unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach where input variables are only assumed to belong to certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and is derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
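The set-based idea can be illustrated on a single reservoir with an interval-bounded inflow (a toy sketch, not the paper's general polyhedral dynamic-programming algorithm; all names and numbers are invented):

```python
# Storage update: s_next = s + u + w, with inflow w known only to lie
# in [w_lo, w_hi]. A (signed, net) release u is admissible iff s_next
# stays within [s_min, s_max] for EVERY inflow in the set, which for
# interval sets gives closed-form bounds.

def admissible_release(s, w_lo, w_hi, s_min, s_max):
    """Return the interval (u_lo, u_hi) of releases guaranteeing
    s + u + w stays in [s_min, s_max] for all w in [w_lo, w_hi],
    or None if no release can guarantee it."""
    u_lo = s_min - s - w_lo   # binds under the driest inflow
    u_hi = s_max - s - w_hi   # binds under the wettest inflow
    return (u_lo, u_hi) if u_lo <= u_hi else None

# Storage at 50 units, inflow between 5 and 15, bounds [20, 100]:
print(admissible_release(50, 5, 15, 20, 100))  # → (-35, 35)
```

When the uncertainty interval is wider than the storage bounds can absorb, the admissible set is empty and the function returns None, which is exactly the "no guaranteeing control exists" outcome the set-theoretic framework detects.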

  20. Performance evaluation of object based greenhouse detection from Sentinel-2 MSI and Landsat 8 OLI data: A case study from Almería (Spain)

    NASA Astrophysics Data System (ADS)

    Novelli, Antonio; Aguilar, Manuel A.; Nemmaoui, Abderrahim; Aguilar, Fernando J.; Tarantino, Eufemia

    2016-10-01

    This paper presents the first comparison between data from the Sentinel-2 (S2) MultiSpectral Instrument (MSI) and the Landsat 8 (L8) Operational Land Imager (OLI) aimed at greenhouse detection. Two scenes, one per sensor and closely spaced in time, were classified using Object Based Image Analysis and Random Forest (RF). The RF input consisted of several object-based features computed from the spectral bands, including mean values, spectral indices and textural features. The S2 and L8 comparisons were also extended using a common segmentation dataset extracted from VHR WorldView-2 (WV2) imagery, to test differences due only to their specific spectral contribution. The best band combinations for segmentation were found through a modified version of the Euclidean Distance 2 index. Four different RF classification schemes were considered, achieving best overall accuracies of 89.1%, 91.3%, 90.9% and 93.4%, respectively, evaluated over the whole study area.
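The kind of object-based spectral features fed to such a Random Forest can be sketched as follows (a hypothetical toy example with invented band values and a two-object segmentation; the actual study used many more features and real imagery):

```python
import numpy as np

# Toy red/NIR reflectances and a segmentation into two image objects.
red = np.array([[0.2, 0.2], [0.6, 0.6]])
nir = np.array([[0.8, 0.8], [0.3, 0.3]])
segments = np.array([[0, 0], [1, 1]])   # object labels per pixel

features = {}
for seg_id in np.unique(segments):
    mask = segments == seg_id
    mean_red = red[mask].mean()          # per-object band mean
    mean_nir = nir[mask].mean()
    # A spectral index (NDVI) computed on the object means.
    ndvi = (mean_nir - mean_red) / (mean_nir + mean_red)
    features[seg_id] = (mean_red, mean_nir, ndvi)

print(features[0])  # vegetation-like object: high NDVI (~0.6)
```

Each object's feature tuple would then be one row of the classifier's training matrix, alongside textural features computed the same per-object way.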

  1. Waste Handling Building Conceptual Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    G.W. Rowe

    2000-11-06

    The objective of the "Waste Handling Building Conceptual Study" is to develop proposed design requirements for the repository Waste Handling System in sufficient detail to allow the surface facility design to proceed to the License Application effort if the proposed requirements are approved by DOE. Proposed requirements were developed to further refine waste handling facility performance characteristics and design constraints, with an emphasis on supporting modular construction, minimizing fuel inventory, and optimizing facility maintainability and dry handling operations. To meet this objective, this study attempts to provide an alternative design to the Site Recommendation design that is flexible, simple, reliable, and can be constructed in phases. The design concept will be input to the "Modular Design/Construction and Operation Options Report", which will address the overall program objectives and direction, including options and issues associated with transportation, the subsurface facility, and Total System Life Cycle Cost. This study (herein) is limited to the Waste Handling System and associated fuel staging system.

  2. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale, able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) requirements. The increased availability of high-resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High-resolution remote sensing data, especially panchromatic, are an important input for the analysis of various types of image characteristics; they play an important role in visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms; it is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique: two-dimensional grey-tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element is included in, or has a nonempty intersection with, the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results.
These techniques were applied to different archaeological sites in Turkmenistan (Nisa) and Iraq (Babylon); a further change detection analysis was applied to the Babylon site using two HR images acquired before and after the second Gulf War. The outputs differed because the operative scale of the sensed data determines the final result of the elaboration and the quality of the extracted information, and because each technique was sensitive to specific shapes in each input image. We mapped linear and nonlinear objects, updated archaeological cartography, and performed automatic change detection analysis for the Babylon site. The discussion of these techniques aims to provide the archaeological team with new instruments for the orientation and planning of a remote sensing application.
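The erosion and dilation operations described above can be sketched directly in NumPy (an illustrative binary-morphology implementation with a 3x3 structuring element, not the authors' actual eCognition/ENVI pipeline):

```python
import numpy as np

def dilate(img, se=np.ones((3, 3), dtype=bool)):
    """Binary dilation: 1 where the SE has a nonempty intersection
    with the foreground (the 'hits' test described in the text)."""
    padded = np.pad(img.astype(bool), 1)
    out = np.zeros_like(img, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.any(padded[i:i+3, j:j+3] & se)
    return out

def erode(img, se=np.ones((3, 3), dtype=bool)):
    """Binary erosion: 1 where the SE is fully included in the
    foreground (the 'is included' test described in the text)."""
    padded = np.pad(img.astype(bool), 1)
    out = np.zeros_like(img, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.all(padded[i:i+3, j:j+3][se])
    return out

square = np.zeros((7, 7), dtype=bool)
square[2:5, 2:5] = True            # 3x3 foreground block
assert erode(square).sum() == 1    # only the centre pixel survives
assert dilate(square).sum() == 25  # grows to a 5x5 block
```

Choosing the structuring element's shape and size to match the searched structure (linear walls, circular mounds) is what turns these primitives into a feature extractor.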

  3. Training feed-forward neural networks with gain constraints

    PubMed

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
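The penalty-term idea can be illustrated with a linear model, where the input-output gain is simply the weight (a toy sketch; the paper trains full networks and balances the penalty strength adaptively):

```python
import numpy as np

# Data whose least-squares slope (gain) would be ~2.0; we impose the
# upper gain bound w <= 1.0 via a quadratic penalty on the violation.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + 0.1 * rng.normal(size=200)

w, b = 0.0, 0.0
lr, lam, g_max = 0.01, 50.0, 1.0   # step size, penalty weight, gain bound
for _ in range(2000):
    err = (w * x + b) - y
    # Gradient of MSE plus gradient of lam * max(0, w - g_max)**2.
    grad_w = 2 * np.mean(err * x) + lam * 2 * max(0.0, w - g_max)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

# The penalty holds the learned gain near the bound, not the data slope.
print(round(w, 2))
```

With an inequality constraint that is consistent with the data the penalty term vanishes at the optimum; here the constraint conflicts with the data, so the solution settles just above the bound, which is why the paper's adaptive balancing of the penalty strength matters.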

  4. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1989-01-01

    Voice input for control of camera functions was investigated in this study. Objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.

  5. Effects of input device and motion type on a cursor-positioning task.

    PubMed

    Yau, Yi-Jan; Hwang, Sheue-Ling; Chao, Chin-Jung

    2008-02-01

    Many studies have investigated the performance of nonkeyboard input devices under static situations, but few have considered the effects of motion type on manipulating these input devices. In this study, a comparison was conducted of 12 men's performance using four input devices (three trackballs: currently used, trackman wheel, and erectly held, as well as a touch screen) under five motion types: static, heave, roll, pitch, and random movement. The input device and motion type significantly affected movement speed and accuracy, and their interaction significantly affected movement speed. The touch screen was the fastest but the least accurate input device. The erectly held trackball was the slowest, whereas the error rate of the currently used trackball was the lowest. Impairments of random motion on movement time and error rate were larger than those of the other motion types. Considering objective and subjective evaluations, the trackman wheel and the currently used trackball were more efficient in operation than the erectly held trackball and the touch screen under the motion environments.

  6. Technologies for Fire and Damage Control and Condition Based Maintenance

    DTIC Science & Technology

    2011-12-01

    sheathing, thermal and acoustic insulation, furnishing, bedding, mattresses, flooring, and wood fibre (paper and cardboard) and plastic packaging ... Condition Based Maintenance”. The project objective was to develop an improved understanding of how materials, sensors and sensor systems choices impact the ... ultraviolet spectral sensors and an acoustic sensor. The system also has data fusion software that analyses the sensor input and determines if the input

  7. Lumped Nonlinear System Analysis with Volterra Series.

    DTIC Science & Technology

    1980-04-01

    y2(t) = ∫∫ h2(t − τ1, t − τ2) x(τ1) x(τ2) dτ1 dτ2 (equation 4-1). Consider the input signal comprising two unit sinusoidal signals at frequencies ωa and ωb. The input x... Contents include: 1-2, Nonlinear System Analysis Methods; 1-3, Objectives of the Investigation; 1-4, Organization of the Report; Chapter 2, Volterra Functional Series; 2-1, Introduction.

  8. Measuring Efficiency of Tunisian Schools in the Presence of Quasi-Fixed Inputs: A Bootstrap Data Envelopment Analysis Approach

    ERIC Educational Resources Information Center

    Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane

    2010-01-01

    The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…

  9. Three input concepts for flight crew interaction with information presented on a large-screen electronic cockpit display

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.

    1990-01-01

    A piloted simulation study was conducted comparing three different input methods for interfacing to a large-screen, multiwindow, whole-flight-deck display for management of transport aircraft systems. The thumball concept utilized a miniature trackball embedded in a conventional side-arm controller. The touch screen concept provided data entry through a capacitive touch screen. The voice concept utilized a speech recognition system with input through a head-worn microphone. No single input concept emerged as the most desirable method of interacting with the display. Subjective results, however, indicate that the voice concept was the most preferred method of data entry and had the most potential for future applications. The objective results indicate that, overall, the touch screen concept was the most effective input method. There were also significant differences in the time required to perform specific tasks depending on the input concept employed, with each concept providing better performance relative to a specific task. These results suggest that a system combining all three input concepts might provide the most effective method of interaction.

  10. Design of vaccination and fumigation on Host-Vector Model by input-output linearization method

    NASA Astrophysics Data System (ADS)

    Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning

    2017-03-01

    Here, we analyze the Host-Vector Model and propose a design of vaccination and fumigation to control the infectious population using feedback control, specifically the input-output linearization method. The host population is divided into three compartments: susceptible, infectious and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as inputs and the infectious population as the output. The design objective is to stabilize the system so that the output asymptotically tends to zero. We also present examples to illustrate the design.
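Input-output linearization can be illustrated on a scalar system (a toy example, not the host-vector model itself): cancelling the nonlinearity through the control input leaves linear output dynamics that decay to zero, as the design requires.

```python
# Plant: dx/dt = x**2 + u, output y = x. Choosing u = -x**2 + v with
# the outer linear law v = -k*y cancels the nonlinearity, leaving
# dy/dt = -k*y, so the output tends asymptotically to zero.

k, dt = 2.0, 0.001
x = 1.0                               # initial output
for _ in range(5000):                 # Euler-simulate 5 seconds
    u = -x**2 - k * x                 # linearizing feedback law
    x += dt * (x**2 + u)              # Euler step of the plant
print(round(x, 4))                    # → 0.0
```

In the paper's setting the same cancellation is done with two inputs (vaccination, fumigation) against the nonlinear infection terms, with the infectious population playing the role of y.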

  11. The automated ground network system

    NASA Technical Reports Server (NTRS)

    Smith, Miles T.; Militch, Peter N.

    1993-01-01

    The primary goal of the Automated Ground Network System (AGNS) project is to reduce Ground Network (GN) station life-cycle costs. To accomplish this goal, the AGNS project will employ an object-oriented approach to develop a new infrastructure that will permit continuous application of new technologies and methodologies to the Ground Network's class of problems. The AGNS project is a Total Quality (TQ) project. Through use of an open collaborative development environment, developers and users will have equal input into the end-to-end design and development process. This will permit direct user input and feedback and will enable rapid prototyping for requirements clarification. This paper describes the AGNS objectives, operations concept, and proposed design.

  12. Reusable Rocket Engine Operability Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Komar, D. R.

    1998-01-01

    This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed given an engine concept and the predicted operations and maintenance processes incorporated into simulation models. Historical operations data at a level of detail suitable to model objectives were collected, analyzed, and formatted for use with the models, the simulations were run, and results collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. Results reflect upon the importance not only of reliable hardware but upon operations and corrective maintenance process improvements.

  13. Ergonomics Contribution in Maintainability

    NASA Astrophysics Data System (ADS)

    Teymourian, Kiumars; Seneviratne, Dammika; Galar, Diego

    2017-09-01

    The objective of this paper is to describe an ergonomics contribution to maintainability. Economical designs, inputs and training help to increase the maintainability indicators for industrial devices. This analysis can be helpful, among other cases, to compare systems, to achieve a better design regarding maintainability requirements, to improve maintainability under a specific industrial environment, and to foresee maintainability problems due to eventual changes in a device's operating conditions. With this purpose, this work first introduces the notions of ergonomics and human factors, maintainability, and the assessment of human postures, including some important postures for performing maintenance activities. A simulation approach is used to identify the critical postures of maintenance personnel and implements the defined postures with minimal loads on the personnel who use the equipment in a practical scenario. The simulation inputs are given to the designers to improve the workplace/equipment in order to achieve a high level of maintainability. Finally, the work concludes by summarizing the more significant aspects and suggesting future research.

  14. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
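The basic LMS update in the ASI setting can be sketched as follows (a minimal example with an invented 3-tap unknown system; NLMS would additionally divide the step size by the regressor energy):

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2])          # unknown FIR system
n, mu = 5000, 0.05                           # samples, LMS step size
x = rng.normal(size=n)                       # random input signal
w = np.zeros(3)                              # adaptive filter weights
for k in range(3, n):
    u = x[k-3:k][::-1]                       # current input regressor
    d = h_true @ u + 0.01 * rng.normal()     # measured output, with noise
    e = d - w @ u                            # a-priori estimation error
    w += mu * e * u                          # LMS weight update

print(np.round(w, 2))                        # ≈ h_true
```

The misalignment discussed above is exactly the norm of w - h_true at steady state; faster-converging variants (NLMS, APA) trade some of it for convergence speed, which is the trade-off the study measures.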

  15. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost, but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.

  16. Performance study of LMS based adaptive algorithms for unknown system identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javed, Shazia; Ahmad, Noor Atinah

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.

  17. General purpose algorithms for characterization of slow and fast phase nystagmus

    NASA Technical Reports Server (NTRS)

    Lessard, Charles S.

    1987-01-01

    In the overall aim for a better understanding of the vestibular and optokinetic systems and their roles in space motion sickness, the eye movement responses to various dynamic stimuli are measured. The vestibulo-ocular reflex (VOR) and the optokinetic response, as the eye movement responses are known, consist of slow phase and fast phase nystagmus. The specific objective is to develop the software programs necessary to characterize the vestibulo-ocular and optokinetic responses by distinguishing between the two phases of nystagmus. The overall program is to handle large volumes of highly variable data with minimum operator interaction. The programs include digital filters, differentiation, identification of fast phases, and reconstruction of the slow phase with a least squares fit such that sinusoidal or pseudorandom data may be processed with accurate results. The resultant waveform, slow phase velocity eye movements, serves as input data to the spectral analysis programs previously developed for NASA to analyze nystagmus responses to pseudorandom angular velocity inputs.
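The fast-phase identification step can be sketched with a velocity threshold on a synthetic sawtooth nystagmus record (signal, sampling rate and threshold are invented for illustration; the actual programs add filtering and least-squares slow-phase reconstruction):

```python
import numpy as np

fs = 100.0                                    # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
position = (t * 10) % 2                       # sawtooth: slow drift + fast resets
velocity = np.gradient(position, 1 / fs)      # numerical differentiation
fast = np.abs(velocity) > 50.0                # flag fast-phase samples (deg/s)
slow_phase_velocity = velocity[~fast]         # keep slow-phase samples only

print(round(float(np.median(slow_phase_velocity)), 1))  # → 10.0
```

The retained slow-phase velocity trace is the waveform that would then be handed to the spectral analysis programs mentioned above.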

  18. 1991 NASA Life Support Systems Analysis workshop

    NASA Technical Reports Server (NTRS)

    Evanich, Peggy L.; Crabb, Thomas M.; Gartrell, Charles F.

    1992-01-01

    The 1991 Life Support Systems Analysis Workshop was sponsored by NASA Headquarters' Office of Aeronautics and Space Technology (OAST) to foster communication among NASA, industrial, and academic specialists, and to integrate their inputs and disseminate information to them. The overall objective of systems analysis within the Life Support Technology Program of OAST is to identify, guide the development of, and verify designs which will increase the performance of the life support systems on component, subsystem, and system levels for future human space missions. The specific goals of this workshop were to report on the status of systems analysis capabilities, to integrate the chemical processing industry technologies, and to integrate recommendations for future technology developments related to systems analysis for life support systems. The workshop included technical presentations, discussions, and interactive planning, with time allocated for discussion of both technology status and time-phased technology development recommendations. Key personnel from NASA, industry, and academia delivered inputs and presentations on the status and priorities of current and future systems analysis methods and requirements.

  19. Dynamic feature analysis for Voyager at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Yagi, G. M.; Lorre, J. J.; Jepsen, P. L.

    1978-01-01

    Voyager 1 and 2 were launched from Cape Kennedy to Jupiter, Saturn, and beyond on September 5, 1977 and August 20, 1977. The role of the Image Processing Laboratory is to provide the Voyager Imaging Team with the necessary support to identify atmospheric features (tiepoints) for Jupiter and Saturn data, and to analyze and display them in a suitable form. This support includes the software needed to acquire and store tiepoints, the hardware needed to interactively display images and tiepoints, and the general image processing environment necessary for decalibration and enhancement of the input images. The objective is an understanding of global circulation in the atmospheres of Jupiter and Saturn. Attention is given to the Voyager imaging subsystem, the Voyager imaging science objectives, hardware, software, display monitors, a dynamic feature study, decalibration, navigation, and data base.

  20. All optical logic for optical pattern recognition and networking applications

    NASA Astrophysics Data System (ADS)

    Khoury, Jed

    2017-05-01

    In this paper, we propose architectures for the implementation of 16 Boolean optical gates from two inputs using an externally pumped phase-conjugate Michelson interferometer. Depending on the gate to be implemented, some require a single-stage interferometer and others require a two-stage interferometer. The proposed optical gates can be used in several applications in optical networks, including, but not limited to, all-optical packet routing and switching and all-optical error detection. The optical logic gates can also be used in the recognition of noiseless rotation- and scale-invariant objects such as fingerprints for homeland security applications.

  1. Requirements Specification Language (RSL) and supporting tools

    NASA Technical Reports Server (NTRS)

    Frincke, Deborah; Wolber, Dave; Fisher, Gene; Cohen, Gerald C.

    1992-01-01

    This document describes a general purpose Requirement Specification Language (RSL). RSL is a hybrid of features found in several popular requirement specification languages. The purpose of RSL is to describe precisely the external structure of a system comprised of hardware, software, and human processing elements. To overcome the deficiencies of informal specification languages, RSL includes facilities for mathematical specification. Two RSL interface tools are described. The Browser view contains a complete document with all details of the objects and operations. The Dataflow view is a specialized, operation-centered depiction of a specification that shows how specified operations relate in terms of inputs and outputs.

  2. Meteorological and Environmental Inputs to Aviation Systems

    NASA Technical Reports Server (NTRS)

    Camp, Dennis W. (Editor); Frost, Walter (Editor)

    1988-01-01

    Reports on aviation meteorology, most of them informal, are presented by representatives of the National Weather Service, the Bracknell (England) Meteorological Office, the NOAA Wave Propagation Lab., the Fleet Numerical Oceanography Center, and the Aircraft Owners and Pilots Association. Additional presentations are included on aircraft/lidar turbulence comparison, lightning detection and locating systems, objective detection and forecasting of clear air turbulence, comparative verification between the Generalized Exponential Markov (GEM) Model and official aviation terminal forecasts, the evaluation of the Prototype Regional Observation and Forecast System (PROFS) mesoscale weather products, and the FAA/MIT Lincoln Lab. Doppler Weather Radar Program.

  3. Track/train dynamics test report transfer function test. Volume 1: Test

    NASA Technical Reports Server (NTRS)

    Vigil, R. A.

    1975-01-01

    A description is presented of the transfer function test performed on an open hopper freight car loaded with 80 tons of coal. Test data and a post-test update of the requirements document and test procedure are presented. Included are a statement of the test objective, a description of the test configurations, test facilities, test methods, data acquisition/reduction operations, and a chronological test summary. An index to the data for the three test configurations (X, Y, and Z-axis tests) is presented along with test sequence, run number, test reference, and input parameters.

  4. Finding the right compromise between productivity and environmental efficiency on high input tropical dairy farms: a case study.

    PubMed

    Berre, David; Blancard, Stéphane; Boussemart, Jean-Philippe; Leleu, Hervé; Tillard, Emmanuel

    2014-12-15

    This study focused on the trade-off between milk production and its environmental impact on greenhouse gas (GHG) emissions and nitrogen surplus in a high input tropical system. We first identified the objectives of the three main stakeholders in the dairy sector (farmers, a milk cooperative and environmentalists). The main aim of the farmers and cooperative's scenarios was to increase milk production without additional environmental deterioration but with the possibility of increasing the inputs for the cooperative. The environmentalist's objective was to reduce environmental deterioration. Second, we designed a sustainable intensification scenario combining maximization of milk production and minimization of environmental impacts. Third, the objectives for reducing the eco-inefficiency of dairy systems in Reunion Island were incorporated in a framework for activity analysis, which was used to model a technological approach with desirable and undesirable outputs. Of the four scenarios, the sustainable intensification scenario produced the best results, with a potential decrease of 238 g CO2-e per liter of milk (i.e. a reduction of 13.93% compared to the current level) and a potential 7.72 L increase in milk produced for each kg of nitrogen surplus (i.e. an increase of 16.45% compared to the current level). These results were based on the best practices observed in Reunion Island and optimized manure management, crop-livestock interactions, and production processes. Our results also showed that frontier efficiency analysis can shed new light on the challenge of developing sustainable intensification in high input tropical dairy systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  6. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.

  7. Learning the Gestalt rule of collinearity from object motion.

    PubMed

    Prodöhl, Carsten; Würtz, Rolf P; von der Malsburg, Christoph

    2003-08-01

    The Gestalt principle of collinearity (and curvilinearity) is widely regarded as being mediated by the long-range connection structure in primary visual cortex. We review the neurophysiological and psychophysical literature to argue that these connections are developed from visual experience after birth, relying on coherent object motion. We then present a neural network model that learns these connections in an unsupervised Hebbian fashion with input from real camera sequences. The model uses spatiotemporal retinal filtering, which is very sensitive to changes in the visual input. We show that it is crucial for successful learning to use the correlation of the transient responses instead of the sustained ones. As a consequence, learning works best with video sequences of moving objects. The model addresses a special case of the fundamental question of what represents the necessary a priori knowledge the brain is equipped with at birth so that the self-organized process of structuring by experience can be successful.

  8. Parametric embedding for class visualization.

    PubMed

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
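    The embedding objective lends itself to a compact sketch: place points and class centers so that the class posteriors induced by an equal-covariance Gaussian mixture match the given ones, by gradient descent on the summed KL divergences. A minimal NumPy illustration (learning rate, initialization, and step count are arbitrary choices, not taken from the paper):

```python
import numpy as np

def pe_embed(P, dim=2, steps=500, lr=0.1, seed=0):
    """Parametric-embedding sketch. P is an (N, C) array of class
    posteriors p(c|x_n). Returns (N, dim) point coordinates and
    (C, dim) class-center coordinates."""
    rng = np.random.default_rng(seed)
    N, C = P.shape
    X = rng.normal(scale=0.1, size=(N, dim))    # point embeddings
    Phi = rng.normal(scale=0.1, size=(C, dim))  # class-center embeddings
    for _ in range(steps):
        # q(c|x_n) from an equal-covariance Gaussian mixture in the embedding
        d2 = ((X[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)  # (N, C)
        logq = -0.5 * d2
        logq -= logq.max(axis=1, keepdims=True)
        Q = np.exp(logq)
        Q /= Q.sum(axis=1, keepdims=True)
        # gradient of sum_n KL(P_n || Q_n) w.r.t. X and Phi
        W = P - Q                                 # (N, C)
        gX = (W[:, :, None] * (X[:, None, :] - Phi[None, :, :])).sum(1)
        gPhi = (W[:, :, None] * (Phi[None, :, :] - X[:, None, :])).sum(0)
        X -= lr * gX
        Phi -= lr * gPhi
    return X, Phi
```

    After enough steps each point ends up nearest the center of its dominant class, since the induced posteriors Q are driven toward the given P.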

  9. Emerging health risks associated with modern agriculture practices: a comprehensive study in India.

    PubMed

    Sarkar, Atanu; Aronson, Kristan J; Patil, Shantagouda; Hugar, Lingappa B; vanLoon, Gary W

    2012-05-01

    In order to enhance food production, India has adopted modern agriculture practices and achieved noteworthy success. This achievement was essentially the result of a paradigm shift in agriculture that included high inputs of agrochemicals, water, and widespread practice of monoculture, as well as bureaucratic changes that promoted these shifts. There are very few comprehensive analyses of potential adverse health outcomes that may be related to these changes. The objective of this study is to identify health risks associated with modern agricultural practices in the southern Indian state of Karnataka. This study aims to compare high-input and low-input agricultural practices and the consequences for the health of people in these communities. The fieldwork was conducted from May to August 2009 and included a survey carried out in six villages. Data were collected by in-depth personal interviews among 240 households and key informants, field observations, laboratory analyses, and data from secondary sources. The study identified four major visible impacts: occupational hazards, vector-borne diseases, changing nutritional status, and inequity in development. In the high-input area, mechanization has resulted in more occurrences of serious accidents and injuries. Ecological changes due to rice cultivation in this area have further augmented mosquito breeding, and there has been a surge in the incidence of Japanese encephalitis and malaria. The traditional coarse cereals (complex carbohydrates, high protein) have been replaced by mill-polished rice (simple carbohydrate, low protein). The prevalence of overweight (BMI>25) has emerged as a new public health challenge, and this is most evident in large-landholding households, especially in the high-input agriculture areas. In all agro-ecological areas, it was observed that women faced a greater risk of both extremes of under-nutrition and being overweight. 
Output-driven and market-oriented modern agricultural practices have changed the ecology and disease pattern in this area in India, and our survey indicated significant health effects associated with these changes. There is a need for more extensive epidemiological studies in order to know the full impact on diseases and to understand the complex causal relationships. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, Ming‐shu; Whittemore, Donald O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  11. Life Comparative Analysis of Energy Consumption and CO2 Emissions of Different Building Structural Frame Types

    PubMed Central

    Kim, Sangyong; Moon, Joon-Ho; Shin, Yoonseok; Kim, Gwang-Hee; Seo, Deok-Seok

    2013-01-01

    The objective of this research is to quantitatively measure and compare the environmental load and construction cost of different structural frame types. Construction cost also accounts for the costs of CO2 emissions of input materials. The choice of structural frame type is a major consideration in construction, as this element represents about 33% of total building construction costs. In this research, four constructed buildings were analyzed, with these having either reinforced concrete (RC) or steel (S) structures. An input-output framework analysis was used to measure energy consumption and CO2 emissions of input materials for each structural frame type. In addition, the CO2 emissions cost was measured using the trading price of CO2 emissions on the International Commodity Exchange. This research revealed that both energy consumption and CO2 emissions were, on average, 26% lower with the RC structure than with the S structure, and the construction costs (including the CO2 emissions cost) of the RC structure were about 9.8% lower, compared to the S structure. This research provides insights through which the construction industry will be able to respond to the carbon market, which is expected to continue to grow in the future. PMID:24227998

  12. Ankylosing Spondylitis and Posture Control: The Role of Visual Input

    PubMed Central

    De Nunzio, Alessandro Marco; Iervolino, Salvatore; Zincarelli, Carmela; Di Gioia, Luisa; Rengo, Giuseppe; Multari, Vincenzo; Peluso, Rosario; Di Minno, Matteo Nicola Dario; Pappone, Nicola

    2015-01-01

    Objectives. To assess the motor control during quiet stance in patients with established ankylosing spondylitis (AS) and to evaluate the effect of visual input on the maintenance of a quiet posture. Methods. 12 male AS patients (mean age 50.1 ± 13.2 years) and 12 matched healthy subjects performed 2 sessions of 3 trials in quiet stance, with eyes open (EO) and with eyes closed (EC) on a baropodometric platform. The oscillation of the centre of feet pressure (CoP) was acquired. Indices of stability and balance control were assessed by the sway path (SP) of the CoP, the frequency bandwidth (FB1) that includes 80% of the area under the amplitude spectrum, the mean amplitude of the peaks (MP) of the sway density curve (SDC), and the mean distance (MD) between 2 peaks of the SDC. Results. In severe AS patients, the MD between two peaks of the SDC and the SP of the center of feet pressure were significantly higher than controls during both EO and EC conditions. The MP was significantly reduced only in the EC condition. Conclusions. Ankylosing spondylitis exerts a negative effect on postural stability that is not compensable by visual input. Our findings may be useful in the rehabilitative management of the increased risk of falling in AS. PMID:25821831
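    Of the reported indices, the sway path is the most direct to compute: the cumulative length of the CoP trajectory over a trial. A minimal sketch (units and any pre-filtering of the platform signal are left open, as the abstract does not specify them):

```python
import numpy as np

def sway_path(cop_xy):
    """Sway path (SP): total length of the centre-of-pressure trajectory.
    cop_xy: (T, 2) array of CoP samples, e.g. in millimetres."""
    steps = np.diff(cop_xy, axis=0)          # successive displacements
    return np.hypot(steps[:, 0], steps[:, 1]).sum()
```

    A longer sway path over the same trial duration indicates less stable stance, which is the direction of the group difference reported above.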

  13. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of Task 3 is to provide additional analysis and insight necessary to support key design/programmatic decisions for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) the conduct of tradeoff and sensitivity analyses; and (4) the review/verification of results within the context of evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating system, software configuration management, and the software development environment facility.

  14. Magnetic current sensor

    NASA Technical Reports Server (NTRS)

    Black, Jr., William C. (Inventor); Hermann, Theodore M. (Inventor)

    1998-01-01

    A current determiner having an output at which representations of input currents are provided having an input conductor for the input current and a current sensor supported on a substrate electrically isolated from one another but with the sensor positioned in the magnetic fields arising about the input conductor due to any input currents. The sensor extends along the substrate in a direction primarily perpendicular to the extent of the input conductor and is formed of at least a pair of thin-film ferromagnetic layers separated by a non-magnetic conductive layer. The sensor can be electrically connected to electronic circuitry formed in the substrate including a nonlinearity adaptation circuit to provide representations of the input currents of increased accuracy despite nonlinearities in the current sensor, and can include further current sensors in bridge circuits.

  15. Inverter ratio failure detector

    NASA Technical Reports Server (NTRS)

    Wagner, A. P.; Ebersole, T. J.; Andrews, R. E. (Inventor)

    1974-01-01

    A failure detector which detects the failure of a dc to ac inverter is disclosed. The inverter under failureless conditions is characterized by a known linear relationship of its input and output voltages and by a known linear relationship of its input and output currents. The detector includes circuitry which is responsive to the inverter's input and output voltages and which provides a failure-indicating signal only when the monitored output voltage is less, by a selected factor, than the expected output voltage for the monitored input voltage, based on the known voltage relationship. Similarly, the detector includes circuitry which is responsive to the input and output currents and provides a failure-indicating signal only when the input current exceeds, by a selected factor, the expected input current for the monitored output current, based on the known current relationship.
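    The detection logic reduces to two one-sided ratio tests against the known linear relationships. A sketch with illustrative gains and factors (the patent abstract gives no numeric values, so every constant below is a placeholder):

```python
def inverter_fault(v_in, v_out, i_in, i_out,
                   v_gain=10.0, i_ratio=10.0,
                   v_factor=0.8, i_factor=1.25):
    """Ratio-failure test sketch. Assumed failureless relations:
    v_out ~= v_gain * v_in and i_in ~= i_ratio * i_out."""
    volt_fault = v_out < v_factor * (v_gain * v_in)   # output voltage too low
    curr_fault = i_in > i_factor * (i_ratio * i_out)  # input current too high
    return volt_fault or curr_fault
```

    With the placeholder constants, a 10 V input should produce about 100 V out while drawing about ten times the output current; readings outside the factored bounds flag a failure.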

  16. Current Source Logic Gate

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)

    2017-01-01

    A current source logic gate with depletion-mode field-effect transistors ("FETs") and resistors may include a current source, a current steering switch input stage, and a resistor divider level shifting output stage. The current source may include a transistor and a current source resistor. The current steering switch input stage may include a transistor to steer current to set an output stage bias point depending on an input logic signal state. The resistor divider level shifting output stage may include a first resistor and a second resistor to set the output stage bias point and produce valid output logic signal states. The transistor of the current steering switch input stage may function as a switch to provide at least two operating points.

  17. Consideration of Optimal Input on Semi-Active Shock Control System

    NASA Astrophysics Data System (ADS)

    Kawashima, Takeshi

    In press working, unidirectional transmission of mechanical energy is expected in order to maximize the life of the dies. To realize this transmission, the author has developed a shock control system based on the sliding mode control technique. The controller makes a collision-receiving object effectively deform plastically by adjusting the force of the actuator inserted between the colliding objects, while the deformation of the colliding object is held to the necessary minimum. However, the actuator has to generate a large force corresponding to the impulsive force. Therefore, development of such an actuator is a formidable challenge. The author has proposed a semi-active shock control system in which the impulsive force is adjusted by a brake mechanism, although the system exhibits inferior performance. Thus, the author has also designed an actuator using a friction device for semi-active shock control, and proposed an active seatbelt system as an application. The effectiveness has been confirmed by a numerical simulation and model experiment. In this study, the optimal deformation change of the colliding object is theoretically examined in the case that the collision-receiving object has perfect plasticity and the colliding object has perfect elasticity. As a result, the optimal input condition is obtained so that the ratio of the maximum deformation of the collision-receiving object to the maximum deformation of the colliding object is maximized. Additionally, the energy balance is examined.

  18. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. First, we generate a semi-dense template of the target object with a structure-from-motion method using a subsequence of the video. This subsequence can be captured by a rigidly moving camera oriented toward the static target object or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and building on the framework of classical template-based methods, we solve an energy minimization problem to obtain the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy combines a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that permits elastic deformation. We also present an easy and controllable way to generate the semi-dense template for complex objects, and we use an effective iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared with results obtained using other templates as input, reconstructions based on our template are more accurate and detailed in certain regions. The experiments also show that the linear solver we use is more efficient than a traditional conjugate-gradient-based solver.
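    As a rough illustration of how such energy terms combine, the toy function below evaluates a temporal-smoothness term plus an edge-length penalty used here as a simplified stand-in for the as-rigid-as-possible cost; the paper's photometric and spatial-smoothness terms are omitted:

```python
import numpy as np

def deformation_energy(verts, verts_ref, edges,
                       w_temporal=1.0, w_rigid=1.0, verts_prev=None):
    """Toy tracking energy: temporal smoothness (distance to the previous
    frame's mesh) plus a penalty on edge-length change relative to the
    reference template. verts, verts_ref, verts_prev: (V, 3) arrays;
    edges: list of (i, j) vertex-index pairs."""
    e = 0.0
    if verts_prev is not None:
        e += w_temporal * np.sum((verts - verts_prev) ** 2)
    for i, j in edges:
        l = np.linalg.norm(verts[i] - verts[j])       # current edge length
        l0 = np.linalg.norm(verts_ref[i] - verts_ref[j])  # template length
        e += w_rigid * (l - l0) ** 2
    return e
```

    Minimizing such an energy per frame, with the photometric term anchoring the mesh to the image, yields the time-varying mesh described above.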

  19. Learning viewpoint invariant object representations using a temporal coherence principle.

    PubMed

    Einhäuser, Wolfgang; Hipp, Jörg; Eggert, Julian; Körner, Edgar; König, Peter

    2005-07-01

    Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge on the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles could explain many aspects of neuronal response properties. How can such schemes be transferred to system level performance? In the present study we train cells on a particular variant of the general principle of temporal coherence, the "stability" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is viewed. This finding includes generalization to previously unseen viewpoints. The achieved representation is better suited for viewpoint-invariant object classification than the cells' input patterns. This ability to facilitate viewpoint-invariant classification is maintained even if training and classification take place in the presence of an--also unlabeled--distractor object. In summary, here we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects that are not segmented from the background and undergo complex, non-isomorphic transformations.

  20. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. 
A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  1. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a preset tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
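    The two-pass procedure can be sketched directly. The deviation check below (absolute difference within a preset tolerance of the average) is one plausible reading of the patent language, not a verbatim implementation:

```python
def validate(inputs, tolerance, last_valid):
    """Two-pass validation sketch for one scan of redundant sensors.
    inputs: list of readings; tolerance: preset deviation limit;
    last_valid: the last validated measurement (fault fallback)."""
    # First pass: average all inputs, flag deviating sensors as suspect.
    first_avg = sum(inputs) / len(inputs)
    good = [v for v in inputs if abs(v - first_avg) <= tolerance]
    if len(good) >= 2:
        # Second pass: re-average only the good inputs and re-check them.
        second_avg = sum(good) / len(good)
        if all(abs(v - second_avg) <= tolerance for v in good):
            return second_avg
    # Validation fault: report the reading closest to the last
    # validated measurement.
    return min(inputs, key=lambda v: abs(v - last_valid))
```

    For example, with readings [10.0, 10.2, 9.9, 15.0] and a tolerance of 2.0, the 15.0 reading is flagged as suspect and the validated value is the average of the remaining three.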

  2. Exhaust system for use with a turbine and method of assembling same

    DOEpatents

    Dalsania, Prakash Bavanjibhai; Sadhu, Antanu

    2015-08-18

    An exhaust system for use with a steam turbine is provided. An exhaust hood includes an input and an output, the input receiving fluid from the steam turbine. The exhaust hood includes a first side wall that extends between the input and the output. The first side wall includes an aperture. An ejector is coupled to the exhaust hood. The ejector includes inlets and an outlet. At least one of the inlets receives fluid from the exhaust hood via the aperture.

  3. The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1

    PubMed Central

    Tal, Zohar; Geva, Ran; Amedi, Amir

    2016-01-01

    Recent evidence from blind participants suggests that visual areas are task-oriented and independent of the sensory modality of the input rather than sensory-specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without having any visual experience. However, this theory is still controversial since it is not clear whether this also characterizes the sighted brain, and whether the reported results in the sighted reflect fundamental amodal processes or are largely an epiphenomenon. In the current study, we addressed these questions using a series of fMRI experiments aimed to explore visual cortex responses to passive touch on various body parts and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object selective parts of the lateral occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1), during hand and shoulder stimulations but not to any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual-object selective areas and S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle suggesting that recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network. 
This is likely to rely on the unique and differential pattern of connectivity for each visual area with the rest of the brain. PMID:26673114

  4. Assessment of input-output properties and control of neuroprosthetic hand grasp.

    PubMed

    Hines, A E; Owens, N E; Crago, P E

    1992-06-01

    Three tests have been developed to evaluate rapidly and quantitatively the input-output properties and patient control of neuroprosthetic hand grasp. Each test utilizes a visual pursuit tracking task during which the subject controls the grasp force and grasp opening (position) of the hand. The first test characterizes the static input-output properties of the hand grasp, where the input is a slowly changing patient generated command signal and the outputs are grasp force and grasp opening. Nonlinearities and inappropriate slopes have been documented in these relationships, and in some instances the need for system retuning has been indicated. For each subject larger grasp forces were produced when grasping larger objects, and for some subjects the shapes of the relationships also varied with object size. The second test quantifies the ability of the subject to control the hand grasp outputs while tracking steps and ramps. Neuroprosthesis users had rms errors two to three times larger when tracking steps versus ramps, and had rms errors four to five times larger than normals when tracking ramps. The third test provides an estimate of the frequency response of the hand grasp system dynamics, from input and output data collected during a random tracking task. Transfer functions were estimated by spectral analysis after removal of the static input-output nonlinearities measured in the first test. The dynamics had low-pass filter characteristics with 3 dB cutoff frequencies from 1.0 to 1.4 Hz. The tests developed in this study provide a rapid evaluation of both the system and the user. They provide information to 1) help interpret subject performance of functional tasks, 2) evaluate the efficacy of system features such as closed-loop control, and 3) screen the neuroprosthesis to indicate the need for retuning.
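    The step- and ramp-tracking tests are scored by the rms error between the target trace and the measured grasp output; a minimal sketch of that score (units are whatever the force or position channel provides):

```python
import numpy as np

def rms_tracking_error(target, output):
    """Root-mean-square error between a pursuit-tracking target and the
    measured grasp output (force or opening), sampled at matching times."""
    target = np.asarray(target, dtype=float)
    output = np.asarray(output, dtype=float)
    return np.sqrt(np.mean((output - target) ** 2))
```

    Comparing this score across step and ramp targets, and against able-bodied controls, gives the two-to-five-fold ratios quoted above.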

  5. Integrated soil fertility management in sub-Saharan Africa: unravelling local adaptation

    NASA Astrophysics Data System (ADS)

    Vanlauwe, B.; Descheemaeker, K.; Giller, K. E.; Huising, J.; Merckx, R.; Nziguheba, G.; Wendt, J.; Zingore, S.

    2014-12-01

    Intensification of smallholder agriculture in sub-Saharan Africa is necessary to address rural poverty and natural resource degradation. Integrated Soil Fertility Management (ISFM) is a means to enhance crop productivity while maximizing the agronomic efficiency (AE) of applied inputs, and can thus contribute to sustainable intensification. ISFM consists of a set of best practices, preferably used in combination, including the use of appropriate germplasm, the appropriate use of fertilizer and of organic resources, and good agronomic practices. The large variability in soil fertility conditions within smallholder farms is also recognised within ISFM, including soils with constraints beyond those addressed by fertilizer and organic inputs. The variable biophysical environments that characterize smallholder farming systems have profound effects on crop productivity and AE and targeted application of limited agro-inputs and management practices is necessary to enhance AE. Further, management decisions depend on the farmer's resource endowments and production objectives. In this paper we discuss the "local adaptation" component of ISFM and how this can be conceptualized within an ISFM framework, backstopped by analysis of AE at plot and farm level. At plot level, a set of four constraints to maximum AE is discussed in relation to "local adaptation": soil acidity, secondary nutrient and micro-nutrient (SMN) deficiencies, physical constraints, and drought stress. In each of these cases, examples are presented whereby amendments and/or practices addressing these have a significantly positive impact on fertilizer AE, including mechanistic principles underlying these effects. While the impact of such amendments and/or practices is easily understood for some practices (e.g., the application of SMNs where these are limiting), for others, more complex interactions with fertilizer AE can be identified (e.g., water harvesting under varying rainfall conditions). 
At farm scale, adjusting fertilizer applications to within-farm soil fertility gradients has the potential to increase AE compared with blanket recommendations, in particular where fertility gradients are strong. In the final section, "local adaptation" is discussed in relation to scale issues, and decision support tools are evaluated as a means to create a better understanding of complexity at farm level and to communicate the best scenarios for allocating agro-inputs and management practices within heterogeneous farming environments.

  6. Integrated soil fertility management in sub-Saharan Africa: unravelling local adaptation

    NASA Astrophysics Data System (ADS)

    Vanlauwe, B.; Descheemaeker, K.; Giller, K. E.; Huising, J.; Merckx, R.; Nziguheba, G.; Wendt, J.; Zingore, S.

    2015-06-01

    Intensification of smallholder agriculture in sub-Saharan Africa is necessary to address rural poverty and natural resource degradation. Integrated soil fertility management (ISFM) is a means to enhance crop productivity while maximizing the agronomic efficiency (AE) of applied inputs, and can thus contribute to sustainable intensification. ISFM consists of a set of best practices, preferably used in combination, including the use of appropriate germplasm, the appropriate use of fertilizer and of organic resources, and good agronomic practices. The large variability in soil fertility conditions within smallholder farms is also recognized within ISFM, including soils with constraints beyond those addressed by fertilizer and organic inputs. The variable biophysical environments that characterize smallholder farming systems have profound effects on crop productivity and AE, and targeted application of agro-inputs and management practices is necessary to enhance AE. Further, management decisions depend on the farmer's resource endowments and production objectives. In this paper we discuss the "local adaptation" component of ISFM and how this can be conceptualized within an ISFM framework, backstopped by analysis of AE at plot and farm level. At plot level, a set of four constraints to maximum AE is discussed in relation to "local adaptation": soil acidity, secondary nutrient and micronutrient (SMN) deficiencies, physical constraints, and drought stress. In each of these cases, examples are presented whereby amendments and/or practices addressing these have a significantly positive impact on fertilizer AE, including mechanistic principles underlying these effects. While the impact of such amendments and/or practices is easily understood for some practices (e.g. the application of SMNs where these are limiting), for others, more complex processes influence AE (e.g. water harvesting under varying rainfall conditions). 
At farm scale, adjusting fertilizer applications to within-farm soil fertility gradients has the potential to increase AE compared with blanket recommendations, in particular where fertility gradients are strong. In the final section, "local adaptation" is discussed in relation to scale issues, and decision support tools are evaluated as a means to create a better understanding of complexity at farm level and to communicate appropriate scenarios for allocating agro-inputs and management practices within heterogeneous farming environments.

  7. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  8. Overview and Results of ISS Space Medicine Operations Team (SMOT) Activities

    NASA Technical Reports Server (NTRS)

    Johnson, H. Magee; Sargsyan, Ashot E.; Armstrong, Cheryl; McDonald, P. Vernon; Duncan, James M.; Bogomolov, V. V.

    2007-01-01

    The Space Medicine Operations Team (SMOT) was created to integrate International Space Station (ISS) Medical Operations, promote awareness of all Partners, provide emergency response capability and management, provide operational input from all Partners for medically relevant concerns, and provide a source of medical input to ISS Mission Management. The viewgraph presentation provides an overview of educational objectives, purpose, operations, products, statistics, and its use in off-nominal situations.

  9. Scanned Laser Illuminator/Receiver

    DTIC Science & Technology

    1976-11-01

    [Scanned OCR fragment; only partial content is recoverable: a specification table listing roughly 20 W optical power at the input, 100 W optical power at the output, and 10 kW input power for the oscillator; the beam is made directional by an oscillating scan mirror, with considerable optical magnification provided between object space and the scan mirror; a cited reference is R. A. McClatchey et al., "Optical Properties of the Atmosphere (Third Edition)," Air Force Cambridge Research Laboratories.]

  10. Technique for compressing light intensity ranges utilizing a specifically designed liquid crystal notch filter

    DOEpatents

    Rushford, Michael C.

    1988-01-01

    A pinhole camera assembly for use in viewing an object having a relatively large light intensity range, for example a crucible containing molten metal in an atomic vapor laser isotope separation (AVLIS) system, is disclosed herein. The assembly includes means for optically compressing the light intensity range appearing at its input sufficiently to make it receivable and decipherable by a standard video camera. To accomplish this, the assembly utilizes the combination of an interference filter and a liquid crystal notch filter. The latter, which preferably includes a cholesteric liquid crystal arrangement, is configured to pass light at all wavelengths except a relatively narrow wavelength band that defines the filter's notch, and includes means for causing the notch to vary, to at least a limited extent, with the intensity of light at its light-incidence surface.

  11. Accuracy enhancement for forecasting water levels of reservoirs and river streams using a multiple-input-pattern fuzzification approach.

    PubMed

    Valizadeh, Nariman; El-Shafie, Ahmed; Mirzaei, Majid; Galavi, Hadi; Mukhlisin, Muhammad; Jaafar, Othman

    2014-01-01

    Water level forecasting is an essential topic in water management affecting reservoir operations and decision making. Recently, modern methods utilizing artificial intelligence, fuzzy logic, and combinations of these techniques have been used in hydrological applications because of their considerable ability to map an input-output pattern without requiring prior knowledge of the criteria influencing the forecasting procedure. The artificial neurofuzzy interface system (ANFIS) is one of the most accurate models used in water resource management. Because the membership functions (MFs) possess the characteristics of smoothness and mathematical components, each set of input data is able to yield the best result using a certain type of MF in the ANFIS models. The objective of this study is to define different ANFIS models by applying different types of MFs to each type of input to forecast the water level in two case studies, the Klang Gates Dam and the Rantau Panjang station on the Johor River in Malaysia, and to compare the traditional ANFIS model with the newly introduced one in two different settings, reservoir and stream; the new approach outperformed the traditional one in both case studies. This objective is accomplished by evaluating the model fitness and performance in daily forecasting.
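The fuzzification step the abstract centers on can be illustrated with two common membership-function shapes; the function forms are standard, but the parameter values and water-level series below are purely hypothetical.

```python
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# One fuzzification per input series, as in the multiple-input-pattern idea:
level = np.array([2.1, 2.4, 2.8])         # hypothetical water levels (m)
mu_gauss = gauss_mf(level, c=2.5, s=0.3)  # membership grades in [0, 1]
mu_tri = tri_mf(level, a=2.0, b=2.5, c=3.0)
```

In an ANFIS model each input variable gets several such fuzzy sets, and the study's point is that the best-performing MF shape can differ per input; pairing each input with its own MF type is what distinguishes the multiple-input-pattern approach from a single shared MF type.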

  12. Accuracy Enhancement for Forecasting Water Levels of Reservoirs and River Streams Using a Multiple-Input-Pattern Fuzzification Approach

    PubMed Central

    Mirzaei, Majid; Jaafar, Othman

    2014-01-01

    Water level forecasting is an essential topic in water management affecting reservoir operations and decision making. Recently, modern methods utilizing artificial intelligence, fuzzy logic, and combinations of these techniques have been used in hydrological applications because of their considerable ability to map an input-output pattern without requiring prior knowledge of the criteria influencing the forecasting procedure. The artificial neurofuzzy interface system (ANFIS) is one of the most accurate models used in water resource management. Because the membership functions (MFs) possess the characteristics of smoothness and mathematical components, each set of input data is able to yield the best result using a certain type of MF in the ANFIS models. The objective of this study is to define different ANFIS models by applying different types of MFs to each type of input to forecast the water level in two case studies, the Klang Gates Dam and the Rantau Panjang station on the Johor River in Malaysia, and to compare the traditional ANFIS model with the newly introduced one in two different settings, reservoir and stream; the new approach outperformed the traditional one in both case studies. This objective is accomplished by evaluating the model fitness and performance in daily forecasting. PMID:24790567

  13. 'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2017-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable or as able to handle case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. 
Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
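The MC workflow described here (specify an uncertainty model for the inputs, draw realizations, run the model on each, summarize the output distribution) can be sketched in a few lines; 'spup' itself is an R package, and the toy model, distributions, and parameter values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(rainfall, k):
    """Stand-in environmental model: nonlinear runoff response."""
    return k * rainfall ** 1.5

n_mc = 1000  # number of Monte Carlo realizations

# Step 1: uncertainty model -- inputs described by probability distributions
rain = rng.normal(50.0, 5.0, n_mc)  # hypothetical rainfall (mm)
k = rng.normal(0.02, 0.004, n_mc)   # hypothetical model parameter

# Step 2: propagate each realization through the model
runoff = toy_model(rain, k)

# Step 3: summarize uncertainty about the model output
lo, hi = np.percentile(runoff, [2.5, 97.5])
print(f"mean: {runoff.mean():.1f}, 95% interval: [{lo:.1f}, {hi:.1f}]")
```

For brevity this uses plain random sampling; the package's stratified and Latin hypercube sampling options reduce the number of realizations needed for a stable output distribution.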

  14. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    PubMed

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. 
The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
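The graphical analysis mentioned above is commonly done with a Patlak plot: after equilibration, tissue activity divided by plasma activity is linear in "normalized time" (the running integral of the plasma input function divided by its current value), and the slope estimates the influx constant Ki. A minimal sketch with synthetic curves (toy data, not the article's phantoms):

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), starting at zero."""
    dt = np.diff(t)
    return np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)])

def patlak_ki(t, c_plasma, c_tissue, t_star=10.0):
    """Slope of the Patlak plot over its late, linear portion (t >= t_star)."""
    x = cumtrapz(c_plasma, t) / c_plasma  # "normalized time"
    y = c_tissue / c_plasma
    mask = t >= t_star
    slope, _intercept = np.polyfit(x[mask], y[mask], 1)
    return slope

# Synthetic curves with a known influx constant:
t = np.linspace(0.1, 60.0, 120)           # minutes
cp = np.exp(-0.1 * t) + 0.2               # hypothetical plasma input function
ki_true, v0 = 0.05, 0.3
ct = ki_true * cumtrapz(cp, t) + v0 * cp  # tissue curve consistent with Patlak
print(f"recovered Ki: {patlak_ki(t, cp, ct):.3f}")  # ≈ 0.050
```

The quality of the recovered Ki depends entirely on the input function estimate, which is why the tool's segmentation and population-based alternatives to blood sampling matter for the final quantification map.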

  15. Representing uncertainty in objective functions: extension to include the influence of serial correlation

    NASA Astrophysics Data System (ADS)

    Croke, B. F.

    2008-12-01

    The role of performance indicators is to give an accurate indication of the fit between a model and the system being modelled. As all measurements have an associated uncertainty (determining the significance that should be given to the measurement), performance indicators should take into account uncertainties in the observed quantities being modelled as well as in the model predictions (due to uncertainties in inputs, model parameters and model structure). In the presence of significant uncertainty in observed and modelled output of a system, failure to adequately account for variations in the uncertainties means that the objective function only gives a measure of how well the model fits the observations, not how well the model fits the system being modelled. Since in most cases the interest lies in fitting the system response, it is vital that the objective function(s) be designed to account for these uncertainties. Most objective functions (e.g. those based on the sum of squared residuals) assume homoscedastic uncertainties. If the model contribution to the variations in residuals can be ignored, then transformations (e.g. Box-Cox) can be used to remove (or at least significantly reduce) heteroscedasticity. An alternative which is more generally applicable is to explicitly represent the uncertainties in the observed and modelled values in the objective function. Previous work on this topic addressed the modifications to standard objective functions (Nash-Sutcliffe efficiency, RMSE, chi-squared, coefficient of determination) using the optimal weighted averaging approach. This paper extends this previous work, addressing the issue of serial correlation. A form for an objective function that includes serial correlation will be presented, and the impact on model fit discussed.
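The abstract defers the exact form of the objective function to the paper itself; one standard way to fold both heteroscedastic uncertainty and serial correlation into a sum-of-squares criterion is a generalized least-squares form with an AR(1) correlation structure, sketched below. This is a plausible illustration, not necessarily the author's formulation.

```python
import numpy as np

def gls_objective(obs, sim, sigma_obs, sigma_mod, rho):
    """Generalized least-squares objective r' C^{-1} r, where C combines
    per-point observation and model uncertainty with AR(1) serial
    correlation of coefficient rho (hypothetical form)."""
    r = obs - sim
    n = len(r)
    sigma = np.sqrt(sigma_obs ** 2 + sigma_mod ** 2)  # combined std per point
    idx = np.arange(n)
    corr = rho ** np.abs(idx[:, None] - idx[None, :])  # AR(1): rho**|i-j|
    cov = np.outer(sigma, sigma) * corr
    return r @ np.linalg.solve(cov, r)

obs = np.array([1.0, 2.0, 3.0])
sim = np.array([1.1, 1.9, 3.2])
phi = gls_objective(obs, sim, sigma_obs=np.full(3, 0.1),
                    sigma_mod=np.zeros(3), rho=0.5)
```

With rho = 0 the covariance matrix is diagonal and the criterion reduces to the weighted sum of squared residuals of the earlier (uncorrelated) work; a positive rho discounts runs of same-signed residuals rather than treating each as independent evidence of misfit.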

  16. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding.

    PubMed

    Ponce, Carlos R; Lomber, Stephen G; Livingstone, Margaret S

    2017-05-10

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated "PIT" units with different input histories (lacking "V2|3" or "V4" input) allowed for comparable levels of object-decoding performance and that removing a large fraction of "PIT" activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary "ventral stream" (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel "bypass" pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. Copyright © 2017 the authors 0270-6474/17/375019-16$15.00/0.

  17. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding

    PubMed Central

    2017-01-01

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated “PIT” units with different input histories (lacking “V2|3” or “V4” input) allowed for comparable levels of object-decoding performance and that removing a large fraction of “PIT” activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary “ventral stream” (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel “bypass” pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. PMID:28416597

  18. Serial dependence promotes object stability during occlusion

    PubMed Central

    Liberman, Alina; Zhang, Kathy; Whitney, David

    2016-01-01

    Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066

  19. Redesigning U.S. currency

    NASA Astrophysics Data System (ADS)

    Ferguson, Thomas A.; Church, Sara E.

    1996-03-01

    The first new design of United States currency in over 60 years will soon be issued. Its issuance will be the culmination of a 6-year effort to make U.S. currency more secure against widely available advanced reprographic technology. The cooperative effort was directed by the Advanced Counterfeit Deterrence (ACD) Steering Committee, with executive representatives from the Federal Reserve System (FRS), U.S. Secret Service (USSS), Bureau of Engraving and Printing (BEP) and Treasury Department. A task force of technical experts from each agency carried out the necessary evaluations. The overall strategy to determine the new design and new features applied a comprehensive, synergistic approach to target each type of currency user and each type of counterfeiting. To maximize objectivity yet expedite final selection, deterrent and detection technologies were evaluated through several parallel channels. These efforts included an open request for feature samples through the Commerce Business Daily, in-house testing of each feature, independent evaluation by the National Research Council, in-house design development and survey of world currencies. Recommendations were submitted by the Steering Committee to the Treasury Secretary for concept approval, announced in July 1994. Beginning in 1996, new designs will be issued by denomination approximately one per year, starting with the $100 bill. Future new design efforts will include input from the recently founded Securities Technology Institute (STI) at Johns Hopkins Applied Physics Laboratory. Input will include evaluation of existing features, development of new techniques and adversarial analysis.

  20. Assessing the required additional organic inputs to soils to reach the 4 per 1000 objective at the global scale: a RothC project

    NASA Astrophysics Data System (ADS)

    Lutfalla, Suzanne; Skalsky, Rastislav; Martin, Manuel; Balkovic, Juraj; Havlik, Petr; Soussana, Jean-François

    2017-04-01

    The 4 per 1000 Initiative underlines the role of soil organic matter in addressing the three-fold challenge of food security, adaptation of the land sector to climate change, and mitigation of human-induced GHG emissions. It sets an ambitious global target of a 0.4% (4/1000) annual increase in top soil organic carbon (SOC) stock. The present collaborative project between the 4 per 1000 research program, INRA and IIASA aims at providing a first global assessment of the translation of this soil organic carbon sequestration target into the equivalent organic matter inputs target. Indeed, soil organic carbon builds up in the soil through different processes leading to an increased input of carbon to the system (by increasing returns to the soil for instance) or a decreased output of carbon from the system (mainly by biodegradation and mineralization processes). Here we answer the question of how much extra organic matter must be added to agricultural soils every year (in otherwise unchanged climatic conditions) in order to guarantee a 0.4% yearly increase of total soil organic carbon stocks (40cm soil depth is considered). We use the RothC model of soil organic matter turnover on a spatial grid over 10 years to model two situations for croplands: a first situation where soil organic carbon remains constant (system at equilibrium) and a second situation where soil organic matter increases by 0.4% every year. The model accounts for the effects of soil type, temperature, moisture content and plant cover on the turnover process, it is run on a monthly time step, and it can simulate the needed organic input to sustain a certain SOC stock (or evolution of SOC stock). These two SOC conditions lead to two average yearly plant inputs over 10 years. 
The difference between the two simulated inputs represents the additional yearly input needed to reach the 4 per 1000 objective (input_eq for inputs needed for SOC to remain constant; input_4/1000 for inputs needed for SOC to reach the 4 per 1000 target). A spatial representation of this difference shows the distribution of the required returns to the soil. This first tool will provide the basis for the next steps: choosing and implementing practices to obtain the required additional input. Results will be presented from simulations at the regional scale (country: Slovakia) and at the global scale (0.5° grid resolution). Soil input data come from the HWSD; climatic input data come from the AgMERRA climate dataset averaged over a 30-year period (1980-2010). They show that, at the global scale, given some data corrections which will be presented and discussed, the 4 per 1000 increase in top soil organic carbon can be reached with a median additional input of +0.89 tC/ha/year for cropland soils.
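The input_4/1000 minus input_eq calculation can be sketched with a single-pool carbon balance; RothC itself uses five pools with monthly rate modifiers for soil type, temperature, moisture and cover, so the effective decomposition rate k and the stock value below are purely illustrative.

```python
# Single-pool sketch: dC/dt = I - k*C, so the input balancing decomposition
# is I_eq = k*C, and growing the stock by 0.4%/yr needs an extra 0.004*C.
def required_inputs(soc_stock, k, target_rate=0.004):
    """Return (equilibrium input, additional input) in t C/ha/yr."""
    input_eq = k * soc_stock                       # holds SOC constant
    input_4per1000 = input_eq + target_rate * soc_stock  # 4-per-1000 path
    return input_eq, input_4per1000 - input_eq

# Hypothetical values: 40 t C/ha stock, effective turnover rate 0.05 /yr
eq, extra = required_inputs(soc_stock=40.0, k=0.05)
print(f"equilibrium input: {eq:.2f} tC/ha/yr, additional: {extra:.2f}")
# -> equilibrium input: 2.00 tC/ha/yr, additional: 0.16
```

In this collapsed form the additional input is simply 0.4% of the stock each year; the full RothC runs differ because decomposition rates, and hence input_eq, vary spatially with climate and soil, which is what produces the global median of +0.89 tC/ha/yr reported above.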

  1. Detecting and discriminating novel objects: The impact of perirhinal cortex disconnection on hippocampal activity patterns

    PubMed Central

    Amin, Eman; Olarte‐Sánchez, Cristian M.; Aggleton, John P.

    2016-01-01

    Perirhinal cortex provides object-based information and novelty/familiarity information for the hippocampus. The necessity of these inputs was tested by comparing hippocampal c-fos expression in rats with or without perirhinal lesions. These rats either discriminated novel from familiar objects (Novel-Familiar) or explored pairs of novel objects (Novel-Novel). Despite impairing Novel-Familiar discriminations, the perirhinal lesions did not affect novelty detection, as measured by overall object exploration levels (Novel-Novel condition). The perirhinal lesions also largely spared a characteristic network of linked c-fos expression associated with novel stimuli (entorhinal cortex→CA3→distal CA1→proximal subiculum). The findings show: I) that perirhinal lesions preserve behavioral sensitivity to novelty, whilst still impairing the spontaneous ability to discriminate novel from familiar objects, II) that the distinctive patterns of hippocampal c-fos activity promoted by novel stimuli do not require perirhinal inputs, III) that entorhinal Fos counts (layers II and III) increase for novelty discriminations, IV) that hippocampal c-fos networks reflect proximal-distal connectivity differences, and V) that discriminating novelty creates different pathway interactions from merely detecting novelty, pointing to top-down effects that help guide object selection. © 2016 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:27398938

  2. Biologically-inspired robust and adaptive multi-sensor fusion and active control

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g., robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs. This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single closed-loop active control system. (1) Use a high-level plan or goal to determine and prioritize spatial locations or waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the appropriate hierarchy and representation in a spatial working memory, including invariant learning of these spatial representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial locations from hierarchical spatial working memory, using the "right" level of representation that can efficiently translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high-saliency visual objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal cues that include both visual and auditory inputs.

  3. Alternative Packaging for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An alternative scheme has been conceived for packaging of silicon-based back-illuminated, back-side-thinned complementary metal oxide/semiconductor (CMOS) and charge-coupled-device image-detector integrated circuits, including an associated fabrication process. This scheme and process are complementary to those described in "Making a Back-Illuminated Imager With Back-Side Connections" (NPO-42839), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), page 38. To avoid misunderstanding, it should be noted that in the terminology of imaging integrated circuits, "front side" or "back side" does not necessarily refer to the side that, during operation, faces toward or away from a source of light or other object to be imaged. Instead, "front side" signifies that side of a semiconductor substrate upon which the pixel pattern and the associated semiconductor devices and metal conductor lines are initially formed during fabrication, and "back side" signifies the opposite side. If the imager is of the type called "back-illuminated," then the back side is the one that faces an object to be imaged. Initially, a back-illuminated, back-side-thinned image-detector is fabricated with its back side bonded to a silicon handle wafer. At a subsequent stage of fabrication, the front side is bonded to a glass wafer (for mechanical support) and the silicon handle wafer is etched away to expose the back side. The front-side integrated circuitry includes metal input/output contact pads, which are rendered inaccessible by the bonding of the front side to the glass wafer. Hence, one of the main problems is to make the input/output contact pads accessible from the back side, which is ultimately to be the side accessible to the external world. The present combination of an alternative packaging scheme and associated fabrication process constitutes a solution to the problem.

  4. A neural network model of semantic memory linking feature-based object representation and words.

    PubMed

    Cuppini, C; Magosso, E; Ursino, M

    2009-06-01

    Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and among the lexical area and the feature areas are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words and segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).

  5. Holographic Associative Memory Employing Phase Conjugation

    NASA Astrophysics Data System (ADS)

    Soffer, B. H.; Marom, E.; Owechko, Y.; Dunning, G.

    1986-12-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions [1]. Various associative processors have been proposed that use electronic or optical means. Optical schemes [2-7], in particular those based on holographic principles [8], are well suited to associative processing because of their high parallelism and information throughput. Previous workers [8] demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  6. Human-telerobot interactions - Information, control, and mental models

    NASA Technical Reports Server (NTRS)

    Smith, Randy L.; Gillan, Douglas J.

    1987-01-01

    Part of NASA's Space Station will be a teleoperated robot (telerobot) with arms for grasping and manipulation, feet for holding onto objects, and television cameras for visual feedback. The objective of the work described in this paper is to develop the requirements and specifications for the user-telerobot interface and to determine through research and testing that the interface results in efficient system operation. The focus of the development of the user-telerobot interface is on the information required by the user, the user inputs, and the design of the control workstation. Closely related to both the information required by the user and the user's control of the telerobot is the user's mental model of the relationship between the control inputs and the telerobot's actions.

  7. Biometric identification

    NASA Astrophysics Data System (ADS)

    Syryamkim, V. I.; Kuznetsov, D. N.; Kuznetsova, A. S.

    2018-05-01

    Image recognition is an information process implemented by an information converter (an intelligent information channel, or recognition system) having an input and an output. The input of the system is fed with information about the characteristics of the objects being presented. The output of the system reports which classes (generalized images) the recognized objects are assigned to. When creating and operating an automated pattern-recognition system, a number of problems must be solved; different authors formulate these problems, and even the set of problems itself, differently, since the formulation depends to a certain extent on the specific mathematical model on which a given recognition system is based. These problems include formalizing the domain, forming a training sample, training the recognition system, and reducing the dimensionality of the feature space.
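The tasks listed at the end of this record (forming a training sample, training the system, assigning objects to classes) can be illustrated with a deliberately simple recognizer. A nearest-centroid rule stands in for the unspecified mathematical model, and the feature vectors and class labels are invented for the example.

```python
# Sketch of a pattern-recognition pipeline: learn one prototype (centroid)
# per class from a training sample, then assign new objects to the class
# with the nearest prototype. Data and labels are hypothetical.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: {class_label: [feature_vector, ...]} -> {class_label: centroid}."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

training = {
    "class_A": [[0.0, 0.1], [0.2, 0.0]],
    "class_B": [[1.0, 0.9], [0.9, 1.1]],
}
model = train(training)
```

Dimensionality reduction, the remaining task on the list, would be a preprocessing step applied to the feature vectors before training.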

  8. Buried Object Classification using a Sediment Volume Imaging SAS and Electromagnetic Gradiometer

    DTIC Science & Technology

    2006-09-01

    field data with simulated RTG data using AST’s in-house magnetic modeling tool EMAGINE. Given a set of input dipole moments, or parameters to...approximate a moment by assuming the object is a prolate ellipsoid shell, EMAGINE uses Green’s function formulations to generate three-component

  9. Analyzing Preschoolers' Overgeneralizations of Object Labeling in the Process of Mother-Tongue Acquisition in Turkey

    ERIC Educational Resources Information Center

    Kabadayi, Abdulkadir

    2006-01-01

    Language, as is known, is acquired under certain conditions: rapid and sequential brain maturation and cognitive development, the need to exchange information and to control others' actions, and an exposure to appropriate speech input. This research aims at analyzing preschoolers' overgeneralizations of the object labeling process in different…

  10. Improving Educational Objectives of the Industrial and Management Systems Engineering Programme at Kuwait University

    ERIC Educational Resources Information Center

    Aldowaisan, Tariq; Allahverdi, Ali

    2016-01-01

    This paper describes the process of developing programme educational objectives (PEOs) for the Industrial and Management Systems Engineering programme at Kuwait University, and the process of deployment of these PEOs. Input of the four constituents of the programme, faculty, students, alumni, and employers, is incorporated in the development and…

  11. Barriers and Opportunities Related to Whole Grain Foods in Minnesota School Foodservice

    ERIC Educational Resources Information Center

    Hesse, David; Braun, Curtis; Dostal, Allison; Jeffery, Robert; Marquart, Len

    2009-01-01

    Purpose/Objectives: The purpose of this research was to identify barriers and opportunities associated with the introduction of whole grain foods into school cafeterias. The primary objective was to elicit input from school foodservice personnel (SFP) regarding their experiences in ordering, purchasing, preparing, and serving whole grain foods in…

  12. Grasp Representations Depend on Knowledge and Attention

    ERIC Educational Resources Information Center

    Chua, Kao-Wei; Bub, Daniel N.; Masson, Michael E. J.; Gauthier, Isabel

    2018-01-01

    Seeing pictures of objects activates the motor cortex and can have an influence on subsequent grasping actions. However, the exact nature of the motor representations evoked by these pictures is unclear. For example, action plans engaged by pictures could be most affected by direct visual input and computed online based on object shape.…

  13. Fuzzy logic control and optimization system

    DOEpatents

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
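The closed-loop idea in this record (measure the chemical loop's output, map the error through fuzzy rules, and correct the input) can be sketched with a single fuzzy controller. The triangular membership functions, rule outputs, and scaling below are illustrative assumptions, not taken from the patent.

```python
# Sketch of one fuzzy feedback step: an output error is fuzzified with
# triangular membership functions, three rules fire, and the weighted
# average of the rule outputs gives the input correction.
# Membership shapes and rule outputs are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(error):
    """Map an output error to an input-signal correction via three rules."""
    rules = [
        (tri(error, -2.0, -1.0, 0.0), +0.5),  # error negative -> raise input
        (tri(error, -1.0,  0.0, 1.0),  0.0),  # error near zero -> hold
        (tri(error,  0.0,  1.0, 2.0), -0.5),  # error positive -> lower input
    ]
    num = sum(weight * out for weight, out in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den else 0.0
```

The patent's hierarchical arrangement would compose many such controllers, with higher-level controllers supervising lower-level ones; this sketch shows only the defuzzification step of a single one.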

  14. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie-loop playback, slow motion and freeze-frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence database generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
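The centroid measurement in feature 3 reduces to an intensity-weighted mean over thresholded pixels. A minimal sketch on a hypothetical 4x4 grayscale frame:

```python
# Sketch of object-centroid measurement: threshold a grayscale frame and
# compute the intensity-weighted centroid of the bright pixels.
# The frame and threshold are hypothetical.

def centroid_of_frame(frame, threshold):
    """Intensity-weighted centroid (row, col) of pixels above threshold, or None."""
    total = sx = sy = 0.0
    for r, row in enumerate(frame):
        for c, v in enumerate(row):
            if v > threshold:
                total += v
                sy += r * v
                sx += c * v
    if total == 0:
        return None
    return (sy / total, sx / total)

frame = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
```

A per-field tracker would run this (or a leading-edge detector) on each incoming video field and feed the coordinates to the trajectory database.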

  15. Method and apparatus for providing high bandwidth, low noise mechanical I/O for computer systems

    DOEpatents

    Rosenberg, Louis B.

    1998-01-01

    A method and apparatus for providing high bandwidth and low noise mechanical input and output for computer systems. A gimbal mechanism provides two revolute degrees of freedom to an object about two axes of rotation. A linear axis member is coupled to the gimbal mechanism at the intersection of the two axes of rotation. The linear axis member is capable of being translated along a third axis to provide a third degree of freedom. The user object is coupled to the linear axis member and is thus translatable along the third axis so that the object can be moved along all three degrees of freedom. Transducers associated with the provided degrees of freedom include sensors and actuators and provide an electromechanical interface between the object and a digital processing system. Capstan drive mechanisms transmit forces between the transducers and the object. The linear axis member can also be rotated about its lengthwise axis to provide a fourth degree of freedom, and, optionally, a floating gimbal mechanism is coupled to the linear axis member to provide fifth and sixth degrees of freedom to an object. Transducer sensors are associated with the fourth, fifth, and sixth degrees of freedom. The interface is well suited for simulations of medical procedures and simulations in which an object such as a stylus or a joystick is moved and manipulated by the user.

  16. Multiple degree-of-freedom mechanical interface to a computer system

    DOEpatents

    Rosenberg, Louis B.

    2001-01-01

    A method and apparatus for providing high bandwidth and low noise mechanical input and output for computer systems. A gimbal mechanism provides two revolute degrees of freedom to an object about two axes of rotation. A linear axis member is coupled to the gimbal mechanism at the intersection of the two axes of rotation. The linear axis member is capable of being translated along a third axis to provide a third degree of freedom. The user object is coupled to the linear axis member and is thus translatable along the third axis so that the object can be moved along all three degrees of freedom. Transducers associated with the provided degrees of freedom include sensors and actuators and provide an electromechanical interface between the object and a digital processing system. Capstan drive mechanisms transmit forces between the transducers and the object. The linear axis member can also be rotated about its lengthwise axis to provide a fourth degree of freedom, and, optionally, a floating gimbal mechanism is coupled to the linear axis member to provide fifth and sixth degrees of freedom to an object. Transducer sensors are associated with the fourth, fifth, and sixth degrees of freedom. The interface is well suited for simulations of medical procedures and simulations in which an object such as a stylus or a joystick is moved and manipulated by the user.

  17. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  18. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
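The interleaved-chunk copy-and-reduce scheme the claim describes can be sketched sequentially. The chunk size, buffer contents, and the use of summation as the reduction operator are illustrative assumptions; the patent's cores would perform the copies concurrently.

```python
# Sketch of the local reduction: two "reduction cores" copy their input
# buffers in alternating chunks into a shared interleaved buffer, the
# network cores' buffers are copied alongside, and all four buffers are
# then reduced elementwise (summation stands in for the reduction op).

CHUNK = 2

def interleave(buf_a, buf_b, chunk=CHUNK):
    """Copy two equal-length buffers into one interleaved buffer, chunk by chunk."""
    out = []
    for i in range(0, len(buf_a), chunk):
        out.extend(buf_a[i:i + chunk])
        out.extend(buf_b[i:i + chunk])
    return out

def deinterleave_reduce(interleaved, net_write, net_read, chunk=CHUNK):
    """Recover the two interleaved streams and sum all four buffers elementwise."""
    a, b = [], []
    for i in range(0, len(interleaved), 2 * chunk):
        a.extend(interleaved[i:i + chunk])
        b.extend(interleaved[i + chunk:i + 2 * chunk])
    return [w + x + y + z for w, x, y, z in zip(a, b, net_write, net_read)]

ibuf = interleave([1, 2, 3, 4], [10, 20, 30, 40])
result = deinterleave_reduce(ibuf, [100, 100, 100, 100], [1000, 1000, 1000, 1000])
```

Interleaving lets each reduction core work on "every other chunk" of the shared buffer without contending for the same memory region.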

  19. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  20. Hybrid powertrain system including smooth shifting automated transmission

    DOEpatents

    Beaty, Kevin D.; Nellums, Richard A.

    2006-10-24

    A powertrain system is provided that includes a prime mover and a change-gear transmission having an input, at least two gear ratios, and an output. The powertrain system also includes a power shunt configured to route power applied to the transmission by one of the input and the output to the other one of the input and the output. A transmission system and a method for facilitating shifting of a transmission system are also provided.

  1. Statistical Learning Is Constrained to Less Abstract Patterns in Complex Sensory Input (but not the Least)

    PubMed Central

    Emberson, Lauren L.; Rubinstein, Dani

    2016-01-01

    The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1— dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. 
individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study are already clear: statistical learning cannot be assumed to be unconstrained even if statistical learning has previously been established at a given level of abstraction when that information is presented in isolation. PMID:27139779
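The contrast between object-level and category-level regularities can be made concrete by estimating transition probabilities from a stimulus stream at each level of abstraction. The stream and the category mapping below are illustrative, not the experiment's actual stimuli.

```python
# Sketch of the paired-stimulus design: estimate P(next | current) from
# adjacent pairs in a stream, once over individual object tokens and once
# over their semantic categories. Tokens and categories are hypothetical.

from collections import Counter

def transition_probs(stream):
    """P(next | current) estimated from adjacent pairs in the stream."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

category = {"bird1": "bird", "bird2": "bird", "dog1": "dog", "dog2": "dog"}

# Object-level pairing as in the paper's design: bird1 -> dog1, bird2 -> dog2.
stream = ["bird1", "dog1", "bird2", "dog2", "bird1", "dog1", "bird2", "dog2"]
obj_probs = transition_probs(stream)
cat_probs = transition_probs([category[t] for t in stream])
```

In this stream the regularity holds at both levels (bird1 always predicts dog1, and "bird" always predicts "dog"); the experiments ask which of these co-present regularities learners actually encode.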

  2. Frequency-Agile LIDAR Receiver for Chemical and Biological Agent Sensing

    DTIC Science & Technology

    2010-06-01

    transimpedance preamplifier architecture was optimized around the selected IR detector diode – input-referenced noise density of 0.8 nV/Hz^0.5. A portion of...objectives: • Reduce baseline (background) photon flux on detector: tunable Fabry-Perot etalon in optical train • Reduce input-referenced amplifier noise: ...custom amplifier • Reduce detector dark current: high-impedance detector. Performance metrics: – Noise equivalent power of receiver system (NEP

  3. Development and Application of a Stepwise Assessment Process for Rational Redesign of Sequential Skills-Based Courses.

    PubMed

    Gallimore, Casey E; Porter, Andrea L; Barnett, Susanne G

    2016-10-25

    Objective. To develop and apply a stepwise process to assess achievement of course learning objectives related to advanced pharmacy practice experiences (APPEs) preparedness and inform redesign of sequential skills-based courses. Design. Four steps comprised the assessment and redesign process: (1) identify skills critical for APPE preparedness; (2) utilize focus groups and course evaluations to determine student competence in skill performance; (3) apply course mapping to identify course deficits contributing to suboptimal skill performance; and (4) initiate course redesign to target exposed deficits. Assessment. Focus group participants perceived students were least prepared for skills within the Accreditation Council for Pharmacy Education's pre-APPE core domains of Identification and Assessment of Drug-related Problems and General Communication Abilities. Course mapping identified gaps in instruction, performance, and assessment of skills within the aforementioned domains. Conclusions. A stepwise process that identified strengths and weaknesses of a course was used to facilitate structured course redesign. Strengths of the process included input and corroboration from both preceptors and students. Limitations included feedback from a small number of pharmacy preceptors and increased workload on course coordinators.

  4. Glacier Surface Lowering and Stagnation in the Manaslu Region of Nepal

    NASA Astrophysics Data System (ADS)

    Robson, B. A.; Nuth, C.; Nielsen, P. R.; Hendrickx, M.; Dahl, S. O.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology, not only glacier area change analysis, but also for masks in volume or velocity analysis, for the estimation of water resources and as model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal. SAR coherence data is used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example uses a high-resolution LiDAR-derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and the NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically from a histogram of each image subset. This means that, in theory, any Landsat scene can be supplied as input and the clean ice extracted automatically. Debris-covered ice can be included semi-automatically using contextual and morphological information.
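The automated clean-ice step in the third workflow (a band ratio thresholded from the image's own histogram) can be sketched as follows. The abstract does not specify the histogram rule, so Otsu's between-class-variance criterion is used here as a plausible stand-in, and the pixel values are synthetic.

```python
# Sketch of automated clean-ice mapping: compute a NIR/SWIR ratio per
# pixel, derive a threshold from the histogram of the values themselves
# (Otsu's method), and classify pixels above the threshold as clean ice.
# The reflectance values are synthetic; real inputs would be Landsat bands.

def band_ratio(nir, swir):
    """NIR/SWIR ratio per pixel; high values indicate clean ice."""
    return [n / s for n, s in zip(nir, swir)]

def otsu_threshold(values, bins=32):
    """Histogram-derived threshold maximizing between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var = lo, -1.0
    w0, sum0 = 0, 0.0
    for i in range(bins):
        w0 += hist[i]
        sum0 += (lo + (i + 0.5) * width) * hist[i]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t

nir = [30.0, 31.0, 29.0, 10.0, 11.0, 9.0]   # synthetic: 3 ice, 3 rock pixels
swir = [10.0] * 6
ratios = band_ratio(nir, swir)
threshold = otsu_threshold(ratios)
ice_mask = [r > threshold for r in ratios]
```

Because the threshold comes from each scene's own histogram, no scene-specific tuning is needed, which is what makes the Norway workflow fully automatic.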

  5. A pay-for-performance system for civil service doctors: the Indonesian experiment.

    PubMed

    Chernichovsky, D; Bayulken, C

    1995-07-01

    In 1980 the Government of Indonesia proposed the introduction of a pay-for-performance system, the Functional Position System (FPS), for certain occupational categories of civil servants to provide a career development path and stimulate productivity (Government of Indonesia. Government Ordinance No. 3, 1980 Concerning Appointment to Civil Service Rank. Jakarta, 1980). The FPS, a bold pay concept in the civil service, links pay to skills and performance. In 1987, instructions were issued for doctors to be included in the system (Government of Indonesia, Credit Scores for Doctors. Circular Issued by the Ministry of Health and the Agency for Administration of the Civil Service No. 614/MENKES/E/VIII/1987 and No. 16/SE/1987). In this paper we evaluate how well the system (which in principle could be applicable to both developed and developing economies) can meet its stated objectives for Indonesian doctors working in the community, and for Indonesian health policy objectives as stated in the country's last five-year development plan "Repelita V" (Government of Indonesia. The Fifth Five-year Development Plan (Repelita V) 1989-1994. Jakarta, Indonesia, 1989). The FPS is particularly innovative in the Indonesian environment where wages are low and comparatively uniform, reflecting a philosophy of 'shared poverty', and vary primarily by seniority. The FPS has, however, several conceptual and practical shortcomings. The design of the reward system disregards effort or time inputs, as well as other inputs needed per unit of reward. Consequently, the FPS cannot be used as an effective incentive system promoting professional excellence and health policy objectives. Practically, the system hardly provides an effective alternative for career development among community physicians. (ABSTRACT TRUNCATED AT 250 WORDS)

  6. Non-destructive testing of ceramic materials using mid-infrared ultrashort-pulse laser

    NASA Astrophysics Data System (ADS)

    Sun, S. C.; Qi, Hong; An, X. Y.; Ren, Y. T.; Qiao, Y. B.; Ruan, Liming M.

    2018-04-01

    The non-destructive testing (NDT) of ceramic materials using a mid-infrared ultrashort-pulse laser is investigated in this study. The discrete ordinate method is applied to solve the transient radiative transfer equation in a 2D semitransparent medium, and the emerging radiative intensity on the boundary serves as input for the inverse analysis. The sequential quadratic programming algorithm is employed as the inverse technique to optimize the objective function, in which the gradient of the objective function with respect to the reconstruction parameters is calculated using the adjoint model. Two reticulated porous ceramics, partially stabilized zirconia and oxide-bonded silicon carbide, are tested. The retrieval results show that the main characteristics of defects such as optical properties, geometric shapes and positions can be accurately reconstructed by the present model. The proposed technique is effective and robust in the NDT of ceramics even with measurement errors.

  7. Highly Reconfigurable Beamformer Stimulus Generator

    NASA Astrophysics Data System (ADS)

    Vaviļina, E.; Gaigals, G.

    2018-02-01

    The present paper proposes a highly reconfigurable beamformer stimulus generator for a radar antenna array, which includes three main blocks: settings of the antenna array, settings of objects (signal sources) and a beamforming simulator. From the configuration of the antenna array and the object settings, different stimuli can be generated as the input signal for a beamformer. The stimulus generator is developed under a broader concept with two fully independent paths, one being the stimulus generator and the other the hardware beamformer. The two paths can be cross-checked against each other at intermediate and final steps to verify and improve system performance. This supports the technology development process by making each future hardware step more substantive. Stimulus generator configuration capabilities and test results are presented, proving the applicability of the stimulus generator for developing and tuning an FPGA-based beamforming unit as an alternative to an actual antenna system.
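The stimulus path described here (configure the array and the signal sources, then synthesize the per-element input signals a beamformer would see) can be sketched for a narrowband uniform linear array. The element count, spacing, source angle, and tone frequency are hypothetical, and a simple phase-compensating beamformer is included only to check the generated stimulus.

```python
# Sketch of a beamformer stimulus generator: synthesize the complex
# samples each antenna element receives from a source at a given angle,
# then verify them with a delay-and-sum (phase-compensating) beamformer.
# Array geometry and source parameters are hypothetical.

import cmath
import math

def element_signals(n_elements, spacing_wl, angle_deg, n_samples=64, freq=0.05):
    """Per-element complex samples of a unit tone arriving from angle_deg.

    spacing_wl is the element spacing in wavelengths; the phase shift
    between adjacent elements is 2*pi*spacing_wl*sin(angle).
    """
    phi = 2 * math.pi * spacing_wl * math.sin(math.radians(angle_deg))
    return [
        [cmath.exp(1j * (2 * math.pi * freq * t + k * phi)) for t in range(n_samples)]
        for k in range(n_elements)
    ]

def steered_power(signals, spacing_wl, steer_deg):
    """Mean output power when the beamformer is steered toward steer_deg."""
    phi = 2 * math.pi * spacing_wl * math.sin(math.radians(steer_deg))
    n = len(signals[0])
    acc = 0.0
    for t in range(n):
        s = sum(sig[t] * cmath.exp(-1j * k * phi) for k, sig in enumerate(signals))
        acc += abs(s) ** 2
    return acc / n

# Source (object) at 20 degrees; 8-element, half-wavelength-spaced array.
stim = element_signals(n_elements=8, spacing_wl=0.5, angle_deg=20)
on_target = steered_power(stim, 0.5, 20)     # steered at the source
off_target = steered_power(stim, 0.5, -40)   # steered well away
```

Feeding such synthesized stimuli to the hardware beamformer lets each development step be checked against a known source configuration before any antenna exists.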

  8. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  9. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  10. The 11.2 μm emission of PAHs in astrophysical objects

    NASA Astrophysics Data System (ADS)

    Candian, A.; Sarre, P. J.

    2015-04-01

    The 11.2-μm emission band belongs to the family of the `unidentified' infrared emission bands seen in many astronomical environments. In this work, we present a theoretical interpretation of the band characteristics and profile variation for a number of astrophysical sources in which the carriers are subject to a range of physical conditions. The results of Density Functional Theory calculations for the solo out-of-plane vibrational bending modes of large polycyclic aromatic hydrocarbon (PAH) molecules are used as input for a detailed emission model which includes the temperature and mass dependence of PAH band wavelength, and a PAH mass distribution that varies with object. Comparison of the model with astronomical spectra indicates that the 11.2-μm band asymmetry and profile variation can be explained principally in terms of the mass distribution of neutral PAHs with a small contribution from anharmonic effects.

  11. Multi-attribute subjective evaluations of manual tracking tasks vs. objective performance of the human operator

    NASA Technical Reports Server (NTRS)

    Siapkaras, A.

    1977-01-01

    A computational method is developed to deal with the multidimensional nature of tracking and/or monitoring tasks. Operator-centered variables, including the operator's perception of the task, are considered. Matrix ratings are defined based on multidimensional scaling techniques and multivariate analysis. The method consists of two distinct steps: (1) determining the mathematical space of subjective judgements of a certain individual (or group of evaluators) for a given set of tasks and experimental conditions; and (2) relating this space to both the task variables and the objective performance criteria used. Results for a variety of second-order tracking tasks with smoothed noise-driven inputs indicate that: (1) many of the internally perceived task variables form a nonorthogonal set; and (2) the structure of the subjective space varies among groups of individuals according to the degree of familiarity they have with such tasks.

  12. Dynamic light scattering homodyne probe

    NASA Technical Reports Server (NTRS)

    Meyer, William V. (Inventor); Cannell, David S. (Inventor); Smart, Anthony E. (Inventor)

    2002-01-01

    An optical probe for analyzing a sample illuminated by a laser includes an input optical fiber operably connectable to the laser where the input optical fiber has an entrance end and an exit end. The probe also includes a first beam splitter where the first beam splitter is adapted to transmit an alignment portion of a light beam from the input fiber exit end and to reflect a homodyning portion of the light beam from the input fiber. The probe also includes a lens between the input fiber exit end and the first beam splitter and a first and a second output optical fiber, each having an entrance end and an exit end, each exit end being operably connectable to respective optical detectors. The probe also includes a second beam splitter which is adapted to reflect at least a portion of the reflected homodyning portion into the output fiber entrance ends and to transmit light from the laser scattered by the sample into the entrance ends.

  13. The neural basis of human tool use

    PubMed Central

    Orban, Guy A.; Caruana, Fausto

    2014-01-01

    In this review, we propose that the neural basis for spontaneous, diversified human tool use is an area devoted to the execution and observation of tool actions, located in the left anterior supramarginal gyrus (aSMG). The aSMG activation elicited by observing tool use is typical of human subjects, as macaques show no similar activation, even after extensive training in tool use. The execution of tool actions, as well as their observation, requires the convergence upon aSMG of inputs from different parts of the dorsal and ventral visual streams. Non-semantic features of the target object may be provided by the posterior parietal cortex (PPC) for tool-object interaction, paralleling the well-known PPC input to the anterior intraparietal area (AIP) for hand-object interaction. Semantic information regarding tool identity, and knowledge of the typical manner of handling the tool, could be provided by inferior and middle regions of the temporal lobe. Somatosensory feedback and technical reasoning, as well as motor and intentional constraints, also play roles during the planning of tool actions, and consequently their signals likewise converge upon aSMG. We further propose that aSMG may have arisen through duplication of monkey AIP and invasion of the duplicate area by afferents from PPC providing distinct signals depending on the kinematics of the manipulative action. This duplication may have occurred when Homo habilis or Homo erectus emerged, generating the Oldowan or Acheulean industrial complexes, respectively. Hence tool use may have emerged during hominid evolution between bipedalism and language. We conclude that humans have two parietal systems involved in tool behavior: a biological circuit for grasping objects, including tools, and an artifactual system devoted specifically to tool use. Only the latter allows humans to understand the causal relationship between tool use and obtaining the goal, and is likely to be the basis of all technological developments.
PMID:24782809

  14. Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blankenship, Doug; Sonnenthal, Eric

    Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See 'File Descriptions.xlsx' resource below for additional information on individual files.

  15. Method and apparatus for clockless analog-to-digital conversion and peak detection

    DOEpatents

    DeGeronimo, Gianluigi

    2007-03-06

    An apparatus and method for analog-to-digital conversion and peak detection includes at least one stage, which includes a first switch, second switch, current source or capacitor, and discriminator. The discriminator changes state in response to a current or charge associated with the input signal exceeding a threshold, thereby indicating whether the current or charge associated with the input signal is greater than the threshold. The input signal includes a peak or a charge, and the converter includes a peak or charge detect mode in which a state of the switch is retained in response to a decrease in the current or charge associated with the input signal. The state of the switch represents at least a portion of a value of the peak or of the charge.
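
    The latching behavior described above can be illustrated in software; this is a toy model of the idea (function name and threshold values are ours), not the patented circuit.

```python
def clockless_peak_detect(samples, thresholds):
    """Sketch of the peak-detect mode: each stage's discriminator changes
    state when the input exceeds its threshold, and that switch state is
    retained when the input later decreases, so the final latched pattern
    (a thermometer code) encodes the peak value."""
    latched = [False] * len(thresholds)
    for x in samples:
        for i, th in enumerate(thresholds):
            if x > th:
                latched[i] = True  # retained on any later decrease
    return sum(latched)  # number of stages tripped by the peak

# Input rises to a peak of 0.72 and then decays; thresholds every 0.1.
code = clockless_peak_detect([0.1, 0.4, 0.72, 0.3, 0.05],
                             [i / 10 for i in range(1, 10)])
# code == 7: the stages with thresholds 0.1 through 0.7 latched
```

    No clock is needed because each stage reacts directly to the input crossing its threshold, and the retained switch states together bound the peak from below.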

  16. High speed high dynamic range high accuracy measurement system

    DOEpatents

    Deibele, Craig E.; Curry, Douglas E.; Dickson, Richard W.; Xie, Zaipeng

    2016-11-29

    A measuring system includes an input that emulates a bandpass filter with no signal reflections. A directional coupler connected to the input passes the filtered input to electrically isolated measuring circuits. Each of the measuring circuits includes an amplifier that amplifies the signal through logarithmic functions. The output of the measuring system is an accurate high dynamic range measurement.

  17. Intensive Input in Language Acquisition.

    ERIC Educational Resources Information Center

    Trimino, Andy; Ferguson, Nancy

    This paper discusses the role of input as one of the universals in second language acquisition theory. Considerations include how language instructors can best organize and present input and when certain kinds of input are more important. A self-administered program evaluation exercise using relevant theoretical and methodological contributions…

  18. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
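
    The hypothesized parallel memory-percept comparison can be caricatured as an elementwise mismatch check; this toy sketch (names are ours) only illustrates the analogy to feature pop-out, not the experiments' stimuli.

```python
def detect_change(memory, percept):
    """Toy parallel comparison of a remembered array with a new percept:
    elementwise mismatch detection along a task-relevant dimension,
    returning the index of the first changed object (where attention
    would be shifted), or None if nothing changed."""
    diffs = [i for i, (m, p) in enumerate(zip(memory, percept)) if m != p]
    return diffs[0] if diffs else None

detect_change(['red', 'blue', 'green'], ['red', 'teal', 'green'])  # -> 1
```

    The study's further point is that this parallel detection only shifts attention; a slower, limited-capacity process must still run before a manual change-detection response can be made.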

  19. Generative Representations for Evolving Families of Designs

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2003-01-01

    Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by obviating the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters, the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural networks that solve the 3/5/7-parity problem and three-dimensional tables of varying heights.
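
    The core idea, that a generative representation is a procedure whose input parameter steers the design it produces, can be sketched with the paper's table substrate. This is a hypothetical toy encoding (names and geometry are ours), not the system's actual representation.

```python
def table_design(height, leg_spacing=4):
    """A generative representation is an algorithm, so its input parameter
    controls aspects of the artifact it generates: here 'height' controls
    the legs while the leg layout (the family's shared style) is reused
    unchanged across members of the family."""
    legs = [(x, y, height) for x in (0, leg_spacing) for y in (0, leg_spacing)]
    top = {"width": leg_spacing + 1, "depth": leg_spacing + 1, "z": height}
    return {"legs": legs, "top": top}

# One encoding, a family of designs: same style, parameter-controlled height.
short_table = table_design(2)
tall_table = table_design(8)
```

    Evaluating the same individual at several parameter values during evolution is what pressures the encoding to make the parameter control the intended aspect of the design.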

  20. More than words: Adults learn probabilities over categories and relationships between them.

    PubMed

    Hudson Kam, Carla L

    2009-04-01

    This study examines whether human learners can acquire statistics over abstract categories and their relationships to each other. Adult learners were exposed to miniature artificial languages containing variation in the ordering of the Subject, Object, and Verb constituents. Different orders (e.g. SOV, VSO) occurred in the input with different frequencies, but the occurrence of one order versus another was not predictable. Importantly, the language was constructed such that participants could only match the overall input probabilities if they were tracking statistics over abstract categories, not over individual words. At test, participants reproduced the probabilities present in the input with a high degree of accuracy. Closer examination revealed that learners were matching the probabilities associated with individual verbs rather than the category as a whole. However, individual nouns had no impact on word orders produced. Thus, participants learned the probabilities of a particular ordering of the abstract grammatical categories Subject and Object associated with each verb. Results suggest that statistical learning mechanisms are capable of tracking relationships between abstract linguistic categories in addition to individual items.
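
    The statistic the learners appeared to track, order probabilities conditioned on the verb rather than pooled over the whole category, amounts to a conditional frequency count. A minimal sketch with a hypothetical made-up corpus (verbs and counts are ours):

```python
from collections import Counter, defaultdict

def order_probabilities(corpus):
    """Estimate word-order probabilities conditioned on the verb, rather
    than pooling over the whole Subject/Object/Verb category."""
    per_verb = defaultdict(Counter)
    for verb, order in corpus:
        per_verb[verb][order] += 1
    return {v: {o: n / sum(c.values()) for o, n in c.items()}
            for v, c in per_verb.items()}

# Hypothetical input: 'push' occurs 3x SOV and 1x VSO; 'see' is always VSO.
corpus = [("push", "SOV")] * 3 + [("push", "VSO")] + [("see", "VSO")] * 2
probs = order_probabilities(corpus)
# probs["push"]["SOV"] == 0.75 and probs["see"]["VSO"] == 1.0
```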

  1. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.

  2. Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.; Ratnayake, Nalin A.

    2011-01-01

    As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.

  3. The cost of a case of subclinical ketosis in Canadian dairy herds

    PubMed Central

    Gohary, Khaled; Overton, Michael W.; Von Massow, Michael; LeBlanc, Stephen J.; Lissemore, Kerry D.; Duffield, Todd F.

    2016-01-01

    The objective of this study was to develop a model to estimate the cost of a case of subclinical ketosis (SCK) in Canadian dairy herds. Costs were derived from the default inputs, and included increased clinical disease incidence attributable to SCK, $76; longer time to pregnancy, $57; culling and death in early lactation attributable to SCK, $26; milk production loss, $44. Given these figures, the cost of 1 case of SCK was estimated to be $203. Sensitivity analysis showed that the estimated cost of a case of SCK was most sensitive to the herd-level incidence of SCK and the cost of 1 day open. In conclusion, SCK negatively impacts dairy herds and losses are dependent on the herd-level incidence and factors included in the calculation. PMID:27429460
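
    The component costs reported above sum to the headline estimate; a minimal check of the arithmetic (the dictionary keys are our labels, the figures are the abstract's):

```python
# Component costs (CAD) for one case of subclinical ketosis,
# from the study's default inputs (variable names are ours):
cost_components = {
    "extra_clinical_disease": 76,
    "longer_time_to_pregnancy": 57,
    "culling_and_death": 26,
    "milk_production_loss": 44,
}
total_cost = sum(cost_components.values())  # 203, the estimated cost per case
```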

  4. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  5. The cost of a case of subclinical ketosis in Canadian dairy herds.

    PubMed

    Gohary, Khaled; Overton, Michael W; Von Massow, Michael; LeBlanc, Stephen J; Lissemore, Kerry D; Duffield, Todd F

    2016-07-01

    The objective of this study was to develop a model to estimate the cost of a case of subclinical ketosis (SCK) in Canadian dairy herds. Costs were derived from the default inputs, and included increased clinical disease incidence attributable to SCK, $76; longer time to pregnancy, $57; culling and death in early lactation attributable to SCK, $26; milk production loss, $44. Given these figures, the cost of 1 case of SCK was estimated to be $203. Sensitivity analysis showed that the estimated cost of a case of SCK was most sensitive to the herd-level incidence of SCK and the cost of 1 day open. In conclusion, SCK negatively impacts dairy herds and losses are dependent on the herd-level incidence and factors included in the calculation.

  6. Piezoelectric particle accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemp, Mark A.; Jongewaard, Erik N.; Haase, Andrew A.

    2017-08-29

    A particle accelerator is provided that includes a piezoelectric accelerator element, where the piezoelectric accelerator element includes a hollow cylindrical shape, and an input transducer, where the input transducer is disposed to provide an input signal to the piezoelectric accelerator element, where the input signal induces a mechanical excitation of the piezoelectric accelerator element, where the mechanical excitation is capable of generating a piezoelectric electric field proximal to an axis of the cylindrical shape, where the piezoelectric accelerator is configured to accelerate a charged particle longitudinally along the axis of the cylindrical shape according to the piezoelectric electric field.

  7. Temperature and heat flux datasets of a complex object in a fire plume for the validation of fire and thermal response codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jernigan, Dann A.; Blanchat, Thomas K.

    It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object in addition to measuring the thermal response of said object located within the fire plume for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.

  8. Spatiotemporal Dissociation of Brain Activity Underlying Subjective Awareness, Objective Performance and Confidence

    PubMed Central

    Li, Qi; Hill, Zachary

    2014-01-01

    Despite intense recent research, the neural correlates of conscious visual perception remain elusive. The most established paradigm for studying brain mechanisms underlying conscious perception is to keep the physical sensory inputs constant and identify brain activities that correlate with the changing content of conscious awareness. However, such a contrast based on conscious content alone would not only reveal brain activities directly contributing to conscious perception, but also include brain activities that precede or follow it. To address this issue, we devised a paradigm whereby we collected, trial-by-trial, measures of objective performance, subjective awareness, and the confidence level of subjective awareness. Using magnetoencephalography recordings in healthy human volunteers, we dissociated brain activities underlying these different cognitive phenomena. Our results provide strong evidence that widely distributed slow cortical potentials (SCPs) correlate with subjective awareness, even after the effects of objective performance and confidence were both removed. The SCP correlate of conscious perception manifests strongly in its waveform, phase, and power. In contrast, objective performance and confidence were each reflected in relatively transient brain activity. These results shed new light on the brain mechanisms of conscious, unconscious, and metacognitive processing. PMID:24647958

  9. Simulation of a 20-ton LiBr/H{sub 2}O absorption cooling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardono, B.; Nelson, R.M.

    The possibility of using solar energy as the main heat input for cooling systems has led to several studies of available cooling technologies that use solar energy. The results show that double-effect absorption cooling systems give relatively high performance. To further study absorption cooling systems, a computer code was developed for a double-effect lithium bromide/water (LiBr/H{sub 2}O) absorption system. To evaluate the performance, two objective functions were developed, including the coefficient of performance (COP) and the system cost. Based on the system cost, an optimization to find the minimum cost was performed to determine the nominal heat transfer areas of each heat exchanger. The nominal values of other system variables, such as the mass flow rates and inlet temperatures of the hot water, cooling water, and chilled water, are specified as commonly used values for commercial machines. The results of the optimization show that there are optimum heat transfer areas. In this study, hot water is used as the main energy input. Using a constant load of 20 tons cooling capacity, the effects of various variables including the heat transfer areas, mass flow rates, and inlet temperatures of hot water, cooling water, and chilled water are presented.

  10. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local-global reference frames.

    PubMed

    Knierim, James J; Neunuebel, Joshua P; Deshmukh, Sachin S

    2014-02-05

    The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between 'where' versus 'what' needs revision. We propose a refinement of this model, which is more complex than the simple spatial-non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience.

  11. Observer-based perturbation extremum seeking control with input constraints for direct-contact membrane distillation process

    NASA Astrophysics Data System (ADS)

    Eleiwi, Fadi; Laleg-Kirati, Taous Meriem

    2018-06-01

    An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D advection-diffusion equation model which has pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analysed, and simulations based on real DCMD process parameters for each control input are provided.
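
    A perturbation extremum-seeking loop with a clipped (constrained) input can be sketched in a few lines. This is a generic single-input illustration with arbitrary gains and a made-up objective, not the paper's DCMD model or its observer; the clipping is only a crude stand-in for the anti-windup technique.

```python
import math

def extremum_seeking(J, u0, a=0.2, w=1.0, k=0.5, h=1.0,
                     steps=4000, dt=0.05, u_min=0.0, u_max=10.0):
    """Sketch of perturbation extremum seeking: a sinusoidal dither probes
    the objective J, a washout (low-pass subtraction) removes the DC
    component, demodulation by the same sinusoid estimates the gradient,
    and the input estimate is integrated toward the optimum while being
    clipped to its feasible range."""
    u_hat, y_lp = u0, J(u0)
    for n in range(steps):
        t = n * dt
        y = J(u_hat + a * math.sin(w * t))        # probe with dither
        y_lp += h * dt * (y - y_lp)               # low-pass DC estimate
        grad_est = (y - y_lp) * math.sin(w * t)   # demodulated gradient
        u_hat += k * grad_est * dt                # climb the gradient
        u_hat = min(max(u_hat, u_min), u_max)     # input constraint
    return u_hat

# Maximize a hypothetical flux-vs-energy trade-off peaking at u = 3.
u_star = extremum_seeking(lambda u: -(u - 3.0) ** 2, u0=1.0)
```

    The controller needs no model of J; the paper's observer supplies the temperature field that such an objective would be computed from, and a second loop of the same form handles the multiple-input case.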

  12. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local–global reference frames

    PubMed Central

    Knierim, James J.; Neunuebel, Joshua P.; Deshmukh, Sachin S.

    2014-01-01

    The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between ‘where’ versus ‘what’ needs revision. We propose a refinement of this model, which is more complex than the simple spatial–non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience. PMID:24366146

  13. Input Manipulation, Enhancement and Processing: Theoretical Views and Empirical Research

    ERIC Educational Resources Information Center

    Benati, Alessandro

    2016-01-01

    Researchers in the field of instructed second language acquisition have been examining the issue of how learners interact with input by conducting research measuring particular kinds of instructional interventions (input-oriented and meaning-based). These interventions include such things as input flood, textual enhancement and processing…

  14. Why do people appear not to extrapolate trajectories during multiple object tracking? A computational investigation

    PubMed Central

    Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I

    2014-01-01

    Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
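
    Why noisy inputs can erode the benefit of extrapolation is easy to demonstrate with a toy simulation, not the paper's actual models: two 1-D targets, noisy observations, and a tracker that predicts either from spatial memory alone or by linear extrapolation from a velocity estimate that is itself corrupted by the observation noise. All parameters below are illustrative.

```python
import random

def track_trial(extrapolate, noise=1.5, n_steps=20, seed=0):
    """Toy 1-D multiple object tracking: two targets with constant
    velocities and noisy position observations each frame. The tracker
    matches observations to tracks by nearest predicted position, where
    the prediction is either the last stored position (spatial memory
    only) or a linear extrapolation using the stored velocity estimate.
    Returns the fraction of correct identity assignments."""
    rng = random.Random(seed)
    pos, vel = [0.0, 5.0], [0.5, -0.5]       # true state (targets cross)
    est_pos, est_vel = pos[:], vel[:]        # tracker initialized to truth
    correct = 0
    for _ in range(n_steps):
        pos = [p + v for p, v in zip(pos, vel)]
        obs = [p + rng.gauss(0, noise) for p in pos]
        pred = ([p + v for p, v in zip(est_pos, est_vel)]
                if extrapolate else est_pos)
        # Greedy nearest-neighbor assignment of observations to tracks.
        taken, assign = set(), {}
        for i in sorted(range(2),
                        key=lambda i: min(abs(o - pred[i]) for o in obs)):
            j = min((j for j in range(2) if j not in taken),
                    key=lambda j: abs(obs[j] - pred[i]))
            taken.add(j)
            assign[i] = j
        correct += sum(assign[i] == i for i in range(2))
        # Velocity re-estimated from noisy displacements: extrapolation
        # inherits the observation noise, which is the crux of the issue.
        est_vel = [obs[assign[i]] - est_pos[i] for i in range(2)]
        est_pos = [obs[assign[i]] for i in range(2)]
    return correct / (2 * n_steps)

# Paired comparison over randomized trials (same noise sequence per seed):
memory_only = sum(track_trial(False, seed=s) for s in range(200)) / 200
extrapolating = sum(track_trial(True, seed=s) for s in range(200)) / 200
```

    With noise of this magnitude the two strategies perform comparably, echoing the paper's finding that conservative, observation-heavy models match or beat aggressive extrapolators.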

  15. Summary of PhysPAG Activity

    NASA Astrophysics Data System (ADS)

    Nousek, John A.

    2014-01-01

    The Physics of the Cosmos Program Analysis Group (PhysPAG) is responsible for soliciting and coordinating community input for the development and execution of NASA's Physics of the Cosmos (PCOS) program. In this session I will report on the activity of the PhysPAG, and solicit community involvement in the process of defining PCOS objectives, planning SMD architecture, and prioritizing PCOS activities. I will also report on the activities of the PhysPAG Executive Committee, which includes the chairs of the Science Analysis Groups/Science Interest Groups that fall under the PhysPAG sphere of interest. Time at the end of the presentation will be reserved for questions and discussion from the community.

  16. A rotor technology assessment of the advancing blade concept

    NASA Technical Reports Server (NTRS)

    Pleasants, W. A.

    1983-01-01

    A rotor technology assessment of the Advancing Blade Concept (ABC) was conducted in support of a preliminary design study. The analytical methodology modifications and inputs, the correlation, and the results of the assessment are documented. The primary emphasis was on the high-speed forward flight performance of the rotor. The correlation data base included both the wind tunnel and the flight test results. An advanced ABC rotor design was examined; the suitability of the ABC for a particular mission was not considered. The objective of this technology assessment was to provide estimates of the performance potential of an advanced ABC rotor designed for high speed forward flight.

  17. Facilitating Behavior Change With Low-literacy Patient Education Materials

    PubMed Central

    Seligman, Hilary K.; Wallace, Andrea S.; DeWalt, Darren A.; Schillinger, Dean; Arnold, Connie L.; Shilliday, Betsy Bryant; Delgadillo, Adriana; Bengal, Nikki; Davis, Terry C.

    2014-01-01

    Objective: To describe a process for developing low-literacy health education materials that increase knowledge and activate patients toward healthier behaviors. Methods: We developed a theoretically informed process for developing educational materials. This process included convening a multidisciplinary creative team, soliciting stakeholder input, identifying key concepts to be communicated, mapping concepts to a behavioral theory, creating a supporting behavioral intervention, designing and refining materials, and assessing efficacy. Results: We describe the use of this process to develop a diabetes self-management guide. Conclusions: Developing low-literacy health education materials that will activate patients toward healthier behaviors requires attention to factors beyond reading level. PMID:17931139

  18. Emissions-critical charge cooling using an organic rankine cycle

    DOEpatents

    Ernst, Timothy C.; Nelson, Christopher R.

    2014-07-15

    The disclosure provides a system including a Rankine power cycle cooling subsystem providing emissions-critical charge cooling of an input charge flow. The system includes a boiler fluidly coupled to the input charge flow, an energy conversion device fluidly coupled to the boiler, a condenser fluidly coupled to the energy conversion device, a pump fluidly coupled to the condenser and the boiler, an adjuster that adjusts at least one parameter of the Rankine power cycle subsystem to change a temperature of the input charge exiting the boiler, and a sensor adapted to sense a temperature characteristic of the vaporized input charge. The system includes a controller that can determine a target temperature of the input charge sufficient to meet or exceed predetermined target emissions and cause the adjuster to adjust at least one parameter of the Rankine power cycle to achieve the predetermined target emissions.

  19. Analyzing public inputs to multiple objective decisions on national forests using conjoint analysis

    Treesearch

    Donald F. Dennis

    1998-01-01

    Faced with multiple objectives, national forest managers and planners need a means to solicit and analyze public preferences and values. A conjoint ranking survey was designed to solicit public preferences for various levels of timber harvesting, wildlife habitats, hiking trails, snowmobile use, and off-road-vehicle (ORV) access on the Green Mountain National Forest....

  20. Why Is Rapid Automatized Naming Related to Reading?

    ERIC Educational Resources Information Center

    Georgiou, George K.; Parrila, Rauno; Cui, Ying; Papadopoulos, Timothy C.

    2013-01-01

    The objective of this study was to examine why rapid automatized naming (RAN) is related to reading by manipulating processes involved at the input, processing, and output stages of its production. In total, 65 children in Grade 2 and 65 in Grade 6 were assessed on serial and discrete RAN (Digits and Objects), Cancellation, RAN Yes/No, and oral…

  1. Children with Autism Spectrum Disorder (ASD) Attend Typically to Faces and Objects Presented within Their Picture Communication Systems

    ERIC Educational Resources Information Center

    Gillespie-Smith, K.; Riby, D. M.; Hancock, P. J. B.; Doherty-Sneddon, G.

    2014-01-01

    Background: Children with autism spectrum disorder (ASD) may require interventions for communication difficulties. One type of intervention is picture communication symbols which are proposed to improve comprehension of linguistic input for children with ASD. However, atypical attention to faces and objects is widely reported across the autism…

  2. Program/Project Management of Sponsored Programs in a University Environment.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Research Foundation.

    Management is a future-oriented decision process that relates resources into a total functional system for the accomplishment of a set of objectives. As a rule, universities do not have a management system, and there is no understanding of their environments in terms of the inputs, outputs, objectives, and organizational relationships of line and…

  3. History of nutrient inputs to the northeastern United States, 1930-2000

    NASA Astrophysics Data System (ADS)

    Hale, Rebecca L.; Hoover, Joseph H.; Wollheim, Wilfred M.; Vörösmarty, Charles J.

    2013-04-01

    Humans have dramatically altered nutrient cycles at local to global scales. We examined changes in anthropogenic nutrient inputs to the northeastern United States (NE) from 1930 to 2000. We created a comprehensive time series of anthropogenic N and P inputs to 437 counties in the NE at 5 year intervals. Inputs included atmospheric N deposition, biological N2 fixation, fertilizer, detergent P, livestock feed, and human food. Exports included exports of feed and food and volatilization of ammonia. N inputs to the NE increased throughout the study period, primarily due to increases in atmospheric deposition and fertilizer. P inputs increased until 1970 and then declined due to decreased fertilizer and detergent inputs. Livestock consistently consumed the majority of nutrient inputs over time and space. The area of crop agriculture declined during the study period but consumed more nutrients as fertilizer. We found that stoichiometry (N:P) of inputs and absolute amounts of N matched nutritional needs (livestock, humans, crops) when atmospheric components (N deposition, N2 fixation) were not included. Differences between N and P led to major changes in N:P stoichiometry over time, consistent with global trends. N:P decreased from 1930 to 1970 due to increased inputs of P, and increased from 1970 to 2000 due to increased N deposition and fertilizer and decreases in P fertilizer and detergent use. We found that nutrient use is a dynamic product of social, economic, political, and environmental interactions. Therefore, future nutrient management must take into account these factors to design successful and effective nutrient reduction measures.

  4. A knowledge-based machine vision system for space station automation

    NASA Technical Reports Server (NTRS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-01-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  5. The IRMIS object model and services API.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, C.; Dohan, D. A.; Arnold, N. D.

    2005-01-01

    The relational model developed for the Integrated Relational Model of Installed Systems (IRMIS) toolkit has been successfully used to capture the Advanced Photon Source (APS) control system software (EPICS process variables and their definitions). The relational tables are populated by a crawler script that parses each Input/Output Controller (IOC) start-up file when an IOC reboot is detected. User interaction is provided by a Java Swing application that acts as a desktop for viewing the process variable information. Mapping between the display objects and the relational tables was carried out with the Hibernate Object Relational Modeling (ORM) framework. Work is well underway at the APS to extend the relational modeling to include control system hardware. For this work, due in part to the complex user interaction required, the primary application development environment has shifted from the relational database view to the object oriented (Java) perspective. With this approach, the business logic is executed in Java rather than in SQL stored procedures. This paper describes the object model used to represent control system software, hardware, and interconnects in IRMIS. We also describe the services API used to encapsulate the required behaviors for creating and maintaining the complex data. In addition to the core schema and object model, many important concepts in IRMIS are captured by the services API. IRMIS is an ambitious collaborative effort for defining and developing a relational database and associated applications to comprehensively document the large and complex EPICS-based control systems of today's accelerators. The documentation effort includes process variables, control system hardware, and interconnections. The approach could also be used to document all components of the accelerator, including mechanical, vacuum, power supplies, etc. One key aspect of IRMIS is that it is a documentation framework, not a design and development tool. We do not generate EPICS control system configurations from IRMIS, and hence do not impose any additional requirements on EPICS developers.

  6. General features of the retinal connectome determine the computation of motion anticipation

    PubMed Central

    Johnston, Jamie; Lagnado, Leon

    2015-01-01

    Motion anticipation allows the visual system to compensate for the slow speed of phototransduction so that a moving object can be accurately located. This correction is already present in the signal that ganglion cells send from the retina but the biophysical mechanisms underlying this computation are not known. Here we demonstrate that motion anticipation is computed autonomously within the dendritic tree of each ganglion cell and relies on feedforward inhibition. The passive and non-linear interaction of excitatory and inhibitory synapses enables the somatic voltage to encode the actual position of a moving object instead of its delayed representation. General rather than specific features of the retinal connectome govern this computation: an excess of inhibitory inputs over excitatory, with both being randomly distributed, allows tracking of all directions of motion, while the average distance between inputs determines the object velocities that can be compensated for. DOI: http://dx.doi.org/10.7554/eLife.06250.001 PMID:25786068

  7. Belief attribution in deaf and hearing infants.

    PubMed

    Meristo, Marek; Morgan, Gary; Geraci, Alessandra; Iozzi, Laura; Hjelmquist, Erland; Surian, Luca; Siegal, Michael

    2012-09-01

    Based on anticipatory looking and reactions to violations of expected events, infants have been credited with 'theory of mind' (ToM) knowledge that a person's search behaviour for an object will be guided by true or false beliefs about the object's location. However, little is known about the preconditions for looking patterns consistent with belief attribution in infants. In this study, we compared the performance of 17- to 26-month-olds on anticipatory looking in ToM tasks. The infants were either hearing or were deaf from hearing families and thus delayed in communicative experience gained from access to language and conversational input. Hearing infants significantly outperformed their deaf counterparts in anticipating the search actions of a cartoon character that held a false belief about a target-object location. By contrast, the performance of the two groups in a true belief condition did not differ significantly. These findings suggest for the first time that access to language and conversational input contributes to early ToM reasoning. © 2012 Blackwell Publishing Ltd.

  8. Possible role of brain stem respiratory neurons in mediating vomiting during space motion sickness

    NASA Technical Reports Server (NTRS)

    Miller, A. D.; Tan, L. K.

    1987-01-01

    The object of this study was to determine if brain stem expiratory neurons control abdominal muscle activity during vomiting. The activity of 27 ventral respiratory group expiratory neurons, which are known to be of primary importance for control of abdominal muscle activity during respiration, was recorded. It is concluded that abdominal muscle activity during vomiting must be controlled not only by some brain stem expiratory neurons but also by other input(s).

  9. Inputs and spatial distribution patterns of Cr in Jiaozhou Bay

    NASA Astrophysics Data System (ADS)

    Yang, Dongfang; Miao, Zhenqing; Huang, Xinmin; Wei, Linzhen; Feng, Ming

    2018-03-01

    Cr pollution in marine bays has become a critical environmental issue, and understanding input and spatial distribution patterns is essential to pollution control. According to the source strengths of the major pollution sources, the input patterns of pollutants to a marine bay can be classified as slight, moderate, or heavy, and the corresponding spatial distributions are described by three block models, respectively. This paper analyzed the input patterns and distributions of Cr in Jiaozhou Bay, eastern China, based on investigations of Cr in surface waters during 1979-1983. Results showed that the inputs of Cr to Jiaozhou Bay could be classified as moderate (32.32-112.30 μg L-1) and slight (4.17-19.76 μg L-1), and the corresponding horizontal distributions could be described by Block Model 2 and Block Model 3, respectively. In the case of moderate input via overland runoff, Cr contents decreased from the estuaries to the bay mouth, and the distribution pattern was parallel. In the case of moderate input via marine currents, Cr contents decreased from the bay mouth into the bay, and the distribution pattern was parallel to circular. The block models were able to reveal the transfer processes of various pollutants and are helpful for understanding the distributions of pollutants in marine bays.

  10. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    NASA Astrophysics Data System (ADS)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To achieve more accurate meteorological inputs than were used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone using the assimilated meteorological fields, with their improved wind fields, shows better agreement with observations than the forecast results. In post-frontal conditions, the important factors for ozone modeling in terms of wind patterns are the weak morning easterlies that bring industrial emissions into the city and the subsequent clockwise turning of the wind direction, induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but cannot compensate for general flow-pattern biases inherited from the large-scale inputs. By initializing the meteorological simulation from an alternative analysis dataset, the model can reproduce the flow pattern and place the ozone peak closer to reality. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since the meteorological model is limited in simulating precipitation and cloudiness on a fine-scale domain (less than 4-km grid), satellite-based cloud data are an alternative way to provide the necessary inputs for the retrospective study of air quality.
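
The analysis nudging mentioned in this abstract is, at its core, Newtonian relaxation: an extra tendency term pulls the model state toward an analysis or observation value over a chosen timescale. A minimal sketch (variable names, time step, and relaxation timescale are illustrative, not the study's configuration):

```python
def nudge_step(model_value, analysis_value, dt, tau):
    """One forward-Euler step of Newtonian relaxation (nudging):
    d(model)/dt gains the term (analysis - model) / tau."""
    return model_value + dt * (analysis_value - model_value) / tau

# Example: relax a wind component of 10 m/s toward an analysis value of 6 m/s,
# with a 60 s time step and a 1-hour relaxation timescale.
u = 10.0
for _ in range(100):
    u = nudge_step(u, 6.0, dt=60.0, tau=3600.0)
```

The state decays exponentially toward the analysis value; a shorter tau constrains the model more tightly, at the cost of suppressing the model's own dynamics.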

  11. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  12. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, including indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of the correlated inputs to the variance of the output, and they can be viewed as a complement and correction to the interpretation of correlated input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components of both contributions, and their origins, can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of that input with other inputs and the independent contribution of the input itself, while the total uncorrelated contribution can be further decomposed into an independent part due to interaction between the input and the others and an independent part due to the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measures are correct, and that clarifying the correlated input contributions to model output by analytical derivation is important for extending the theory and solutions for uncorrelated inputs to the correlated case.
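
For a purely linear special case (simpler than the paper's quadratic derivation, and only an illustration of the idea), the split between an input's independent and correlation-induced contributions can be checked by Monte Carlo: for Y = a·x1 + b·x2 with unit-variance inputs of correlation rho, the covariance-based share of x1, a·Cov(x1, Y), decomposes into an independent part a² and a correlated part a·b·rho. All coefficients below are made up.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(42)
a, b, rho, n = 2.0, 1.0, 0.6, 200_000

# Draw correlated standard normals via a Cholesky-style construction.
x1s, x2s = [], []
for _ in range(n):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1s.append(z1)
    x2s.append(rho * z1 + (1 - rho ** 2) ** 0.5 * z2)

ys = [a * u + b * v for u, v in zip(x1s, x2s)]
share_x1 = a * cov(x1s, ys)   # analytically a**2 + a*b*rho for this linear model
independent = a ** 2          # contribution of x1 on its own
correlated = a * b * rho      # contribution via x1's correlation with x2
```

The empirical share recovers the analytic decomposition, which is the distinction (independent versus correlation-induced variance) that the proposed indices formalize for more general models.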

  13. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models, with WANFIS7-sym10 yielding the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
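
The decomposition step described here can be illustrated with a one-level Haar transform, the simplest wavelet (the study itself uses Daubechies, Symmlet, and Coiflet families); the resulting approximation and detail coefficients are the kind of inputs fed to the ANN or ANFIS. The water-level values below are hypothetical.

```python
def haar_dwt(signal):
    """One-level Haar wavelet decomposition of an even-length series.
    Returns (approximation, detail) coefficient lists."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# Example: a short daily water-level series (values hypothetical).
levels = [4.0, 4.2, 4.1, 4.3, 5.0, 5.4, 5.1, 4.9]
approx, detail = haar_dwt(levels)   # smooth trend vs. local fluctuation
```

Repeating the transform on the approximation coefficients gives the multi-level decomposition that separates slow trends from fast fluctuations before forecasting.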

  14. Feedback Synthesizes Neural Codes for Motion.

    PubMed

    Clarke, Stephen E; Maler, Leonard

    2017-05-08

    In senses as diverse as vision, hearing, touch, and the electrosense, sensory neurons receive bottom-up input from the environment, as well as top-down input from feedback loops involving higher brain regions [1-4]. Through connectivity with local inhibitory interneurons, these feedback loops can exert both positive and negative control over fundamental aspects of neural coding, including bursting [5, 6] and synchronous population activity [7, 8]. Here we show that a prominent midbrain feedback loop synthesizes a neural code for motion reversal in the hindbrain electrosensory ON- and OFF-type pyramidal cells. This top-down mechanism generates an accurate bidirectional encoding of object position, despite the inability of the electrosensory afferents to generate a consistent bottom-up representation [9, 10]. The net positive activity of this midbrain feedback is additionally regulated through a hindbrain feedback loop, which reduces stimulus-induced bursting and also dampens the ON and OFF cell responses to interfering sensory input [11]. We demonstrate that synthesis of motion representations and cancellation of distracting signals are mediated simultaneously by feedback, satisfying an accepted definition of spatial attention [12]. The balance of excitatory and inhibitory feedback establishes a "focal" distance for optimized neural coding, whose connection to a classic motion-tracking behavior provides new insight into the computational roles of feedback and active dendrites in spatial localization [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

    In clinical applications, single modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different modalities of images. Recently, neural network techniques have been applied to medical image fusion by many researchers, but many deficiencies remain. In this study, we propose a novel fusion method to combine multi-modality medical images based on the enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and error back propagation algorithm (EBPA) to train the network and update its parameters. Two different patterns of images are used as inputs of the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesized the information of the input images and achieved better results. We also trained the network using the EBPA and the GSA individually; analysis of the same evaluation indexes reveals that the EBPGSA not only outperformed both EBPA and GSA but also trained the neural network more accurately. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
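
The paper's enhanced Fuzzy-RBFNN has five layers plus a GSA/EBPA training hybrid; only the core radial-basis step is sketched below, with hypothetical centers, width, and weights, to show how two modality values can be mapped through Gaussian basis functions to one fused output.

```python
import math

def rbf_activations(x, centers, width):
    """Gaussian radial basis activations of input vector x."""
    acts = []
    for c in centers:
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        acts.append(math.exp(-d2 / (2.0 * width ** 2)))
    return acts

def rbf_output(x, centers, weights, width):
    """Weighted sum of RBF activations: a single output node."""
    return sum(w * a for w, a in zip(weights, rbf_activations(x, centers, width)))

# Fusing two co-located pixel intensities from different modalities
# (centers, weights, and width are hypothetical, not trained values).
fused = rbf_output([0.8, 0.3], centers=[[0.0, 0.0], [1.0, 1.0]],
                   weights=[0.2, 0.9], width=0.5)
```

In the full method, the centers, widths, and weights are exactly the parameters the GSA/EBPA hybrid would tune.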

  16. Identifying weaknesses in undergraduate programs within the context input process product model framework in view of faculty and library staff in 2014.

    PubMed

    Neyazi, Narges; Arab, Mohammad; Farzianpour, Freshteh; Mahmoudi, Mahmood

    2016-06-01

    The objective of this research is to identify weaknesses of undergraduate programs in terms of personnel, finance, organizational management, and facilities in the view of faculty and library staff, and to determine factors that may facilitate program quality improvement. This descriptive analytical survey is, in purpose, an applied evaluation study in which undergraduate groups of selected faculties (Public Health, Nursing and Midwifery, Allied Medical Sciences, and Rehabilitation) at Tehran University of Medical Sciences (TUMS) were surveyed using the context input process product model in 2014. The statistical population consisted of three subgroups, including department heads (n=10), faculty members (n=61), and library staff (n=10), for a total population of 81 people. Data were collected through three researcher-made questionnaires based on a Likert scale and were analyzed using descriptive and inferential statistics. Results showed a desirable or relatively desirable situation for factors in the context, input, process, and product fields, except for administration and finance and for research and educational spaces and equipment, which were in an undesirable situation. Based on these results, the researchers highlighted weaknesses in the undergraduate programs of TUMS in terms of research and educational spaces and facilities, educational curriculum, administration, and finance, and recommended steps concerning finance, organizational management, and communication with graduates in order to improve the quality of this system.

  17. Contribution to Terminology Internationalization by Word Alignment in Parallel Corpora

    PubMed Central

    Deléger, Louise; Merkel, Magnus; Zweigenbaum, Pierre

    2006-01-01

    Background and objectives: Creating a complete translation of a large vocabulary is a time-consuming task, which requires skilled and knowledgeable medical translators. Our goal is to examine to which extent such a task can be alleviated by a specific natural language processing technique, word alignment in parallel corpora. We experiment with translation from English to French. Methods: Build a large corpus of parallel, English-French documents, and automatically align it at the document, sentence and word levels using state-of-the-art alignment methods and tools. Then project English terms from existing controlled vocabularies to the aligned word pairs, and examine the number and quality of the putative French translations obtained thereby. We considered three American vocabularies present in the UMLS with three different translation statuses: the MeSH, SNOMED CT, and the MedlinePlus Health Topics. Results: We obtained several thousand new translations of our input terms, this number being closely linked to the number of terms in the input vocabularies. Conclusion: Our study shows that alignment methods can extract a number of new term translations from large bodies of text with a moderate human reviewing effort, and thus contribute to help a human translator obtain better translation coverage of an input vocabulary. Short-term perspectives include their application to a corpus 20 times larger than that used here, together with more focused methods for term extraction. PMID:17238328

  18. Global industrial impact coefficient based on random walk process and inter-country input-output table

    NASA Astrophysics Data System (ADS)

    Xing, Lizhi; Dong, Xianlei; Guan, Jun

    2017-04-01

    Input-output tables are comprehensive and detailed in describing a national economic system, containing supply and demand information among industrial sectors. Complex network theory, a method for measuring the structure of complex systems, can describe the structural characteristics of the internal structure of a research object by measuring structural indicators of the social and economic system, revealing the complex relationship between the inner hierarchy and the external economic function. This paper builds GIVCN-WIOT models based on the World Input-Output Database in order to depict the topological structure of the Global Value Chain (GVC), and assumes the competitive advantage of a nation is equal to the overall performance of its domestic sectors' impact on the GVC. From the perspective of econophysics, the Global Industrial Impact Coefficient (GIIC) is proposed to measure national competitiveness in gaining information superiority and intermediate interests. Analysis of the GIVCN-WIOT models yields several insights, including the following: (1) sectors with higher Random Walk Centrality contribute more to transmitting value streams within the global economic system; (2) the Half-Value Ratio can be used to measure the robustness of open-economy macroeconomics in the process of globalization; and (3) the positive correlation between GIIC and GDP indicates that one country's global industrial impact could reveal its international competitive advantage.
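
As an illustration of the random-walk machinery involved (not the paper's GIIC formula), an inter-sector flow matrix can be turned into transition probabilities, with the stationary distribution of the walk serving as a crude centrality-style impact score; the 3-sector flow matrix below is made up.

```python
def row_normalize(flows):
    """Convert an input-output flow matrix into random-walk transition probabilities."""
    return [[v / sum(row) for v in row] for row in flows]

def stationary(P, iters=1000):
    """Power iteration for the stationary distribution of the walk on P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-sector flow matrix (rows: supplying sector, columns: receiving sector).
flows = [[10.0, 4.0, 2.0],
         [3.0, 12.0, 5.0],
         [1.0, 6.0, 8.0]]
impact = stationary(row_normalize(flows))   # higher value = more central in value flows
```

Sectors where the walk spends more time in the long run are, in this simplified sense, the ones through which more value streams pass.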

  19. Mathematical model of the competition life cycle under limited resources conditions: Problem statement for business community

    NASA Astrophysics Data System (ADS)

    Shelomentsev, A. G.; Medvedev, M. A.; Berg, D. B.; Lapshina, S. N.; Taubayev, A. A.; Davletbaev, R. H.; Savina, D. V.

    2017-12-01

    The present study is devoted to the development of a mathematical model of the competition life cycle in a closed business community with limited resources. Growth of each agent is determined by the balance of input and output resource flows: the input (cash) flow W covers the variable costs V, the constant costs C, and the growth dA/dt of the agent's assets A. The value of V is proportional to the assets A, which allows us to write a first-order non-stationary differential equation for the agent's growth. The model includes one such equation for each agent. The amount of resources available to the agents varies in time, and the balances of their input and output flows change correspondingly across the stages of the competition life cycle. According to the theory of systems, the most complete description of any object or process is the model of its life cycle, which describes all stages of development: from appearance ("birth") through development ("growth") to extinction ("death"). A model of the evolution of an individual firm that does not contradict the economic meaning of events actually observed in the market is the desired result from modern AVMs for applied use. With a correct description of the market, rules for participants' actions, restrictions, and forecasts can be obtained that modern mathematics and economics cannot otherwise give.
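
The balance equation stated in the abstract, dA/dt = W − V − C with V proportional to A (say V = kA), can be integrated with a simple Euler scheme. The coefficients below are hypothetical, and a single agent is shown rather than the coupled community.

```python
def grow(assets, inflow, k, fixed_cost, dt=0.01, steps=1000):
    """Forward-Euler integration of dA/dt = W - k*A - C for one agent,
    where W is the cash inflow, k*A the variable costs, C the constant costs."""
    A = assets
    for _ in range(steps):
        A += (inflow - k * A - fixed_cost) * dt
    return A

# With W = 10, k = 0.5, C = 2 the assets approach the steady state (W - C)/k = 16.
A_final = grow(assets=0.0, inflow=10.0, k=0.5, fixed_cost=2.0)
```

In the full model, W itself changes over time as the agents compete for the limited shared resource, which is what produces the distinct life-cycle stages.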

  20. Associative Memory In A Phase Conjugate Resonator Cavity Utilizing A Hologram

    NASA Astrophysics Data System (ADS)

    Owechko, Y.; Marom, E.; Soffer, B. H.; Dunning, G.

    1987-01-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions [1]. Various associative processors have been proposed that use electronic or optical means. Optical schemes [2-7], in particular those based on holographic principles [3,6,7], are well suited to associative processing because of their high parallelism and information throughput. Previous workers [8] demonstrated that holographically stored images can be recalled by using relatively complicated reference images, but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  1. Parents' Translations of Child Gesture Facilitate Word Learning in Children with Autism, Down Syndrome and Typical Development

    PubMed Central

    Dimitrova, Nevena; Özçalışkan, Şeyda; Adamson, Lauren B.

    2016-01-01

    Typically-developing (TD) children frequently refer to objects uniquely in gesture. Parents translate these gestures into words, facilitating children's acquisition of these words (Goldin-Meadow et al., 2007). We ask whether this pattern holds for children with autism spectrum disorder (ASD) and with Down syndrome (DS), who show delayed vocabulary development. We observed 23 children with ASD, 23 with DS, and 23 TD children with their parents over a year. Children used gestures to indicate objects before labeling them, and parents translated their gestures into words. Importantly, children benefited from this input, acquiring more words for the translated gestures than for those that were not translated. The results highlight the role that contingent parental input to child gesture plays in the language development of children with developmental disorders. PMID:26362150

  2. Activity and function recognition for moving and static objects in urban environments from wide-area persistent surveillance inputs

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Bobick, Aaron; Jones, Eric

    2010-04-01

    In this paper, we describe results from experimental analysis of a model designed to recognize activities and functions of moving and static objects from low-resolution wide-area video inputs. Our model is based on representing the activities and functions using three variables: (i) time; (ii) space; and (iii) structures. The activity and function recognition is achieved by imposing lexical, syntactic, and semantic constraints on the lower-level event sequences. In the reported research, we have evaluated the utility and sensitivity of several algorithms derived from natural language processing and pattern recognition domains. We achieved high recognition accuracy for a wide range of activity and function types in the experiments using Electro-Optical (EO) imagery collected by Wide Area Airborne Surveillance (WAAS) platform.

  3. From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner

    PubMed Central

    Coppola, Marie; Brentari, Diane

    2014-01-01

    Many sign languages display crosslinguistic consistencies in the use of two iconic aspects of handshape, handshape type and finger group complexity. Handshape type is used systematically in form-meaning pairings (morphology): Handling handshapes (Handling-HSs), representing how objects are handled, tend to be used to express events with an agent (“hand-as-hand” iconicity), and Object handshapes (Object-HSs), representing an object's size/shape, are used more often to express events without an agent (“hand-as-object” iconicity). Second, in the distribution of meaningless properties of form (morphophonology), Object-HSs display higher finger group complexity than Handling-HSs. Some adult homesigners, who have not acquired a signed or spoken language and instead use a self-generated gesture system, exhibit these two properties as well. This study illuminates the development over time of both phenomena for one child homesigner, “Julio,” age 7;4 (years; months) to 12;8. We elicited descriptions of events with and without agents to determine whether morphophonology and morphosyntax can develop without linguistic input during childhood, and whether these structures develop together or independently. 
Within the time period studied: (1) Julio used handshape type differently in his responses to vignettes with and without an agent; however, he did not exhibit the same pattern that was found previously in signers, adult homesigners, or gesturers: while he was highly likely to use a Handling-HS for events with an agent (82%), he was less likely to use an Object-HS for non-agentive events (49%); i.e., his productions were heavily biased toward Handling-HSs; (2) Julio exhibited higher finger group complexity in Object- than in Handling-HSs, as in the sign language and adult homesigner groups previously studied; and (3) these two dimensions of language developed independently, with phonological structure showing a sign language-like pattern at an earlier age than morphosyntactic structure. We conclude that iconicity alone is not sufficient to explain the development of linguistic structure in homesign systems. Linguistic input is not required for some aspects of phonological structure to emerge in childhood, and while linguistic input is not required for morphology either, it takes time to emerge in homesign. PMID:25191283

  4. Enhanced local tomography

    DOEpatents

    Katsevich, Alexander J.; Ramm, Alexander G.

    1996-01-01

    Local tomography is enhanced to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern, having a first data component that includes attenuation data through the region. In a first method for evaluating the value of the discontinuity, the relative attenuation data is input to a local tomography function f_Λ to define the location S of the density discontinuity. The asymptotic behavior of f_Λ is determined in a neighborhood of S, and the value of the discontinuity is estimated from this asymptotic behavior. In a second method for evaluating the value of the discontinuity, a gradient value of a mollified local tomography function ∇f_Λε(x_ij) is determined along the discontinuity, and the value of the jump of the density across the discontinuity curve (or surface) S is estimated from the gradient values.

  5. Evaluating the effects of dam breach methodologies on Consequence Estimation through Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Kalyanapu, A. J.; Thames, B. A.

    2013-12-01

    Dam breach modeling often involves models that are sophisticated yet computationally intensive, computing flood propagation at high temporal and spatial resolutions. This creates a significant need for computational capacity and has driven the development of newer flood models using multi-processor and graphics-processing techniques. Recently, a comprehensive benchmark exercise, the 12th Benchmark Workshop on Numerical Analysis of Dams, was organized by the International Commission on Large Dams (ICOLD) to evaluate the performance of the various tools used for dam break risk assessment. The ICOLD workshop focused on estimating the consequences of failure of a hypothetical dam near a hypothetical populated area with complex demographics and economic activity. The current study uses this hypothetical case and focuses on evaluating the effects of dam breach methodologies on consequence estimation and analysis. It uses the ICOLD hypothetical data, including the topography, dam geometric and construction information, and land use/land cover data, along with socio-economic and demographic data. The objective of this study is to evaluate the impacts of using four different dam breach methods on the consequence estimates used in the risk assessments. The four methodologies are: i) Froehlich (1995); ii) MacDonald and Langridge-Monopolis (1984) (MLM); iii) Von Thun and Gillette (1990) (VTG); and iv) Froehlich (2008). To achieve this objective, three modeling components were used. First, dam breach discharge hydrographs were developed using HEC-RAS v.4.1. These hydrographs were then provided as flow inputs to a two-dimensional flood model, Flood2D-GPU, which leverages the computer's graphics card for much improved computational performance.
Lastly, outputs from Flood2D-GPU, including inundated areas, depth grids, velocity grids, and flood wave arrival time grids, were input into HEC-FIA, which provides the consequence assessment for the problem statement. For the four breach methodologies, a sensitivity analysis of four breach parameters was conducted: breach side slope (SS), breach width (Wb), breach invert elevation (Elb), and time of failure (tf). Up to 68 simulations were computed to produce breach hydrographs in HEC-RAS for input into Flood2D-GPU. The Flood2D-GPU simulation results were then post-processed in HEC-FIA to evaluate: Total Population at Risk (PAR), 14-yr and Under PAR (PAR14-), 65-yr and Over PAR (PAR65+), Loss of Life (LOL), and Direct Economic Impact (DEI). The MLM approach resulted in wide variability in the simulated minimum and maximum values of the PAR, PAR65+, and LOL estimates. For PAR14- and DEI, Froehlich (1995) resulted in lower values while MLM resulted in higher estimates. This preliminary study demonstrated the relative performance of four commonly used dam breach methodologies and their impacts on consequence estimation.

  6. Developing CORBA-Based Distributed Scientific Applications From Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche; Kim, Chan; Lopez, Isaac

    2000-01-01

    An efficient methodology is presented for integrating legacy applications written in Fortran into a distributed object framework. Issues and strategies regarding the conversion and decomposition of Fortran codes into Common Object Request Broker Architecture (CORBA) objects are discussed. Fortran codes are modified as little as possible as they are decomposed into modules and wrapped as objects. A new conversion tool takes the Fortran application as input and generates the C/C++ header file and Interface Definition Language (IDL) file. In addition, the performance of the client server computing is evaluated.

  7. Constraints to microbial food safety policy: opinions from stakeholder groups along the farm to fork continuum.

    PubMed

    Sargeant, J M; Ramsingh, B; Wilkins, A; Travis, R G; Gavrus, D; Snelgrove, J W

    2007-01-01

    This exploratory qualitative study was conducted to identify constraints to microbial food safety policy in Canada and the USA from the perspective of stakeholder groups along the farm to fork continuum. Thirty-seven stakeholders participated in interviews or a focus group where semi-structured questions were used to facilitate discussion about constraints to policy development and implementation. An emergent grounded theory approach was used to determine themes and concepts that arose from the data (versus fitting the data to a hypothesis or a priori classification). Despite the plurality of stakeholders and the range of content expertise, participant perceptions converged on five common themes, although there were often disagreements as to the positive or negative attributes of specific concepts. The five themes were: challenges related to the measurement and objectives of microbial food safety policy goals; challenges arising from lack of knowledge, or from problems communicating knowledge, coupled with current practices, beliefs, and traditions; the complexity of the food system and the plurality of stakeholders; the economics of producing safe food and the limited resources available to address the problem; and issues related to decision-making and policy, including ownership of the problem and inappropriate inputs to the decision-making process. Responsibilities for food safety and for food policy failure were attributed to all stakeholders along the farm to fork continuum. While challenges regarding the biology of food safety were identified as constraints, a broader range of policy inputs encompassing social, economic and political considerations were also highlighted as critical to the development and implementation of effective food safety policy. Strategies to address these other inputs may require new, transdisciplinary approaches as an adjunct to the traditional science-based risk assessment model.

  8. Agronomic Challenges and Opportunities for Smallholder Terrace Agriculture in Developing Countries.

    PubMed

    Chapagain, Tejendra; Raizada, Manish N

    2017-01-01

    Improving land productivity is essential to meet increasing food and forage demands in hillside and mountain communities. Tens of millions of smallholder terrace farmers in Asia, Africa, and Latin America who earn $1-2 per day do not have access to peer-reviewed knowledge of best agronomic practices, though they have considerable traditional ecological knowledge. Terrace farmers also lack access to affordable farm tools and inputs required to increase crop yields. The objectives of this review are to highlight the agronomic challenges of terrace farming, and offer innovative, low-cost solutions to intensify terrace agriculture while improving local livelihoods. The article focuses on smallholder farmers in developing nations, with particular reference to Nepal. The challenges of terrace agriculture in these regions include lack of quality land area for agriculture, erosion and loss of soil fertility, low yield, poor access to agricultural inputs and services, lack of mechanization, labor shortages, poverty, and illiteracy. Agronomic strategies that could help address these concerns include intensification of terraces using agro-ecological approaches along with introduction of light-weight, low-cost, and purchasable tools and affordable inputs that enhance productivity and reduce female drudgery. To package, deliver, and share these technologies with remote hillside communities, effective scaling up models are required. One opportunity to enable distribution of these products could be to "piggy-back" onto pre-existing snackfood/cigarette/alcohol distribution networks that are prevalent even in the most remote mountainous regions of the world. Such strategies, practices, and tools could be supported by formalized government policies dedicated to the well-being of terrace farmers and ecosystems, to maintain resiliency at a time of alarming climate change. 
We hope this review will inform governments, non-governmental organizations, and the private sector to draw attention to this neglected and vulnerable agro-ecosystem in developing countries.

  9. Method and Apparatus for Reducing the Vulnerability of Latches to Single Event Upsets

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr. (Inventor)

    2002-01-01

    A delay circuit includes a first network having an input and an output node, a second network having an input and an output, the input of the second network being coupled to the output node of the first network. The first network and the second network are configured such that: a glitch at the input to the first network having a length of approximately one-half of a standard glitch time or less does not cause the voltage at the output of the second network to cross a threshold, a glitch at the input to the first network having a length of between approximately one-half and two standard glitch times causes the voltage at the output of the second network to cross the threshold for less than the length of the glitch, and a glitch at the input to the first network having a length of greater than approximately two standard glitch times causes the voltage at the output of the second network to cross the threshold for approximately the time of the glitch. The method reduces the vulnerability of a latch to single event upsets. The latch includes a gate having an input and an output and a feedback path from the output to the input of the gate. The method includes inserting a delay into the feedback path and providing a delay in the gate.
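    The three glitch-length regimes described above can be illustrated with a first-order low-pass stage standing in for the two coupled networks. This is a behavioral sketch, not the patented circuit; the time constant, the 0.5 threshold, and the definition of the "standard glitch time" are choices made here for demonstration.

```python
import numpy as np

# With threshold 0.5 and time constant tau, a pulse must last tau*ln(2)
# before a first-order low-pass output crosses the threshold; we take the
# "standard glitch time" to be twice that, so a glitch of half a standard
# glitch time just fails to cross.

def time_above_threshold(width, tau=1.0, thr=0.5, dt=1e-3, t_end=20.0):
    t = np.arange(0.0, t_end, dt)
    u = (t < width).astype(float)       # input glitch of the given width
    v = np.zeros_like(t)
    for i in range(1, len(t)):          # Euler integration of tau*dv/dt = u - v
        v[i] = v[i - 1] + dt * (u[i - 1] - v[i - 1]) / tau
    return float((v > thr).sum() * dt)  # duration the output spends above thr

T_g = 2 * np.log(2)                     # "standard glitch time" for tau = 1

short = time_above_threshold(0.4 * T_g)  # <= T_g/2: output never crosses
mid   = time_above_threshold(1.0 * T_g)  # crosses, but for less than the glitch
long_ = time_above_threshold(4.0 * T_g)  # crosses for roughly the glitch length
```

    The simulation reproduces the patent's three regimes: no crossing for very short glitches, a shortened crossing for intermediate ones, and a near-full-length crossing for long ones.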

  11. Moving object localization using optical flow for pedestrian detection from a moving vehicle.

    PubMed

    Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun

    2014-01-01

    This paper presents a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from relative motion by segmenting the region with consistent optical flow after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; each cell in the current frame is then tracked to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is estimated from the cell correspondences in the consecutive images, so that consistent optical flows are extracted. Regions of moving objects are detected as transformed objects that differ from the previously registered background. A morphological process is applied to obtain candidate human regions. To recognize the object, HOG features are extracted from each candidate region and classified using a linear support vector machine (SVM): the HOG feature vectors are used as SVM input to classify each candidate as pedestrian or non-pedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments on a pedestrian dataset. It shows a significant improvement over the original HOG on the ETHZ pedestrian dataset.
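    The ego-motion compensation step can be sketched as a least-squares affine fit to at least three cell correspondences. This is a generic illustration, not the authors' code; the synthetic background points and the independently moving outlier are assumptions made for demonstration.

```python
import numpy as np

# Estimate a 2-D affine transform from >= 3 point correspondences by least
# squares; points that do not follow the dominant (camera) motion then show
# large residuals and are candidates for moving objects.

def fit_affine(src, dst):
    """src, dst: (N, 2) arrays of corresponding points, N >= 3."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    # Each correspondence gives two rows of M p = b, where
    # p = (a, b, c, d, tx, ty) and x' = a*x + b*y + tx, y' = c*x + d*y + ty.
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p[:4].reshape(2, 2), p[4:]

# Synthetic check: background cells follow a known affine camera motion,
# one point (a "pedestrian") moves independently.
rng = np.random.default_rng(0)
bg = rng.uniform(0, 100, size=(10, 2))
A_true = np.array([[1.02, 0.01], [-0.01, 0.98]])
t_true = np.array([2.0, -1.0])
bg_next = bg @ A_true.T + t_true

A_est, t_est = fit_affine(bg, bg_next)
residual_bg = np.linalg.norm(bg @ A_est.T + t_est - bg_next, axis=1)

outlier = np.array([[50.0, 50.0]])
outlier_next = outlier + np.array([15.0, 0.0])  # independent motion
residual_out = np.linalg.norm(outlier @ A_est.T + t_est - outlier_next, axis=1)
```

    Background residuals are near zero while the independently moving point stands out, which is the basis for segmenting the moving-object region.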

  12. New neural-networks-based 3D object recognition system

    NASA Astrophysics Data System (ADS)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. Ullman and Basri (1991) proposed that this task can be accomplished using a database of 2-D views of the objects. The main problem in their proposed system is that the corresponding points must be known in order to interpolate the views. Moreover, their system requires a supervisor to decide which class the presented view belongs to. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors for our proposed system. Using the Dystal network, we show that objects can be classified with over 95% precision. We have used this system to classify objects such as cubes, cones, spheres, tori, and cylinders. Because of the nature of the Dystal network, the system reaches its stable point with a single presentation of a view. The system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes from 50 different input views), which can be used to select an optimum database of training views. The system is also robust to noise and deformed views.
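    The paper's momentum-Fourier descriptor is not specified here, but the underlying invariance idea can be illustrated with a standard Fourier-descriptor normalization (an illustrative sketch on a synthetic elliptical contour, not the authors' construction):

```python
import numpy as np

# Treat a closed contour as complex numbers z = x + iy. Dropping the DC
# term removes translation, dividing by |F[1]| removes scale, and taking
# magnitudes removes rotation and starting-point phase.

def fourier_descriptor(contour, n_coeffs=8):
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0.0                  # translation invariance
    mags = np.abs(F)            # rotation / start-point invariance
    mags = mags / mags[1]       # scale invariance (normalize by 1st harmonic)
    return mags[1:1 + n_coeffs]

# Synthetic contour: an ellipse, then a scaled/rotated/translated copy.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ellipse = np.stack([3 * np.cos(t), np.sin(t)], axis=1)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
transformed = 2.5 * ellipse @ R.T + np.array([10.0, -4.0])

d1 = fourier_descriptor(ellipse)
d2 = fourier_descriptor(transformed)
```

    The two descriptors coincide despite the similarity transform, which is the property such feature vectors contribute as network inputs.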

  13. Interactive High-Relief Reconstruction for Organic and Double-Sided Objects from a Photo.

    PubMed

    Yeh, Chih-Kuo; Huang, Shi-Yang; Jayaraman, Pradeep Kumar; Fu, Chi-Wing; Lee, Tong-Yee

    2017-07-01

    We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. In particular, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where the front and back sides of some curvy object parts are revealed simultaneously in the image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark up slope and curvature cues on the image to guide our constrained optimization model in inflating and lifting up the image layers. We provide a real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with humans, animals, flowers, etc.

  14. The Peaceful Co-existence of Input Frequency and Structural Intervention Effects on the Comprehension of Complex Sentences in German-Speaking Children

    PubMed Central

    Adani, Flavia; Stegenwallner-Schütz, Maja; Niesel, Talea

    2017-01-01

    The predictions of two contrasting approaches to the acquisition of transitive relative clauses were tested within the same groups of German-speaking participants aged from 3 to 5 years old. The input frequency approach predicts that object relative clauses with inanimate heads (e.g., the pullover that the man is scratching) are comprehended earlier and more accurately than those with an animate head (e.g., the man that the boy is scratching). In contrast, the structural intervention approach predicts that object relative clauses with two full NP arguments mismatching in number (e.g., the man that the boys are scratching) are comprehended earlier and more accurately than those with number-matching NPs (e.g., the man that the boy is scratching). These approaches were tested in two steps. First, we ran a corpus analysis to ensure that object relative clauses with number-mismatching NPs are not more frequent than object relative clauses with number-matching NPs in child directed speech. Next, the comprehension of these structures was tested experimentally in 3-, 4-, and 5-year-olds respectively by means of a color naming task. By comparing the predictions of the two approaches within the same participant groups, we were able to uncover that the effects predicted by the input frequency and by the structural intervention approaches co-exist and that they both influence the performance of children on transitive relative clauses, but in a manner that is modulated by age. These results reveal a sensitivity to animacy mismatch already being demonstrated by 3-year-olds and show that animacy is initially deployed more reliably than number to interpret relative clauses correctly. In all age groups, the animacy mismatch appears to explain the performance of children, thus, showing that the comprehension of frequent object relative clauses is enhanced compared to the other conditions. 
Starting with 4-year-olds but especially in 5-year-olds, the number mismatch supported comprehension—a facilitation that is unlikely to be driven by input frequency. Once children fine-tune their sensitivity to verb agreement information around the age of four, they are also able to deploy number marking to overcome the intervention effects. This study highlights the importance of testing experimentally contrasting theoretical approaches in order to characterize the multifaceted, developmental nature of language acquisition. PMID:29033863

  15. Learned filters for object detection in multi-object visual tracking

    NASA Astrophysics Data System (ADS)

    Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David

    2016-05-01

    We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and an unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, in which tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.

  16. The control of voluntary eye movements: new perspectives.

    PubMed

    Krauzlis, Richard J

    2005-04-01

    Primates use two types of voluntary eye movements to track objects of interest: pursuit and saccades. Traditionally, these two eye movements have been viewed as distinct systems that are driven automatically by low-level visual inputs. However, two sets of findings argue for a new perspective on the control of voluntary eye movements. First, recent experiments have shown that pursuit and saccades are not controlled by entirely different neural pathways but are controlled by similar networks of cortical and subcortical regions and, in some cases, by the same neurons. Second, pursuit and saccades are not automatic responses to retinal inputs but are regulated by a process of target selection that involves a basic form of decision making. The selection process itself is guided by a variety of complex processes, including attention, perception, memory, and expectation. Together, these findings indicate that pursuit and saccades share a similar functional architecture. These points of similarity may hold the key for understanding how neural circuits negotiate the links between the many higher order functions that can influence behavior and the singular and coordinated motor actions that follow.

  17. Operational Implementation of a Pc Uncertainty Construct for Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Newman, Lauri K.; Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    Earlier this year the NASA Conjunction Assessment and Risk Analysis (CARA) project presented the theoretical and algorithmic aspects of a method to include the uncertainties in the calculation inputs when computing the probability of collision (Pc) between two space objects, principally uncertainties in the covariances and the hard-body radius. Rather than a single Pc value, this approach produces an entire probability density function representing the range of possible Pc values given the uncertainties in the inputs, bringing CA risk analysis methodologies more in line with modern risk management theory. The present study provides results from exercising this method against an extended dataset of satellite conjunctions in order to determine the effect of its use on the evaluation of conjunction assessment (CA) event risk posture. The effects are found to be considerable: a good number of events are downgraded from, or upgraded to, a serious risk designation on the basis of the Pc uncertainty. The findings counsel the integration of the developed methods into NASA CA operations.
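    The idea of replacing a single Pc with a distribution can be sketched with a toy Monte Carlo. This is not the CARA operational algorithm; the miss vector, covariance, hard-body radius, and lognormal covariance-scale uncertainty below are all hypothetical values chosen for illustration.

```python
import numpy as np

# The 2-D encounter-plane Pc is the integral of a Gaussian miss-distance
# density over the hard-body circle. Sampling an uncertain covariance
# scale factor turns the single Pc number into a distribution of Pc values.

def pc_2d(miss, cov, hbr, n=100):
    """Probability mass of N(miss, cov) inside a circle of radius hbr."""
    xs = np.linspace(-hbr, hbr, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= hbr**2
    d = np.stack([X - miss[0], Y - miss[1]], axis=-1)
    cinv = np.linalg.inv(cov)
    expo = np.einsum('...i,ij,...j->...', d, cinv, d)
    dens = np.exp(-0.5 * expo) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    cell = (xs[1] - xs[0]) ** 2
    return float((dens * inside).sum() * cell)

rng = np.random.default_rng(1)
miss = np.array([300.0, 0.0])        # m, hypothetical miss vector
cov = np.diag([200.0**2, 100.0**2])  # m^2, nominal combined covariance
hbr = 20.0                           # m, hypothetical hard-body radius

nominal = pc_2d(miss, cov, hbr)
# Uncertain covariance: lognormal scale factor applied to the nominal.
scales = rng.lognormal(mean=0.0, sigma=0.5, size=300)
pc_samples = np.array([pc_2d(miss, s * cov, hbr) for s in scales])
```

    The spread of `pc_samples` around `nominal` is the kind of Pc probability density the abstract describes, from which risk thresholds can be re-evaluated.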

  18. Planning for robust reserve networks using uncertainty analysis

    USGS Publications Warehouse

    Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming, and stochastic global search.
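    The preference for the less error-sensitive of two equal-value options can be sketched with a toy info-gap robustness calculation. This is illustrative only; the linear uncertainty model, the occurrence probabilities, and the coverage target below are assumptions, not the paper's formulation.

```python
import numpy as np

# Info-gap robustness: the largest uncertainty level alpha at which the
# worst-case conservation value of a candidate reserve still meets a
# required target.

def robustness(p, sensitivity, target, alphas=np.linspace(0, 1, 1001)):
    """p: nominal occurrence probabilities; worst case at level alpha is
    sum(clip(p - alpha * sensitivity, 0, 1))."""
    best = 0.0
    for a in alphas:
        worst_case = np.clip(p - a * sensitivity, 0.0, 1.0).sum()
        if worst_case >= target:
            best = a
    return best

# Candidate A: well-surveyed sites (low sensitivity to input error).
# Candidate B: same nominal value, but model-based and error-prone.
pA = np.array([0.6, 0.6, 0.6]); sA = np.array([0.1, 0.1, 0.1])
pB = np.array([0.6, 0.6, 0.6]); sB = np.array([0.4, 0.4, 0.4])

target = 1.45  # required expected number of species covered
rA = robustness(pA, sA, target)
rB = robustness(pB, sB, target)
```

    Both candidates have identical nominal value, yet candidate A tolerates far more input error before falling below the target, which is exactly the preference the abstract argues for.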

  19. Design, manufacture and spin test of high contact ratio helicopter transmission utilizing Self-Aligning Bearingless Planetary (SABP)

    NASA Technical Reports Server (NTRS)

    Folenta, Dezi; Lebo, William

    1988-01-01

    A 450-hp, high-ratio Self-Aligning Bearingless Planetary (SABP) transmission for a helicopter application was designed, manufactured, and spin tested under NASA contract NAS3-24539. The objective of the program was to conduct research and development work on a high-contact-ratio helical gear SABP to reduce weight and noise and to improve efficiency. The results accomplished include the design, manufacture, and no-load spin testing of two prototype helicopter transmissions rated at 450 hp, with an input speed of 35,000 rpm and an output speed of 350 rpm. The weight-to-power ratio of these gear units is 0.33 lb/hp. The measured airborne noise at 35,000 rpm input speed and light load is 94 dB at 5 ft. The high-speed, high-contact-ratio SABP transmission appears to be significantly lighter and quieter than contemporary helicopter transmissions. The SABP concept is applicable not only to high-ratio helicopter-type transmissions but also to other rotorcraft and aircraft propulsion systems.

  20. Bimodal Bilinguals Co-activate Both Languages during Spoken Comprehension

    PubMed Central

    Shook, Anthony; Marian, Viorica

    2012-01-01

    Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals’ and English monolinguals’ eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at competing items than at phonologically unrelated items, and looked more at competing items relative to monolinguals, indicating activation of sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension. PMID:22770677

  1. Coral Reef Remote Sensing Using Simulated VIIRS and LDCM Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.; Blonski, Slawomir; Moore, Roxzana

    2008-01-01

    The Rapid Prototyping Capability (RPC) node at NASA Stennis Space Center, MS, was used to simulate NASA next-generation sensor imagery over well-known coral reef areas: Looe Key, FL, and Kaneohe Bay, HI. The objective was to assess the degree to which next-generation sensor systems, the Visible/Infrared Imager/Radiometer Suite (VIIRS) and the Landsat Data Continuity Mission (LDCM), might provide key input to the National Oceanographic and Atmospheric Administration (NOAA) Integrated Coral Observing Network (ICON)/Coral Reef Early Warning System (CREWS) Decision Support Tool (DST). The DST data layers produced from the simulated imagery concerned water quality and benthic classification map layers. The water optical parameters of interest were chlorophyll (Chl) and the absorption coefficient (a). The input imagery used by the RPC for simulation included spaceborne (Hyperion) and airborne (AVIRIS) hyperspectral data. Specific field data to complement and aid in validation of the overflight data were used when available. The results of the experiment show that the next-generation sensor systems are capable of providing valuable data layer resources to NOAA's ICON/CREWS DST.

  2. Coral Reef Remote Sensing using Simulated VIIRS and LDCM Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.

    2007-01-01

    The Rapid Prototyping Capability (RPC) node at NASA Stennis Space Center, MS, was used to simulate NASA next-generation sensor imagery over well-known coral reef areas: Looe Key, FL, and Kaneohe Bay, HI. The objective was to assess the degree to which next-generation sensor systems, the Visible/Infrared Imager/Radiometer Suite (VIIRS) and the Landsat Data Continuity Mission (LDCM), might provide key input to the National Oceanographic and Atmospheric Administration (NOAA) Integrated Coral Observing Network (ICON)/Coral Reef Early Warning System (CREWS) Decision Support Tool (DST). The DST data layers produced from the simulated imagery concerned water quality and benthic classification map layers. The water optical parameters of interest were chlorophyll (Chl) and the absorption coefficient (a). The input imagery used by the RPC for simulation included spaceborne (Hyperion) and airborne (AVIRIS) hyperspectral data. Specific field data to complement and aid in validation of the overflight data were used when available. The results of the experiment show that the next-generation sensor systems are capable of providing valuable data layer resources to NOAA's ICON/CREWS DST.

  3. Reconstruction of nonlinear wave propagation

    DOEpatents

    Fleischer, Jason W; Barsi, Christopher; Wan, Wenjie

    2013-04-23

    Disclosed are systems and methods for characterizing a nonlinear propagation environment by numerically propagating a measured output waveform resulting from a known input waveform. The numerical propagation reconstructs the input waveform, and in the process, the nonlinear environment is characterized. In certain embodiments, knowledge of the characterized nonlinear environment facilitates determination of an unknown input based on a measured output. Similarly, knowledge of the characterized nonlinear environment also facilitates formation of a desired output based on a configurable input. In both situations, the input thus characterized and the output thus obtained include features that would normally be lost in linear propagations. Such features can include evanescent waves and peripheral waves, such that an image thus obtained is an inherently wide-angle, far-field form of microscopy.

  4. Excitations for Rapidly Estimating Flight-Control Parameters

    NASA Technical Reports Server (NTRS)

    Moes, Tim; Smith, Mark; Morelli, Gene

    2006-01-01

    A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. For an IFCS, it is required to be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. 
Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the derivatives.
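    The idea behind Fourier-transform regression can be illustrated on a one-state stand-in model x_dot = a*x + b*u rather than the full aircraft equations; the signals, frequencies, and parameter values below are invented, and the real FTR implementation differs in detail:

```python
import numpy as np

# Sketch: estimate the derivatives a, b of x_dot = a*x + b*u by linear
# regression in the frequency domain, evaluated only at the excitation
# frequencies (the essence of Fourier-transform regression).
def ftr_estimate(t, x, u, freqs_hz):
    w = 2.0 * np.pi * np.asarray(freqs_hz)
    dt = t[1] - t[0]
    E = np.exp(-1j * np.outer(w, t))            # finite Fourier transform kernel
    X, U = E @ x * dt, E @ u * dt
    A = np.column_stack([X, U])                 # model: jw*X = a*X + b*U
    theta, *_ = np.linalg.lstsq(A, 1j * w * X, rcond=None)
    return theta.real                           # [a_hat, b_hat]

t = np.arange(0.0, 15.0, 0.001)                 # a 15-second maneuver
u = np.sin(2*np.pi*0.5*t) + np.sin(2*np.pi*1.5*t)   # stacked sine-wave input
a_true, b_true = -2.0, 3.0
x = np.zeros_like(t)
for k in range(len(t) - 1):                     # simple Euler simulation
    x[k+1] = x[k] + (t[1] - t[0]) * (a_true*x[k] + b_true*u[k])
a_hat, b_hat = ftr_estimate(t, x, u, [0.5, 1.5])
```

    Because the regression uses only the excited frequencies, it needs far less data than time-domain output-error methods, which is what makes the approach attractive for onboard, near-real-time use.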

  5. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  6. Galaxy And Mass Assembly (GAMA): end of survey report and data release 2

    NASA Astrophysics Data System (ADS)

    Liske, J.; Baldry, I. K.; Driver, S. P.; Tuffs, R. J.; Alpaslan, M.; Andrae, E.; Brough, S.; Cluver, M. E.; Grootes, M. W.; Gunawardhana, M. L. P.; Kelvin, L. S.; Loveday, J.; Robotham, A. S. G.; Taylor, E. N.; Bamford, S. P.; Bland-Hawthorn, J.; Brown, M. J. I.; Drinkwater, M. J.; Hopkins, A. M.; Meyer, M. J.; Norberg, P.; Peacock, J. A.; Agius, N. K.; Andrews, S. K.; Bauer, A. E.; Ching, J. H. Y.; Colless, M.; Conselice, C. J.; Croom, S. M.; Davies, L. J. M.; De Propris, R.; Dunne, L.; Eardley, E. M.; Ellis, S.; Foster, C.; Frenk, C. S.; Häußler, B.; Holwerda, B. W.; Howlett, C.; Ibarra, H.; Jarvis, M. J.; Jones, D. H.; Kafle, P. R.; Lacey, C. G.; Lange, R.; Lara-López, M. A.; López-Sánchez, Á. R.; Maddox, S.; Madore, B. F.; McNaught-Roberts, T.; Moffett, A. J.; Nichol, R. C.; Owers, M. S.; Palamara, D.; Penny, S. J.; Phillipps, S.; Pimbblet, K. A.; Popescu, C. C.; Prescott, M.; Proctor, R.; Sadler, E. M.; Sansom, A. E.; Seibert, M.; Sharp, R.; Sutherland, W.; Vázquez-Mata, J. A.; van Kampen, E.; Wilkins, S. M.; Williams, R.; Wright, A. H.

    2015-09-01

    The Galaxy And Mass Assembly (GAMA) survey is one of the largest contemporary spectroscopic surveys of low-redshift galaxies. Covering an area of ~286 deg^2 (split among five survey regions) down to a limiting magnitude of r < 19.8 mag, we have collected spectra and reliable redshifts for 238 000 objects using the AAOmega spectrograph on the Anglo-Australian Telescope. In addition, we have assembled imaging data from a number of independent surveys in order to generate photometry spanning the wavelength range 1 nm-1 m. Here, we report on the recently completed spectroscopic survey and present a series of diagnostics to assess its final state and the quality of the redshift data. We also describe a number of survey aspects and procedures, or updates thereof, including changes to the input catalogue, redshifting and re-redshifting, and the derivation of ultraviolet, optical and near-infrared photometry. Finally, we present the second public release of GAMA data. In this release, we provide input catalogue and targeting information, spectra, redshifts, ultraviolet, optical and near-infrared photometry, single-component Sérsic fits, stellar masses, Hα-derived star formation rates, environment information, and group properties for all galaxies with r < 19.0 mag in two of our survey regions, and for all galaxies with r < 19.4 mag in a third region (72 225 objects in total). The database serving these data is available at http://www.gama-survey.org/.

  7. Sensory convergence in the parieto-insular vestibular cortex

    PubMed Central

    Shinder, Michael E.

    2014-01-01

    Vestibular signals are pervasive throughout the central nervous system, including the cortex, where they likely play different roles than they do in the better studied brainstem. Little is known about the parieto-insular vestibular cortex (PIVC), an area of the cortex with prominent vestibular inputs. Neural activity was recorded in the PIVC of rhesus macaques during combinations of head, body, and visual target rotations. Activity of many PIVC neurons was correlated with the motion of the head in space (vestibular), the twist of the neck (proprioceptive), and the motion of a visual target, but was not associated with eye movement. PIVC neurons responded most commonly to more than one stimulus, and responses to combined movements could often be approximated by a combination of the individual sensitivities to head, neck, and target motion. The pattern of visual, vestibular, and somatic sensitivities on PIVC neurons displayed a continuous range, with some cells strongly responding to one or two of the stimulus modalities while other cells responded to any type of motion equivalently. The PIVC contains multisensory convergence of self-motion cues with external visual object motion information, such that neurons do not represent a specific transformation of any one sensory input. Instead, the PIVC neuron population may define the movement of head, body, and external visual objects in space and relative to one another. This comparison of self and external movement is consistent with insular cortex functions related to monitoring and explains many disparate findings of previous studies. PMID:24671533

  8. Realtime automatic metal extraction of medical x-ray images for contrast improvement

    NASA Astrophysics Data System (ADS)

    Prangl, Martin; Hellwagner, Hermann; Spielvogel, Christian; Bischof, Horst; Szkaliczki, Tibor

    2006-03-01

    This paper focuses on an approach for real-time metal extraction from x-ray images taken by modern x-ray machines like C-arms. Such machines are used for vessel diagnostics, surgical interventions, as well as cardiology, neurology and orthopedic examinations. They are very fast in taking images from different angles. For this reason, manual adjustment of contrast is infeasible and automatic adjustment algorithms have been applied to try to select the optimal radiation dose for contrast adjustment. Problems occur when metallic objects, e.g., a prosthesis or a screw, are in the absorption area of interest. In this case, the automatic adjustment mostly fails because the dark, metallic objects lead the algorithm to overdose the x-ray tube. This outshining effect results in overexposed images and bad contrast. To overcome this limitation, metallic objects have to be detected and extracted from images that are taken as input for the adjustment algorithm. In this paper, we present a real-time solution for extracting metallic objects from x-ray images. We will explore the characteristic features of metallic objects in x-ray images and their distinction from bone fragments, which form the basis for a successful approach to object segmentation and classification. Subsequently, we will present our edge-based real-time approach for successful and fast automatic segmentation and classification of metallic objects. Finally, experimental results on the effectiveness and performance of our approach, based on a large number of input image data sets, will be presented.

  9. Quantity and Quality of Caregivers' Linguistic Input to 18-Month and 3-Year-Old Children Who Are Hard of Hearing.

    PubMed

    Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat

    2015-01-01

    The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. 
Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.

  10. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
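    The first- and second-order moment procedure can be illustrated on a toy function standing in for the CFD output. Here the sensitivity derivatives come from finite differences, whereas the paper's implementation computes them within the CFD code; the function and input statistics are invented:

```python
import numpy as np

# Approximate statistical moment method on a toy output f(x):
# second-order mean  mu_f ~ f(mu) + 0.5 * sum_i f''_ii * sigma_i^2
# first-order std    sd_f ~ sqrt( sum_i (df/dx_i)^2 * sigma_i^2 )
def f(x):
    return x[0] ** 2 + np.sin(x[1])

def propagate_moments(f, mu, sigma, h=1e-4):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n = len(mu)
    grad, hess_diag = np.zeros(n), np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        grad[i] = (f(mu + e) - f(mu - e)) / (2 * h)          # central difference
        hess_diag[i] = (f(mu + e) - 2 * f(mu) + f(mu - e)) / h ** 2
    mean2 = f(mu) + 0.5 * np.sum(hess_diag * sigma ** 2)     # second-order mean
    std1 = np.sqrt(np.sum(grad ** 2 * sigma ** 2))           # first-order std
    return mean2, std1

mu, sigma = [1.0, 0.5], [0.1, 0.05]
mean_approx, std_approx = propagate_moments(f, mu, sigma)

# Monte Carlo check of the approximation, as the paper does
rng = np.random.default_rng(0)
vals = f(rng.normal(mu, sigma, size=(200_000, 2)).T)
```

    For this mildly nonlinear f, the moment approximations agree with the Monte Carlo moments to a few parts in a thousand, at a tiny fraction of the cost.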

  11. Development of a computer code to couple PWR-GALE output and PC-CREAM input

    NASA Astrophysics Data System (ADS)

    Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Radionuclide dispersion analysis is an important part of reactor safety analysis; it yields the doses received by radiation workers and by communities around a nuclear reactor. Under normal operating conditions, the dispersion analysis is carried out using the PC-CREAM code, which requires input data such as the source term and the population distribution. The source term is derived from the output of another program, PWR-GALE, and the population distribution data must be written in a specific format. Compiling PC-CREAM inputs manually requires high accuracy, since it involves large amounts of data in fixed formats, and manual compilation is error-prone. To minimize errors in input generation, this work developed a program that couples PWR-GALE output to PC-CREAM input, together with a program that writes the population distribution in the required format. The programs were written in the Python programming language, which has the advantages of being multiplatform, object-oriented, and interactive. The result of this work is software that couples the source-term data and writes the population distribution data, so that PC-CREAM inputs can be prepared easily and formatting errors avoided.
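    The coupling idea can be sketched in the Python style the abstract describes. The file layouts below (a two-column nuclide/rate listing and a fixed-width output) are hypothetical stand-ins; the real PWR-GALE and PC-CREAM formats are defined by those codes:

```python
# Illustrative sketch of coupling PWR-GALE output to PC-CREAM input.
# Both file formats here are invented placeholders, not the codes' real formats.
def read_source_term(gale_output_path):
    """Parse hypothetical 'nuclide  release_rate' pairs from a PWR-GALE output file."""
    source_term = {}
    with open(gale_output_path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) == 2:
                nuclide, rate = parts
                source_term[nuclide] = float(rate)
    return source_term

def write_pc_cream_input(source_term, out_path):
    """Write the source term in a fixed-width layout (assumed format)."""
    with open(out_path, "w") as fh:
        for nuclide, rate in sorted(source_term.items()):
            fh.write(f"{nuclide:<10s}{rate:12.4E}\n")

# round-trip demo with a tiny fake source term
with open("gale.out", "w") as fh:
    fh.write("Kr-85  1.23e2\nI-131  4.5e-1\n")
write_pc_cream_input(read_source_term("gale.out"), "cream.inp")
```

    The point of automating this step is exactly what the abstract states: once the parser and the writer are fixed, the fixed-width formatting can never be mistyped by hand.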

  12. Canonical multi-valued input Reed-Muller trees and forms

    NASA Technical Reports Server (NTRS)

    Perkowski, M. A.; Johnson, P. D.

    1991-01-01

    There has recently been increased interest in logic synthesis using EXOR gates. The paper introduces the fundamental concept of Orthogonal Expansion, which generalizes the ring form of the Shannon expansion to logic with multiple-valued (mv) inputs. Based on this concept we are able to define a family of canonical tree circuits. Such circuits can be considered for binary and multiple-valued input cases. They can be multi-level (trees and DAGs) or flattened to two-level AND-EXOR circuits. Input decoders similar to those used in Sum of Products (SOP) PLAs are used in realizations of multiple-valued input functions. In the case of binary logic, the family of flattened AND-EXOR circuits includes several forms discussed by Davio and Green. For the case of logic with multiple-valued inputs, the family of flattened mv AND-EXOR circuits includes three expansions known from the literature and two new expansions.
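    For the binary case, the best-known flattened AND-EXOR form is the positive-polarity Reed-Muller expansion, whose coefficients follow from repeatedly applying the ring-form (positive Davio) expansion f = f0 XOR x*(f0 XOR f1) to each variable. A compact sketch, offered only as background to the binary case the abstract mentions:

```python
# Positive-polarity Reed-Muller (AND-EXOR) coefficients of a Boolean function
# from its truth table, via the standard GF(2) "butterfly" transform.
def reed_muller_coeffs(truth):
    n = len(truth).bit_length() - 1
    c = list(truth)
    for i in range(n):                    # apply positive Davio to variable i
        step = 1 << i
        for j in range(len(c)):
            if j & step:
                c[j] ^= c[j ^ step]       # cofactor difference f0 XOR f1
    return c

def eval_rm(coeffs, x_bits):
    # f(x) = XOR of all coefficient terms whose variable set is covered by x
    acc = 0
    for m, cm in enumerate(coeffs):
        if cm and (m & x_bits) == m:
            acc ^= 1
    return acc

truth = [0, 1, 1, 0, 1, 0, 0, 1]          # 3-input parity, x0 ^ x1 ^ x2
coeffs = reed_muller_coeffs(truth)
print(coeffs)                             # -> [0, 1, 1, 0, 1, 0, 0, 0]
```

    For parity, the only nonzero coefficients are the single-variable terms x0, x1, x2, i.e. the AND-EXOR form is just x0 XOR x1 XOR x2, which is why EXOR-based synthesis is so compact for arithmetic-like functions.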

  13. Training-Induced Recovery of Low-Level Vision Followed by Mid-Level Perceptual Improvements in Developmental Object and Face Agnosia

    ERIC Educational Resources Information Center

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…

  14. Visual Predictions in the Orbitofrontal Cortex Rely on Associative Content

    PubMed Central

    Chaumon, Maximilien; Kveraga, Kestutis; Barrett, Lisa Feldman; Bar, Moshe

    2014-01-01

    Predicting upcoming events from incomplete information is an essential brain function. The orbitofrontal cortex (OFC) plays a critical role in this process by facilitating recognition of sensory inputs via predictive feedback to sensory cortices. In the visual domain, the OFC is engaged by low spatial frequency (LSF) and magnocellular-biased inputs, but beyond this, we know little about the information content required to activate it. Is the OFC automatically engaged to analyze any LSF information for meaning? Or is it engaged only when LSF information matches preexisting memory associations? We tested these hypotheses and show that only LSF information that could be linked to memory associations engages the OFC. Specifically, LSF stimuli activated the OFC in 2 distinct medial and lateral regions only if they resembled known visual objects. More identifiable objects increased activity in the medial OFC, known for its function in affective responses. Furthermore, these objects also increased the connectivity of the lateral OFC with the ventral visual cortex, a crucial region for object identification. At the interface between sensory, memory, and affective processing, the OFC thus appears to be attuned to the associative content of visual information and to play a central role in visuo-affective prediction. PMID:23771980

  15. Effect of Increased Intensity of Physiotherapy on Patient Outcomes After Stroke: An Economic Literature Review and Cost-Effectiveness Analysis

    PubMed Central

    Chan, B

    2015-01-01

    Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). 
Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
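    The 2-state Markov arithmetic used in such analyses can be sketched as follows; the transition probability, utilities, costs, and horizon below are invented for illustration and are not this analysis's inputs:

```python
# Toy 2-state (alive/dead) Markov cohort model with a long horizon and
# 5% annual discounting, as used in cost-per-QALY analyses. All numbers
# are illustrative placeholders.
def markov_qalys(p_death, utility, annual_cost, years, discount=0.05):
    alive = 1.0
    qalys = cost = 0.0
    for year in range(years):
        d = (1 + discount) ** -year     # discount factor for this cycle
        qalys += alive * utility * d
        cost += alive * annual_cost * d
        alive *= 1 - p_death            # cohort moves to the absorbing state
    return qalys, cost

# compare usual care vs higher-intensity physiotherapy (assumed effects)
q0, c0 = markov_qalys(0.05, 0.60, 4000.0, 40)
q1, c1 = markov_qalys(0.05, 0.65, 4200.0, 40)
icer = (c1 - c0) / (q1 - q0)            # incremental cost per QALY gained
```

    Sensitivity analysis in this framework is just re-running the model with the uncertain inputs perturbed, which is how the high sensitivity reported above would be exposed.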

  16. Influence Diagram Use With Respect to Technology Planning and Investment

    NASA Technical Reports Server (NTRS)

    Levack, Daniel J. H.; DeHoff, Bryan; Rhodes, Russel E.

    2009-01-01

    Influence diagrams are relatively simple, but powerful, tools for assessing the impact of choices or resource allocations on goals or requirements. They are very general and can be used on a wide range of problems. They can be used for any problem that has defined goals, a set of factors that influence the goals or the other factors, and a set of inputs. Influence diagrams show the relationship among a set of results and the attributes that influence them and the inputs that influence the attributes. If the results are goals or requirements of a program, then the influence diagram can be used to examine how the requirements are affected by changes to technology investment. This paper uses an example to show how to construct and interpret influence diagrams, how to assign weights to the inputs and attributes, how to assign weights to the transfer functions (influences), and how to calculate the resulting influences of the inputs on the results. A study is also presented as an example of how using influence diagrams can help in technology planning and investment. The Space Propulsion Synergy Team (SPST) used this technique to examine the impact of R&D spending on the Life Cycle Cost (LCC) of a space transportation system. The question addressed was how the proportion of R&D resources spent on technology objectives, versus on operational dependability objectives, affects the recurring and non-recurring portions of LCC. The goals, attributes, and the inputs were established. All of the linkages (influences) were determined. The weighting of each of the attributes and each of the linkages was determined. Finally, the inputs were varied and the resulting impacts on the LCC were determined and are presented. The paper discusses how each of these was accomplished both for credibility and as an example for future studies using influence diagrams for technology planning and investment planning.
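    The calculation described (inputs weighted into attributes, attributes weighted into results) is linear and can be sketched in a few lines; the names and weights below are illustrative, not the SPST study's actual values:

```python
import numpy as np

# Toy influence-diagram arithmetic: inputs drive attributes, attributes drive
# results, and each link carries a weight. All names and weights are invented.
input_names = ["R&D on technology", "R&D on dependability"]
attr_names = ["vehicle performance", "operational dependability"]
result_names = ["non-recurring LCC influence", "recurring LCC influence"]

W_in_attr = np.array([[0.8, 0.1],     # rows: attributes, cols: inputs
                      [0.2, 0.9]])
W_attr_res = np.array([[0.7, 0.3],    # rows: results, cols: attributes
                       [0.4, 0.6]])

def propagate(inputs):
    attrs = W_in_attr @ inputs        # weighted influence of inputs on attributes
    return W_attr_res @ attrs         # weighted influence of attributes on results

# vary the R&D split and compare the resulting influence on each LCC component
for split in ([1.0, 0.0], [0.5, 0.5], [0.0, 1.0]):
    print(split, np.round(propagate(np.array(split)), 2))
```

    Varying the split and re-propagating is exactly the "vary the inputs and determine the impacts" step the paper describes; with linear transfer functions it reduces to two matrix-vector products.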

  17. An Advanced Computational Approach to System of Systems Analysis & Architecting Using Agent-Based Behavioral Model

    DTIC Science & Technology

    2012-09-30

    [OCR residue from the report's figures; the recoverable content is summarized here.] The report describes an SoS Agent that evaluates candidate SoS architectures against a fitness, or objective, function. The structure of the SoS Agent is depicted in Figure 10; the extracted diagram labels indicate an initial SoS architecture, agents that provide and receive capabilities, and a Fuzzy Inference Engine (FAM) whose inputs include Affordability, Flexibility, Performance, and Robustness.

  18. Implementation of input command shaping to reduce vibration in flexible space structures

    NASA Technical Reports Server (NTRS)

    Chang, Kenneth W.; Seering, Warren P.; Rappole, B. Whitney

    1992-01-01

    Viewgraphs on implementation of input command shaping to reduce vibration in flexible space structures are presented. Goals of the research are to explore theory of input command shaping to find an efficient algorithm for flexible space structures; to characterize Middeck Active Control Experiment (MACE) test article; and to implement input shaper on the MACE structure and interpret results. Background on input shaping, simulation results, experimental results, and future work are included.
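    Input shaping convolves the reference command with a short impulse sequence tuned to the structure's flexible mode. As background, here is a sketch of the classic two-impulse zero-vibration (ZV) shaper; the abstract does not state which shaper the MACE work used, and the mode parameters below are invented:

```python
import math

# Zero-vibration (ZV) input shaper: two impulses whose amplitudes and spacing
# cancel the residual vibration of a mode with natural frequency wn (rad/s)
# and damping ratio zeta. Convolving any command with these impulses
# suppresses excitation of that mode.
def zv_shaper(wn, zeta):
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    amps = [1 / (1 + K), K / (1 + K)]           # impulse amplitudes (sum to 1)
    wd = wn * math.sqrt(1 - zeta ** 2)          # damped natural frequency
    times = [0.0, math.pi / wd]                 # half the damped period apart
    return amps, times

amps, times = zv_shaper(wn=2 * math.pi * 1.0, zeta=0.05)  # a 1 Hz mode (assumed)
```

    The cost of shaping is a command delay of half the damped period, which is the trade-off any implementation on a flexible structure must accept.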

  19. Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition

    PubMed Central

    Lallee, Stephane; Madden, Carol; Hoen, Michel; Dominey, Peter Ford

    2010-01-01

    The current research extends our framework for embodied language and action comprehension to include a teleological representation that allows goal-based reasoning for novel actions. The objective of this work is to implement and demonstrate the advantages of a hybrid, embodied-teleological approach to action–language interaction, both from a theoretical perspective, and via results from human–robot interaction experiments with the iCub robot. We first demonstrate how a framework for embodied language comprehension allows the system to develop a baseline set of representations for processing goal-directed actions such as “take,” “cover,” and “give.” Spoken language and visual perception are input modes for these representations, and the generation of spoken language is the output mode. Moving toward a teleological (goal-based reasoning) approach, a crucial component of the new system is the representation of the subcomponents of these actions, which includes relations between initial enabling states, and final resulting states for these actions. We demonstrate how grammatical categories including causal connectives (e.g., because, if–then) can allow spoken language to enrich the learned set of state-action-state (SAS) representations. We then examine how this enriched SAS inventory enhances the robot's ability to represent perceived actions in which the environment inhibits goal achievement. The paper addresses how language comes to reflect the structure of action, and how it can subsequently be used as an input and output vector for embodied and teleological aspects of action. PMID:20577629

  20. Semi-supervised tracking of extreme weather events in global spatio-temporal climate datasets

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Prabhat, M.; Williams, D. N.

    2017-12-01

    Deep neural networks have been successfully applied to detecting extreme weather events in large-scale climate datasets, attaining performance that overshadows all previous hand-crafted methods. Recent work has shown that a multichannel spatiotemporal encoder-decoder CNN architecture is able to localize events with semi-supervised bounding boxes. Motivated by this work, we propose a new learning method based on Variational Auto-Encoders (VAE) and Long Short-Term Memory (LSTM) to track extreme weather events in spatio-temporal datasets. We treat spatio-temporal object tracking as learning the probabilistic distribution of the auto-encoder's continuous latent features using stochastic variational inference. For this, we assume that the data are i.i.d. and that the latent features can be modeled by a Gaussian distribution. In the proposed method, we first train the VAE to generate an approximate posterior given multichannel climate input containing an extreme climate event at a fixed time. We then predict the bounding box, location, and class of extreme climate events using convolutional layers, given an input concatenating three features: the embedding, the sampled mean, and the standard deviation. Lastly, we train the LSTM on the concatenated input to learn the temporal structure of the dataset by recurrently feeding the output back into the next time-step's VAE input. Our contribution is two-fold. First, we present the first semi-supervised end-to-end architecture based on a VAE for tracking extreme weather events, which can be applied to massive unlabeled climate datasets. Second, the temporal movement of events is incorporated into the bounding-box prediction via the LSTM, which can improve localization accuracy. To our knowledge, this technique has not been explored in either the climate or the machine learning community.
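    The feature construction described, sampling a latent embedding with the reparameterization trick and concatenating it with the mean and standard deviation, can be sketched as follows; the encoder outputs are faked with fixed arrays, since the real ones would come from the trained VAE:

```python
import numpy as np

# Sketch of the concatenated latent features described above. A (hypothetical)
# encoder yields mu and log-variance; we draw a latent sample with the
# reparameterization trick and concatenate [sample, mu, sigma] as the input
# passed on to the bounding-box predictor / LSTM.
rng = np.random.default_rng(0)

def latent_features(mu, log_var):
    sigma = np.exp(0.5 * log_var)          # standard deviation from log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + sigma * eps                   # differentiable sample from N(mu, sigma^2)
    return np.concatenate([z, mu, sigma], axis=-1)

# fake encoder outputs for an 8-dimensional latent space
mu = np.zeros(8)
log_var = np.full(8, -2.0)
feat = latent_features(mu, log_var)        # shape (24,): three stacked features
```

    Writing the sample as mu + sigma * eps keeps the randomness in eps, so gradients can flow through mu and sigma during training, which is what makes the stochastic variational inference end-to-end trainable.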

  1. Conceptualizing the role of sediment in sustaining ecosystem services: Sediment-ecosystem regional assessment (SEcoRA).

    PubMed

    Apitz, Sabine E

    2012-01-15

    There is a growing trend to include a consideration of ecosystem services, the benefits that people obtain from ecosystems, within decision frameworks. Not more than a decade ago, sediment management efforts were largely site-specific and received little attention except in terms of managing contaminant inputs and addressing sediments as a nuisance at commercial ports and harbors. Sediments figure extensively in the Millennium Ecosystem Assessment; however, contaminated sediment is not the dominant concern. Rather, the focus is on land and water use and management on the landscape scale, which can profoundly affect soil and sediment quality, quantity and fate. Habitat change and loss, due to changes in sediment inputs, whether reductions (resulting in the loss of beaches, storm protection, nutrient inputs, etc.) or increases (resulting in lake, reservoir and wetland infilling, coral reef smothering, etc.); eutrophication and reductions in nutrient inputs, and disturbance due to development and fishing practices are considered major drivers, with significant consequences for biodiversity and the provision and resilience of ecosystem functions and services. As a mobile connecting medium between various parts of the ecosystem via the hydrocycle, sediments, both contaminated and uncontaminated, play both positive and negative roles in the viability and sustainability of social, economic, and ecological objectives. How these roles are interpreted depends upon whether sediment status (defined in terms of sediment quality, quantity, location and transport) is appropriate to the needs of a given endpoint; understanding and managing the dynamic interactions of sediment status on a diverse range of endpoints at the landscape or watershed scale should be the focus of sediment management. This paper seeks to provide a language and conceptual framework upon which sediment-ecosystem regional assessments (SEcoRAs) can be developed in support of that goal.

  2. Capturing the Patient’s Experience: Using Qualitative Methods to Develop a Measure of Patient-Reported Symptom Burden: An Example from Ovarian Cancer

    PubMed Central

    Williams, Loretta A.; Agarwal, Sonika; Bodurka, Diane C.; Saleeba, Angele K.; Sun, Charlotte C.; Cleeland, Charles S.

    2013-01-01

    Context Experts in patient-reported outcome (PRO) measurement emphasize the importance of including patient input in the development of PRO measures. Although best methods for acquiring this input are not yet identified, patient input early in instrument development ensures that instrument content captures information most important and relevant to patients in understandable terms. Objectives The M. D. Anderson Symptom Inventory (MDASI) is a reliable, valid PRO instrument for assessing cancer symptom burden. We report a qualitative (open-ended, in-depth) interviewing method that can be used to incorporate patient input into PRO symptom measure development, with our experience in constructing a MDASI module for ovarian cancer (MDASI-OC) as a model. Methods Fourteen patients with ovarian cancer (OC) described symptoms experienced at the time of the study, at diagnosis, and during prior treatments. Researchers and clinicians used content analysis of interview transcripts to identify symptoms in patient language. Symptoms were ranked on the basis of the number of patients mentioning them and by clinician assessment of relevance. Results Forty-two symptoms were mentioned. Eight OC-specific items will be added to the 13 core symptom items and six interference items of the MDASI in a test version of the MDASI-OC based on the number of patients mentioning them and clinician assessment of importance. The test version is undergoing psychometric evaluation. Conclusion The qualitative interviewing process, used to develop the test MDASI-OC, systematically captures common symptoms important to patients with ovarian cancer. This methodology incorporates the patient experience recommended by experts in PRO instrument development. PMID:23615044

  3. Electrometer Amplifier With Overload Protection

    NASA Technical Reports Server (NTRS)

    Woeller, F. H.; Alexander, R.

    1986-01-01

    Circuit features low noise, low input offset, and high linearity. Input preamplifier includes input-overload protection and nulling circuit to subtract dc offset from output. Prototype dc amplifier designed for use with ion detector has features desirable in general laboratory and field instrumentation.

  4. Input analysis for two public consultations on the EU Clinical Trials Regulation.

    PubMed

    Langhof, Holger; Lander, Jonas; Strech, Daniel

    2016-09-17

    The European Union's (EU) Clinical Trials Directive was replaced by an EU-Regulation as of 2016. The policy revision process was subject to a formal impact assessment exercised by the European Commission (EC) from 2008 to 2014. Following the EU principles of Good Governance, deliberation with stakeholders was an integral part of this impact assessment and the policy formulation process. Hence, two public consultations (PCs) were held by the EC in 2009 and 2011, respectively. Various stakeholders contributed and submitted their written input to the EC. Though often cited in the further revision process, the input gathered in the PC was not communicated with full transparency and it is unclear how and to what extent the input has been processed and used in the policy formulation. The objective of this study was an analysis of submissions to both PCs in order to systematically present what topics have been discussed and which possible policy options have been raised by the stakeholders. All written submissions publicly available were downloaded from the EC's homepage and assessed for stakeholder characteristics. Thematic text analysis was applied to assess the full text of a random sample of 33% of these submissions. A total of 198 different stakeholders from the EU and the United States of America contributed to one or both of the two PCs. In total, 44 various themes have been addressed that could be clustered under 24 main themes, including the articulation of problems as well as possible policy solutions to face these problems. The two PCs on the Clinical Trials Directive were highly appreciated by the various stakeholders and their input allowed an in-depth view on their particular interests. This input provided a rich source of information for all stakeholders in the field of clinical trials as well as to the EC's impact assessment. 
Although the EC obviously gathered a large quantity of expert knowledge on practical implications of trials legislation by consulting stakeholders, it remained unclear how this input was used in the development of the new regulation. For the sake of transparency, it is recommended that in future PCs the EC uses better standardized methods for a more transparent analysis and presentation of results.

  5. Progressively expanded neural network for automatic material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Paheding, Sidike

    The science of hyperspectral remote sensing focuses on exploiting the spectral signatures of various materials to enhance capabilities including object detection, recognition, and material characterization. Hyperspectral imagery (HSI) has been used extensively for object detection and identification because it provides abundant spectral information with which to uniquely identify materials by their reflectance spectra. HSI-based object detection algorithms can be broadly classified into stochastic and deterministic approaches. Deterministic approaches are comparatively simple to apply since they are usually based on direct spectral similarity measures such as spectral angle or spectral correlation. In contrast, stochastic algorithms require statistical modeling and estimation for the target and non-target classes. Over the decades, many single-class object detection methods have been proposed in the literature; however, deterministic multiclass object detection in HSI has not been explored. In this work, we propose a deterministic multiclass object detection scheme, named class-associative spectral fringe-adjusted joint transform correlation. The human brain is capable of simultaneously processing high volumes of multi-modal data every second of the day. In contrast, a machine sees input data simply as random binary numbers. Although machines are computationally efficient, they are inferior when it comes to data abstraction and interpretation. Thus, mimicking the learning strength of the human brain is a current trend in artificial intelligence. In this work, we present a biologically inspired neural network, named progressively expanded neural network (PEN Net), based on a nonlinear transformation of input neurons to a feature space for better pattern differentiation. In PEN Net, discrete fixed excitations are disassembled and scattered in the feature space as a nonlinear line. Each disassembled element on the line corresponds to a pattern with similar features. Unlike a conventional neural network, in which hidden neurons must be iteratively adjusted to achieve better accuracy, PEN Net requires no hidden-neuron tuning, which yields better computational efficiency, and it has also shown superior performance in HSI classification tasks compared to the state of the art. Spectral-spatial HSI classification frameworks have shown greater strength than spectral-only methods. In our final proposed technique, PEN Net is combined with multiscale spatial features (i.e., multiscale complete local binary patterns) to perform spectral-spatial classification of HSI. Several experiments demonstrate the excellent performance of our proposed technique compared to more recently developed approaches.
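
The core PEN Net idea, nonlinearly expanding input neurons into a feature space and training only a linear readout, can be sketched as below; the power-function expansion and toy data are assumptions for illustration, not the paper's exact transformation.

```python
import numpy as np

def progressively_expand(X, degrees=(1, 2, 3)):
    """Nonlinearly expand each input neuron into several feature
    neurons (here: integer powers), so a single linear readout can
    separate patterns without iteratively tuned hidden units.  The
    power basis is an illustrative stand-in for PEN Net's expansion."""
    return np.hstack([X ** d for d in degrees])

# Two toy "spectra" to be assigned different class scores.
X = np.array([[0.2, 0.9],
              [0.9, 0.2]])
y = np.array([0.0, 1.0])

X_exp = progressively_expand(X)          # shape (2, 6)

# Least-squares linear readout on the expanded features; no hidden
# layer is trained, which is the claimed efficiency advantage.
w, *_ = np.linalg.lstsq(X_exp, y, rcond=None)
pred = X_exp @ w
```

The readout is fit in one closed-form step, in contrast to the iterative hidden-neuron adjustment of a conventional network.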

  6. Passively damped vibration welding system and method

    DOEpatents

    Tan, Chin-An; Kang, Bongsu; Cai, Wayne W.; Wu, Tao

    2013-04-02

    A vibration welding system includes a controller, welding horn, an anvil, and a passive damping mechanism (PDM). The controller generates an input signal having a calibrated frequency. The horn vibrates in a desirable first direction at the calibrated frequency in response to the input signal to form a weld in a work piece. The PDM is positioned with respect to the system, and substantially damps or attenuates vibration in an undesirable second direction. A method includes connecting the PDM having calibrated properties and a natural frequency to an anvil of an ultrasonic welding system. Then, an input signal is generated using a weld controller. The method includes vibrating a welding horn in a desirable direction in response to the input signal, and passively damping vibration in an undesirable direction using the PDM.

  7. EM61-MK2 Response of Standard Munition Items

    DTIC Science & Technology

    2008-10-06

    metallic objects in the vicinity of the sensor. The decay of this induced field is sensed by monitoring the current in a wire-loop receiver coil in four...response are selected and marked as potential metal targets. This initial list of anomalies is used as input to an analysis step that selects anomalies... metal objects are left un-remediated but we are confident that the objects responsible for the anomaly have a smaller response than any of our targets

  8. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.

  9. Uptake Index of 123I-metaiodobenzylguanidine Myocardial Scintigraphy for Diagnosing Lewy Body Disease

    PubMed Central

    Kamiya, Yoshito; Ota, Satoru; Okumiya, Shintaro; Yamashita, Kosuke; Takaki, Akihiro; Ito, Shigeki

    2017-01-01

    Objective(s): Iodine-123 metaiodobenzylguanidine (123I-MIBG) myocardial scintigraphy has been used to evaluate cardiac sympathetic denervation in Lewy body disease (LBD), including Parkinson’s disease (PD) and dementia with Lewy bodies (DLB). The heart-to-mediastinum ratio (H/M) in PD and DLB is significantly lower than that in Parkinson’s plus syndromes and Alzheimer’s disease. Although this ratio is useful for distinguishing LBD from non-LBD, it fluctuates depending on the system performance of the gamma cameras. Therefore, a new, simple quantification method using 123I-MIBG uptake analysis is required for clinical study. The purpose of this study was to develop a new uptake index with a simple protocol to determine 123I-MIBG uptake on planar images. Methods: The 123I-MIBG input function was obtained from the input counts of the pulmonary artery (PA), which were assessed by analyzing the PA time-activity curves. The heart region of interest used for determining the H/M was used for calculating the uptake index, which was obtained by dividing the heart count by the input count. Results: Forty-eight patients underwent 123I-MIBG chest angiography and planar imaging, after clinical feature assessment and tracer injection. The H/M and 123I-MIBG uptake index were calculated and correlated with clinical features. Values for LBD were significantly lower than those for non-LBD in all analyses (P<0.001). The overlapping ranges between non-LBD and LBD were 2.15 to 2.49 in the H/M method, and 1.04 to 1.22% in the uptake index method. The diagnostic accuracy of the uptake index (area under the curve (AUC), 0.98; sensitivity, 96%; specificity, 91%; positive predictive value (PPV), 90%; negative predictive value (NPV), 93%; and accuracy, 92%) was approximately equal to that of the H/M (AUC, 0.95; sensitivity, 93%; specificity, 91%; PPV, 90%; NPV, 93%; and accuracy, 92%) for discriminating patients with LBD and non-LBD. 
Conclusion: A simple uptake index method was developed using 123I-MIBG planar imaging and the input counts determined by analyzing chest radioisotope angiography images of the PA. The diagnostic accuracy of the uptake index was approximately equal to that of the H/M for discriminating patients with LBD and non-LBD. PMID:28840137
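
The two quantities compared in the study, the new uptake index and the conventional H/M ratio, can be expressed directly; the counts below are made-up illustrations, not values from the patient data.

```python
def uptake_index(heart_counts, pa_input_counts):
    """Uptake index from planar 123I-MIBG imaging: counts in the heart
    region of interest divided by the input counts estimated from the
    pulmonary-artery time-activity curve, expressed as a percentage."""
    return 100.0 * heart_counts / pa_input_counts

def heart_to_mediastinum(heart_mean, mediastinum_mean):
    """Conventional H/M ratio, shown for comparison."""
    return heart_mean / mediastinum_mean

# Illustrative numbers only (not from the study).
idx = uptake_index(heart_counts=1.5e4, pa_input_counts=1.2e6)
hm = heart_to_mediastinum(heart_mean=52.0, mediastinum_mean=25.0)
```

Note the two measures live on different scales (the study's overlap ranges were 1.04 to 1.22% for the index and 2.15 to 2.49 for H/M), so each needs its own diagnostic cutoff.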

  10. A Simulation Study Comparing Incineration and Composting in a Mars-Based Advanced Life Support System

    NASA Technical Reports Server (NTRS)

    Hogan, John; Kang, Sukwon; Cavazzoni, Jim; Levri, Julie; Finn, Cory; Luna, Bernadette (Technical Monitor)

    2000-01-01

    The objective of this study is to compare incineration and composting in a Mars-based advanced life support (ALS) system. The variables explored include waste pre-processing requirements, reactor sizing and buffer capacities. The study incorporates detailed mathematical models of biomass production and waste processing into an existing dynamic ALS system model. The ALS system and incineration models (written in MATLAB/SIMULINK(c)) were developed at the NASA Ames Research Center. The composting process is modeled using first order kinetics, with different degradation rates for individual waste components (carbohydrates, proteins, fats, cellulose and lignin). The biomass waste streams are generated using modified "Energy Cascade" crop models, which use light- and dark-cycle temperatures, irradiance, photoperiod, [CO2], planting density, and relative humidity as model inputs. The study also includes an evaluation of equivalent system mass (ESM).
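
The first-order kinetics used for the composting model can be made concrete; the per-component rate constants below are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

# First-order degradation X(t) = X0 * exp(-k t), with a different rate
# constant for each waste component, as in the composting model.
# Rate constants are illustrative only (units: 1/day).
rates = {
    "carbohydrates": 0.40,
    "proteins":      0.25,
    "fats":          0.10,
    "cellulose":     0.05,
    "lignin":        0.01,
}

def remaining_mass(x0, k, t_days):
    """Mass of a component left after t_days of composting."""
    return x0 * np.exp(-k * t_days)

# Fast components are nearly gone after 30 days; lignin barely moves.
after_30 = {c: remaining_mass(1.0, k, 30.0) for c, k in rates.items()}
```

The spread in rate constants is what drives reactor sizing and buffer capacity in such a comparison: the slowest component sets the residence time.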

  11. The Protected Areas Visitor Impact Management (PAVIM) framework: A simplified process for making management decisions

    USGS Publications Warehouse

    Farrell, T.A.; Marion, J.L.

    2002-01-01

    Ecotourism and protected area visitation in Central and South America have resulted in ecological impacts, which some protected areas managers have addressed by employing visitor impact management frameworks. In this paper, we propose the Protected Area Visitor Impact Management (PAVIM) framework as an alternative to carrying capacity and other frameworks such as Limits of Acceptable Change. We use a set of evaluation criteria to compare the relative positive and negative attributes of carrying capacity, other decision-making frameworks and the new framework, within the context of their actual and potential use in Central and South America. Positive attributes of PAVIM include simplicity, flexibility, cost effectiveness, timeliness, and incorporating input from stakeholders and local residents. Negative attributes include diminished objectivity and cultural sensitivity issues. Further research and application of PAVIM are recommended.

  12. Information quality-control model

    NASA Technical Reports Server (NTRS)

    Vincent, D. A.

    1971-01-01

    Model serves as graphic tool for estimating complete product objectives from limited input information, and is applied to cost estimations, product-quality evaluations, and effectiveness measurements for manpower resources allocation. Six product quality levels are defined.

  13. Parents' Translations of Child Gesture Facilitate Word Learning in Children with Autism, Down Syndrome and Typical Development.

    PubMed

    Dimitrova, Nevena; Özçalışkan, Şeyda; Adamson, Lauren B

    2016-01-01

    Typically-developing (TD) children frequently refer to objects uniquely in gesture. Parents translate these gestures into words, facilitating children's acquisition of these words (Goldin-Meadow et al. in Dev Sci 10(6):778-785, 2007). We ask whether this pattern holds for children with autism (AU) and with Down syndrome (DS) who show delayed vocabulary development. We observed 23 children with AU, 23 with DS, and 23 TD children with their parents over a year. Children used gestures to indicate objects before labeling them and parents translated their gestures into words. Importantly, children benefited from this input, acquiring more words for the translated gestures than the not translated ones. Results highlight the role contingent parental input to child gesture plays in language development of children with developmental disorders.

  14. SNRB{trademark} air toxics monitoring. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-01-01

    Babcock & Wilcox (B&W) is currently conducting a project under the DOE's Clean Coal Technology (CCT II) Program to demonstrate its SO{sub x}NO{sub x}-Rox Box{trademark} (SNRB{trademark}) process in a 5 MWe Field Demonstration Unit at Ohio Edison's R. E. Burger Plant near Shadyside, Ohio. The objective of the SNRB{trademark} Air Toxics Monitoring Project was to provide data on SNRB{trademark} air toxics emissions control performance to B&W and to add to the DOE/EPRI/EPA data base by quantifying the flow rates of selected hazardous substances (or air toxics) in all of the major input and output streams of the SNRB{trademark} process as well as the power plant. Work under the project included the collection and analysis of representative samples of all major input and output streams of the SNRB{trademark} demonstration unit and the power plant, and the subsequent laboratory analysis of these samples to determine the partitioning of the hazardous substances between the various process streams. Material balances for selected air toxics were subsequently calculated around the SNRB{trademark} and host boiler systems, including the removal efficiencies across each of the major air pollution control devices. This report presents results of the SNRB{trademark} Air Toxics Monitoring Project. In addition to the Introduction, a brief description of the test site, including Boiler No. 8 and the SNRB{trademark} process, is included in Section II. The concentrations of air toxic emissions are presented in Section III according to compound class. Material balances are included in Section IV for three major systems: boiler, electrostatic precipitator, and SNRB{trademark}. Emission factors and removal efficiencies are also presented according to compound class in Sections V and VI, respectively. A data evaluation is provided in Section VII.

  15. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
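
A minimal sketch of the kind of in-core flux iteration the manual mentions, here for a 1-D, one-group diffusion problem with made-up parameters (not VENTURE/PC's actual solver or data layout):

```python
import numpy as np

def flux_iteration(n=50, D=1.0, sigma_a=0.1, S=1.0, h=1.0, tol=1e-8):
    """Jacobi flux iteration for the 1-D, one-group diffusion equation
    -D phi'' + sigma_a phi = S with zero-flux boundaries.  Every sweep
    is held in memory ("in core"), which is the speed-up Version 2
    offers over writing intermediate fluxes to disk.  All parameters
    are illustrative assumptions."""
    phi = np.zeros(n)
    diag = 2.0 * D / h**2 + sigma_a
    off = D / h**2
    for _ in range(100_000):
        # Jacobi update: neighbors come from the previous sweep.
        new = (S + off * (np.roll(phi, 1) + np.roll(phi, -1))) / diag
        new[0] = new[-1] = 0.0            # zero-flux boundary nodes
        if np.max(np.abs(new - phi)) < tol:
            return new
        phi = new
    return phi

phi = flux_iteration()
```

The converged flux is symmetric about the slab midplane and bounded above by the infinite-medium value S/sigma_a.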

  16. Input filter compensation for switching regulators

    NASA Technical Reports Server (NTRS)

    Lee, F. C.

    1984-01-01

    Problems caused by input filter interaction and conventional input filter design techniques are discussed. The concept of feedforward control is modeled with an input filter and a buck regulator. Experimental measurement and comparison to the analytical predictions is carried out. Transient response and the use of a feedforward loop to stabilize the regulator system is described. Other possible applications for feedforward control are included.
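
The feedforward concept, measuring the input voltage and precomputing the duty cycle so input-filter disturbances cancel at the output before the feedback loop must react, can be sketched with an ideal averaged buck model; this is a simplified illustration, not the paper's compensator design.

```python
def buck_output(v_in, duty):
    """Ideal averaged buck converter: Vout = D * Vin."""
    return duty * v_in

def feedforward_duty(v_ref, v_in, d_max=0.95):
    """Feedforward loop: measure Vin and set D = Vref/Vin so the
    output ideally stays at the reference despite input-voltage
    variation.  The duty-cycle clamp d_max is an assumed limit."""
    return min(v_ref / v_in, d_max)

v_ref = 5.0
# The output holds the reference even as the source sags and swells
# through the input filter.
outputs = [buck_output(v, feedforward_duty(v_ref, v))
           for v in (12.0, 10.0, 15.0)]
```

In a real regulator the feedforward term handles the bulk of the input disturbance, leaving the feedback loop to correct load transients and model error.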

  17. Transformation priming helps to disambiguate sudden changes of sensory inputs.

    PubMed

    Pastukhov, Alexander; Vivian-Griffiths, Solveiga; Braun, Jochen

    2015-11-01

    Retinal input is riddled with abrupt transients due to self-motion, changes in illumination, object motion, etc. Our visual system must correctly interpret each of these changes to keep visual perception consistent and sensitive. This poses an enormous challenge, as many transients are highly ambiguous in that they are consistent with many alternative physical transformations. Here we investigated inter-trial effects in three situations with sudden and ambiguous transients, each presenting two alternative appearances (rotation-reversing structure-from-motion, polarity-reversing shape-from-shading, and streaming-bouncing object collisions). In every situation, we observed priming of transformations: the outcome perceived in earlier trials tended to repeat in subsequent trials, and this repetition was contingent on perceptual experience. The observed priming was specific to transformations and did not originate in priming of the perceptual states preceding a transient. Moreover, transformation priming was independent of attention and specific to low-level stimulus attributes. In summary, we show how "transformation priors" and experience-driven updating of such priors help to disambiguate sudden changes of sensory inputs. We discuss how dynamic transformation priors can be instantiated as "transition energies" in an "energy landscape" model of visual perception.

  18. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from such data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them, along with robust statistics techniques, to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion, in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.
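
The two-term objective described above, fidelity to the noisy input plus closeness to patterns spanned by learned filter bases, can be illustrated with a linear stand-in; the bases and signals below are toy assumptions, not the paper's learned bases or its nonlinear formulation.

```python
import numpy as np

def denoise(y, B, lam=10.0):
    """Minimize ||x - y||^2 + lam * ||x - P x||^2, where P projects
    onto the span of filter bases B (columns).  The first term keeps
    the result near the noisy input; the second keeps it close to
    patterns representable by the bases.  Closed-form solution:
    x = (I + lam * (I - P))^-1 y."""
    P = B @ np.linalg.pinv(B)             # orthogonal projector onto span(B)
    n = y.shape[0]
    return np.linalg.solve(np.eye(n) + lam * (np.eye(n) - P), y)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)

# Toy "filter bases": constant, ramp, and one sinusoid.
B = np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * t)])

clean = 2.0 + 0.5 * np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(t.shape)
denoised = denoise(noisy, B)
```

Components of the signal inside the basis span pass through nearly untouched, while off-span noise is shrunk by roughly 1/(1 + lam).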

  19. Manufacturing Planning Guide

    NASA Technical Reports Server (NTRS)

    Waid, Michael

    2011-01-01

    Manufacturing process, milestones and inputs are unknowns to first-time users of the manufacturing facilities. The Manufacturing Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their project engineering personnel in manufacturing planning and execution. Material covered includes a roadmap of the manufacturing process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, products, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.

  20. Enhanced capabilities and modified users manual for axial-flow compressor conceptual design code CSPAN

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.; Lavelle, Thomas M.

    1995-01-01

    Modifications made to the axial-flow compressor conceptual design code CSPAN are documented in this report. Endwall blockage and stall margin predictions were added. The loss-coefficient model was upgraded. Default correlations for rotor and stator solidity and aspect-ratio inputs and for stator-exit tangential velocity inputs were included in the code along with defaults for aerodynamic design limits. A complete description of input and output along with sample cases are included.

  1. Local area network with fault-checking, priorities, and redundant backup

    NASA Technical Reports Server (NTRS)

    Morales, Sergio (Inventor); Friedman, Gary L. (Inventor)

    1989-01-01

    This invention is a redundant error detecting and correcting local area networked computer system having a plurality of nodes each including a network connector board within the node for connecting to an interfacing transceiver operably attached to a network cable. There is a first network cable disposed along a path to interconnect the nodes. The first network cable includes a plurality of first interfacing transceivers attached thereto. A second network cable is disposed in parallel with the first cable and, in like manner, includes a plurality of second interfacing transceivers attached thereto. There are a plurality of three position switches each having a signal input, three outputs for individual selective connection to the input, and a control input for receiving signals designating which of the outputs is to be connected to the signal input. Each of the switches includes means for designating a response address for responding to addressed signals appearing at the control input and each of the switches further has its signal input connected to a respective one of the input/output lines from the nodes. Also, one of the three outputs is connected to a respective one of the plurality of first interfacing transceivers.
There is master switch control means having an output connected to the control inputs of the plurality of three position switches and an input for receiving directive signals for outputting addressed switch position signals to the three position switches as well as monitor and control computer means having a pair of network connector boards therein connected to respective ones of one of the first interfacing transceivers and one of the second interfacing transceivers and an output connected to the input of the master switch means for monitoring the status of the networked computer system by sending messages to the nodes and receiving and verifying messages therefrom and for sending control signals to the master switch to cause the master switch to cause respective ones of the nodes to use a desired one of the first and second cables for transmitting and receiving messages and for disconnecting desired ones of the nodes from both cables.

  2. Window-closing safety system

    DOEpatents

    McEwan, Thomas E.

    1997-01-01

    A safety device includes a wire loop embedded in the glass of a passenger car window and routed near the closing leading-edge of the window. The wire loop carries microwave pulses around the loop to and from a transceiver with separate output and input ports. An evanescent field only an inch or two in radius is created along the wire loop by the pulses. Just about any object coming within the evanescent field will dramatically reduce the energy of the microwave pulses received back by the transceiver. Such a loss in energy is interpreted as a closing area blockage, and electrical interlocks are provided to halt or reverse a power window motor that is actively trying to close the window.

  3. Window-closing safety system

    DOEpatents

    McEwan, T.E.

    1997-08-26

    A safety device includes a wire loop embedded in the glass of a passenger car window and routed near the closing leading-edge of the window. The wire loop carries microwave pulses around the loop to and from a transceiver with separate output and input ports. An evanescent field only an inch or two in radius is created along the wire loop by the pulses. Just about any object coming within the evanescent field will dramatically reduce the energy of the microwave pulses received back by the transceiver. Such a loss in energy is interpreted as a closing area blockage, and electrical interlocks are provided to halt or reverse a power window motor that is actively trying to close the window. 5 figs.
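
The decision logic common to both patent records, treating a sharp drop in returned pulse energy as a blockage, can be sketched as a simple threshold interlock; the 50% threshold is an assumed illustration, not a figure from the patent.

```python
def window_interlock(received_energy, baseline_energy, threshold=0.5):
    """Interpret a sharp drop in microwave pulse energy returned from
    the wire loop as an object in the closing path.  The threshold
    fraction is an illustrative assumption."""
    if received_energy < threshold * baseline_energy:
        return "reverse"   # blockage: halt or reverse the window motor
    return "close"         # field undisturbed: keep closing

# A hand in the evanescent field absorbs most of the pulse energy.
state_blocked = window_interlock(0.2, 1.0)
state_clear = window_interlock(0.9, 1.0)
```

In the patent this comparison drives electrical interlocks on the window motor rather than returning a string.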

  4. Use of artificial neural networks on optical track width measurements.

    PubMed

    Smith, Richard J; See, Chung W; Somekh, Mike G; Yacoot, Andrew

    2007-08-01

    We have demonstrated recently that, by using an ultrastable optical interferometer together with artificial neural networks (ANNs), track widths down to 60 nm can be measured with a 0.3 NA objective lens. We investigate the effective conditions for training ANNs. Experimental results will be used to show the characteristics of the training samples and the data format of the ANN inputs required to produce suitably trained ANNs. Results obtained with networks measuring double tracks, and classifying different structures, will be presented to illustrate the capability of the technique. We include a discussion on expansion of the application areas of the system, allowing it to be used as a general purpose instrument.

  5. Use of artificial neural networks on optical track width measurements

    NASA Astrophysics Data System (ADS)

    Smith, Richard J.; See, Chung W.; Somekh, Mike G.; Yacoot, Andrew

    2007-08-01

    We have demonstrated recently that, by using an ultrastable optical interferometer together with artificial neural networks (ANNs), track widths down to 60 nm can be measured with a 0.3 NA objective lens. We investigate the effective conditions for training ANNs. Experimental results will be used to show the characteristics of the training samples and the data format of the ANN inputs required to produce suitably trained ANNs. Results obtained with networks measuring double tracks, and classifying different structures, will be presented to illustrate the capability of the technique. We include a discussion on expansion of the application areas of the system, allowing it to be used as a general purpose instrument.

  6. Control technology for future aircraft propulsion systems

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Szuch, J. R.; Merrill, W. C.; Lehtinen, B.; Soeder, J. F.

    1984-01-01

    The need for a more sophisticated engine control system is discussed. Improvements in thrust-to-weight ratio demand the manipulation of more control inputs, and new technological solutions to the engine control problem are examined. The digital electronic engine control (DEEC) system is a step in the evolution toward full digital electronic engine control. Technology issues are addressed to ensure growing confidence in sophisticated electronic controls for aircraft turbine engines. The need for a control system architecture that permits propulsion controls to be functionally integrated with other aircraft systems is established. Areas of technology studied include: (1) control design methodology; (2) improved modeling and simulation methods; and (3) implementation technologies. Objectives, results, and future thrusts are summarized.

  7. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitudes measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
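
The point-cloud-to-image conversion described above can be sketched outside IDL. The following Python rasterizer is an illustrative assumption (the function name and the keep-highest-return rule are not the toolbox's actual algorithm): it grids (x, y, z) points into an altitude image.

```python
import math

def points_to_altitude_image(points, cell_size):
    """Rasterize (x, y, z) LiDAR returns onto a grid whose pixel value is
    the highest altitude falling inside each cell (a simple surface model)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    ncols = int(math.floor((max(xs) - x0) / cell_size)) + 1
    nrows = int(math.floor((max(ys) - y0) / cell_size)) + 1
    img = [[None] * ncols for _ in range(nrows)]
    for x, y, z in points:
        r = int((y - y0) / cell_size)
        c = int((x - x0) / cell_size)
        if img[r][c] is None or z > img[r][c]:
            img[r][c] = z  # keep the highest return per cell
    return img

# 1 m cells: the first two returns share a cell, so the higher one wins.
img = points_to_altitude_image(
    [(0.2, 0.3, 5.0), (0.8, 0.4, 7.5), (1.6, 1.7, 2.0)], cell_size=1.0)
```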

  8. Ceramic Technology for Advanced Heat Engines Project. Semiannual progress report, October 1984-March 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-09-01

    A five-year project plan was developed with extensive input from private industry. The objective of the project is to develop the industrial technology base required for reliable ceramics for application in advanced automotive heat engines. The project approach includes determining the mechanisms controlling reliability, improving processes for fabricating existing ceramics, developing new materials with increased reliability, and testing these materials in simulated engine environments to confirm reliability. Although this is a generic materials project, the focus is on structural ceramics for advanced gas turbine and diesel engines, ceramic bearings and attachments, and ceramic coatings for thermal barrier and wear applications in these engines.

  9. Ceramic technology for advanced heat engines project: Semiannual progress report for April through September 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-03-01

    An assessment of needs was completed, and a five-year project plan was developed with extensive input from private industry. The objective is to develop the industrial technology base required for reliable ceramics for application in advanced automotive heat engines. The project approach includes determining the mechanisms controlling reliability, improving processes for fabricating existing ceramics, developing new materials with increased reliability, and testing these materials in simulated engine environments to confirm reliability. Although this is a generic materials project, the focus is on structural ceramics for advanced gas turbine and diesel engines, ceramic bearings and attachments, and ceramic coatings for thermal barrier and wear applications in these engines.

  10. Aeroservoelastic wind-tunnel investigations using the Active Flexible Wing Model: Status and recent accomplishments

    NASA Technical Reports Server (NTRS)

    Noll, Thomas E.; Perry, Boyd, III; Tiffany, Sherwood H.; Cole, Stanley R.; Buttrill, Carey S.; Adams, William M., Jr.; Houck, Jacob A.; Srinathkumar, S.; Mukhopadhyay, Vivek; Pototzky, Anthony S.

    1989-01-01

    The status of the joint NASA/Rockwell Active Flexible Wing Wind-Tunnel Test Program is described. The objectives are to develop and validate the analysis, design, and test methodologies required to apply multifunction active control technology for improving aircraft performance and stability. Major tasks include designing digital multi-input/multi-output flutter-suppression and rolling-maneuver-load alleviation concepts for a flexible full-span wind-tunnel model, obtaining an experimental data base for the basic model and each control concept and providing comparisons between experimental and analytical results to validate the methodologies. The opportunity is provided to improve real-time simulation techniques and to gain practical experience with digital control law implementation procedures.

  11. XML-Based Generator of C++ Code for Integration With GUIs

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increased susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is that it stores input data in a structured manner; more importantly, XML allows not only the storage of data but also a description of what each data item is. The XML file thus contains information useful for rendering the data by other applications. The program also generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: (1) as an executable, it generates the corresponding C++ classes; and (2) as a library, it automatically fills the objects with the input data values.
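
The XML-to-C++ generation idea can be illustrated with a minimal generator. The tag and attribute names below are invented for this sketch and are not the program's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical input specification; the real XML-to-C schema may differ.
XML_SPEC = """
<input>
  <param name="temperature" type="double" value="300.0"/>
  <param name="num_steps"   type="int"    value="1000"/>
</input>
"""

def generate_cpp_struct(xml_text, struct_name="InputData"):
    """Emit a C++ struct whose members mirror the XML parameters,
    with the XML values as default initializers."""
    root = ET.fromstring(xml_text)
    lines = [f"struct {struct_name} {{"]
    for p in root.iter("param"):
        lines.append(f'    {p.get("type")} {p.get("name")} = {p.get("value")};')
    lines.append("};")
    return "\n".join(lines)

code = generate_cpp_struct(XML_SPEC)
```

Because the struct is generated from the same file the GUI reads, the two input-handling paths cannot drift apart, which is the duplication problem the abstract describes.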

  12. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
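
A minimal sketch of the coupled approach: a two-variable quadratic response surface with single and two-factor interaction terms, interrogated by a Monte Carlo simulation that disperses the inputs. The coefficients and uncertainty levels are made up for illustration, not taken from the study.

```python
import random

def response_surface(x1, x2, b):
    """Quadratic response with single terms and a two-factor interaction:
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    b0, b1, b2, b11, b22, b12 = b
    return b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2

def monte_carlo_dispersion(b, x1_nom, x2_nom, sigma, n=10_000, seed=1):
    """Disperse the inputs with Gaussian uncertainty and collect the
    resulting response statistics (mean and standard deviation)."""
    rng = random.Random(seed)
    ys = [response_surface(rng.gauss(x1_nom, sigma), rng.gauss(x2_nom, sigma), b)
          for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, var ** 0.5

coeffs = (1.0, 0.5, 0.3, -0.1, -0.05, 0.2)  # illustrative coefficients
mean, std = monte_carlo_dispersion(coeffs, x1_nom=2.0, x2_nom=1.0, sigma=0.1)
```

Repeating the dispersion over a grid of nominal operating points and picking the point with the best mean response is the optimization-under-uncertainty loop in miniature.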

  13. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  14. Thermal effects in the Input Optics of the Enhanced Laser Interferometer Gravitational-Wave Observatory interferometers.

    PubMed

    Dooley, Katherine L; Arain, Muzammil A; Feldbaum, David; Frolov, Valery V; Heintze, Matthew; Hoak, Daniel; Khazanov, Efim A; Lucianetti, Antonio; Martin, Rodica M; Mueller, Guido; Palashov, Oleg; Quetschke, Volker; Reitze, David H; Savage, R L; Tanner, D B; Williams, Luke F; Wu, Wan

    2012-03-01

    We present the design and performance of the LIGO Input Optics subsystem as implemented for the sixth science run of the LIGO interferometers. The Initial LIGO Input Optics experienced thermal side effects when operating with 7 W input power. We designed, built, and implemented improved versions of the Input Optics for Enhanced LIGO, an incremental upgrade to the Initial LIGO interferometers, designed to run with 30 W input power. At four times the power of Initial LIGO, the Enhanced LIGO Input Optics demonstrated improved performance including better optical isolation, less thermal drift, minimal thermal lensing, and higher optical efficiency. The success of the Input Optics design fosters confidence for its ability to perform well in Advanced LIGO.

  15. Hybrid zero-voltage switching (ZVS) control for power inverters

    DOEpatents

    Amirahmadi, Ahmadreza; Hu, Haibing; Batarseh, Issa

    2016-11-01

    A power inverter combination includes a half-bridge power inverter including first and second semiconductor power switches receiving input power having an intermediate node therebetween providing an inductor current through an inductor. A controller includes input comparison circuitry receiving the inductor current having outputs coupled to first inputs of pulse width modulation (PWM) generation circuitry, and a predictive control block having an output coupled to second inputs of the PWM generation circuitry. The predictive control block is coupled to receive a measure of Vin and an output voltage at a grid connection point. A memory stores a current control algorithm configured for resetting a PWM period for a switching signal applied to control nodes of the first and second power switch whenever the inductor current reaches a predetermined upper limit or a predetermined lower limit.
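
The current-limit behavior in the claim can be sketched as a boundary-control loop: ramp the inductor current while one switch conducts, and reset the PWM period whenever the current hits a limit. The slopes, limits, and step counts below are illustrative values, not the patent's design parameters.

```python
def hysteresis_current_control(i0, i_upper, i_lower, slope_up, slope_down,
                               dt, n_steps):
    """Boundary-mode current control sketch: the inductor current ramps up
    while the high-side switch is on; on reaching the upper limit the PWM
    period is reset and the current ramps down toward the lower limit."""
    i, high_side_on, resets = i0, True, 0
    for _ in range(n_steps):
        i += (slope_up if high_side_on else slope_down) * dt
        if high_side_on and i >= i_upper:
            high_side_on, resets = False, resets + 1   # PWM period reset
        elif not high_side_on and i <= i_lower:
            high_side_on, resets = True, resets + 1    # PWM period reset
    return i, resets

# 100 A/s slopes, +/-1 A limits, 0.1 ms steps: one full cycle is 400 steps.
i_final, n_resets = hysteresis_current_control(
    i0=0.0, i_upper=1.0, i_lower=-1.0,
    slope_up=100.0, slope_down=-100.0, dt=1e-4, n_steps=1000)
```

In the patent, reaching either limit triggers zero-voltage switching conditions; here the reset counter simply records those boundary crossings.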

  16. Monument Damage Information System (mondis): AN Ontological Approach to Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Cacciotti, R.; Valach, J.; Kuneš, P.; Čerňanský, M.; Blaško, M.; Křemen, P.

    2013-07-01

    The complex nature of cultural heritage conservation gives rise to the need for a systematic yet flexible organization of expert knowledge in the field. Such an organization should comprehensively address the interrelations and complementarities among the different factors that come into play in the understanding of diagnostic and intervention problems. The purpose of MONDIS is to support this kind of organization. The approach consists of applying an ontological representation to the field of heritage conservation in order to establish appropriate processing of data. The system replicates, in a computer-readable form, the basic dependences among factors influencing the description, diagnosis, and intervention for damage to immovable objects. More specifically, MONDIS allows users to input and search entries concerning object description, structural evolution, location characteristics and risk, components, material properties, surveys and measurements, damage typology, damage-triggering events, and possible interventions. The system supports the search features typical of standard databases, and it allows for the digitization of a wide range of information, including professional reports, books, articles, and scientific papers. It also allows for computer-aided retrieval of information tailored to the user's requirements. The foreseen outputs include a web user interface and a mobile application for visual inspection purposes.

  17. PESTAN: Pesticide Analytical Model Version 4.0 User's Guide

    EPA Pesticide Factsheets

    The principal objective of this User's Guide is to provide essential information on aspects such as model conceptualization, model theory, assumptions and limitations, determination of input parameters, analysis of results, and sensitivity analysis.

  18. Evaluation of Piloted Inputs for Onboard Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Martos, Borja

    2013-01-01

    Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
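
The underlying estimate, the output spectrum divided by the input spectrum at the excited frequencies, can be sketched on a known discrete-time system. This is a sketch of the basic idea only, not the paper's real-time algorithm, and the first-order system and tone choices are invented for the example.

```python
import cmath, math

def freq_response_estimate(u, y, k):
    """Estimate the frequency response at DFT bin k as Y[k] / U[k]."""
    n = len(u)
    U = sum(u[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    Y = sum(y[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    return Y / U

# Known first-order low-pass y[t] = a*y[t-1] + (1-a)*u[t], excited by a
# two-tone "multisine" input placed on bins 5 and 20 of a 256-point record.
n, a = 256, 0.5
u = [math.sin(2 * math.pi * 5 * t / n) + math.sin(2 * math.pi * 20 * t / n)
     for t in range(n)]
y, prev = [], 0.0
for t in range(n):
    prev = a * prev + (1 - a) * u[t]
    y.append(prev)

H5 = freq_response_estimate(u, y, 5)
# Theory for this system: H(z) = (1-a) / (1 - a*z^-1) at z = exp(2*pi*i*k/n).
z = cmath.exp(2j * math.pi * 5 / n)
H5_theory = (1 - a) / (1 - a / z)
```

With a wide-band piloted input instead of a multisine, the same ratio is evaluated only at bins where the input carries enough energy, which is what the paper's data-selection metric addresses.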

  19. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.
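
The per-event update idea can be caricatured with a translation-only tracker: each incoming event nudges the position estimate, rather than waiting for a full frame. The real method estimates full isometric, similarity, and affine transformations; the gain and the synthetic event stream here are invented for the sketch.

```python
def track_events(events, pos0, eta=0.1):
    """Event-driven tracker sketch: update the position estimate a small
    step toward each incoming event as it arrives."""
    x, y = pos0
    for ex, ey in events:
        x += eta * (ex - x)
        y += eta * (ey - y)
    return x, y

# Synthetic events scattered around a target at (10, 20), processed one by one.
offsets = [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0), (0.0, 1.0)] * 50
events = [(10.0 + dx, 20.0 + dy) for dx, dy in offsets]
x_est, y_est = track_events(events, pos0=(0.0, 0.0))
```

Because every event triggers an O(1) update, the equivalent update rate is set by the event stream itself, which is how kilohertz-range tracking becomes possible.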

  20. Whisker Contact Detection of Rodents Based on Slow and Fast Mechanical Inputs

    PubMed Central

    Claverie, Laure N.; Boubenec, Yves; Debrégeas, Georges; Prevost, Alexis M.; Wandersman, Elie

    2017-01-01

    Rodents use their whiskers to locate nearby objects with extreme precision. To perform such tasks, they need to detect whisker/object contacts with a high temporal accuracy. This contact detection is conveyed by classes of mechanoreceptors whose neural activity is sensitive to either slow or fast time varying mechanical stresses acting at the base of the whiskers. We developed a biomimetic approach to separate and characterize slow quasi-static and fast vibrational stress signals acting on a whisker base in realistic exploratory phases, using experiments on both real and artificial whiskers. Both slow and fast mechanical inputs are successfully captured using a mechanical model of the whisker. We present and discuss consequences of the whisking process in purely mechanical terms and hypothesize that free whisking in air sets a mechanical threshold for contact detection. The time resolution and robustness of the contact detection strategies based on either slow or fast stress signals are determined. Contact detection based on the vibrational signal is faster and more robust to exploratory conditions than the slow quasi-static component, although both slow/fast components allow localizing the object. PMID:28119582
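
One simple way to realize the slow/fast separation is to take an exponential moving average as the quasi-static component and the residual as the vibrational component, then threshold the residual against a free-whisking baseline. The filter constant, threshold, and synthetic signal below are illustrative assumptions, not the paper's actual processing.

```python
import math

def split_slow_fast(signal, alpha=0.05):
    """Separate a mechanical signal into a slow quasi-static component
    (exponential moving average) and a fast vibrational residual."""
    slow, fast, s = [], [], signal[0]
    for v in signal:
        s += alpha * (v - s)   # low-pass: slow bending component
        slow.append(s)
        fast.append(v - s)     # high-pass residual: vibrations
    return slow, fast

def detect_contact(fast, threshold):
    """Index of the first sample where the vibrational component exceeds
    the free-whisking threshold; None if no contact is detected."""
    for i, v in enumerate(fast):
        if abs(v) > threshold:
            return i
    return None

# Slow drift plus a burst of high-frequency "contact" vibration at sample 300.
sig = [0.001 * t + (0.5 * math.sin(1.5 * t) if t >= 300 else 0.0)
       for t in range(400)]
slow, fast = split_slow_fast(sig)
hit = detect_contact(fast, threshold=0.1)
```

The vibrational channel fires on the first post-contact sample, while the slow channel only drifts, mirroring the paper's finding that the fast signal gives the quicker, more robust detection.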

  1. Silicon photonics thermal phase shifter with reduced temperature range

    DOEpatents

    Lentine, Anthony L; Kekatpure, Rohan D; DeRose, Christopher; Davids, Paul; Watts, Michael R

    2013-12-17

    Optical devices, phased array systems and methods of phase-shifting an input signal are provided. An optical device includes a microresonator and a waveguide for receiving an input optical signal. The waveguide includes a segment coupled to the microresonator with a coupling coefficient such that the waveguide is overcoupled to the microresonator. The microresonator receives the input optical signal via the waveguide and phase-shifts the input optical signal to form an output optical signal. The output optical signal is coupled into the waveguide via the microresonator and transmitted by the waveguide. At an operating point of the optical device, the coupling coefficient is selected to reduce a change in an amplitude of the output optical signal and to increase a change in a phase of the output optical signal, relative to the input optical signal.
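
The overcoupling trade-off can be checked numerically with the textbook all-pass ring transmission formula. The coupling and loss values below are illustrative, not the patent's design; in this convention the ring is overcoupled when the round-trip amplitude exceeds the through-coupling coefficient, giving a large phase swing across resonance with only a modest amplitude dip.

```python
import cmath, math

def ring_transmission(phi, tau, a):
    """Textbook all-pass microring field transmission: tau is the
    through-coupling coefficient, a the round-trip amplitude (a > tau
    means overcoupled); phi is the round-trip phase."""
    num = tau - a * cmath.exp(1j * phi)
    den = 1 - tau * a * cmath.exp(1j * phi)
    return num / den

tau, a = 0.9, 0.99                            # illustrative, a > tau
t_res = ring_transmission(0.0, tau, a)        # on resonance
t_off = ring_transmission(math.pi, tau, a)    # far from resonance
phase_swing = abs(cmath.phase(t_res) - cmath.phase(t_off))
```

Here the on-resonance amplitude stays above 80% of the off-resonance value while the phase swings by roughly pi, which is the amplitude-flat, phase-steep operating point the abstract describes.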

  2. Petroleum Market Model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs, including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level, and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation Methodologies; Appendix G, Matrix Generator Documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  3. Exploring Venus: the Venus Exploration Analysis Group (VEXAG)

    NASA Astrophysics Data System (ADS)

    Ocampo, A.; Atreya, S.; Thompson, T.; Luhmann, J.; Mackwell, S.; Baines, K.; Cutts, J.; Robinson, J.; Saunders, S.

    In July 2005, NASA's Planetary Division established the Venus Exploration Analysis Group (VEXAG, http://www.lpi.usra.edu/vexag) in order to engage the scientific community at large in identifying scientific priorities and strategies for the exploration of Venus. VEXAG is a community-based forum open to all interested in the exploration of Venus. VEXAG was designed to provide scientific input and technology development plans for planning and prioritizing the study of Venus over the next several decades, including a Venus surface sample return. VEXAG regularly evaluates NASA's Venus exploration goals, scientific objectives, investigations, and critical measurement requirements, including the recommendations in the National Research Council Decadal Survey and NASA's Solar System Exploration Strategic Roadmap. VEXAG will take into consideration the latest scientific results from ESA's Venus Express mission and the MESSENGER flybys, as well as the results anticipated from JAXA's Venus Climate Orbiter, together with science community inputs from venues such as the February 13-16, 2006 AGU Chapman Conference, to identify the scientific priorities and strategies for future NASA Venus exploration. VEXAG is composed of two co-chairs, Sushil Atreya (University of Michigan, Ann Arbor) and Janet Luhmann (University of California, Berkeley). VEXAG has formed three focus groups in the areas of: (1) Planetary Formation and Evolution: Surface and Interior, Volcanism, Geodynamics, etc. (Focus Group Lead: Steve Mackwell, LPI); (2) Atmospheric Evolution, Dynamics, Meteorology

  4. Persistent Identification of Agents and Objects of Global Change

    NASA Astrophysics Data System (ADS)

    Tilmes, C.; Fox, P. A.; Waple, A.; Zednik, S.

    2012-12-01

    "Global Change" includes climate change, ecological change, land-use changes and host of other interacting complex systems including societal and institutional implications. This vast body of information includes scientific research, data, measurements, models, analyses, assessments, etc. It is produced by a collection of multi-disciplinary researchers and organizations from around the world and demand for this information is increasing from a multitude of different audiences and stakeholders. The identification and organization of the agents and objects of global change information and their inter-relationships and contributions to the whole story of change is critical for conveying the state of knowledge, its complexity as well as syntheses and key messages to researchers, decision makers, and the public. The U.S. Global Change Research Program (http://globalchange.gov) coordinates and integrates federal research on changes in the global environment and their implications for society. The USGCRP is developing a Global Change Information System (GCIS) that will organize and present our best understanding of global change, and all the contributing information that leads to that understanding, including the provenance needed to trust and use that information. The first implementation will provide provenance for the National Climate Assessment (NCA). (http://assessment.globalchange.gov) The NCA must integrate, evaluate, and interpret the findings of the USGCRP; analyze the effects of global change on the natural environment, agriculture, energy production and use, land and water resources, transportation, human health and welfare, human social systems, and biological diversity; and analyze current trends in global change, both human-induced and natural, and projects major trends for the subsequent 25 to 100 years. It also assesses information at the regional scale across the Nation. 
A synthesis report is required not less frequently than every four years and the next NCA report will be delivered in 2013. However a major new approach for the NCA is as a sustained effort including many more foundational components (such as scenarios and indicators) and thousands of contributors and participants. As a result of a public "request for information" the NCA has received over 500 distinct technical inputs to the process, many of which are reports distilling and synthesizing even more information, coming from thousands of groups around the federal government, non-governmental organizations, academic institutions, etc. The GCIS will assign identifiers, track citations and provide the links from the content of the National Climate Assessment back to related inputs. We will describe our approach to persistent identification of the agents and objects and their relationships to the NCA, how we plan to implement that approach throughout the global change research and sustained assessment activities of the 13 federal agencies of the USGCRP, and how this approach will improve understanding, reproducibility, and ultimately, credibility and usability of global change information.

  5. Liquid cooled data center design selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chainer, Timothy J.; Iyengar, Madhusudan K.; Parida, Pritish R.

    Input data, specifying aspects of a thermal design of a liquid cooled data center, is obtained. The input data includes data indicative of ambient outdoor temperature for a location of the data center; and/or data representing workload power dissipation for the data center. The input data is evaluated to obtain performance of the data center thermal design. The performance includes cooling energy usage; and/or one pertinent temperature associated with the data center. The performance of the data center thermal design is output.
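
The evaluation step can be sketched as a toy function that maps the two named inputs (ambient outdoor temperature, workload power dissipation) to the two named outputs (cooling energy usage, a pertinent temperature). The approach temperature and coefficient of performance below are assumed illustrative values, not figures from the patent.

```python
def evaluate_thermal_design(ambient_c, workload_kw,
                            approach_c=5.0, cop=4.0):
    """Toy evaluation of a liquid-cooled design: coolant temperature follows
    the outdoor ambient plus an assumed heat-exchanger approach, and cooling
    energy is the workload divided by an assumed coefficient of performance.
    Both parameters are illustrative, not from the patent."""
    coolant_c = ambient_c + approach_c
    cooling_kw = workload_kw / cop
    return {"coolant_temp_c": coolant_c, "cooling_energy_kw": cooling_kw}

perf = evaluate_thermal_design(ambient_c=25.0, workload_kw=200.0)
```

A design-selection tool would sweep such a function over candidate designs and ambient climates for the data center's location and report the best performer.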

  6. Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach

    NASA Technical Reports Server (NTRS)

    Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan

    2015-01-01

    This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. 
To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects are created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios and examples of first test runs.
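The core operation this abstract describes, stepping every object forward with small time steps and checking for close approaches, can be sketched as follows. This is a minimal illustration, not the authors' propagator: motion is assumed straight-line over the screening horizon purely for brevity, whereas a real tool would use a high-precision orbital propagator.

```python
import numpy as np

def min_separation(r1, v1, r2, v2, horizon, dt):
    """Screen two objects for a close approach by sampling their
    positions on a fixed (small) time grid. Straight-line motion is
    assumed here for illustration only; a real debris simulation
    propagates full orbital dynamics.

    r1, r2: initial positions (km); v1, v2: velocities (km/s);
    horizon: screening window (s); dt: time step (s).
    Returns (minimum distance, time of minimum distance).
    """
    t = np.arange(0.0, horizon, dt)
    p1 = np.asarray(r1) + np.outer(t, v1)   # positions of object 1
    p2 = np.asarray(r2) + np.outer(t, v2)   # positions of object 2
    d = np.linalg.norm(p1 - p2, axis=1)     # pairwise separation
    i = int(np.argmin(d))
    return d[i], t[i]
```

In a full simulation this pairwise check would run over all tracked objects, with the time step shrunk near predicted conjunctions.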

  7. A study of optimal model lag and spatial inputs to artificial neural network for rainfall forecasting

    NASA Astrophysics Data System (ADS)

    Luk, K. C.; Ball, J. E.; Sharma, A.

    2000-01-01

    Artificial neural networks (ANNs), which emulate the parallel distributed processing of the human nervous system, have proven to be very successful in dealing with complicated problems, such as function approximation and pattern recognition. Due to their powerful capability and functionality, ANNs provide an alternative approach for many engineering problems that are difficult to solve by conventional approaches. Rainfall forecasting has been a difficult subject in hydrology due to the complexity of the physical processes involved and the variability of rainfall in space and time. In this study, ANNs were adopted to forecast short-term rainfall for an urban catchment. The ANNs were trained to recognise historical rainfall patterns as recorded from a number of gauges in the study catchment for reproduction of relevant patterns for new rainstorm events. The primary objective of this paper is to investigate the effect of temporal and spatial information on short-term rainfall forecasting. To achieve this aim, a comparison test on the forecast accuracy was made among the ANNs configured with different orders of lag and different numbers of spatial inputs. In developing the ANNs with alternative configurations, the ANNs were trained to an optimal level to achieve good generalisation of data. It was found in this study that the ANNs provided the most accurate predictions when an optimum number of spatial inputs was included into the network, and that the network with lower lag consistently produced better performance.
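The study's central design question, how many lagged time steps and how many spatial gauges to feed the network, amounts to choosing the shape of the input matrix. The sketch below shows one way to build such a lag/spatial design matrix; the function name and the convention of forecasting gauge 0 one step ahead are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_design_matrix(rain, lag, n_gauges):
    """Build ANN inputs from a rainfall record.

    rain: array of shape (T, G), rainfall depth at G gauges over T steps.
    lag: number of past time steps used as inputs.
    n_gauges: number of spatial inputs (gauges) included.
    Target (illustrative choice): rainfall at gauge 0, one step ahead.
    """
    X, y = [], []
    for t in range(lag, rain.shape[0] - 1):
        # lagged values from the first n_gauges gauges, flattened
        X.append(rain[t - lag + 1 : t + 1, :n_gauges].ravel())
        y.append(rain[t + 1, 0])
    return np.array(X), np.array(y)
```

Sweeping `lag` and `n_gauges` and retraining the network for each combination reproduces the kind of comparison test the abstract describes.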

  8. Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA

    NASA Astrophysics Data System (ADS)

    Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu

    2015-04-01

    Laser brazing (LB) is widely used in the automotive industry due to its advantages of high speed, small heat-affected zone, high weld seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a welding crimping butt of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted with a Taguchi L25 orthogonal array through the statistical design method. The input parameters considered are welding speed, wire feed rate, and gap, each at five levels. The output responses are the effective connection lengths of the left and right sides and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are used to train the BPNN to establish the relationship between the input and output variables. The predicted results of the BPNN are fed to the GA, which optimizes the process parameters subject to the objectives. The effects of welding speed (WS), wire feed rate (WF), and gap (GAP) on the sum values of the bead geometry are then discussed. Finally, confirmation experiments are carried out to demonstrate that the optimal values are effective and reliable. On the whole, the proposed hybrid method, BPNN-GA, can be used to guide actual work and improve the efficiency and stability of the LB process.
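The surrogate-plus-GA loop described above can be sketched compactly: a trained network stands in for the expensive welding experiments, and a simple GA searches its input space. The quadratic `surrogate` below is a deliberately toy stand-in for a trained BPNN, and the GA (truncation selection plus Gaussian mutation, with elitism) is a minimal illustrative variant, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    # Toy stand-in for a trained BPNN: fitness peaks near a (made-up)
    # parameter combination; x has shape (population, 3).
    target = np.array([1.0, 0.2, 0.5])
    return -np.sum((x - target) ** 2, axis=1)

def ga_optimize(fitness, lo, hi, pop=40, gens=60, mut=0.1):
    """Minimal GA: keep the best half (elitism), mutate them to
    refill the population, clip to the parameter bounds."""
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = fitness(X)
        parents = X[np.argsort(f)[::-1][: pop // 2]]   # truncation selection
        children = parents + rng.normal(0.0, mut, parents.shape)
        X = np.clip(np.vstack([parents, children]), lo, hi)
    return X[np.argmax(fitness(X))]
```

Replacing `surrogate` with a network trained on the Taguchi experiment data gives the BPNN-GA structure the abstract describes.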

  9. Reconfigurable Drive Current System

    NASA Technical Reports Server (NTRS)

    Alhorn, Dean C. (Inventor); Dutton, Kenneth R. (Inventor); Howard, David E. (Inventor); Smith, Dennis A. (Inventor)

    2017-01-01

    A reconfigurable drive current system includes drive stages, each of which includes a high-side transistor and a low-side transistor in a totem pole configuration. A current monitor is coupled to an output of each drive stage. Input channels are provided to receive input signals. A processor is coupled to the input channels and to each current monitor for generating at least one drive signal using at least one of the input signals and current measured by at least one of the current monitors. A pulse width modulation generator is coupled to the processor and each drive stage for varying the drive signals as a function of time prior to being supplied to at least one of the drive stages.
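One way the processor in such a system can turn a measured drive-stage current into a time-varying PWM command is a simple PI loop. The following sketch is a generic illustration of that idea, with made-up gain values; it is not taken from the patent.

```python
def pi_duty_cycle(i_ref, i_meas, state, kp=0.5, ki=50.0, dt=1e-4):
    """One step of a PI current loop producing a PWM duty cycle.

    i_ref: commanded current (A); i_meas: current from the monitor (A);
    state: dict carrying the integrator between calls.
    Gains kp, ki and the step dt are illustrative placeholders.
    """
    err = i_ref - i_meas
    state["integral"] += err * dt
    duty = kp * err + ki * state["integral"]
    return min(max(duty, 0.0), 1.0)  # clamp to a valid duty cycle
```

Calling this once per control period for each drive stage, with per-stage gains, mirrors the reconfigurable per-channel structure the abstract describes.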

  10. TIM Version 3.0 beta Technical Description and User Guide - Appendix B - Example input file for TIMv3.0

    EPA Pesticide Factsheets

    Terrestrial Investigation Model, TIM, has several appendices to its user guide. This is the appendix that includes an example input file in its preserved format. Both parameters and comments defining them are included.

  11. Numerical Analysis Objects

    NASA Astrophysics Data System (ADS)

    Henderson, Michael

    1997-08-01

    The Numerical Analysis Objects project (NAO) is a project in the Mathematics Department of IBM's TJ Watson Research Center. While there are plenty of numerical tools available today, it is not an easy task to combine them into a custom application. NAO is directed at the dual problems of building applications from a set of tools, and creating those tools. There are several "reuse" projects, which focus on the problems of identifying and cataloging tools. NAO is directed at the specific context of scientific computing. Because the type of tools is restricted, problems such as tools with incompatible data structures for input and output, and dissimilar interfaces to tools which solve similar problems can be addressed. The approach we've taken is to define interfaces to those objects used in numerical analysis, such as geometries, functions and operators, and to start collecting (and building) a set of tools which use these interfaces. We have written a class library (a set of abstract classes and implementations) in C++ which demonstrates the approach. Besides the classes, the class library includes "stub" routines which allow the library to be used from C or Fortran, and an interface to a Visual Programming Language. The library has been used to build a simulator for petroleum reservoirs, using a set of tools for discretizing nonlinear differential equations that we have written, and includes "wrapped" versions of packages from the Netlib repository. Documentation can be found on the Web at "http://www.research.ibm.com/nao". I will describe the objects and their interfaces, and give examples ranging from mesh generation to solving differential equations.

  12. Implications of Preference and Problem Formulation on the Operating Policies of Complex Multi-Reservoir Systems

    NASA Astrophysics Data System (ADS)

    Quinn, J.; Reed, P. M.; Giuliani, M.; Castelletti, A.

    2016-12-01

    Optimizing the operations of multi-reservoir systems poses several challenges: 1) the high dimension of the problem's states and controls, 2) the need to balance conflicting multi-sector objectives, and 3) understanding how uncertainties impact system performance. These difficulties motivated the development of the Evolutionary Multi-Objective Direct Policy Search (EMODPS) framework, in which multi-reservoir operating policies are parameterized in a given family of functions and then optimized for multiple objectives through simulation over a set of stochastic inputs. However, properly framing these objectives remains a severe challenge and a neglected source of uncertainty. Here, we use EMODPS to optimize operating policies for a 4-reservoir system in the Red River Basin in Vietnam, exploring the consequences of optimizing to different sets of objectives related to 1) hydropower production, 2) meeting multi-sector water demands, and 3) providing flood protection to the capital city of Hanoi. We show how coordinated operation of the reservoirs can differ markedly depending on how decision makers weigh these concerns. Moreover, we illustrate how formulation choices that emphasize the mean, tail, or variability of performance across objective combinations must be evaluated carefully. Our results show that these choices can significantly improve attainable system performance, or yield severe unintended consequences. Finally, we show that satisfactory validation of the operating policies on a set of out-of-sample stochastic inputs depends as much or more on the formulation of the objectives as on effective optimization of the policies. These observations highlight the importance of carefully considering how we abstract stakeholders' objectives and of iteratively optimizing and visualizing multiple problem formulation hypotheses to ensure that we capture the most important tradeoffs that emerge from different stakeholder preferences.
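The EMODPS idea, parameterize an operating policy, simulate it over stochastic inflows, and score it on multiple objectives, can be illustrated with a single-reservoir toy model. Everything below (the linear policy form, the demand level, the two objectives) is an invented minimal example, far simpler than the 4-reservoir Red River formulation.

```python
import numpy as np

def release_policy(storage, theta):
    # Parameterized operating rule: linear in storage, clipped to
    # what is physically releasable. theta = (slope, intercept).
    w, b = theta
    return np.clip(w * storage + b, 0.0, storage)

def simulate(theta, inflows, capacity=100.0, demand=5.0):
    """Simulate the policy over one stochastic inflow trace and return
    two conflicting objectives: mean supply deficit (minimize) and
    peak storage as a crude flood proxy (minimize)."""
    storage, deficits, peak = 50.0, [], 0.0
    for q in inflows:
        storage = min(storage + q, capacity)        # inflow, spill at capacity
        r = release_policy(storage, theta)
        storage -= r
        deficits.append(max(demand - r, 0.0))
        peak = max(peak, storage)
    return np.mean(deficits), peak
```

In EMODPS proper, a multi-objective evolutionary algorithm searches over `theta` (typically the weights of radial basis functions rather than a line), evaluating each candidate over an ensemble of inflow traces.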

  13. supernovae: Photometric classification of supernovae

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Moss, Adam

    2017-05-01

    Supernovae classifies supernovae using their light curves directly as inputs to a deep recurrent neural network, which learns information from the sequence of observations. Observational time and filter fluxes are used as inputs; since the inputs are agnostic, additional data such as host galaxy information can also be included.
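Feeding light curves "directly" to a recurrent network requires packing variable-length observation sequences into a fixed-shape array with a mask. The sketch below shows one common way to do that; the exact input layout used by the package may differ.

```python
import numpy as np

def pad_light_curves(curves, n_filters):
    """Pack variable-length light curves for a recurrent network.

    curves: list of light curves; each is a list of observations
            (time, flux_vector) with flux_vector of length n_filters.
    Returns (X, mask): X of shape (n_curves, max_len, 1 + n_filters)
    with time in column 0, and a boolean mask of real observations.
    """
    max_len = max(len(c) for c in curves)
    X = np.zeros((len(curves), max_len, 1 + n_filters))
    mask = np.zeros((len(curves), max_len), dtype=bool)
    for i, curve in enumerate(curves):
        for j, (t, fluxes) in enumerate(curve):
            X[i, j, 0] = t
            X[i, j, 1:] = fluxes
            mask[i, j] = True
    return X, mask
```

Because the inputs are just (time, flux) tuples, extra per-object columns such as host-galaxy features can be appended to each row without changing the recurrent architecture.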

  14. Comparative case study between D3 and highcharts on lustre data visualization

    NASA Astrophysics Data System (ADS)

    ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott

    2013-12-01

    One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage the human ability to quickly perceive these patterns visually, multivariate features should be implemented according to the attributes available. In this work, a comparative case study has been done using JavaScript libraries to demonstrate their differing capabilities. A web-based application to monitor the Lustre file system for systems administrators and operations teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), which include input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and of users' storage patterns on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).

  15. [Robots and intellectual property].

    PubMed

    Larrieu, Jacques

    2013-12-01

    This topic is part of the global issue concerning the necessity to adapt intellectual property law to constant changes in technology. The relationship between robots and IP is dual. On one hand, robots may be regarded as objects of intellectual property. A robot, like any new machine, could qualify for patent protection. Copyright may protect its appearance if it is original. Its memory, like a database, could be covered by a sui generis right. On the other hand, the question of the protection of the outputs of the robot must be raised. Robots, as the physical embodiment of artificial intelligence, are becoming more and more autonomous. Robot-generated works include less and less human input. Are these objects created or invented by a robot copyrightable or patentable? To whom will the ownership of these IP rights be allocated? To the person who manufactured the machine? To the user of the robot? To the robot itself? All these questions are worth discussing.

  16. Object-based detection of vehicles using combined optical and elevation data

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Bulatov, Dimitri; Middelmann, Wolfgang

    2018-02-01

    The detection of vehicles is an important and challenging topic that is relevant for many applications. In this work, we present a workflow that utilizes optical and elevation data to detect vehicles in remotely sensed urban data. This workflow consists of three consecutive stages: candidate identification, classification, and single vehicle extraction. Unlike in most previous approaches, fusion of both data sources is strongly pursued at all stages. While the first stage utilizes the fact that most man-made objects are rectangular in shape, the second and third stages employ machine learning techniques combined with specific features. The stages are designed to handle multiple sensor input, which results in a significant improvement. A detailed evaluation shows the benefits of our workflow, which includes hand-tailored features; even in comparison with classification approaches based on Convolutional Neural Networks, which are state of the art in computer vision, we could obtain a comparable or superior performance (F1 score of 0.96-0.94).

  17. Design and optimization of input shapers for liquid slosh suppression

    NASA Astrophysics Data System (ADS)

    Aboel-Hassan, Ameen; Arafa, Mustafa; Nassef, Ashraf

    2009-02-01

    The need for fast maneuvering and accurate positioning of flexible structures poses a control challenge. The inherent flexibility in these lightly damped systems creates large undesirable residual vibrations in response to rapid excitations. Several control approaches have been proposed to tackle this class of problems, of which the input shaping technique is appealing in many aspects. While input shaping has been widely investigated to attenuate residual vibrations in flexible structures, less attention was granted to expand its viability in further applications. The aim of this work is to develop a methodology for applying input shaping techniques to suppress sloshing effects in open moving containers to facilitate safe and fast point-to-point movements. The liquid behavior is modeled using finite element analysis. The input shaper parameters are optimized to find the commands that would result in minimum residual vibration. Other objectives, such as improved robustness, and motion constraints such as deflection limiting are also addressed in the optimization scheme. Numerical results are verified on an experimental setup consisting of a small motor-driven water tank undergoing rectilinear motion, while measuring both the tank motion and free surface displacement of the water. The results obtained suggest that input shaping is an effective method for liquid slosh suppression.
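The simplest instance of the input-shaping technique the abstract builds on is the classic zero-vibration (ZV) shaper: two impulses timed and scaled so that the vibration excited by the second cancels the residual vibration from the first. The sketch below computes those impulses for a single second-order mode; the paper itself optimizes shaper parameters numerically against a finite element slosh model, which this does not attempt.

```python
import math

def zv_shaper(wn, zeta):
    """Zero-Vibration (ZV) input shaper for one vibratory mode.

    wn: undamped natural frequency (rad/s); zeta: damping ratio (0..1).
    Returns a list of (time, amplitude) impulses; convolving the
    reference command with these impulses suppresses residual vibration.
    """
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    wd = wn * math.sqrt(1.0 - zeta**2)   # damped natural frequency
    A1 = 1.0 / (1.0 + K)
    A2 = K / (1.0 + K)                   # amplitudes sum to 1
    t2 = math.pi / wd                    # half the damped period
    return [(0.0, A1), (t2, A2)]
```

For slosh suppression, `wn` and `zeta` would come from the dominant slosh mode of the tank; robust variants (ZVD, EI) add impulses to tolerate errors in those estimates.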

  18. Optimal Control for Aperiodic Dual-Rate Systems With Time-Varying Delays

    PubMed Central

    Salt, Julián; Guinaldo, María; Chacón, Jesús

    2018-01-01

    In this work, we consider a dual-rate scenario with slow input and fast output. Our objective is the maximization of the decay rate of the system through the suitable choice of the n input signals between two measurements (periodic sampling) and their times of application. The optimization algorithm is extended to time-varying delays to make its implementation in networked control systems possible. We provide experimental results on an air levitation system to verify the validity of the algorithm in a real plant. PMID:29747441

  19. Use and Design of an Active Data Dictionary for Local Validation of Input Data.

    DTIC Science & Technology

    1985-03-01

    …themselves in these situations are frequently left to their own devices, and they must develop their own strategies for validating inputs…programs. Regardless of origin, inaccurate data are possible in any ADP system. Information created from inaccurate data also tends to be…object name, short name, synonym or aliases, source, narrative description, records/files that use or contain the data object, data structure

  20. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. The neural network technique works well not only as an interpolating device but also as an extrapolating device to achieve blade designs from a given database. Two validating test cases are discussed.
