NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the nonlinear optimal motion cueing algorithm on the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the nonlinear algorithm performed filtering of motion cues in all degrees of freedom except pitch and roll. This manuscript describes the development and implementation of the nonlinear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm that allow filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for this modification is that the location of the pilot's vestibular system must be taken into account, rather than only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated cues with large distortion, large delay, and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canal model was incorporated in the algorithm. With these changes, results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
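As an illustration of the nonlinear gain concept described in the abstract above, the following Python sketch fits a cubic y = c1*x + c3*x^3 that saturates smoothly at the platform limit. The saturating constraint and the numbers used are assumptions chosen for this example, not the coefficients or tuning criterion of the NASA design.

import numpy as np

def make_cubic_gain(x_max, y_max):
    # Fit y = c1*x + c3*x**3 so that y(x_max) = y_max and y'(x_max) = 0, i.e. the
    # scaled cue reaches the platform limit smoothly and stays monotone on [0, x_max].
    c3 = -y_max / (2.0 * x_max**3)
    c1 = 3.0 * y_max / (2.0 * x_max)
    def gain(x):
        x = np.clip(x, -x_max, x_max)   # inputs beyond the fitted range are held at the limit
        return c1 * x + c3 * x**3
    return gain

# Hypothetical example: aircraft accelerations up to 10 m/s^2 mapped into a +/-3 m/s^2 envelope.
scale = make_cubic_gain(x_max=10.0, y_max=3.0)
print(scale(np.array([0.5, 2.0, 10.0])))    # small inputs are scaled gently, large ones saturate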
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm" and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. This algorithm is formulated by optimal control and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.
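The optimal-control formulation mentioned above can be illustrated offline with a standard Riccati solver. The sketch below computes a state-feedback law for a generic single-axis double-integrator platform model; it does not reproduce the vestibular and perception states of the actual algorithm, nor the real-time neurocomputing solver.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # assumed states: platform displacement and velocity
B = np.array([[0.0],
              [1.0]])             # input: commanded acceleration
Q = np.diag([1.0, 0.1])           # penalize displacement (washout) and velocity
R = np.array([[0.01]])            # penalize control effort

P = solve_continuous_are(A, B, Q, R)   # offline stand-in for the real-time Riccati update
K = np.linalg.solve(R, B.T @ P)        # state-feedback law u = -K x
print("feedback gains:", K)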
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research on cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e., the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA Langley, the Visual Motion Simulator (VMS). The authors' proposed future developments in cueing algorithms are outlined. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
A Nonlinear, Human-Centered Approach to Motion Cueing with a Neurocomputing Solver
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Cardullo, Frank M.; Houck, Jacob A.
2002-01-01
This paper discusses the continuation of research into the development of new motion cueing algorithms first reported in 1999. In this earlier work, two viable approaches to motion cueing were identified: the coordinated adaptive washout algorithm or 'adaptive algorithm', and the 'optimal algorithm'. In this study, a novel approach to motion cueing is discussed that would combine features of both algorithms. The new algorithm is formulated as a linear optimal control problem, incorporating improved vestibular models and an integrated visual-vestibular motion perception model previously reported. A control law is generated from the motion platform states, resulting in a set of nonlinear cueing filters. The time-varying control law requires the matrix Riccati equation to be solved in real time. Therefore, in order to meet the real time requirement, a neurocomputing approach is used to solve this computationally challenging problem. Single degree-of-freedom responses for the nonlinear algorithm were generated and compared to the adaptive and optimal algorithms. Results for the heave mode show the nonlinear algorithm producing a motion cue with a time-varying washout, sustaining small cues for a longer duration and washing out larger cues more quickly. The addition of the optokinetic influence from the integrated perception model was shown to improve the response to a surge input, producing a specific force response with no steady-state washout. Improved cues are also observed for responses to a sway input. Yaw mode responses reveal that the nonlinear algorithm improves the motion cues by reducing the magnitude of negative cues. The effectiveness of the nonlinear algorithm as compared to the adaptive and linear optimal algorithms will be evaluated on a motion platform, the NASA Langley Research Center Visual Motion Simulator (VMS), and ultimately the Cockpit Motion Facility (CMF) with a series of pilot controlled maneuvers. A proposed experimental procedure is discussed. The results of this evaluation will be used to assess motion cueing performance.
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control but can be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. Because motion sensation was judged unsatisfactory, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms in simulating aircraft maneuvers was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on a straight-in approach are less prevalent with the nonlinear algorithm than with the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Derringer, Cory; Rottman, Benjamin Margolin
2018-05-01
Four experiments tested how people learn cause-effect relations when there are many possible causes of an effect. When there are many cues, even if all the cues together strongly predict the effect, the bivariate relation between each individual cue and the effect can be weak, which can make it difficult to detect the influence of each cue. We hypothesized that when detecting the influence of a cue, in addition to learning from the states of the cues and effect (e.g., a cue is present and the effect is present), which is hypothesized by multiple existing theories of learning, participants would also learn from transitions - how the cues and effect change over time (e.g., a cue turns on and the effect turns on). We found that participants were better able to identify positive and negative cues in an environment in which only one cue changed from one trial to the next, compared to multiple cues changing (Experiments 1A, 1B). Within a single learning sequence, participants were also more likely to update their beliefs about causal strength when one cue changed at a time ('one-change transitions') than when multiple cues changed simultaneously (Experiment 2). Furthermore, learning was impaired when the trials were grouped by the state of the effect (Experiment 3) or when the trials were grouped by the state of a cue (Experiment 4), both of which reduce the number of one-change transitions. We developed a modification of the Rescorla-Wagner algorithm to model this 'Informative Transitions' learning process. Copyright © 2018 Elsevier Inc. All rights reserved.
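A minimal Rescorla-Wagner learner for many binary cues is sketched below, with a hypothetical "transition" boost that raises the learning rate on trials where at most one cue changed from the previous trial. The boost rule is illustrative only and is not the authors' exact Informative Transitions model.

import numpy as np

def rescorla_wagner(cues, outcomes, alpha=0.1, boost=3.0):
    # cues: (n_trials, n_cues) 0/1 array; outcomes: (n_trials,) 0/1 array.
    w = np.zeros(cues.shape[1])
    prev = cues[0]
    for x, r in zip(cues, outcomes):
        n_changed = np.sum(x != prev)
        lr = alpha * (boost if n_changed <= 1 else 1.0)   # hypothetical one-change-transition boost
        delta = r - w @ x                                  # prediction error
        w += lr * delta * x                                # update weights of active cues only
        prev = x
    return w

rng = np.random.default_rng(0)
cues = rng.integers(0, 2, size=(200, 6))
outcomes = (cues[:, 0] == 1).astype(float)                 # cue 0 is the single true cause
print(np.round(rescorla_wagner(cues, outcomes), 2))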
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on a straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.
2012-01-01
Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. To date, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that pilot control inputs are better correlated with the aircraft response when the augmented channel is in place.
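A schematic of the parallel-channel idea, assuming placeholder filter orders and cutoff frequencies rather than the report's tuning: the primary vertical cue passes through a conventional washout (high-pass) filter, while the turbulence acceleration from a parallel aircraft model is summed into the platform command so its frequency content is not attenuated (the optional high-bandwidth cueing filter is omitted here).

import numpy as np
from scipy import signal

fs = 100.0                                            # sample rate, Hz
t = np.arange(0, 10, 1 / fs)
primary_acc = 1.0 * (t > 2)                           # sustained maneuver cue, m/s^2
turb_acc = 0.3 * np.random.default_rng(1).standard_normal(t.size)   # parallel turbulence model output

b_wash, a_wash = signal.butter(2, 0.5 / (fs / 2), "highpass")        # conventional washout filter

# Washed-out primary cue plus the unattenuated turbulence channel.
platform_cmd = signal.lfilter(b_wash, a_wash, primary_acc) + turb_acc
print(platform_cmd[:5])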
Spatiotemporal brain dynamics underlying attentional bias modifications.
Sallard, Etienne; Hartmann, Lea; Ptak, Radek; Spierer, Lucas
2018-06-05
Exaggerated attentional biases toward specific elements of the environment contribute to the maintenance of several psychiatric conditions, such as biases to threatening faces in social anxiety. Although recent literature indicates that attentional bias modification (ABM) may constitute an effective approach for psychiatric remediation, the underlying neurophysiological mechanisms remain unclear. We addressed this question by recording EEG in 24 healthy participants performing a modified dot-probe task in which pairs of neutral cues (colored shapes) were replaced by probe stimuli requiring a discrimination judgment. To induce an attentional bias toward or away from the cues, the probes were systematically presented either at the same or at the opposite position of a specific cue color. This paradigm enabled participants to spontaneously develop biases to initially unbiased, neutral cues, as measured by the response speed to the probe presented after the cues. Behavioral results indicated that the ABM procedure induced approach and avoidance biases. The influence of ABM on inhibitory control was assessed in a separate Go/NoGo task: changes in attentional bias did not influence participants' capacity to inhibit their responses to the cues. Attentional bias modification was associated with a topographic modulation of event-related potentials already 50-84 ms following the onset of the cues. Statistical analyses of distributed electrical source estimations revealed that the development of attentional biases was associated with decreased activity in the left temporo-parieto-occipital junction. These findings suggest that attentional bias modification affects early sensory processing phases related to the extraction of information based on stimulus saliency. Copyright © 2017. Published by Elsevier B.V.
Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E
2009-01-01
Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to further refine the algorithms with higher validity of truth. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
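A toy genetic algorithm in the spirit of the approach described above: individuals are small sets of candidate cue phrases, and fitness rewards phrases that appear in one class of reports but not the other. The vocabulary, example reports, fitness function, and operators are all illustrative, not those of the OSTI work.

import random

random.seed(0)
vocab = ["mass", "calcification", "no acute", "unremarkable", "nodule", "effusion"]
class_a = ["spiculated mass with calcification", "new nodule and pleural effusion"]   # toy "positive" reports
class_b = ["no acute findings", "unremarkable exam", "no acute disease"]              # toy "negative" reports

def fitness(ind):
    hits_a = sum(any(p in doc for p in ind) for doc in class_a)
    hits_b = sum(any(p in doc for p in ind) for doc in class_b)
    return hits_a - hits_b                      # reward phrase sets specific to class A

def mutate(ind):
    ind = list(ind)
    ind[random.randrange(len(ind))] = random.choice(vocab)
    return ind

def crossover(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]

pop = [[random.choice(vocab) for _ in range(2)] for _ in range(20)]
for _ in range(30):                              # selection, crossover, mutation, elitism
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents))) for _ in range(10)]
    pop = parents + children
print(max(pop, key=fitness))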
Two cloud-based cues for estimating scene structure and camera calibration.
Jacobs, Nathan; Abrams, Austin; Pless, Robert
2013-10-01
We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
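The first cue can be sketched as follows, with synthetic data standing in for a real time lapse: the temporal correlation between two pixels' intensity series serves as a proxy for their 3D proximity, since nearby scene points tend to be shadowed by the same cloud at the same time.

import numpy as np

rng = np.random.default_rng(2)
n_frames = 500
cloud = rng.random(n_frames) < 0.3                              # shared cloud-shadow process
near_a = 1.0 - 0.5 * cloud + 0.05 * rng.standard_normal(n_frames)
near_b = 1.0 - 0.5 * cloud + 0.05 * rng.standard_normal(n_frames)        # same cloud cover as near_a
far_c = 1.0 - 0.5 * rng.permutation(cloud) + 0.05 * rng.standard_normal(n_frames)  # independent shading

def proximity_cue(x, y):
    # Correlation of two pixels' intensity time series; higher values suggest smaller 3D distance.
    return np.corrcoef(x, y)[0, 1]

print("near pair:", round(proximity_cue(near_a, near_b), 2))
print("far pair: ", round(proximity_cue(near_a, far_c), 2))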
Rabinovitz, Sharon; Nagar, Maayan
2015-10-01
Cognitive biases have previously been recognized as key mechanisms that contribute to the development, maintenance, and relapse of addictive behaviors. The same mechanisms have been recently found in problematic computer gaming. The present study aims to investigate whether excessive massively multiplayer online role-playing gamers (EG) demonstrate an approach bias toward game-related cues compared to neutral stimuli; to test whether these automatic action tendencies can be implicitly modified in a single training session; and to test whether this training affects game urges and game-seeking behavior. EG (n=38) were randomly assigned to a condition in which they were implicitly trained to avoid or to approach gaming cues by pushing or pulling a joystick, using a computerized intervention (cognitive bias modification via the Approach Avoidance Task). EG demonstrated an approach bias for gaming cues compared with neutral, movie cues. A single training session significantly decreased automatic action tendencies to approach gaming cues. These effects occurred outside subjective awareness. Furthermore, approach bias retraining reduced subjective urges and intentions to play, as well as decreased game-seeking behavior. Retraining automatic processes may be beneficial in changing addictive impulses in EG. Yet, large-scale trials and long-term follow-up are warranted. The results extend the application of cognitive bias modification from substance use disorders to behavioral addictions, and specifically to Internet gaming disorder. Theoretical implications are discussed.
Cooke, Martin; Aubanel, Vincent
2017-01-01
Algorithmic modifications to the durational structure of speech designed to avoid intervals of intense masking lead to increases in intelligibility, but the basis for such gains is not clear. The current study addressed the possibility that the reduced information load produced by speech rate slowing might explain some or all of the benefits of durational modifications. The study also investigated the influence of masker stationarity on the effectiveness of durational changes. Listeners identified keywords in sentences that had undergone linear and nonlinear speech rate changes resulting in overall temporal lengthening in the presence of stationary and fluctuating maskers. Relative to unmodified speech, a slower speech rate produced no intelligibility gains for the stationary masker, suggesting that a reduction in information rate does not underlie intelligibility benefits of durationally modified speech. However, both linear and nonlinear modifications led to substantial intelligibility increases in fluctuating noise. One possibility is that overall increases in speech duration provide no new phonetic information in stationary masking conditions, but that temporal fluctuations in the background increase the likelihood of glimpsing additional salient speech cues. Alternatively, listeners may have benefitted from an increase in the difference in speech rates between the target and background. PMID:28618803
A novel speech processing algorithm based on harmonicity cues in cochlear implant
NASA Astrophysics Data System (ADS)
Wang, Jian; Chen, Yousheng; Zhang, Zongping; Chen, Yan; Zhang, Weifeng
2017-08-01
This paper proposes a novel speech processing algorithm for cochlear implants, which uses harmonicity cues to enhance tonal information in Mandarin Chinese speech recognition. The input speech was filtered by a 4-channel band-pass filter bank. The frequency ranges for the four bands were 300-621, 621-1285, 1285-2657, and 2657-5499 Hz. In each pass band, temporal envelope and periodicity cues (TEPCs) below 400 Hz were extracted by full-wave rectification and low-pass filtering. The TEPCs were modulated by a sinusoidal carrier whose frequency was the fundamental frequency (F0) or the F0 harmonic closest to the center frequency of each band. Signals from all bands were combined to obtain the output speech. Mandarin tone, word, and sentence recognition in quiet listening conditions were tested for the widely used continuous interleaved sampling (CIS) strategy and the novel F0-harmonic algorithm. Results showed that the F0-harmonic algorithm performed consistently better than the CIS strategy in Mandarin tone, word, and sentence recognition. In addition, sentence recognition rates were higher than word recognition rates, owing to contextual information in the sentences. Moreover, tones 3 and 4 were recognized better than tones 1 and 2, due to the more easily identified features of the former. In conclusion, the F0-harmonic algorithm can enhance tonal information in cochlear implant speech processing through the use of harmonicity cues, thereby improving Mandarin tone, word, and sentence recognition. Further work will focus on testing the F0-harmonic algorithm in noisy listening conditions.
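The processing chain described in the abstract can be sketched step by step: band-pass analysis, envelope extraction by full-wave rectification and low-pass filtering below 400 Hz, and re-modulation of each band envelope with a sinusoid at the F0 harmonic nearest the band center. F0 is fixed in this sketch; a real implementation would estimate it frame by frame.

import numpy as np
from scipy import signal

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 800 * t)   # toy voiced input
f0 = 200.0                                                                  # assumed fundamental frequency

bands = [(300, 621), (621, 1285), (1285, 2657), (2657, 5499)]
b_env, a_env = signal.butter(2, 400 / (fs / 2), "lowpass")                  # TEPCs below 400 Hz

out = np.zeros_like(t)
for lo, hi in bands:
    b, a = signal.butter(4, [lo / (fs / 2), hi / (fs / 2)], "bandpass")
    band_sig = signal.lfilter(b, a, speech)
    envelope = signal.lfilter(b_env, a_env, np.abs(band_sig))               # full-wave rectify + low-pass
    fc = np.sqrt(lo * hi)                                                   # band center frequency
    carrier_f = f0 * max(1, int(np.rint(fc / f0)))                          # nearest F0 harmonic
    out += envelope * np.sin(2 * np.pi * carrier_f * t)
print(out[:5])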
Lu, Hang; McComas, Katherine A; Besley, John C
2017-01-01
Genetic modification (GM) of crops and climate change are arguably two of today's most challenging science communication issues. Increasingly, these two issues are connected in messages proposing GM as a viable option for ensuring global food security threatened by climate change. This study examines the effects of messages promoting the benefits of GM in the context of climate change. Further, it examines whether explicit reference to "climate change," or "global warming" in a GM message results in different effects than each other, or an implicit climate reference. An online sample of U.S. participants (N = 1050) were randomly assigned to one of four conditions: "climate change" cue, "global warming" cue, implicit cue, or control (no message). Generally speaking, framing GM crops as a way to help ensure global food security proved to be an effective messaging strategy in increasing positive attitudes toward GM. In addition, the implicit cue condition led to liberals having more positive attitudes and behavioral intentions toward GM than the "climate change" cue condition, an effect mediated by message evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Best, Andrew; Kapalo, Katelynn A.; Warta, Samantha F.; Fiore, Stephen M.
2016-05-01
Human-robot teaming largely relies on the ability of machines to respond and relate to human social signals. Prior work in Social Signal Processing has drawn a distinction between social cues (discrete, observable features) and social signals (underlying meaning). For machines to attribute meaning to behavior, they must first understand some probabilistic relationship between the cues presented and the signal conveyed. Using data derived from a study in which participants identified a set of salient social signals in a simulated scenario and indicated the cues related to the perceived signals, we detail a learning algorithm, which clusters social cue observations and defines an "N-Most Likely States" set for each cluster. Since multiple signals may be co-present in a given simulation and a set of social cues often maps to multiple social signals, the "N-Most Likely States" approach provides a dramatic improvement over typical linear classifiers. We find that the target social signal appears in a "3 most-likely signals" set with up to 85% probability. This results in increased speed and accuracy on large amounts of data, which is critical for modeling social cognition mechanisms in robots to facilitate more natural human-robot interaction. These results also demonstrate the utility of such an approach in deployed scenarios where robots need to communicate with human teammates quickly and efficiently. In this paper, we detail our algorithm, comparative results, and offer potential applications for robot social signal detection and machine-aided human social signal detection.
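A sketch of an "N-most likely states" lookup under stated assumptions: cue observations are clustered (k-means here), each cluster stores the N signal labels most often seen among its members, and a new observation is answered with the candidate set of its nearest cluster. The data, labels, and cluster count are synthetic placeholders.

from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.random((300, 5))                          # 300 cue observations, 5 cue features
signals = rng.integers(0, 8, size=300)            # social-signal label observed with each cue vector

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)

N = 3
likely = {c: [s for s, _ in Counter(signals[km.labels_ == c]).most_common(N)]
          for c in range(10)}                     # N most likely signals per cue cluster

def n_most_likely(obs):
    return likely[int(km.predict(obs.reshape(1, -1))[0])]

print(n_most_likely(rng.random(5)))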
Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.
Bissmeyer, Susan R S; Goldsworthy, Raymond L
2017-09-01
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.
The Use of Voice Cues for Speaker Gender Recognition in Cochlear Implant Recipients
ERIC Educational Resources Information Center
Meister, Hartmut; Fürsen, Katrin; Streicher, Barbara; Lang-Roth, Ruth; Walger, Martin
2016-01-01
Purpose: The focus of this study was to examine the influence of fundamental frequency (F0) and vocal tract length (VTL) modifications on speaker gender recognition in cochlear implant (CI) recipients for different stimulus types. Method: Single words and sentences were manipulated using isolated or combined F0 and VTL cues. Using an 11-point…
Attwood, Angela S; Williams, Tim; Adams, Sally; McClernon, Francis J; Munafò, Marcus R
2014-10-07
Smoking-related cues can trigger drug-seeking behaviors, and computer-based interventions that reduce cognitive biases towards such cues may be efficacious and cost-effective cessation aids. In order to optimize such interventions, there needs to be better understanding of the mechanisms underlying the effects of cognitive bias modification (CBM). Here we present a protocol for an investigation of the neural effects of CBM and varenicline in non-quitting daily smokers. We will recruit 72 daily smokers who report smoking at least 10 manufactured cigarettes or 15 roll-ups per day and who smoke within one hour of waking. Participants will attend two sessions approximately one week apart. At the first session participants will be screened for eligibility and randomized to receive either varenicline or a placebo over a seven-day period. On the final drug-taking day (day seven) participants will attend a second session and be further randomized to one of three CBM conditions (training towards smoking cues, training away from smoking cues, or control training). Participants will then undergo a functional magnetic resonance imaging scan during which they will view smoking-related pictorial cues. Primary outcome measures are changes in cognitive bias as measured by the visual dot-probe task, and neural responses to smoking-related cues. Secondary outcome measures will be cognitive bias as measured by a transfer task (modified Stroop test of smoking-related cognitive bias) and subjective mood and cigarette craving. This study will add to the relatively small literature examining the effects of CBM in addictions. It will address novel questions regarding the neural effects of CBM. It will also investigate whether varenicline treatment alters neural response to smoking-related cues. These findings will inform future research that can develop behavioral treatments that target relapse prevention. Registered with Current Controlled Trials: ISRCTN65690030. Registered on 30 January 2014.
Boutelle, Kerri N; Kuckertz, Jennie M; Carlson, Jordan; Amir, Nader
2014-05-01
There are a number of neurocognitive and behavioral mechanisms that contribute to overeating and obesity, including an attentional bias to food cues. Attention modification programs, which implicitly train attention away from specific cues, have been used in anxiety and substance abuse, and could logically be applied to food cues. The purpose of this study was to evaluate the initial efficacy of a single-session attention modification training for food cues (AMP) on overeating in overweight and obese children. Twenty-four obese children who eat in the absence of hunger participated in two visits and were assigned to an attention modification program (AMP) or an attentional control program (ACC). The AMP program trained attention away from food words toward neutral words 100% of the time. The ACC program directed attention to neutral words 50% of the time and to food words 50% of the time. Outcome measures included the eating in the absence of hunger (EAH) free access session and measures of craving, liking and salivation. Results revealed significant treatment effects for EAH percent and EAH kcal (group by time interactions p<.05). Children in the ACC condition showed a significant increase over time in the number of calories consumed in the free access session (within group t=3.09, p=.009) as well as the percent of daily caloric needs consumed in free access (within group t=3.37, p=.006), whereas children in the AMP group demonstrated slight decreases in these variables (within group t=-0.75 and -0.63, respectively). There was a trend suggesting a beneficial effect of AMP as compared to ACC for attentional bias (group by time interaction p=.073). Changes in craving, liking and salivation were not significantly different between groups (ps=.178-.527). This is the first study to demonstrate that an AMP program can influence eating in obese children. Larger studies are needed to replicate and extend these results. Copyright © 2014. Published by Elsevier Ltd.
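The training contingency can be sketched as a trial generator: on each dot-probe trial a food word and a neutral word appear together, and the probe replaces the neutral word with probability p_neutral, so p_neutral = 1.0 corresponds to the 100% train-away (AMP) schedule and 0.5 to the control (ACC) schedule. Word lists and trial counts are placeholders.

import random

food_words = ["cookie", "pizza", "chocolate", "fries"]
neutral_words = ["pencil", "carpet", "window", "bottle"]

def make_trials(n_trials, p_neutral, seed=0):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        food, neutral = rng.choice(food_words), rng.choice(neutral_words)
        left, right = (food, neutral) if rng.random() < 0.5 else (neutral, food)
        probe_word = neutral if rng.random() < p_neutral else food   # the training contingency
        trials.append({"left": left, "right": right,
                       "probe_side": "left" if probe_word == left else "right"})
    return trials

amp_trials = make_trials(160, p_neutral=1.0)   # attention trained away from food 100% of the time
acc_trials = make_trials(160, p_neutral=0.5)   # control: probe equally often at food and neutral
print(amp_trials[0])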
Attentional bias modification encourages healthy eating.
Kakoschke, Naomi; Kemps, Eva; Tiggemann, Marika
2014-01-01
The continual exposure to unhealthy food cues in the environment encourages poor dietary habits, in particular consuming too much fat and sugar, and not enough fruit and vegetables. According to Berridge's (2009) model of food reward, unhealthy eating is a behavioural response to biased attentional processing. The present study used an established attentional bias modification paradigm to discourage the consumption of unhealthy food and instead promote healthy eating. Participants were 146 undergraduate women who were randomly assigned to two groups: one was trained to direct their attention toward pictures of healthy food ('attend healthy' group) and the other toward unhealthy food ('attend unhealthy' group). It was found that participants trained to attend to healthy food cues demonstrated an increased attentional bias for such cues and ate relatively more of the healthy than unhealthy snacks compared to the 'attend unhealthy' group. Theoretically, the results support the postulated link between biased attentional processing and consumption (Berridge, 2009). At a practical level, they offer potential scope for interventions that focus on eating well. Copyright © 2013 Elsevier Ltd. All rights reserved.
de Taillez, Tobias; Grimm, Giso; Kollmeier, Birger; Neher, Tobias
2018-06-01
To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality. The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.
Vijfhuizen, Malou; Bok, Harold; Matthew, Susan M; Del Piccolo, Lidia; McArthur, Michelle
2017-04-01
To explore the applicability, need for modifications and reliability of the VR-CoDES in a veterinary setting, while also gaining a deeper understanding of clients' expressions of negative emotion and how they are addressed by veterinarians. The Verona Coding Definitions of Emotional Sequences for client cues and concerns (VR-CoDES-CC) and health provider responses (VR-CoDES-P) were used to analyse 20 audiotaped veterinary consultations. Inter-rater reliability was established. The applicability of the VR-CoDES definitions was identified, together with the need for specific modifications to suit veterinary consultations. The VR-CoDES-CC and VR-CoDES-P generally applied to veterinary consultations. Cue and concern reliability was found to be satisfactory for most types of cues, but not for concerns. Response reliability was satisfactory for explicitness, and for providing and reducing space for further disclosure. Modifications to the original coding system were necessary to accurately reflect the veterinary context and included minor additions to the VR-CoDES-CC. With minor additions, including codes for guilt, reassurance and cost discussions, the VR-CoDES can be reliably adopted to assess clients' implicit expressions of negative emotion and veterinarians' responses. The modified VR-CoDES could be of great value when combined with existing frameworks used for teaching and researching veterinary communication. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Monaghan, Jessica J. M.; Seeber, Bernhard U.
2017-01-01
The ability of normal-hearing (NH) listeners to exploit interaural time difference (ITD) cues conveyed in the modulated envelopes of high-frequency sounds is poor compared to ITD cues transmitted in the temporal fine structure at low frequencies. Sensitivity to envelope ITDs is further degraded when envelopes become less steep, when modulation depth is reduced, and when envelopes become less similar between the ears, common factors when listening in reverberant environments. The vulnerability of envelope ITDs is particularly problematic for cochlear implant (CI) users, as they rely on information conveyed by slowly varying amplitude envelopes. Here, an approach to improve access to envelope ITDs for CIs is described in which, rather than attempting to reduce reverberation, the perceptual saliency of cues relating to the source is increased by selectively sharpening peaks in the amplitude envelope judged to contain reliable ITDs. Performance of the algorithm with room reverberation was assessed through simulating listening with bilateral CIs in headphone experiments with NH listeners. Relative to simulated standard CI processing, stimuli processed with the algorithm generated lower ITD discrimination thresholds and increased extents of laterality. Depending on parameterization, intelligibility was unchanged or somewhat reduced. The algorithm has the potential to improve spatial listening with CIs. PMID:27586742
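A heavily simplified sketch of envelope sharpening: the amplitude envelope is extracted with a Hilbert transform and expanded with a power law, which steepens envelope peaks before they are re-imposed on the fine structure. The published algorithm sharpens only peaks judged to carry reliable ITDs; that selection step is omitted here.

import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = 0.5 * (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)   # 4 Hz modulated tone

analytic = hilbert(x)
env = np.abs(analytic)                                # extracted amplitude envelope
sharp = env ** 3                                      # power-law expansion steepens envelope peaks
sharp *= env.max() / sharp.max()                      # restore the original peak level
y = sharp * np.cos(np.angle(analytic))                # re-impose envelope on the fine structure
print(y[:5])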
Boettcher, Johanna; Leek, Linda; Matson, Lisa; Holmes, Emily A.; Browning, Michael; MacLeod, Colin; Andersson, Gerhard; Carlbring, Per
2013-01-01
Biases in attention processes are thought to play a crucial role in the aetiology and maintenance of Social Anxiety Disorder (SAD). The goal of the present study was to examine the efficacy of a programme intended to train attention towards positive cues and a programme intended to train attention towards negative cues. In a randomised, controlled, double-blind design, the impact of these two training conditions on both selective attention and social anxiety were compared to that of a control training condition. A modified dot probe task was used, and delivered via the internet. A total of 129 individuals, diagnosed with SAD, were randomly assigned to one of these three conditions and took part in a 14-day programme with daily training/control sessions. Participants in all three groups did not on average display an attentional bias prior to the training. Critically, results on change in attention bias implied that significantly differential change in selective attention to threat was not detected in the three conditions. However, symptoms of social anxiety reduced significantly from pre- to follow-up-assessment in all three conditions (d_within = 0.63–1.24), with the procedure intended to train attention towards threat cues producing, relative to the control condition, a significantly greater reduction of social fears. There were no significant differences in social anxiety outcome between the training condition intended to induce attentional bias towards positive cues and the control condition. To our knowledge, this is the first RCT where a condition intended to induce attention bias to negative cues yielded greater emotional benefits than a control condition. Intriguingly, changes in symptoms are unlikely to be by the mechanism of change in attention processes since there was no change detected in bias per se. Implications of this finding for future research on attention bias modification in social anxiety are discussed. Trial Registration: ClinicalTrials.gov NCT01463137. PMID:24098630
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
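The common-gain idea central to spatial cue preservation can be sketched as follows: a single real-valued, Wiener-style gain per time-frequency bin, derived from a noise PSD estimate, is applied identically to the left and right channels so interaural phase and level differences are untouched. The STFT parameters and the flat noise estimate are placeholders, not the paper's estimator.

import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(4)
left = rng.standard_normal(fs)                        # stand-ins for the two ear signals
right = rng.standard_normal(fs)
noise_psd = np.full(129, 0.5)                         # flat placeholder noise estimate (|STFT|^2 scale)

f, frames, L = stft(left, fs, nperseg=256)
_, _, R = stft(right, fs, nperseg=256)

mix_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)     # binaural mixture power per bin
gain = np.clip(1.0 - noise_psd[:, None] / np.maximum(mix_psd, 1e-8), 0.1, 1.0)

_, left_enh = istft(gain * L, fs, nperseg=256)        # identical real gain on both ears,
_, right_enh = istft(gain * R, fs, nperseg=256)       # so interaural phase/level ratios are preserved
print(left_enh[:5])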
Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning
Weisswange, Thomas H.; Rothkopf, Constantin A.; Rodemann, Tobias; Triesch, Jochen
2011-01-01
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises, how it is learned. Here we investigated whether reward dependent learning, that is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities and even do so if reliabilities are changing. We also consider the case of causal inference where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward mediated learning could be a driving force for the development of cue integration and causal inference. PMID:21750717
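A toy reward-mediated cue-integration learner, under stated assumptions: two noisy cues of a hidden target are combined with weights nudged along the reward gradient (reward being negative squared error) and renormalized to sum to one. After training the weights approximately match the Bayesian reliability weighting. This illustrates the idea only and is not the authors' reinforcement learning model.

import numpy as np

rng = np.random.default_rng(5)
sigma = np.array([1.0, 2.0])                  # cue noise; cue 0 is the more reliable cue
w = np.array([0.5, 0.5])
lr = 0.01

for _ in range(20000):
    target = rng.uniform(-5, 5)
    cues = target + sigma * rng.standard_normal(2)
    err = w @ cues - target                   # reward = -err**2
    w -= lr * err * cues / (1 + cues @ cues)  # follow the reward gradient (normalized step)
    w = np.clip(w, 0, None)
    w /= w.sum()                              # keep the weights on the simplex

bayes = (1 / sigma**2) / np.sum(1 / sigma**2)
print("learned:", np.round(w, 2), "bayes:", np.round(bayes, 2))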
A magnetorheological haptic cue accelerator for manual transmission vehicles
NASA Astrophysics Data System (ADS)
Han, Young-Min; Noh, Kyung-Wook; Lee, Yang-Sub; Choi, Seung-Bok
2010-07-01
This paper proposes a new haptic cue function for manual transmission vehicles to achieve optimal gear shifting. This function is implemented on the accelerator pedal by utilizing a magnetorheological (MR) brake mechanism. By combining the haptic cue function with the accelerator pedal, the proposed haptic cue device can transmit the optimal moment of gear shifting for manual transmission to a driver without requiring the driver's visual attention. As a first step to achieve this goal, a MR fluid-based haptic device is devised to enable rotary motion of the accelerator pedal. Taking into account spatial limitations, the design parameters are optimally determined using finite element analysis to maximize the relative control torque. The proposed haptic cue device is then manufactured and its field-dependent torque and time response are experimentally evaluated. Then the manufactured MR haptic cue device is integrated with the accelerator pedal. A simple virtual vehicle emulating the operation of the engine of a passenger vehicle is constructed and put into communication with the haptic cue device. A feed-forward torque control algorithm for the haptic cue is formulated and control performances are experimentally evaluated and presented in the time domain.
NASA Technical Reports Server (NTRS)
Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.
2003-01-01
A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal control based algorithm. The study also included the effects of transport delays and the compensation thereof. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports the results of analyzing the experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gusts, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.
Yoo, Junsang; Chang, Yujung; Kim, Hongwon; Baek, Soonbong; Choi, Hwan; Jeong, Gun-Jae; Shin, Jaein; Kim, Hongnam; Kim, Byung-Soo; Kim, Jongpil
2017-03-01
Induced cardiomyocytes (iCMs) generated via direct lineage reprogramming offer a novel therapeutic target for the study and treatment of cardiac diseases. However, the efficiency of iCM generation is significantly low for therapeutic applications. Here, we show an efficient direct conversion of somatic fibroblasts into iCMs using nanotopographic cues. Compared with flat substrates, the direct conversion of fibroblasts into iCMs on nanopatterned substrates resulted in a dramatic increase in the reprogramming efficiency and maturation of iCM phenotypes. Additionally, enhanced reprogramming by substrate nanotopography was due to changes in the activation of focal adhesion kinase and specific histone modifications. Taken together, these results suggest that nanotopographic cues can serve as an efficient stimulant for direct lineage reprogramming into iCMs.
Mondragón, Esther; Gray, Jonathan; Alonso, Eduardo; Bonardi, Charlotte; Jennings, Dómhnall J.
2014-01-01
This paper presents a novel representational framework for the Temporal Difference (TD) model of learning, which allows the computation of configural stimuli – cumulative compounds of stimuli that generate perceptual emergents known as configural cues. This Simultaneous and Serial Configural-cue Compound Stimuli Temporal Difference model (SSCC TD) can model both simultaneous and serial stimulus compounds, as well as compounds including the experimental context. This modification significantly broadens the range of phenomena which the TD paradigm can explain, and allows it to predict phenomena which traditional TD solutions cannot, particularly effects that depend on compound stimuli functioning as a whole, such as pattern learning and serial structural discriminations, and context-related effects. PMID:25054799
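The configural-cue representation can be illustrated at the trial level: the stimulus vector contains elemental units (A, B) plus a configural unit active only for the AB compound, which lets a simple prediction-error rule solve negative patterning (A+, B+, AB-). This is a simplification of the full real-time SSCC TD model described above.

import numpy as np

def rep(a, b):
    return np.array([a, b, a * b], dtype=float)      # elemental units A, B plus configural unit AB

trials = [(rep(1, 0), 1.0), (rep(0, 1), 1.0), (rep(1, 1), 0.0)]   # negative patterning: A+, B+, AB-
w = np.zeros(3)
alpha = 0.2
for _ in range(300):
    for x, r in trials:
        w += alpha * (r - w @ x) * x                 # trial-level prediction-error update
for x, r in trials:
    print(f"predicted {w @ x:+.2f}   trained outcome {r:+.1f}")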
Sustained effects of attentional re-training on chocolate consumption.
Kemps, Eva; Tiggemann, Marika; Elford, Joanna
2015-12-01
Accumulating evidence shows that cognitive bias modification produces immediate changes in attentional bias for, and consumption of, rewarding substances including food. This study examined the longevity of these attentional bias modification effects. A modified dot probe paradigm was used to determine whether alterations in biased attentional processing of food cues, and subsequent effects on consumption, were maintained at 24-h and one-week follow-up. One hundred and forty-nine undergraduate women were trained to direct their attention toward ('attend') or away from ('avoid') food cues (i.e., pictures of chocolate). Within each group, half received a single training session, the other half completed 5 weekly training sessions. Attentional bias for chocolate cues increased in the 'attend' group, and decreased in the 'avoid' group immediately post training. Participants in the 'avoid' group also ate disproportionately less of a chocolate food product in a so-called taste test than did those in the 'attend' group. Importantly, the observed re-training effects were maintained 24 h later and also one week later, but only following multiple training sessions. There are a number of limitations that could be addressed in future research: (a) the inclusion of a no-training control group, (b) the inclusion of a suspicion probe to detect awareness of the purpose of the taste test, and (c) the use of different tasks to assess and re-train attentional bias. The results showed sustained effects of attentional re-training on attentional bias and consumption. They further demonstrate the importance of administering multiple re-training sessions in attentional bias modification protocols. Copyright © 2014 Elsevier Ltd. All rights reserved.
Driver Distraction Using Visual-Based Sensors and Algorithms.
Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén
2016-10-28
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins ( Tursiops truncatus ) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.
Trans-algorithmic nature of learning in biological systems.
Shimansky, Yury P
2018-05-02
Learning ability is a vitally important, distinctive property of biological systems, which provides dynamic stability in non-stationary environments. Although several different types of learning have been successfully modeled using a universal computer, in general, learning cannot be described by an algorithm. In other words, an algorithmic approach to describing the functioning of biological systems is not sufficient for an adequate grasp of what life is. Since biosystems are parts of the physical world, one might hope that adding some physical mechanisms and principles to the concept of an algorithm could provide extra possibilities for describing learning in its full generality. However, a straightforward approach to this through so-called physical hypercomputation has so far not been successful. Here an alternative approach is proposed. Biosystems are described as achieving enumeration of possible physical compositions through random incremental modifications inflicted on them by active operating resources (AORs) in the environment. Biosystems learn through algorithmic regulation of the intensity of the above modifications according to a specific optimality criterion. From the perspective of external observers, biosystems move in the space of different algorithms, driven by random modifications imposed by the environmental AORs. A particular algorithm is only a snapshot of that motion, while the motion itself is essentially trans-algorithmic. In this conceptual framework, the death of unfit members of a population, for example, is viewed as a trans-algorithmic modification made to the population as a biosystem by environmental AORs. Numerous examples of AOR utilization in biosystems of different complexity, from viruses to multicellular organisms, are provided.
Mogg, Karin; Waters, Allison M.; Bradley, Brendan P.
2017-01-01
Attention bias modification (ABM) aims to reduce anxiety by reducing attention bias (AB) to threat; however, effects on anxiety and AB are variable. This review examines 34 studies assessing effects of multisession-ABM on both anxiety and AB in high-anxious individuals. Methods include ABM-threat-avoidance (promoting attention-orienting away from threat), ABM-positive-search (promoting explicit, goal-directed attention-search for positive/nonthreat targets among negative/threat distractors), and comparison conditions (e.g., control-attention training combining threat-cue exposure and attention-task practice without AB-modification). Findings indicate anxiety reduction often occurs during both ABM-threat-avoidance and control-attention training; anxiety reduction is not consistently accompanied by AB reduction; anxious individuals often show no pretraining AB in orienting toward threat; and ABM-positive-search training appears promising in reducing anxiety. Methodological and theoretical issues are discussed concerning ABM paradigms, comparison conditions, and AB assessment. ABM methods combining explicit goal-directed attention-search for nonthreat/positive information and effortful threat-distractor inhibition (promoting top-down cognitive control during threat-cue exposure) warrant further evaluation. PMID:28752017
An Attempt to Target Anxiety Sensitivity via Cognitive Bias Modification
Clerkin, Elise M.; Beard, Courtney; Fisher, Christopher R.; Schofield, Casey A
2015-01-01
Our goals in the present study were to test an adaptation of a Cognitive Bias Modification program to reduce anxiety sensitivity, and to evaluate the causal relationships between interpretation bias of physiological cues, anxiety sensitivity, and anxiety and avoidance associated with interoceptive exposures. Participants with elevated anxiety sensitivity who endorsed having a panic attack or limited symptom attack were randomly assigned to either an Interpretation Modification Program (IMP; n = 33) or a Control (n = 32) condition. During interpretation modification training (via the Word Sentence Association Paradigm), participants read short sentences describing ambiguous panic-relevant physiological and cognitive symptoms and were trained to endorse benign interpretations and reject threatening interpretations associated with these cues. Compared to the Control condition, IMP training successfully increased endorsements of benign interpretations and decreased endorsements of threatening interpretations at visit 2. Although self-reported anxiety sensitivity decreased from pre-selection to visit 1 and from visit 1 to visit 2, the reduction was not larger for the experimental versus control condition. Further, participants in IMP (vs. Control) training did not experience less anxiety and avoidance associated with interoceptive exposures. In fact, there was some evidence that those in the Control condition experienced less avoidance following training. Potential explanations for the null findings, including problems with the benign panic-relevant stimuli and limitations with the control condition, are discussed. PMID:25692491
Con-Text: Text Detection for Fine-grained Object Classification.
Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo
2017-05-24
This work focuses on fine-grained object classification using recognized scene text in natural images. While the state-of-the-art relies on visual cues only, this paper is the first work which proposes to combine textual and visual cues. Another novelty is the textual cue extraction. Unlike the state-of-the-art text detection methods, we focus more on the background instead of text regions. Once text regions are detected, they are further processed by two methods to perform text recognition i.e. ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bi- and trigrams are formed between the recognized characters by considering the proposed spatial pairwise constraints. Finally, extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text and Flickr-logo. We improve the state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification. We show that textual cues are also useful for logo retrieval. Adding textual cues outperforms visual- and textual-only in fine-grained classification (70.7% to 60.3%) and logo retrieval (57.4% to 54.8%).
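As an illustration of the textual-cue encoding step, the following sketch forms character bigrams only between detections that satisfy simple spatial pairwise constraints; the gap and vertical-overlap thresholds are assumptions for the example, not the constraints used in the paper.

    # Minimal sketch of spatially constrained bigram formation between recognized
    # characters. Thresholds (max_gap, min_v_overlap) are illustrative assumptions.

    def spatial_bigrams(chars, max_gap=1.0, min_v_overlap=0.5):
        # chars: list of (label, x, y, w, h) for recognized characters
        bigrams = []
        for a in chars:
            for b in chars:
                if a is b:
                    continue
                la, xa, ya, wa, ha = a
                lb, xb, yb, wb, hb = b
                gap = xb - (xa + wa)                                   # b to the right of a
                overlap = (min(ya + ha, yb + hb) - max(ya, yb)) / min(ha, hb)
                if 0 <= gap <= max_gap * wa and overlap >= min_v_overlap:
                    bigrams.append(la + lb)
        return bigrams

    if __name__ == "__main__":
        detections = [("c", 0, 0, 10, 20), ("a", 12, 1, 10, 20),
                      ("f", 24, 0, 10, 20), ("x", 200, 0, 10, 20)]
        print(spatial_bigrams(detections))   # ['ca', 'af']; the far-away 'x' pairs with nothing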
Attentional bias to food-related visual cues: is there a role in obesity?
Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M
2015-02-01
The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.
Aarts, Esther; Roelofs, Ardi; van Turennout, Miranda
2008-04-30
Previous studies have found no agreement on whether anticipatory activity in the anterior cingulate cortex (ACC) reflects upcoming conflict, error likelihood, or actual control adjustments. Using event-related functional magnetic resonance imaging, we investigated the nature of preparatory activity in the ACC. Informative cues told the participants whether an upcoming target would or would not involve conflict in a Stroop-like task. Uninformative cues provided no such information. Behavioral responses were faster after informative than after uninformative cues, indicating cue-based adjustments in control. ACC activity was larger after informative than uninformative cues, as would be expected if the ACC is involved in anticipatory control. Importantly, this activation in the ACC was observed for informative cues even when the information conveyed by the cue was that the upcoming target evokes no response conflict and has low error likelihood. This finding demonstrates that the ACC is involved in anticipatory control processes independent of upcoming response conflict or error likelihood. Moreover, the response of the ACC to the target stimuli was critically dependent on whether the cue was informative or not. ACC activity differed among target conditions after uninformative cues only, indicating ACC involvement in actual control adjustments. Together, these findings argue strongly for a role of the ACC in anticipatory control independent of anticipated conflict and error likelihood, and also show that such control can eliminate conflict-related ACC activity during target processing. Models of frontal cortex conflict-detection and conflict-resolution mechanisms require modification to include consideration of these anticipatory control properties of the ACC.
Smokers exhibit biased neural processing of smoking and affective images.
Oliver, Jason A; Jentink, Kade G; Drobes, David J; Evans, David E
2016-08-01
There has been growing interest in the role that implicit processing of drug cues can play in motivating drug use behavior. However, the extent to which drug cue processing biases relate to the processing biases exhibited to other types of evocative stimuli is largely unknown. The goal of the present study was to determine how the implicit cognitive processing of smoking cues relates to the processing of affective cues using a novel paradigm. Smokers (n = 50) and nonsmokers (n = 38) completed a picture-viewing task, in which participants were presented with a series of smoking, pleasant, unpleasant, and neutral images while engaging in a distractor task designed to direct controlled resources away from conscious processing of image content. Electroencephalogram recordings were obtained throughout the task for extraction of event-related potentials (ERPs). Smokers exhibited differential processing of smoking cues across 3 different ERP indices compared with nonsmokers. Comparable effects were found for pleasant cues on 2 of these indices. Late cognitive processing of smoking and pleasant cues was associated with nicotine dependence and cigarette use. Results suggest that cognitive biases may extend across classes of stimuli among smokers. This raises important questions about the fundamental meaning of cognitive biases, and suggests the need to consider generalized cognitive biases in theories of drug use behavior and interventions based on cognitive bias modification. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Monocular Depth Perception and Robotic Grasping of Novel Objects
2009-06-01
resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show
On the ability of human listeners to distinguish between front and back
Zhang, Peter Xinya; Hartmann, William M.
2009-01-01
In order to determine whether a sound source is in front or in back, listeners can use location-dependent spectral cues caused by diffraction from their anatomy. This capability was studied using a precise virtual-reality technique (VRX) based on a transaural technology. Presented with a virtual baseline simulation accurate up to 16 kHz, listeners could not distinguish between the simulation and a real source. Experiments requiring listeners to discriminate between front and back locations were performed using controlled modifications of the baseline simulation to test hypotheses about the important spectral cues. The experiments concluded: (1) Front/back cues were not confined to any particular 1/3rd or 2/3rd octave frequency region. Often adequate cues were available in any of several disjoint frequency regions. (2) Spectral dips were more important than spectral peaks. (3) Neither monaural cues nor interaural spectral level difference cues were adequate. (4) Replacing baseline spectra by sharpened spectra had minimal effect on discrimination performance. (5) When presented with an interaural time difference less than 200 μs, which pulled the image to the side, listeners still successfully discriminated between front and back, suggesting that front/back discrimination is independent of azimuthal localization within certain limits. PMID:19900525
Enhancements to AERMOD’s Building Downwash Algorithms based on Wind Tunnel and Embedded-LES Modeling
This presentation presents three modifications to the building downwash algorithm in AERMOD that improve the physical basis and internal consistency of the model, and one modification to AERMOD’s building pre-processor to better represent elongated buildings in oblique wind...
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
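For reference, a minimal textbook form of the Uzawa iteration on a small dense saddle-point system (the block structure that arises in mixed finite-element formulations for incompressible materials) is sketched below; the matrices and the step size tau are illustrative, and the paper's variants differ mainly in how the inner solve and the pressure update are carried out.

    # Textbook Uzawa iteration for the saddle-point system
    #   [A  B^T][u]   [f]
    #   [B   0 ][p] = [g]
    # Example data and step size are illustrative only.

    import numpy as np

    def uzawa(A, B, f, g, tau=1.0, tol=1e-10, max_iter=500):
        p = np.zeros(B.shape[0])
        for _ in range(max_iter):
            u = np.linalg.solve(A, f - B.T @ p)   # inner solve for the displacement block
            r = B @ u - g                         # residual of the constraint equation
            p = p + tau * r                       # gradient-type pressure update
            if np.linalg.norm(r) < tol:
                break
        return u, p

    if __name__ == "__main__":
        A = np.array([[4.0, 1.0], [1.0, 3.0]])    # SPD displacement block
        B = np.array([[1.0, 2.0]])                # constraint (e.g., discrete divergence)
        f = np.array([1.0, 2.0])
        g = np.array([0.5])
        u, p = uzawa(A, B, f, g, tau=0.8)
        print("u =", u, "p =", p, "constraint residual =", B @ u - g)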
Sensorimotor Model of Obstacle Avoidance in Echolocating Bats
Vanderelst, Dieter; Holderied, Marc W.; Peremans, Herbert
2015-01-01
Bat echolocation is an ability consisting of many subtasks such as navigation, prey detection and object recognition. Understanding the echolocation capabilities of bats comes down to isolating the minimal set of acoustic cues needed to complete each task. For some tasks, the minimal cues have already been identified. However, while a number of possible cues have been suggested, little is known about the minimal cues supporting obstacle avoidance in echolocating bats. In this paper, we propose that the Interaural Intensity Difference (IID) and travel time of the first millisecond of the echo train are sufficient cues for obstacle avoidance. We describe a simple control algorithm based on the use of these cues in combination with alternating ear positions modeled after the constant frequency bat Rhinolophus rouxii. Using spatial simulations (2D and 3D), we show that simple phonotaxis can steer a bat clear from obstacles without performing a reconstruction of the 3D layout of the scene. As such, this paper presents the first computationally explicit explanation for obstacle avoidance validated in complex simulated environments. Based on additional simulations modelling the FM bat Phyllostomus discolor, we conjecture that the proposed cues can be exploited by constant frequency (CF) bats and frequency modulated (FM) bats alike. We hypothesize that using a low level yet robust cue for obstacle avoidance allows bats to comply with the hard real-time constraints of this basic behaviour. PMID:26502063
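A minimal sketch of the proposed low-level steering rule follows: turn away from the ear receiving the louder early echo (the IID cue) and slow down when the first echo returns quickly (short travel time implies a nearby obstacle). The gains, limits and distance handling are illustrative assumptions, not the parameters used in the simulations.

    # Sketch of IID-plus-travel-time obstacle avoidance. Gains and limits are assumptions.

    def avoidance_command(left_db, right_db, first_echo_ms, turn_gain=3.0, max_turn=45.0):
        iid = right_db - left_db                   # >0: echoes louder on the right
        turn_deg = -turn_gain * iid                # steer away from the louder side
        turn_deg = max(-max_turn, min(max_turn, turn_deg))
        obstacle_m = 343.0 * (first_echo_ms / 1000.0) / 2.0   # two-way travel time
        speed_factor = min(1.0, obstacle_m / 2.0)  # slow down within ~2 m of an obstacle
        return turn_deg, speed_factor

    if __name__ == "__main__":
        turn, speed = avoidance_command(left_db=62.0, right_db=70.0, first_echo_ms=6.0)
        print(f"turn {turn:+.1f} deg, speed scale {speed:.2f}")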
Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel
2015-01-01
Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
Differential processing of binocular and monocular gloss cues in human visual cortex.
Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E
2016-06-01
The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.
Self-referential forces are sufficient to explain different dendritic morphologies
Memelli, Heraldo; Torben-Nielsen, Benjamin; Kozloski, James
2013-01-01
Dendritic morphology constrains brain activity, as it determines first which neuronal circuits are possible and second which dendritic computations can be performed over a neuron's inputs. It is known that a range of chemical cues can influence the final shape of dendrites during development. Here, we investigate the extent to which self-referential influences, cues generated by the neuron itself, might influence morphology. To this end, we developed a phenomenological model and algorithm to generate virtual morphologies, which are then compared to experimentally reconstructed morphologies. In the model, branching probability follows a Galton–Watson process, while the geometry is determined by “homotypic forces” exerting influence on the direction of random growth in a constrained space. We model three such homotypic forces, namely an inertial force based on membrane stiffness, a soma-oriented tropism, and a force of self-avoidance, as directional biases in the growth algorithm. With computer simulations we explored how each bias shapes neuronal morphologies. We show that based on these principles, we can generate realistic morphologies of several distinct neuronal types. We discuss the extent to which homotypic forces might influence real dendritic morphologies, and speculate about the influence of other environmental cues on neuronal shape and circuitry. PMID:23386828
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
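The following sketch illustrates the general one-pass adaptive idea: vectors are coded against a small codebook that is built and updated on the fly, so no prior source statistics are required. It is only a simplified stand-in; the reported LAVQ algorithm (its codebook management, nonlinear quantization and lossless back end) is more elaborate.

    # Generic one-pass adaptive vector quantization sketch; thresholds and update
    # rule are illustrative assumptions, not the LAVQ design.

    import numpy as np

    def adaptive_vq(vectors, codebook_size=8, learning_rate=0.3, new_entry_threshold=50.0):
        codebook = []          # grows up to codebook_size entries
        indices = []
        for v in vectors:
            v = np.asarray(v, dtype=float)
            if not codebook:
                codebook.append(v.copy())
                indices.append(0)
                continue
            dists = [np.sum((v - c) ** 2) for c in codebook]
            best = int(np.argmin(dists))
            if dists[best] > new_entry_threshold and len(codebook) < codebook_size:
                codebook.append(v.copy())                 # poor match: spend a new codeword
                indices.append(len(codebook) - 1)
            else:
                codebook[best] += learning_rate * (v - codebook[best])   # adapt the winner
                indices.append(best)
        return indices, codebook

    if __name__ == "__main__":
        data = [[0, 0], [1, 1], [10, 10], [11, 9], [0, 1], [10, 11]]
        idx, cb = adaptive_vq(data)
        print("indices:", idx)
        print("codebook:", [c.round(2).tolist() for c in cb])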
Keough, Dwayne
2011-01-01
Research on the control of visually guided limb movements indicates that the brain learns and continuously updates an internal model that maps the relationship between motor commands and sensory feedback. A growing body of work suggests that an internal model that relates motor commands to sensory feedback also supports vocal control. There is evidence from arm-reaching studies that shows that when provided with a contextual cue, the motor system can acquire multiple internal models, which allows an animal to adapt to different perturbations in diverse contexts. In this study we show that trained singers can rapidly acquire multiple internal models regarding voice fundamental frequency (F0). These models accommodate different perturbations to ongoing auditory feedback. Participants heard three musical notes and reproduced each one in succession. The musical targets could serve as a contextual cue to indicate which direction (up or down) feedback would be altered on each trial; however, participants were not explicitly instructed to use this strategy. When participants were gradually exposed to altered feedback adaptation was observed immediately following vocal onset. Aftereffects were target specific and did not influence vocal productions on subsequent trials. When target notes were no longer a contextual cue, adaptation occurred during altered feedback trials and evidence for trial-by-trial adaptation was found. These findings indicate that the brain is exceptionally sensitive to the deviations between auditory feedback and the predicted consequence of a motor command during vocalization. Moreover, these results indicate that, with contextual cues, the vocal control system may maintain multiple internal models that are capable of independent modification during different tasks or environments. PMID:21346208
Das, Ravi K.; Gale, Grace; Hennessy, Vanessa; Kamboj, Sunjeev K.
2018-01-01
Maladaptive reward memories (MRMs) can become unstable following retrieval under certain conditions, allowing their modification by subsequent new learning. However, robust (well-rehearsed) and chronologically old MRMs, such as those underlying substance use disorders, do not destabilize easily when retrieved. A key determinant of memory destabilization during retrieval is prediction error (PE). We describe a retrieval procedure for alcohol MRMs in hazardous drinkers that specifically aims to maximize the generation of PE and therefore the likelihood of MRM destabilization. The procedure requires explicitly generating the expectancy of alcohol consumption and then violating this expectancy (withholding alcohol) following the presentation of a brief set of prototypical alcohol cue images (retrieval + PE). Control procedures either present the same cue images but allow alcohol to be consumed, generating minimal PE (retrieval-no PE), or generate PE without retrieval of alcohol MRMs by presenting orange juice cues (no retrieval + PE). Subsequently, we describe a multisensory disgust-based counterconditioning procedure to probe MRM destabilization by re-writing alcohol cue-reward associations prior to reconsolidation. This procedure pairs alcohol cues with images invoking pathogen disgust and an extremely bitter-tasting solution (denatonium benzoate), generating gustatory disgust. Following retrieval + PE, but not no retrieval + PE or retrieval-no PE, counterconditioning produces evidence of MRM rewriting as indexed by lasting reductions in alcohol cue valuation, attentional capture, and alcohol craving. PMID:29364255
Analysis of Modification Mechanism of Gait with Rhythmic Cueing Training Paradigm
NASA Astrophysics Data System (ADS)
Muto, Takeshi; Kanai, Tetsuya; Sakuta, Hiroshi; Miyake, Yoshihiro
In this research, we applied a gait training method that uses rhythmic auditory stimulation as a pacemaker to assist gait motion, and analyzed the dynamical stability of the period and trajectory of the lower-limb motions. The results showed that, in a training style presenting a constant rhythm, the ankle trajectory was modified toward a stable state with a history-dependent property, whereas the footstep period was not modified and remained susceptible to the external environment. This result suggests that a hierarchical modification mechanism of the gait motor schema is realized through the connection between an immediate and a history-dependent modification system.
Predicting Intelligibility Gains in Dysarthria through Automated Speech Feature Analysis
ERIC Educational Resources Information Center
Fletcher, Annalise R.; Wisler, Alan A.; McAuliffe, Megan J.; Lansford, Kaitlin L.; Liss, Julie M.
2017-01-01
Purpose: Behavioral speech modifications have variable effects on the intelligibility of speakers with dysarthria. In the companion article, a significant relationship was found between measures of speakers' baseline speech and their intelligibility gains following cues to speak louder and reduce rate (Fletcher, McAuliffe, Lansford, Sinex, &…
Zhang, Xinran; Li, Haotian; Lin, Chucheng; Ning, Congqin; Lin, Kaili
2018-01-30
Both topographic surface modification and chemical composition modification can enhance rapid osteogenic differentiation and bone formation. To date, however, the synergistic effects of topography and chemistry cues in guiding biological responses have rarely been reported. Here, an ordered micro-patterned topography and doping with strontium (Sr), a classically essential trace element, were selected to provide the topography and chemistry cues, respectively. The ordered micro-patterned topography on Sr-doped bioceramics was successfully replicated using a nylon sieve as the template. Biological response results revealed that either the micro-patterned topography or Sr doping alone could promote cell attachment, ALP activity, and osteogenic differentiation of bone marrow-derived mesenchymal stem cells (BMSCs). Most importantly, samples with both the micro-patterned topography and Sr doping showed the strongest promotion effects and could synergistically activate the ERK1/2 and p38 MAPK signaling pathways. The results suggest that grafts with both specific topography and chemistry cues have synergistic effects on the osteogenic activity of BMSCs and provide an effective approach to designing functional bone grafts and cell culture substrates.
NASA Technical Reports Server (NTRS)
Richards, J. T.; Mulavara, A. P.; Ruttley, T.; Peters, B. T.; Warren, L. E.; Bloomberg, J. J.
2006-01-01
We have previously shown that viewing simulated rotary self-motion during treadmill locomotion causes adaptive modification of the control of position and trajectory during over-ground locomotion, which functionally reflects adaptive changes in the sensorimotor integration of visual, vestibular, and proprioceptive cues (Mulavara et al., 2005). The objective of this study was to investigate how strategic changes in torso control during exposure to simulated rotary self-motion during treadmill walking influences adaptive modification of locomotor heading direction during over-ground stepping.
Beardsley, Patrick M.; Shelton, Keith L.
2012-01-01
This unit describes the testing of rats in prime-, footshock- and cue-induced reinstatement procedures. Evaluating rats in these procedures enables the assessment of treatments on behavior thought to model drug relapse precipitated by re-contact with an abused drug (prime-induced), induced by stress (footshock-induced), or by stimuli previously associated with drug administration (cue-induced). For instance, levels of reinstatement under the effects of test compound administration could be compared to levels under vehicle administration to help identify potential treatments for drug relapse, or reinstatement levels of different rat strains could be compared to identify potential genetic determinants of perseverative drug-seeking behavior. Cocaine is used as a prototypical drug of abuse, and relapse to its use serves as the model in this unit, but other self-administered drugs could readily be substituted with little modification to the procedures. PMID:23093352
Cue exposure in moderation drinking: a comparison with cognitive-behavior therapy.
Sitharthan, T; Sitharthan, G; Hough, M J; Kavanagh, D J
1997-10-01
To date, the published controlled trials on exposure to alcohol cues have had an abstinence treatment goal. A modification of cue exposure (CE) for moderation drinking, which incorporated priming doses of alcohol, could train participants to stop drinking after 2 to 3 drinks. This study examined the effects of modified CE within sessions, combined with directed homework practice. Nondependent problem drinkers who requested a moderation drinking goal were randomly allocated to modified CE or standard cognitive-behavior therapy (CBT) for alcohol abuse. Both interventions were delivered in 6 90-min group sessions. Eighty-one percent of eligible participants completed treatment and follow-up assessment. Over 6 months, CE produced significantly greater reductions than CBT in participants' reports of drinking frequency and consumption on each occasion. No pretreatment variables significantly predicted outcome. The modified CE procedure appears viable for nondependent drinkers who want to adopt a moderate drinking goal.
The Effects of Spatial Endogenous Pre-cueing across Eccentricities.
Feng, Jing; Spence, Ian
2017-01-01
Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, examination of pre-cueing effects on attention, particularly endogenous pre-cueing effects, has been relatively little explored outside an eccentricity of 20°. Given the visual field has functional subdivisions that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information of targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred at the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our finding in light of cognitive penetration of perception, and highlight the importance of examining attentional process across a large area of the visual field.
Vertical Motion Simulator Experiment on Stall Recovery Guidance
NASA Technical Reports Server (NTRS)
Schuet, Stefan; Lombaerts, Thomas; Stepanyan, Vahram; Kaneshige, John; Shish, Kimberlee; Robinson, Peter; Hardy, Gordon H.
2017-01-01
A stall recovery guidance system was designed to help pilots improve their stall recovery performance when the current aircraft state may be unrecognized under various complicating operational factors. Candidate guidance algorithms were connected to the split-cue pitch and roll flight directors that are standard on large transport commercial aircraft. A new thrust guidance algorithm and cue was also developed to help pilots prevent the combination of excessive thrust and nose-up stabilizer trim. The overall system was designed to reinforce the current FAA recommended stall recovery procedure. A general transport aircraft model, similar to a Boeing 757, with an extended aerodynamic database for improved stall dynamics simulation fidelity was integrated into the Vertical Motion Simulator at NASA Ames Research Center. A detailed study of the guidance system was then conducted across four stall scenarios with 30 commercial and 10 research test pilots, and the results are reported.
Hunger-dependent and Sex-specific Antipredator Behaviour of Larvae of a Size-dimorphic Mosquito
Wormington, Jillian; Juliano, Steven
2014-01-01
1. Modification of behaviors in the presence of predators or predation cues is widespread among animals. Costs of a behavioral change in the presence of predators or predation cues depend on fitness effects of lost feeding opportunities and, especially when organisms are sexually dimorphic in size or timing of maturation, these costs are expected to differ between the sexes. 2. Larval Aedes triseriatus (Say) (Diptera: Culicidae) were used to test the hypothesis that behavioral responses of the sexes to predation cues have been selected differently due to different energy demands. 3. Even in the absence of water-borne predation cues, hungry females (the larger sex) spent more time browsing than did males, indicating a difference in energy needs. 4. In the presence of predation cues, well-fed larvae of both sexes reduced their activity more than did hungry larvae, and males shifted away from high-risk behaviors to a greater degree than did females, providing the first evidence of sex-specific antipredator behavior in foraging mosquito larvae. 5. Because sexual size dimorphism is common across taxa, and energetic demands are likely correlated with size dimorphism, this research demonstrates the importance of investigating sex specific behavior and behavioral responses to enemies and cautions against generalizing results between sexes. PMID:25309025
Post-transcriptional modifications in development and stem cells.
Frye, Michaela; Blanco, Sandra
2016-11-01
Cells adapt to their environment by linking external stimuli to an intricate network of transcriptional, post-transcriptional and translational processes. Among these, mechanisms that couple environmental cues to the regulation of protein translation are not well understood. Chemical modifications of RNA allow rapid cellular responses to external stimuli by modulating a wide range of fundamental biochemical properties and processes, including the stability, splicing and translation of messenger RNA. In this Review, we focus on the occurrence of N6-methyladenosine (m6A), 5-methylcytosine (m5C) and pseudouridine (Ψ) in RNA, and describe how these RNA modifications are implicated in regulating pluripotency, stem cell self-renewal and fate specification. Both post-transcriptional modifications and the enzymes that catalyse them modulate stem cell differentiation pathways and are essential for normal development. © 2016. Published by The Company of Biologists Ltd.
Pragmatically Framed Cross-Situational Noun Learning Using Computational Reinforcement Models
Najnin, Shamima; Banerjee, Bonny
2018-01-01
Cross-situational learning and social pragmatic theories are prominent mechanisms for learning word meanings (i.e., word-object pairs). In this paper, the role of reinforcement is investigated for early word-learning by an artificial agent. When exposed to a group of speakers, the agent comes to understand an initial set of vocabulary items belonging to the language used by the group. Both cross-situational learning and social pragmatic theory are taken into account. As social cues, joint attention and prosodic cues in caregiver's speech are considered. During agent-caregiver interaction, the agent selects a word from the caregiver's utterance and learns the relations between that word and the objects in its visual environment. The “novel words to novel objects” language-specific constraint is assumed for computing rewards. The models are learned by maximizing the expected reward using reinforcement learning algorithms [i.e., table-based algorithms: Q-learning, SARSA, SARSA-λ, and neural network-based algorithms: Q-learning for neural network (Q-NN), neural-fitted Q-network (NFQ), and deep Q-network (DQN)]. Neural network-based reinforcement learning models are chosen over table-based models for better generalization and quicker convergence. Simulations are carried out using mother-infant interaction CHILDES dataset for learning word-object pairings. Reinforcement is modeled in two cross-situational learning cases: (1) with joint attention (Attentional models), and (2) with joint attention and prosodic cues (Attentional-prosodic models). Attentional-prosodic models manifest superior performance to Attentional ones for the task of word-learning. The Attentional-prosodic DQN outperforms existing word-learning models for the same task. PMID:29441027
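A toy, table-based illustration of the reinforcement idea follows: the learner picks an object for a heard word, is rewarded when the pairing matches a hidden lexicon, and updates a Q-table. This simplification omits the joint-attention and prosodic cues and the neural-network variants; the vocabulary and parameters are invented for the example.

    # Toy tabular Q-learning for word-object pairing; not the paper's model.

    import random

    def learn_word_object_pairs(true_lexicon, episodes=2000, alpha=0.2, epsilon=0.1):
        words, objects = list(true_lexicon), list(true_lexicon.values())
        q = {(w, o): 0.0 for w in words for o in objects}    # Q-table over word-object pairs
        for _ in range(episodes):
            word = random.choice(words)
            if random.random() < epsilon:                    # explore
                obj = random.choice(objects)
            else:                                            # exploit the current estimate
                obj = max(objects, key=lambda o: q[(word, o)])
            reward = 1.0 if true_lexicon[word] == obj else 0.0
            q[(word, obj)] += alpha * (reward - q[(word, obj)])   # single-step update
        return {w: max(objects, key=lambda o: q[(w, o)]) for w in words}

    if __name__ == "__main__":
        random.seed(0)
        lexicon = {"ball": "BALL", "dog": "DOG", "cup": "CUP"}
        print(learn_word_object_pairs(lexicon))   # should recover the hidden lexicon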
Connecting clinical and actuarial prediction with rule-based methods.
Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H
2015-06-01
Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved).
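The sketch below shows how a small rule set can be applied as a fast and frugal tree: cues are checked one at a time and each level can already exit with a decision, so only a few cues are evaluated per case. The cues and cut-offs are invented placeholders, not the rules actually derived from the Penninx et al. data.

    # Illustrative fast-and-frugal tree; cue names and thresholds are assumptions.

    def predict_chronic_course(severity, duration_months, age_of_onset):
        # Cue 1: very high baseline severity -> predict a chronic course immediately.
        if severity >= 30:
            return "chronic"
        # Cue 2: short symptom duration -> predict remission.
        if duration_months < 6:
            return "remission"
        # Cue 3 (last cue forces a decision either way): early onset -> chronic.
        return "chronic" if age_of_onset < 20 else "remission"

    if __name__ == "__main__":
        for case in [(35, 2, 40), (12, 3, 25), (15, 24, 18)]:
            print(case, "->", predict_chronic_course(*case))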
Dual pathways to prospective remembering
McDaniel, Mark A.; Umanath, Sharda; Einstein, Gilles O.; Waldum, Emily R.
2015-01-01
According to the multiprocess framework (McDaniel and Einstein, 2000), the cognitive system can support prospective memory (PM) retrieval through two general pathways. One pathway depends on top–down attentional control processes that maintain activation of the intention and/or monitor the environment for the triggering or target cues that indicate that the intention should be executed. A second pathway depends on (bottom–up) spontaneous retrieval processes, processes that are often triggered by a PM target cue; critically, spontaneous retrieval is assumed not to require monitoring or active maintenance of the intention. Given demand characteristics associated with experimental settings, however, participants are often inclined to monitor, thereby potentially masking discovery of bottom–up spontaneous retrieval processes. In this article, we discuss parameters of laboratory PM paradigms to discourage monitoring and review recent behavioral evidence from such paradigms that implicate spontaneous retrieval in PM. We then re-examine the neuro-imaging evidence from the lens of the multiprocess framework and suggest some critical modifications to existing neuro-cognitive interpretations of the neuro-imaging results. These modifications illuminate possible directions and refinements for further neuro-imaging investigations of PM. PMID:26236213
Epigenetic Influence of Stress and the Social Environment
Gudsnuk, Kathryn; Champagne, Frances A.
2012-01-01
Animal models of early-life stress and variation in social experience across the lifespan have contributed significantly to our understanding of the environmental regulation of the developing brain. Plasticity in neurobiological pathways regulating stress responsivity, cognition, and reproductive behavior is apparent during the prenatal period and continues into adulthood, suggesting a lifelong sensitivity to environmental cues. Recent evidence suggests that dynamic epigenetic changes—molecular modifications that alter gene expression without altering the underlying DNA sequence—account for this plasticity. In this review, we highlight studies of laboratory rodents that illustrate the association between the experience of prenatal stress, maternal separation, maternal care, abusive caregiving in infancy, juvenile social housing, and adult social stress and variation in DNA methylation and histone modification. Moreover, we discuss emerging evidence for the transgenerational impact of these experiences. These experimental paradigms have yielded insights into the potential role of epigenetic mechanisms in mediating the effects of the environment on human development and also indicate that consideration of the sensitivity of laboratory animals to environmental cues may be an important factor in predicting long-term health and welfare. PMID:23744967
NASA Astrophysics Data System (ADS)
Akhmedova, Sh; Semenkin, E.
2017-02-01
Previously, a meta-heuristic approach, called Co-Operation of Biology-Related Algorithms or COBRA, for solving real-parameter optimization problems was introduced and described. COBRA’s basic idea is the cooperative work of five well-known bionic algorithms: Particle Swarm Optimization, the Wolf Pack Search, the Firefly Algorithm, the Cuckoo Search Algorithm and the Bat Algorithm, which were chosen due to the similarity of their schemes. The performance of this meta-heuristic was evaluated on a set of test functions and its workability was demonstrated. Thus it was established that the idea of the algorithms’ cooperative work is useful. However, it is unclear which bionic algorithms should be included in this cooperation and how many of them are needed. Therefore, the five above-listed algorithms and additionally the Fish School Search algorithm were used for the development of five different modifications of COBRA by varying the number of component-algorithms. These modifications were tested on the same set of functions and the best of them was found. Ways of further improving the COBRA algorithm are then discussed.
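As a rough illustration of the cooperative scheme, the Python sketch below lets two simplified component algorithms evolve their own populations while sharing a single global best solution. The update rules are deliberately simplified stand-ins, not the actual PSO/Wolf Pack/Firefly/Cuckoo/Bat updates, and the sphere objective and all parameters are illustrative.

import numpy as np

def sphere(x):
    # Illustrative test function; COBRA was evaluated on a larger benchmark set.
    return float(np.sum(x ** 2))

def local_perturbation(pop, best, rng):
    # Stand-in component A: random perturbation around each individual.
    return pop + rng.normal(scale=0.1, size=pop.shape)

def drift_to_best(pop, best, rng):
    # Stand-in component B: drift each individual toward the shared global best.
    return pop + 0.5 * (best - pop) + rng.normal(scale=0.05, size=pop.shape)

def cooperative_search(dim=5, pop_size=20, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    components = [local_perturbation, drift_to_best]
    pops = [rng.uniform(-5, 5, size=(pop_size, dim)) for _ in components]
    best = min((p[np.argmin([sphere(x) for x in p])] for p in pops), key=sphere)
    for _ in range(iterations):
        for i, step in enumerate(components):
            pops[i] = step(pops[i], best, rng)
            cand = pops[i][np.argmin([sphere(x) for x in pops[i]])]
            if sphere(cand) < sphere(best):
                best = cand   # cooperation: every component sees the new best
    return best, sphere(best)

if __name__ == "__main__":
    best, value = cooperative_search()
    print(value)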
Improved document image segmentation algorithm using multiresolution morphology
NASA Astrophysics Data System (ADS)
Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.
2011-01-01
Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV, ICDAR 2009 page segmentation competition, and circuit diagram datasets.
Implementation issues in source coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.
1989-01-01
An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
Initial Evaluations of LoC Prediction Algorithms Using the NASA Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Krishnakumar, Kalmanje; Stepanyan, Vahram; Barlow, Jonathan; Hardy, Gordon; Dorais, Greg; Poolla, Chaitanya; Reardon, Scott; Soloway, Donald
2014-01-01
Flying near the edge of the safe operating envelope is an inherently unsafe proposition. Edge of the envelope here implies that small changes or disturbances in system state or system dynamics can take the system out of the safe envelope in a short time and could result in loss-of-control events. This study evaluated approaches to predicting loss-of-control safety margins as the aircraft gets closer to the edge of the safe operating envelope. The goal of the approach is to provide the pilot with aural, visual, and tactile cues focused on maintaining the pilot's control action within predicted loss-of-control boundaries. Our predictive architecture combines quantitative loss-of-control boundaries, an adaptive prediction method that estimates, in real time, Markov model parameters and associated stability margins, and a real-time data-based predictive control margins estimation algorithm. The combined architecture is applied to a nonlinear transport-class aircraft. Evaluations of various feedback cues using both test and commercial pilots in the NASA Ames Vertical Motion Simulator (VMS) were conducted in the summer of 2013. The paper presents results of this evaluation, focused on the effectiveness of these approaches and cues in preventing the pilots from entering a loss-of-control event.
A pilot study of a novel smartphone application for the estimation of sleep onset.
Scott, Hannah; Lack, Leon; Lovato, Nicole
2018-02-01
The aim of the study was to investigate the accuracy of Sleep On Cue: a novel iPhone application that uses behavioural responses to auditory stimuli to estimate sleep onset. Twelve young adults underwent polysomnography recording while simultaneously using Sleep On Cue. Participants completed as many sleep-onset trials as possible within a 2-h period following their normal bedtime. On each trial, participants were awoken by the app following behavioural sleep onset. Then, after a short break of wakefulness, they commenced the next trial. There was a high degree of correspondence between polysomnography-determined sleep onset and Sleep On Cue behavioural sleep onset, r = 0.79, P < 0.001. On average, Sleep On Cue overestimated sleep-onset latency by 3.17 min (SD = 3.04). When polysomnography sleep onset was defined as the beginning of N2 sleep, the discrepancy was reduced considerably (M = 0.81, SD = 1.96). The discrepancy between polysomnography and Sleep On Cue varied between individuals, which was potentially due to variations in auditory stimulus intensity. Further research is required to determine whether modifications to the stimulus intensity and behavioural response could improve the accuracy of the app. Nonetheless, Sleep On Cue is a viable option for estimating sleep onset and may be used to administer Intensive Sleep Retraining or facilitate power naps in the home environment. © 2017 European Sleep Research Society.
Primary and secondary effects of real-time feedback to reduce vertical loading rate during running.
Baggaley, M; Willy, R W; Meardon, S A
2017-05-01
Gait modifications are often proposed to reduce average loading rate (AVLR) during running. While many modifications may reduce AVLR, little work has investigated secondary gait changes. Thirty-two rearfoot runners [16M, 16F, 24.7 (3.3) years, 22.72 (3.01) kg/m², >16 km/week] ran at a self-selected speed (2.9 ± 0.3 m/s) on an instrumented treadmill, while 3D mechanics were calculated via real-time data acquisition. Real-time visual feedback was provided in a randomized order to cue a forefoot strike (FFS), a minimum 7.5% decrease in step length, or a minimum 15% reduction in AVLR. AVLR was reduced by FFS (mean difference = 26.4 BW/s; 95% CI = 20.1, 32.7; P < 0.001), shortened step length (8.4 BW/s; 95% CI = 2.9, 14.0; P = 0.004), and cues to reduce AVLR (14.9 BW/s; 95% CI = 10.2, 19.6; P < 0.001). FFS, shortened step length, and cues to reduce AVLR all reduced eccentric knee joint work per km [(-48.2 J/kg*m; 95% CI = -58.1, -38.3; P < 0.001), (-35.5 J/kg*m; 95% CI = -42.4, 28.6; P < 0.001), (-23.1 J/kg*m; 95% CI = -33.3, -12.9; P < 0.001)]. However, FFS and cues to reduce AVLR also increased eccentric ankle joint work per km [(54.49 J/kg*m; 95% CI = 45.3, 63.7; P < 0.001), (9.20 J/kg*m; 95% CI = 1.7, 16.7; P = 0.035)]. Potentially injurious secondary effects associated with FFS and cues to reduce AVLR may undermine their clinical utility. Alternatively, a shortened step length resulted in small reductions in AVLR, without any potentially injurious secondary effects. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Approach bias modification training and consumption: A review of the literature.
Kakoschke, Naomi; Kemps, Eva; Tiggemann, Marika
2017-01-01
Recent theoretical perspectives and empirical evidence have suggested that biased cognitive processing is an important contributor to unhealthy behaviour. Approach bias modification is a novel intervention in which approach biases for appetitive cues are modified. The current review of the literature aimed to evaluate the effectiveness of modifying approach bias for harmful consumption behaviours, including alcohol use, cigarette smoking, and unhealthy eating. Relevant publications were identified through searches of four electronic databases (PsycINFO, Google Scholar, ScienceDirect and Scopus) conducted between October and December 2015. Eligibility criteria included the use of a human adult sample, at least one session of avoidance training, and an outcome measure related to the behaviour of interest. The fifteen identified publications (comprising 18 individual studies) were coded on a number of characteristics, including consumption behaviour, participants, task, training and control conditions, number of training sessions and trials, outcome measure, and results. The results generally showed positive effects of approach-avoidance training, including reduced consumption behaviour in the laboratory, lower relapse rates, and improvements in self-reported measures of behaviour. Importantly, all studies (with one exception) that reported favourable consumption outcomes also demonstrated successful reduction of the approach bias for appetitive cues. Thus, the current review concluded that approach bias modification is effective for reducing both approach bias and unhealthy consumption behaviour. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support
Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.
2014-01-01
Background: Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm-based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective: To develop a toolbox for improving clinical decision-support algorithms. Methods: The toolbox has three main components. 1) Data preparation: data from several heterogeneous sources are extracted, cleaned and stored in a uniform data format. 2) Simulation: the effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: algorithm performance is measured, analyzed and simulated using data from three clinical trials with a total of 166 patients. Results: Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion: These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768
Using Blur to Affect Perceived Distance and Size
HELD, ROBERT T.; COOPER, EMILY A.; O’BRIEN, JAMES F.; BANKS, MARTIN S.
2011-01-01
We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image’s contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene’s contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model’s predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently. PMID:21552429
Eusocial insects as emerging models for behavioural epigenetics.
Yan, Hua; Simola, Daniel F; Bonasio, Roberto; Liebig, Jürgen; Berger, Shelley L; Reinberg, Danny
2014-10-01
Understanding the molecular basis of how behavioural states are established, maintained and altered by environmental cues is an area of considerable and growing interest. Epigenetic processes, including methylation of DNA and post-translational modification of histones, dynamically modulate activity-dependent gene expression in neurons and can therefore have important regulatory roles in shaping behavioural responses to environmental cues. Several eusocial insect species - with their unique displays of behavioural plasticity due to age, morphology and social context - have emerged as models to investigate the genetic and epigenetic underpinnings of animal social behaviour. This Review summarizes recent studies in the epigenetics of social behaviour and offers perspectives on emerging trends and prospects for establishing genetic tools in eusocial insects.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.
1996-04-01
This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A
2013-01-01
This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms enabling on-board processing on wearable sensor platforms.
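The bit-width exploration can be illustrated with the Python sketch below, which runs the same scalar Kalman filter once in floating point and once with every intermediate value quantized to a Q-format with a given number of fractional bits, then compares the estimates. The scalar filter is only a stand-in for the two-step minimum-order orientation filter discussed above, and all noise parameters are illustrative.

import numpy as np

def quantize(x, frac_bits):
    # Round to the nearest representable fixed-point value with frac_bits fractional bits.
    scale = 2 ** frac_bits
    return np.round(x * scale) / scale

def scalar_kalman(z, q=1e-3, r=1e-2, frac_bits=None):
    f = (lambda x: x) if frac_bits is None else (lambda x: quantize(x, frac_bits))
    x_est, p = 0.0, 1.0
    out = []
    for meas in z:
        p = f(p + q)                        # predict
        k = f(p / (p + r))                  # Kalman gain
        x_est = f(x_est + k * (meas - x_est))
        p = f((1.0 - k) * p)
        out.append(x_est)
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 6, 500))          # toy orientation signal
    z = truth + rng.normal(scale=0.1, size=500)
    ref = scalar_kalman(z)                          # floating-point reference
    for bits in (8, 12, 16):
        err = np.sqrt(np.mean((scalar_kalman(z, frac_bits=bits) - ref) ** 2))
        print(bits, round(float(err), 6))           # RMS deviation from the reference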
Appetite-Focused Cognitive-Behavioral Therapy in the Treatment of Binge Eating with Purging
ERIC Educational Resources Information Center
Dicker, Stacy L.; Craighead, Linda Wilcoxon
2004-01-01
The first-line treatment for bulimia nervosa (BN), cognitive-behavioral therapy (CBT), uses food-based self-monitoring. Six young women presenting with BN or significant purging behavior were treated with a modification, Appetite-Focused CBT (CBT-AF), in which self-monitoring is based on appetite cues and food monitoring is proscribed. This change…
Hierarchical layered and semantic-based image segmentation using ergodicity map
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing
2010-04-01
Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.
Heath-Heckman, Elizabeth A.C.; Foster, Jamie; Apicella, Michael A.; Goldman, William E.; McFall-Ngai, Margaret
2016-01-01
Recent research has shown that the microbiota affects the biology of associated host epithelial tissues, including their circadian rhythms, although few data are available on how such influences shape the microarchitecture of the brush border. The squid-vibrio system exhibits two modifications of the brush border that supports the symbionts: effacement and repolarization. Together these occur on a daily rhythm in adult animals, at the dawn expulsion of symbionts into the environment, and symbiont colonization of the juvenile host induces an increase in microvillar density. Here we sought to define how these processes are related and the roles of both symbiont colonization and environmental cues. Ultrastructural analyses showed that the juvenile-organ brush borders also efface concomitantly with daily dawn-cued expulsion of symbionts. Manipulation of the environmental light cue and juvenile symbiotic state demonstrated that this behaviour requires the light cue, but not colonization. In contrast, symbionts were required for the observed increase in microvillar density that accompanies post-dawn brush-border repolarization; this increase was induced solely by host exposure to phosphorylated lipid A of symbiont cells. These data demonstrate that a partnering of environmental and symbiont cues shapes the brush border and that microbe-associated molecular patterns play a role in the regulation of brush-border microarchitecture. PMID:27062511
Toward Developing an Unbiased Scoring Algorithm for "NASA" and Similar Ranking Tasks.
ERIC Educational Resources Information Center
Lane, Irving M.; And Others
1981-01-01
Presents both logical and empirical evidence to illustrate that the conventional scoring algorithm for ranking tasks significantly underestimates the initial level of group ability and that Slevin's alternative scoring algorithm significantly overestimates the initial level of ability. Presents a modification of Slevin's algorithm which authors…
Afzal, Muhammad Raheel; Pyo, Sanghun; Oh, Min-Kyun; Park, Young Sook; Yoon, Jungwon
2018-04-16
Integration of kinesthetic and tactile cues for application to post-stroke gait rehabilitation is a novel concept which needs to be explored. The combined provision of haptic cues may result in collective improvement of gait parameters such as symmetry, balance and muscle activation patterns. Our proposed integrated cue system can offer a cost-effective and voluntary gait training experience for rehabilitation of subjects with unilateral hemiparetic stroke. Ten post-stroke ambulatory subjects participated in a 10 m walking trial while utilizing the haptic cues (either alone or in integrated application), at their preferred and increased gait speeds. In the system, a haptic cane device (HCD) provided kinesthetic perception and a vibrotactile feedback device (VFD) provided a tactile cue on the paretic leg for gait modification. Balance, gait symmetry and muscle activity were analyzed to identify the benefits of utilizing the proposed system. When using kinesthetic cues, either alone or integrated with a tactile cue, an increase in the percentage of non-paretic peak activity in the paretic muscles was observed at the preferred gait speed (vastus medialis obliquus: p < 0.001, partial eta squared (η²) = 0.954; semitendinosus: p < 0.001, partial η² = 0.793) and increased gait speeds (vastus medialis obliquus: p < 0.001, partial η² = 0.881; semitendinosus: p = 0.028, partial η² = 0.399). While using the HCD and VFD (individual and integrated applications), subjects could walk at their preferred and increased gait speeds without disrupting trunk balance in the mediolateral direction. The temporal stance symmetry ratio was improved when using tactile cues, either alone or integrated with a kinesthetic cue, at their preferred gait speed (p < 0.001, partial η² = 0.702). When combining haptic cues, the subjects walked at their preferred gait speed with increased temporal stance symmetry and paretic muscle activity affecting their balance. Similar improvements were observed at higher gait speeds. The efficacy of the proposed system is influenced by gait speed. Improvements were observed at a 20% increased gait speed, whereas a plateau effect was observed at a 40% increased gait speed. These results imply that integration of haptic cues may benefit post-stroke gait rehabilitation by inducing simultaneous improvements in gait symmetry and muscle activity.
Referenceless Phase Holography for 3D Imaging
NASA Astrophysics Data System (ADS)
Kreis, Thomas
2017-12-01
Referenceless phase holography generates the full (amplitude and phase) optical field if the intensity and phase of this field are given as numerical data. It is based on the interference of two pure phase fields which are produced by reflection of two mutually coherent plane waves at two phase-modulating spatial light modulators of the liquid-crystal-on-silicon type. Thus any optical field of any real or artificial 3D scene can be displayed. This means that referenceless phase holography is a promising method for future 3D imaging, e.g. in 3D television. The paper introduces the theory of the method and presents three possible interferometer arrangements, including, for the first time, the Mach-Zehnder and the grating interferometer adapted to this application. The possibilities and problems in calculating the diffraction fields of given 3D scenes are worked out, and the best choice and modifications of the algorithms are given. Several novel experimental examples are given, demonstrating the 3D cues of depth of field, occlusion and parallax. The benefits and advantages over other holographic approaches are pointed out, and open problems and necessary technological developments as well as possibilities and future prospects are discussed.
Development of a Novel Locomotion Algorithm for Snake Robot
NASA Astrophysics Data System (ADS)
Khan, Raisuddin; Masum Billah, Md; Watanabe, Mitsuru; Shafie, A. A.
2013-12-01
A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine motion is one of the best-known gaits for snake robots in disaster-recovery missions that require navigating narrow spaces. Other gaits, such as concertina or rectilinear locomotion, may also be suitable for narrow spaces, but they are highly inefficient when used in open spaces, where reduced friction makes movement difficult. The proposed locomotion algorithm is based on modifications of a multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using the developed locomotion algorithm, the snake robot is able to navigate narrow spaces, overcoming the limitations of the other gaits in narrow-space navigation.
NASA Astrophysics Data System (ADS)
Bruschetta, M.; Maran, F.; Beghi, A.
2017-06-01
The use of dynamic driving simulators is constantly increasing in the automotive community, with applications ranging from vehicle development to rehabilitation and driver training. The effectiveness of such devices is related to their capability to faithfully reproduce driving sensations; hence it is crucial that the motion control strategies generate both realistic and feasible inputs to the platform. Such strategies are called motion cueing algorithms (MCAs). In recent years several MCAs based on model predictive control (MPC) techniques have been proposed. The main drawback associated with the use of MPC is its computational burden, which may limit its application to high-performance dynamic simulators. In this paper, a fast, real-time implementation of an MPC-based MCA for a 9-DOF, high-performance platform is proposed. The effectiveness of the approach in managing the available working area is illustrated by experimental results from an implementation on a real device with a 200 Hz control frequency.
Graphics-based intelligent search and abstracting using Data Modeling
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.
2002-11-01
This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.
Stall Recovery Guidance Algorithms Based on Constrained Control Approaches
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Kaneshige, John; Acosta, Diana
2016-01-01
Aircraft loss-of-control, in particular approach to stall or fully developed stall, is a major factor contributing to aircraft safety risks, which emphasizes the need to develop algorithms that are capable of assisting pilots in identifying the problem and providing guidance to recover the aircraft. In this paper we present several stall recovery guidance algorithms, which run in the background without interfering with the flight control system or altering the pilot's actions. They use input- and state-constrained control methods to generate guidance signals, which are provided to the pilot in the form of visual cues. It is the pilot's decision to follow these signals. The algorithms are validated in a pilot-in-the-loop, medium-fidelity simulation experiment.
Modification of YAPE keypoint detection algorithm for wide local contrast range images
NASA Astrophysics Data System (ADS)
Lukoyanov, A.; Nikolaev, D.; Konovalenko, I.
2018-04-01
Keypoint detection is an important tool of image analysis, and among many contemporary keypoint detection algorithms YAPE is known for its computational performance, allowing its use in mobile and embedded systems. One of its shortcomings is high sensitivity to local contrast, which leads to high detection density in high-contrast areas while missing detections in low-contrast ones. In this work we study the contrast sensitivity of YAPE and propose a modification which compensates for this property on images with a wide local contrast range (Yet Another Contrast-Invariant Point Extractor, YACIPE). As a model example, we considered the traffic sign recognition problem, where some signs are well-lit, whereas others are in shadow and thus have low contrast. We show that the number of traffic signs on which no keypoints are detected is 40% lower for the proposed modification than for the original algorithm.
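The idea of compensating for local contrast can be sketched in Python as below: a blob response is divided by an estimate of the local standard deviation before thresholding, so weak structures in shadowed regions are not drowned out by high-contrast regions. This is only an illustration of the principle, not the actual YACIPE modification of YAPE, and all parameters and the synthetic image are illustrative.

import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def contrast_normalized_response(img, sigma=2.0, window=15, eps=1e-3):
    img = img.astype(float)
    response = np.abs(gaussian_laplace(img, sigma))      # plain blob response
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return response / (local_std + eps)                  # contrast-compensated response

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(0.5, 0.02, size=(128, 128))
    img[20:30, 20:30] += 0.4      # well-lit, high-contrast "sign"
    img[90:100, 90:100] += 0.04   # shadowed, low-contrast "sign"
    norm = contrast_normalized_response(img)
    print(norm[20:30, 20:30].max(), norm[90:100, 90:100].max())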
Gürün, O O; Fatouros, P P; Kuhn, G M; de Paredes, E S
2001-04-01
We report on some extensions and further developments of a well-known microcalcification detection algorithm based on adaptive noise equalization. Tissue equivalent phantom images with and without labeled microcalcifications were subjected to this algorithm, and analyses of results revealed some shortcomings in the approach. Particularly, it was observed that the method of estimating the width of distributions in the feature space was based on assumptions which resulted in the loss of similarity preservation characteristics. A modification involving a change of estimator statistic was made, and the modified approach was tested on the same phantom images. Other modifications for improving detectability such as downsampling and use of alternate local contrast filters were also tested. The results indicate that these modifications yield improvements in detectability, while extending the generality of the approach. Extensions to real mammograms and further directions of research are discussed.
Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.
Ricci, E; Di Domenico, S; Cianca, E; Rossi, T
2015-01-01
Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhage. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and currently used artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and fewer false positives. However, the main contribution is an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.
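As context for the artifact-removal step, the Python sketch below shows a common baseline approach in UWB radar imaging: subtracting the across-channel average so that the channel-invariant early reflection is suppressed while target responses, which differ across channels, are retained. The algorithms proposed in the paper are more elaborate, and the simulated signals here are purely illustrative.

import numpy as np

def average_subtraction(signals):
    """signals: (n_channels, n_samples) array of backscattered time signals."""
    artifact = signals.mean(axis=0, keepdims=True)   # component common to all channels
    return signals - artifact

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 512)
    common = np.exp(-((t - 0.1) / 0.01) ** 2)                 # strong early reflection
    signals = np.tile(common, (16, 1)) + 0.01 * rng.normal(size=(16, 512))
    signals[5] += 0.1 * np.exp(-((t - 0.4) / 0.02) ** 2)      # weak target echo, one channel
    cleaned = average_subtraction(signals)
    print(np.abs(cleaned[5]).max(), np.abs(cleaned[0]).max())  # echo survives, artifact is gone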
González-Recio, O; Jiménez-Montero, J A; Alenda, R
2013-01-01
In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy and bias. This modification may be used to speed up the computation of genome-assisted evaluation in large data sets such as those obtained from consortia. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
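A minimal Python sketch of the "random boosting" idea follows: each iteration fits a weak learner to the current residuals using only a random subset of markers and adds a shrunken contribution to the committee. The single-marker least-squares learner, the toy genotypes, and all parameter values are simplifications for illustration, not the weak learner or settings used in the study.

import numpy as np

def random_boosting(X, y, n_iter=100, subset_frac=0.1, shrinkage=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pred = np.zeros(n)
    model = []                                  # list of (marker index, scaled slope)
    m = max(1, int(subset_frac * p))
    for _ in range(n_iter):
        resid = y - pred                        # residuals from the current committee
        markers = rng.choice(p, size=m, replace=False)
        best_j, best_b, best_sse = markers[0], 0.0, np.inf
        for j in markers:                       # pick the marker that best explains the residuals
            xj = X[:, j]
            denom = xj @ xj
            b = (xj @ resid) / denom if denom > 0 else 0.0
            sse = np.sum((resid - b * xj) ** 2)
            if sse < best_sse:
                best_j, best_b, best_sse = j, b, sse
        pred += shrinkage * best_b * X[:, best_j]   # shrunken update of the committee
        model.append((best_j, shrinkage * best_b))
    return model, pred

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.integers(0, 3, size=(200, 500)).astype(float)   # toy SNP genotype matrix
    y = 2.0 * X[:, 10] - 1.5 * X[:, 99] + rng.normal(size=200)
    model, pred = random_boosting(X, y)
    print(np.corrcoef(pred, y)[0, 1])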
Optimal Appearance Model for Visual Tracking
Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao
2016-01-01
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M
2015-01-01
Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%, p<0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% versus 67.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.
Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation
NASA Technical Reports Server (NTRS)
Peterka, Robert J.
1994-01-01
The objective of this proposal is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness. Results of experimentation are summarized and modifications to a two-axis rotation device are described. Abstracts of a number of papers generated during the reporting period are appended.
Maternal Regulation of Estrogen Receptor α Methylation
Champagne, Frances A.; Curley, James P.
2008-01-01
Advances in molecular biology have provided tools for studying the epigenetic factors which modulate gene expression. DNA methylation is an epigenetic modification which can have sustained effects on transcription and is associated with long-term gene silencing. In this review, we focus on the regulation of estrogen receptor alpha (ERα) expression by hormonal and environmental cues, the consequences of these cues for female maternal and sexual behavior, and recent studies which explore the role of DNA methylation in mediating these developmental effects, with particular focus on the mediating role of maternal care. The methylation status of ERα has implications for reproductive behavior, cancer susceptibility and recovery from ischemic injury, suggesting an epigenetic basis for risk and resilience across the life span. PMID:18644464
NASA Astrophysics Data System (ADS)
Amalia; Budiman, M. A.; Sitepu, R.
2018-03-01
Cryptography is one of the best methods for keeping information safe from security attacks by unauthorized people. Many studies have been conducted to develop more robust cryptographic algorithms that provide high security for data communication. One way to strengthen data security is the hybrid cryptosystem, which combines symmetric and asymmetric algorithms. In this study, we examine a hybrid cryptosystem that uses the Modification Playfair Cipher 16x16 algorithm as the symmetric algorithm and the Knapsack Naccache-Stern algorithm as the asymmetric algorithm. We measure the running time of this hybrid scheme in several experiments, testing messages of 10, 100, 1000, 10000 and 100000 characters and key lengths of 10, 20, 30 and 40. The results show that the encryption and decryption time of each algorithm is linearly proportional to the message length: the longer the message, the more time is needed to encrypt and decrypt it. Encryption with the Knapsack Naccache-Stern algorithm takes longer than its decryption, while encryption with the Modification Playfair Cipher 16x16 algorithm takes less time than its decryption.
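The timing experiment can be sketched in Python as below; the cipher is a trivial XOR stand-in rather than the Playfair 16x16 / Knapsack Naccache-Stern hybrid, so only the measurement procedure, not the cryptography, is illustrated.

import time

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher: XOR with a repeating key (encryption and decryption are identical).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def time_call(fn, *args, repeats=5):
    # Return the best wall-clock time over several repeats.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    key = b"0123456789"
    for n in (10, 100, 1000, 10000, 100000):      # message lengths as in the experiments above
        msg = b"a" * n
        ct = xor_cipher(msg, key)
        print(n, time_call(xor_cipher, msg, key), time_call(xor_cipher, ct, key))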
Magnified gradient function with deterministic weight modification in adaptive learning.
Ng, Sin-Chun; Cheung, Chi-Chung; Leung, Shu-Hung
2004-11-01
This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of the above two approaches is better than BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the above two approaches forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. From our simulation results, the MDPROP algorithm always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.
Application of the Trend Filtering Algorithm for Photometric Time Series Data
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.
2016-08-01
Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and select Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves, however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable photometric precision surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
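The trend-filtering step, including the uncertainty-weighting modification mentioned above, can be sketched in Python as a weighted least-squares fit of template light curves that is then subtracted from the target; template selection by clustering is omitted, and the synthetic light curves are illustrative, not TFA as distributed.

import numpy as np

def trend_filter(target, sigma, templates):
    """target: (n,) fluxes; sigma: (n,) uncertainties; templates: (n, k) template light curves."""
    w = 1.0 / sigma                          # weight each epoch by its inverse uncertainty
    A = templates * w[:, None]
    b = target * w
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    trend = templates @ coeffs
    return target - trend, coeffs            # filtered light curve and template coefficients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 500, 10
    templates = rng.normal(size=(n, k)).cumsum(axis=0) * 0.01   # smooth systematic trends
    true_signal = 0.02 * np.sin(np.linspace(0, 20, n))
    sigma = rng.uniform(0.005, 0.02, size=n)
    target = true_signal + templates @ rng.normal(size=k) + rng.normal(size=n) * sigma
    filtered, _ = trend_filter(target, sigma, templates)
    print(np.std(target), np.std(filtered))   # dispersion before and after filtering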
The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning
Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.
2017-01-01
Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359
77 FR 13564 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-07
.... Government and contractor technical assistance and other related logistics support. \\*\\ as defined in Section... the ability to integrate the Helmet Mounted Cueing System. The software algorithms are the most sensitive portion of the AIM-9X-2 missile. The software continues to be modified via a pre- planned product...
77 FR 65185 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-25
... million Total $60 million (iii) Description and Quantity or Quantities of Articles or Services under... Technology Contained in the Defense Article or Defense Services Proposed to be Sold: See Annex attached... integrate the Helmet Mounted Cueing System. The software algorithms are the most sensitive portion of the...
78 FR 62600 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-22
... million Total $68 million (iii) Description and Quantity or Quantities of Articles or Services Under... Technology Contained in the Defense Article or Defense Services Proposed To Be Sold: See Annex attached... integrate the Helmet Mounted Cueing System. The software algorithms are the most sensitive portion of the...
JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)
NASA Astrophysics Data System (ADS)
Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.
2014-12-01
JPSS is a next-generation satellite system planned for launch in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in the ADL. The addition of a new ancillary data set - NOAA Global Multisensor Automated Snow/Ice data (GMASI) - with ADL code modifications is described. Preliminary GMASI impact on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which had shown mis-classification of ice type for certain areas/time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime and the terminator zone are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.
Pan, Wei-Xing; Schmidt, Robert; Wickens, Jeffery R; Hyland, Brian I
2005-06-29
Behavioral conditioning of cue-reward pairing results in a shift of midbrain dopamine (DA) cell activity from responding to the reward to responding to the predictive cue. However, the precise time course and mechanism underlying this shift remain unclear. Here, we report a combined single-unit recording and temporal difference (TD) modeling approach to this question. The data from recordings in conscious rats showed that DA cells retain responses to predicted reward after responses to conditioned cues have developed, at least early in training. This contrasts with previous TD models that predict a gradual stepwise shift in latency with responses to rewards lost before responses develop to the conditioned cue. By exploring the TD parameter space, we demonstrate that the persistent reward responses of DA cells during conditioning are only accurately replicated by a TD model with long-lasting eligibility traces (nonzero values for the parameter lambda) and low learning rate (alpha). These physiological constraints for TD parameters suggest that eligibility traces and low per-trial rates of plastic modification may be essential features of neural circuits for reward learning in the brain. Such properties enable rapid but stable initiation of learning when the number of stimulus-reward pairings is limited, conferring significant adaptive advantages in real-world environments.
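The role of long-lasting eligibility traces and a low learning rate can be illustrated with the TD(lambda) sketch below, which uses a complete-serial-compound (tapped delay line) stimulus representation; the parameter values are illustrative and are not the fitted values from the recordings.

import numpy as np

def run_conditioning(n_trials=200, T=20, cue_t=5, reward_t=15,
                     alpha=0.05, gamma=0.98, lam=0.9):
    w = np.zeros(T)                          # one weight per post-cue time step
    errors_at_reward = []
    for _ in range(n_trials):
        x_prev, e = np.zeros(T), np.zeros(T)
        v_prev = 0.0
        for t in range(T):
            x = np.zeros(T)
            if t >= cue_t:
                x[t - cue_t] = 1.0           # time elapsed since cue onset
            v = w @ x
            r = 1.0 if t == reward_t else 0.0
            e = gamma * lam * e + x_prev     # long-lasting eligibility trace
            delta = r + gamma * v - v_prev   # TD error for the previous transition
            w += alpha * delta * e           # slow, trace-weighted update
            if t == reward_t:
                errors_at_reward.append(delta)
            x_prev, v_prev = x, v
    return errors_at_reward

if __name__ == "__main__":
    err = run_conditioning()
    # With lambda near 1 and a low alpha, the TD error at reward declines only gradually.
    print(round(err[0], 3), round(err[-1], 3))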
Open-pNovo: De Novo Peptide Sequencing with Thousands of Protein Modifications.
Yang, Hao; Chi, Hao; Zhou, Wen-Jing; Zeng, Wen-Feng; He, Kun; Liu, Chao; Sun, Rui-Xiang; He, Si-Min
2017-02-03
De novo peptide sequencing has improved remarkably, but sequencing full-length peptides with unexpected modifications is still a challenging problem. Here we present an open de novo sequencing tool, Open-pNovo, for de novo sequencing of peptides with arbitrary types of modifications. Although the search space increases by ∼300 times, Open-pNovo is close to or even ∼10-times faster than the other three proposed algorithms. Furthermore, considering top-1 candidates on three MS/MS data sets, Open-pNovo can recall over 90% of the results obtained by any one traditional algorithm and report 5-87% more peptides, including 14-250% more modified peptides. On a high-quality simulated data set, ∼85% peptides with arbitrary modifications can be recalled by Open-pNovo, while hardly any results can be recalled by others. In summary, Open-pNovo is an excellent tool for open de novo sequencing and has great potential for discovering unexpected modifications in the real biological applications.
Nanoscale Surface Modifications of Orthopaedic Implants: State of the Art and Perspectives
Staruch, RMT; Griffin, MF; Butler, PEM
2016-01-01
Background: Orthopaedic implants such as the total hip or total knee replacement are examples of surgical interventions with postoperative success rates of over 90% at 10 years. Implant failure is associated with wear particles and pain that requires surgical revision. Improving the implant - bone surface interface is a key area for biomaterial research for future clinical applications. Current implants utilise mechanical, chemical or physical methods for surface modification. Methods: A review of all literature concerning the nanoscale surface modification of orthopaedic implant technology was conducted. Results: The techniques and fabrication methods of nanoscale surface modifications are discussed in detail, including benefits and potential pitfalls. Future directions for nanoscale surface technology are explored. Conclusion: Future understanding of the role of mechanical cues and protein adsorption will enable greater flexibility in surface control. The aim of this review is to investigate and summarise the current concepts and future directions for controlling the implant nanosurface to improve interactions. PMID:28217214
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map.
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-09-11
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions that may contain aircraft are detected within the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately, and decreases the false alarm rate.
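The CFAR-type segmentation step can be sketched in Python as a cell-averaging detector applied to a 2D saliency map, as below; the window sizes, threshold factor, and synthetic map are illustrative, and this is not the exact detector used in the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def cfar_detect(saliency, guard=3, train=9, factor=3.0):
    # Estimate the local clutter level from a ring of training cells (outer window
    # minus guard window) and declare a detection when the pixel exceeds it by `factor`.
    outer_sum = uniform_filter(saliency, size=2 * train + 1) * (2 * train + 1) ** 2
    guard_sum = uniform_filter(saliency, size=2 * guard + 1) * (2 * guard + 1) ** 2
    n_train = (2 * train + 1) ** 2 - (2 * guard + 1) ** 2
    background = (outer_sum - guard_sum) / n_train
    return saliency > factor * background

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    saliency = rng.rayleigh(scale=1.0, size=(200, 200))   # clutter-like saliency map
    saliency[100:105, 100:105] += 15.0                    # bright, aircraft-like blob
    mask = cfar_detect(saliency)
    print(mask.sum(), mask[102, 102])                     # total detections; blob centre detected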
Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator
NASA Astrophysics Data System (ADS)
Rehmatullah, Faizan
In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of the simulator workspace for every maneuver while reproducing the motion perceived in the vehicle. With the ability to handle nonlinear constraints, model predictive control allows us to incorporate workspace limitations.
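A minimal single-axis sketch of a model-predictive motion-cueing step is given below in Python: the platform is modelled as a double integrator, the cost trades specific-force tracking against drift from the neutral position, and the workspace limit enters as an inequality constraint. The weights, limits, and the generic SLSQP solver are illustrative choices, not those of the simulator described above.

import numpy as np
from scipy.optimize import minimize

def mpc_step(x0, v0, a_ref, dt=0.05, horizon=20, x_max=0.5, w_track=1.0, w_pos=0.2):
    def rollout(a):
        # Integrate the double-integrator platform model over the horizon.
        x, v, xs = x0, v0, []
        for ak in a:
            v = v + ak * dt
            x = x + v * dt
            xs.append(x)
        return np.array(xs)

    def cost(a):
        xs = rollout(a)
        # Track the reference specific force while penalizing excursion from neutral.
        return w_track * np.sum((a - a_ref) ** 2) + w_pos * np.sum(xs ** 2)

    cons = [{"type": "ineq", "fun": lambda a: x_max - np.abs(rollout(a)).max()}]
    res = minimize(cost, np.zeros(horizon), method="SLSQP", constraints=cons)
    return res.x[0]        # receding horizon: apply only the first acceleration

if __name__ == "__main__":
    a_ref = np.full(20, 2.0)               # sustained 2 m/s^2 cue requested by the vehicle model
    print(mpc_step(x0=0.0, v0=0.0, a_ref=a_ref))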
Ihssen, Niklas; Sokunbi, Moses O; Lawrence, Andrew D; Lawrence, Natalia S; Linden, David E J
2017-06-01
FMRI-based neurofeedback transforms functional brain activation in real-time into sensory stimuli that participants can use to self-regulate brain responses, which can aid the modification of mental states and behavior. Emerging evidence supports the clinical utility of neurofeedback-guided up-regulation of hypoactive networks. In contrast, down-regulation of hyperactive neural circuits appears more difficult to achieve. There are conditions though, in which down-regulation would be clinically useful, including dysfunctional motivational states elicited by salient reward cues, such as food or drug craving. In this proof-of-concept study, 10 healthy females (mean age = 21.40 years, mean BMI = 23.53) who had fasted for 4 h underwent a novel 'motivational neurofeedback' training in which they learned to down-regulate brain activation during exposure to appetitive food pictures. FMRI feedback was given from individually determined target areas and through decreases/increases in food picture size, thus providing salient motivational consequences in terms of cue approach/avoidance. Our preliminary findings suggest that motivational neurofeedback is associated with functionally specific activation decreases in diverse cortical/subcortical regions, including key motivational areas. There was also preliminary evidence for a reduction of hunger after neurofeedback and an association between down-regulation success and the degree of hunger reduction. Decreasing neural cue responses by motivational neurofeedback may provide a useful extension of existing behavioral methods that aim to modulate cue reactivity. Our pilot findings indicate that reduction of neural cue reactivity is not achieved by top-down regulation but arises in a bottom-up manner, possibly through implicit operant shaping of target area activity.
Derivative Free Gradient Projection Algorithms for Rotation
ERIC Educational Resources Information Center
Jennrich, Robert I.
2004-01-01
A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…
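The core of the modification, replacing analytic gradients of the rotation criterion with numerical ones inside a gradient projection step, can be sketched as follows. This is a minimal illustration assuming the varimax criterion and a central-difference gradient; step sizes and tolerances are made up, and it is not Jennrich's published implementation.

```python
# Sketch of a derivative-free gradient projection (GP) rotation step: the gradient
# of the rotation criterion Q with respect to the rotation matrix T is approximated
# by central differences, so only Q itself must be programmed.  Varimax is used as
# the example criterion; parameters are illustrative.
import numpy as np

def varimax_Q(Lam):
    """Negative varimax criterion (so smaller is better)."""
    return -np.sum(np.var(Lam ** 2, axis=0))

def numerical_gradient(A, T, Q, eps=1e-6):
    G = np.zeros_like(T)
    for i in range(T.shape[0]):
        for j in range(T.shape[1]):
            Tp, Tm = T.copy(), T.copy()
            Tp[i, j] += eps
            Tm[i, j] -= eps
            G[i, j] = (Q(A @ Tp) - Q(A @ Tm)) / (2 * eps)
    return G

def gp_rotate(A, n_iter=200, alpha=1.0):
    """Orthogonal rotation of a loading matrix A by gradient projection."""
    T = np.eye(A.shape[1])
    for _ in range(n_iter):
        G = numerical_gradient(A, T, varimax_Q)
        U, _, Vt = np.linalg.svd(T - alpha * G)   # project onto orthogonal matrices
        T_new = U @ Vt
        if varimax_Q(A @ T_new) > varimax_Q(A @ T) - 1e-12:
            alpha *= 0.5                           # simple backtracking
        else:
            T = T_new
    return A @ T, T

rotated, T = gp_rotate(np.random.default_rng(0).normal(size=(10, 3)))
```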
Epigenetic Regulation of Myeloid Cells
IVASHKIV, LIONEL B.; PARK, SUNG HO
2017-01-01
Epigenetic regulation in myeloid cells is crucial for cell differentiation and activation in response to developmental and environmental cues. Epigenetic control involves posttranslational modification of DNA or chromatin, and is also coupled to upstream signaling pathways and transcription factors. In this review, we summarize key epigenetic events and how dynamics in the epigenetic landscape of myeloid cells shape development, immune activation, and innate immune memory. PMID:27337441
A generalized method for multiple robotic manipulator programming applied to vertical-up welding
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Cook, George E.; Andersen, Kristinn; Barnett, Robert Joel; Zein-Sabattou, Saleh
1991-01-01
The application of a weld programming algorithm to vertical-up welding, which is frequently desired for variable polarity plasma arc welding (VPPAW), is described. The basic algorithm performs three tasks simultaneously: control of the robotic mechanism so that proper torch motion is achieved while the sum-of-squares of joint displacement is minimized; control of the torch while the part is maintained in a desirable orientation; and control of the wire feed mechanism location with respect to the moving welding torch. A modification of this algorithm that permits it to be used for vertical-up welding is also presented. The details of this modification are discussed, and simulation examples are provided for illustration and verification.
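The "minimum sum-of-squares of joint displacement" idea maps directly onto the pseudo-inverse of the manipulator Jacobian: among all joint steps that realize a commanded torch displacement, the pseudo-inverse solution has the smallest squared norm. The sketch below assumes a planar three-link arm purely for illustration; it is not the VPPAW robot or algorithm of the paper.

```python
# Sketch of resolved-rate motion with minimum sum-of-squares joint displacement:
# for a redundant arm, dq = pinv(J) @ dx is the joint step of least squared norm
# that still realizes the commanded torch displacement dx.  The planar 3-link arm
# and link lengths are illustrative assumptions.
import numpy as np

L = np.array([0.4, 0.3, 0.2])          # link lengths [m] (assumed)

def fk(q):
    """Tip position of a planar 3-link arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(angles[j:]))
    return J

def track(q, path):
    """One resolved-rate step per waypoint (a real controller would interpolate)."""
    qs = [q.copy()]
    for target in path:
        dx = target - fk(q)
        dq = np.linalg.pinv(jacobian(q)) @ dx   # minimum-norm joint step
        q = q + dq
        qs.append(q.copy())
    return np.array(qs)

path = [np.array([0.6, 0.1 * k]) for k in range(1, 6)]   # a vertical-up segment
print(track(np.array([0.1, 0.2, 0.3]), path))
```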
Processing large remote sensing image data sets on Beowulf clusters
Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail
2003-01-01
High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.
78 FR 703 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-04
... Value: Major Defense Equipment $110 million Other $30 million Total $140 million * as defined in Section... Cueing System. The software algorithms are the most sensitive portion of the AIM-9X-2 missile. The software continues to be modified via a pre- planned product improvement (P\\3\\I) program in order to...
76 FR 72180 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-22
...) Description and Quantity or Quantities of Articles or Services under Consideration for Purchase: 20 AIM-9X-2.... (vii) Sensitivity of Technology Contained in the Defense Article or Defense Services Proposed to be... Helmet Mounted Cueing System. The software algorithms are the most sensitive portion of the AIM-9X-2...
Control of a haptic gear shifting assistance device utilizing a magnetorheological clutch
NASA Astrophysics Data System (ADS)
Han, Young-Min; Choi, Seung-Bok
2014-10-01
This paper proposes a haptic clutch driven gear shifting assistance device that can help when the driver shifts the gear of a transmission system. In order to achieve this goal, a magnetorheological (MR) fluid-based clutch is devised to accommodate the rotary motion of the accelerator pedal to which it is integrated. The proposed MR clutch is then manufactured, and its transmission torque is experimentally evaluated according to the magnetic field intensity. The manufactured MR clutch is integrated with the accelerator pedal to transmit a haptic cue signal to the driver. The ensuing control issue is to cue the driver to shift the gear via the haptic force. Therefore, a gear-shifting decision algorithm is constructed by considering the vehicle engine speed together with engine combustion dynamics, vehicle dynamics and driving resistance. The algorithm is then integrated with a compensation strategy for attaining the desired haptic force. In this work, the compensator is also developed and implemented through the discrete version of the inverse hysteretic model. The control performances, such as the haptic force tracking responses and fuel consumption, are experimentally evaluated.
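A gear-shifting decision rule of the kind that would trigger the haptic cue can be sketched with a simple engine-speed hysteresis. The thresholds and gear count below are invented for illustration; the paper's algorithm additionally accounts for combustion dynamics, vehicle dynamics and driving resistance.

```python
# Minimal sketch of a gear-shifting decision rule driven by engine speed with
# hysteresis.  RPM thresholds and gear count are illustrative placeholders.
UPSHIFT_RPM, DOWNSHIFT_RPM = 2500.0, 1200.0
N_GEARS = 5

def shift_decision(engine_rpm: float, gear: int) -> str:
    """Return 'up', 'down' or 'hold' for the current engine speed and gear."""
    if engine_rpm > UPSHIFT_RPM and gear < N_GEARS:
        return "up"            # cue the driver to shift up (haptic pulse)
    if engine_rpm < DOWNSHIFT_RPM and gear > 1:
        return "down"          # cue the driver to shift down
    return "hold"

for rpm, gear in [(2800, 2), (1000, 3), (1800, 4)]:
    print(rpm, gear, shift_decision(rpm, gear))
```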
Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min
2015-11-03
Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, mainly because of unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modification or mutation can be searched efficiently. A new re-ranking algorithm is used to distinguish correct peptide-spectrum matches from random ones. The algorithm was tested on several HCD datasets, and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
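The fragment ion index idea, mapping binned fragment masses to the peptides that produce them so a spectrum can be pre-scored by shared peaks rather than compared against every candidate, can be sketched as a toy. The residue masses, b-ion-only fragmentation and tolerance below are simplifications and are not the Alioth/pFind implementation.

```python
# Toy sketch of a fragment ion index: theoretical fragment masses are binned and
# mapped to peptide ids, so a spectrum is scored by counting shared peaks.
# Masses, tolerance and b-ion-only fragmentation are illustrative simplifications.
from collections import defaultdict

AA = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
      "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
BIN = 0.02      # m/z bin width (assumed tolerance)

def b_ions(peptide):
    mass, out = 1.00794, []          # crude proton term; illustrative only
    for aa in peptide[:-1]:
        mass += AA[aa]
        out.append(mass)
    return out

def build_index(peptides):
    index = defaultdict(set)
    for pid, pep in enumerate(peptides):
        for mz in b_ions(pep):
            index[round(mz / BIN)].add(pid)
    return index

def score(index, peaks):
    counts = defaultdict(int)
    for mz in peaks:
        for pid in index.get(round(mz / BIN), ()):
            counts[pid] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

peptides = ["GASPVK", "LLVKR", "SSPGAK"]
idx = build_index(peptides)
print(score(idx, b_ions("GASPVK")))   # peptide 0 should rank first
```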
The formation and distribution of hippocampal synapses on patterned neuronal networks
NASA Astrophysics Data System (ADS)
Dowell-Mesfin, Natalie M.
Communication within the central nervous system is highly orchestrated, with neurons forming trillions of specialized junctions called synapses. In vivo, biochemical and topographical cues can regulate neuronal growth. Biochemical cues also influence synaptogenesis and synaptic plasticity. The effects of topography on the development of synapses have been less studied. In vitro, neuronal growth is unorganized and complex, making it difficult to study the development of networks. Patterned topographical cues guide and control the growth of neuronal processes (axons and dendrites) into organized networks. The aim of this dissertation was to determine if patterned topographical cues can influence synapse formation and distribution. Standard fabrication and compression molding procedures were used to produce silicon masters and polystyrene replicas with topographical cues presented as 1 μm high pillars with diameters of 0.5 and 2.0 μm and gaps of 1.0 to 5.0 μm. Embryonic rat hippocampal neurons were grown on the patterned surfaces. A developmental analysis with immunocytochemistry was used to assess the distribution of pre- and post-synaptic proteins. Activity-dependent pre-synaptic vesicle uptake using functional imaging dyes was also performed. Adaptive filtering computer algorithms identified synapses by segmenting juxtaposed pairs of pre- and post-synaptic labels. Synapse number and area were automatically extracted from each deconvolved data set. In addition, neuronal processes were traced automatically to assess changes in synapse distribution. The results of these experiments demonstrated that patterned topographic cues can induce organized and functional neuronal networks that can serve as models for the study of synapse formation and plasticity as well as for the development of neuroprosthetic devices.
Modifying a numerical algorithm for solving the matrix equation X + AXᵀB = C
NASA Astrophysics Data System (ADS)
Vorontsov, Yu. O.
2013-06-01
Certain modifications are proposed for a numerical algorithm solving the matrix equation X + AXᵀB = C. By keeping the intermediate results in storage and repeatedly using them, it is possible to reduce the total complexity of the algorithm from O(n⁴) to O(n³) arithmetic operations.
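For very small matrices the equation can be checked with a naive vectorized solve, which makes the structure of the problem explicit even though it is nothing like the paper's O(n³) scheme. The sketch below uses the standard column-major identity vec(AYB) = (Bᵀ ⊗ A) vec(Y) together with the commutation matrix; it is an O(n⁶) dense baseline for verification only.

```python
# Naive baseline for X + A X^T B = C via vectorization: with column-major vec(.),
# vec(A X^T B) = (B^T kron A) K vec(X), where K maps vec(X) to vec(X^T).
# This O(n^6) dense solve is only a correctness check, not the paper's algorithm.
import numpy as np

def commutation(n):
    """K with K @ vec(X) == vec(X.T) for column-major (order='F') vec."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i + j * n, j + i * n] = 1.0
    return K

def solve_small(A, B, C):
    n = A.shape[0]
    M = np.eye(n * n) + np.kron(B.T, A) @ commutation(n)
    x = np.linalg.solve(M, C.flatten(order="F"))
    return x.reshape((n, n), order="F")

rng = np.random.default_rng(1)
A, B, X_true = rng.normal(size=(3, 3, 3))
C = X_true + A @ X_true.T @ B
print(np.allclose(solve_small(A, B, C), X_true))   # True for a generic well-posed case
```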
An Efficient Algorithm for TUCKALS3 on Data with Large Numbers of Observation Units.
ERIC Educational Resources Information Center
Kiers, Henk A. L.; And Others
1992-01-01
A modification of the TUCKALS3 algorithm is proposed that handles three-way arrays of order I x J x K for any I. The reduced work space needed for storing data and increased execution speed make the modified algorithm very suitable for use on personal computers. (SLD)
Shape from texture: an evaluation of visual cues
NASA Astrophysics Data System (ADS)
Mueller, Wolfgang; Hildebrand, Axel
1994-05-01
In this paper an integrated approach is presented to understand and control the influence of texture on shape perception. Following Gibson's hypothesis, which states that texture is a mathematically and psychologically sufficient stimulus for surface perception, we evaluate different perceptual cues. Starting out from the perception-based texture classification introduced by Tamura et al., we build up a uniformly sampled parameter space. For the synthesis of some of our textures we use the texture description language HiLDTe. To acquire the desired texture specification we take advantage of a genetic algorithm. Employing these textures, we perform a number of psychological tests to evaluate the significance of the different texture features. The results derived from the psychological tests are then synthesized to develop new shape-analysis techniques. Since the vanishing point appears to be an important visual cue, we introduce the Hough transform. An outlook on future work within the field of visual computing is provided in the final section.
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-01-01
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map computed under the contextual cues of the apron area. First, candidate regions that may contain targets are detected within the apron area. Second, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, the targets are detected by segmenting the saliency map with a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that the algorithm detects aircraft targets quickly and accurately while decreasing the false alarm rate. PMID:26378543
Rolling scheduling of electric power system with wind power based on improved NNIA algorithm
NASA Astrophysics Data System (ADS)
Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.
2017-11-01
This paper puts forth a rolling modification strategy for day-ahead scheduling of an electric power system with wind power, which takes the unit operation cost increment and the curtailed wind power of the grid as dual modification objectives. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for the solution. The proposed rolling scheduling model further improves the operation cost of the system in the intra-day generation process, enhances the system's accommodation capacity for wind power, and modifies the key transmission section power flow in a rolling manner to satisfy the security constraints of the power grid. The improved NNIA algorithm defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. The model can noticeably guide the direction of antibody evolution, significantly speed up convergence to the final solution, and enhance the local search capability.
An algorithm to count the number of repeated patient data entries with B tree.
Okada, M; Okada, M
1985-04-01
An algorithm to obtain the number of different values that appear a specified number of times in a given data field of a given data file is presented. Basically, a well-known B-tree structure is employed in this study. Some modifications were made to the basic B-tree algorithm. The first step of the modifications is to allow a data item whose values are not necessarily distinct from one record to another to be used as a primary key. When a key value is inserted, the number of previous appearances is counted. At the end of all the insertions, the number of key values which are unique in the tree, the number of key values which appear twice, three times, and so forth are obtained. This algorithm is especially powerful for a large file in disk storage.
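The bookkeeping itself (how many distinct values occur exactly once, exactly twice, and so on) is easy to state with an in-memory hash map; the paper's contribution is doing the same tally with a modified B-tree so that very large disk-resident files can be handled. A minimal in-memory sketch:

```python
# Equivalent in-memory bookkeeping with a hash map: count how many distinct values
# in a data field occur exactly k times.  The paper achieves the same tally with a
# modified B-tree suited to large disk-resident files; this shows only the counting.
from collections import Counter

def repetition_profile(values):
    per_value = Counter(values)                 # value -> number of appearances
    profile = Counter(per_value.values())       # k -> number of values seen k times
    return dict(sorted(profile.items()))

field = ["A12", "B07", "A12", "C33", "B07", "A12", "D01"]
print(repetition_profile(field))   # {1: 2, 2: 1, 3: 1}
```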
A novel speech watermarking algorithm by line spectrum pair modification
NASA Astrophysics Data System (ADS)
Zhang, Qian; Yang, Senbin; Chen, Guang; Zhou, Jun
2011-10-01
To explore digital watermarking specifically suitable for the speech domain, this paper first experimentally investigates the properties of line spectrum pair (LSP) parameters. The results show that the differences between contiguous LSPs are robust against common signal processing operations and that small modifications of LSPs are imperceptible to the human auditory system (HAS). Based on these conclusions, three contiguous LSPs of a speech frame are selected to embed a watermark bit. The middle LSP is slightly altered to modify the differences between these LSPs when embedding the watermark. Correspondingly, the watermark is extracted by comparing these differences. The proposed algorithm's transparency is adjustable to meet the needs of different applications. The algorithm has good robustness against additive noise, quantization, amplitude scaling and MP3 compression attacks, with a bit error rate (BER) of less than 5%. The capacity is relatively low, at approximately 50 bps.
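The difference-based embedding can be illustrated on precomputed LSP values: the middle LSP of a triple is nudged so that the comparison of the two neighbouring differences encodes one bit. This is one plausible reading of the abstract, not the authors' exact formula; the LPC-to-LSP conversion is omitted, and the embedding strength is an assumed value.

```python
# Sketch of difference-based embedding on precomputed line spectrum pair (LSP)
# values: for a triple (l1, l2, l3) the middle value is set so that the sign of
# (l2 - l1) - (l3 - l2) carries one watermark bit; extraction compares the two
# differences.  In practice the marked value must stay within (l1, l3) so the LSP
# ordering remains valid.  The rule and DELTA below are illustrative assumptions.
import numpy as np

DELTA = 0.005   # embedding strength in normalized frequency (assumed)

def embed_bit(l1, l2, l3, bit):
    mid = 0.5 * (l1 + l3)
    return mid + DELTA if bit else mid - DELTA   # new middle LSP

def extract_bit(l1, l2, l3):
    return int((l2 - l1) > (l3 - l2))

frame_lsps = np.sort(np.random.default_rng(2).uniform(0.05, 0.45, 10))
l1, l2, l3 = frame_lsps[3:6]
for bit in (0, 1):
    l2_marked = embed_bit(l1, l2, l3, bit)
    assert extract_bit(l1, l2_marked, l3) == bit
print("round-trip ok")
```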
Doubling down on phosphorylation as a variable peptide modification.
Cooper, Bret
2016-09-01
Some mass spectrometrists believe that searching for variable PTMs like phosphorylation of serine or threonine when using database-search algorithms to interpret peptide tandem mass spectra will increase false-positive matching. The basis for this is the premise that the algorithm compares a spectrum to both a nonphosphorylated peptide candidate and a phosphorylated candidate, which is double the number of candidates compared to a search with no possible phosphorylation. Hence, if the search space doubles, false-positive matching could increase accordingly as the algorithm considers more candidates to which false matches could be made. In this study, it is shown that the search for variable phosphoserine and phosphothreonine modifications does not always double the search space or unduly impinge upon the FDR. A breakdown of how one popular database-search algorithm deals with variable phosphorylation is presented. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Segura, Diego F; Nussenbaum, Ana L; Viscarret, Mariana M; Devescovi, Francisco; Bachmann, Guillermo E; Corley, Juan C; Ovruski, Sergio M; Cladera, Jorge L
2016-01-01
Parasitoids searching for polyphagous herbivores can find their hosts in a variety of habitats. Under this scenario, chemical cues from the host habitat (not related to the host) represent poor indicators of host location. Hence, it is unlikely that naïve females show a strong response to host habitat cues, which would become important only if the parasitoids learn to associate such cues to the host presence. This concept does not consider that habitats can vary in profitability or host nutritional quality, which according to the optimal foraging theory and the preference-performance hypothesis (respectively) could shape the way in which parasitoids make use of chemical cues from the host habitat. We assessed innate preference in the fruit fly parasitoid Diachasmimorpha longicaudata among chemical cues from four host habitats (apple, fig, orange and peach) using a Y-tube olfactometer. Contrary to what was predicted, we found a hierarchic pattern of preference. The parasitism rate realized on these fruit species and the weight of the host correlates positively, to some extent, with the preference pattern, whereas preference did not correlate with survival and fecundity of the progeny. As expected for a parasitoid foraging for generalist hosts, habitat preference changed markedly depending on their previous experience and the abundance of hosts. These findings suggest that the pattern of preference for host habitats is attributable to differences in encounter rate and host quality. Host habitat preference seems to be, however, quite plastic and easily modified according to the information obtained during foraging.
Segura, Diego F.; Nussenbaum, Ana L.; Viscarret, Mariana M.; Devescovi, Francisco; Bachmann, Guillermo E.; Corley, Juan C.; Ovruski, Sergio M.; Cladera, Jorge L.
2016-01-01
Parasitoids searching for polyphagous herbivores can find their hosts in a variety of habitats. Under this scenario, chemical cues from the host habitat (not related to the host) represent poor indicators of host location. Hence, it is unlikely that naïve females show a strong response to host habitat cues, which would become important only if the parasitoids learn to associate such cues to the host presence. This concept does not consider that habitats can vary in profitability or host nutritional quality, which according to the optimal foraging theory and the preference-performance hypothesis (respectively) could shape the way in which parasitoids make use of chemical cues from the host habitat. We assessed innate preference in the fruit fly parasitoid Diachasmimorpha longicaudata among chemical cues from four host habitats (apple, fig, orange and peach) using a Y-tube olfactometer. Contrary to what was predicted, we found a hierarchic pattern of preference. The parasitism rate realized on these fruit species and the weight of the host correlates positively, to some extent, with the preference pattern, whereas preference did not correlate with survival and fecundity of the progeny. As expected for a parasitoid foraging for generalist hosts, habitat preference changed markedly depending on their previous experience and the abundance of hosts. These findings suggest that the pattern of preference for host habitats is attributable to differences in encounter rate and host quality. Host habitat preference seems to be, however, quite plastic and easily modified according to the information obtained during foraging. PMID:27007298
Parameter Estimation for a Hybrid Adaptive Flight Controller
NASA Technical Reports Server (NTRS)
Campbell, Stefan F.; Nguyen, Nhan T.; Kaneshige, John; Krishnakumar, Kalmanje
2009-01-01
This paper expands on the hybrid control architecture developed at the NASA Ames Research Center by addressing issues related to indirect adaptation using the recursive least squares (RLS) algorithm. Specifically, the hybrid control architecture is an adaptive flight controller that features both direct and indirect adaptation techniques. This paper focuses almost exclusively on the modifications necessary to achieve quality indirect adaptive control. Additionally, this paper presents results that, using a full non-linear aircraft model, demonstrate the effectiveness of the hybrid control architecture given drastic changes in an aircraft's dynamics. Throughout the development of this topic, a thorough discussion of the RLS algorithm as a system identification technique is provided, along with results from seven well-known modifications to the popular RLS algorithm.
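The baseline estimator that such modifications start from is the standard recursive least squares update with an exponential forgetting factor. The sketch below is generic: the regressor layout, forgetting factor and toy identification problem are illustrative, not the hybrid controller's actual parameterization, and none of the paper's seven modifications (e.g., covariance management schemes) are implemented.

```python
# Generic recursive least squares (RLS) with an exponential forgetting factor.
# Layout, forgetting factor and the toy system below are illustrative only.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)         # parameter estimate
        self.P = np.eye(n_params) * p0          # covariance of the estimate
        self.lam = lam                          # forgetting factor (0 < lam <= 1)

    def update(self, phi, y):
        """One measurement y = phi . theta + noise."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)      # gain vector
        err = y - phi @ self.theta
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err

# Identify y = 2*u - 0.5*u_prev from noisy data.
rng = np.random.default_rng(3)
est, u_prev = RLS(2), 0.0
for _ in range(200):
    u = rng.normal()
    y = 2.0 * u - 0.5 * u_prev + 0.01 * rng.normal()
    est.update(np.array([u, u_prev]), y)
    u_prev = u
print(est.theta)     # approaches [2.0, -0.5]
```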
Limited distortion in LSB steganography
NASA Astrophysics Data System (ADS)
Kim, Younhee; Duric, Zoran; Richards, Dana
2006-02-01
It is well known that all information hiding methods that modify the least significant bits introduce distortions into the cover objects. Those distortions have been utilized by steganalysis algorithms to detect that the objects had been modified. It has been proposed that only coefficients whose modification does not introduce large distortions should be used for embedding. In this paper we propose an efficient algorithm for information hiding in the LSBs of JPEG coefficients. Our algorithm uses parity coding to choose the coefficients whose modifications introduce minimal additional distortion. We derive the expected value of the additional distortion as a function of the message length and the probability distribution of the JPEG quantization errors of cover images. Our experiments show close agreement between the theoretical prediction and the actual additional distortion.
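Parity coding lets one message bit ride on a whole group of coefficients, so at most one coefficient per group has to change and the embedder is free to pick the least damaging one. The sketch below uses an invented selection rule (largest magnitude) purely to keep the example self-contained; the paper's point is to pick the coefficient whose change adds the least distortion given the JPEG quantization errors.

```python
# Sketch of parity-coded LSB embedding: one message bit is the XOR of the LSBs of
# a group of K coefficients, so at most one coefficient per group changes.  The
# selection rule used here (largest magnitude) is a placeholder, not the paper's
# distortion-minimizing choice.
import numpy as np

K = 4   # coefficients per group (assumed)

def embed(coeffs, bits):
    c = coeffs.copy()
    for g, bit in enumerate(bits):
        group = c[g * K:(g + 1) * K]                 # view into c
        if (np.bitwise_and(np.abs(group), 1).sum() % 2) != bit:
            idx = np.argmax(np.abs(group))           # simplified selection rule
            group[idx] += 1 if group[idx] >= 0 else -1   # flips that LSB
    return c

def extract(coeffs, n_bits):
    return [int(np.bitwise_and(np.abs(coeffs[g * K:(g + 1) * K]), 1).sum() % 2)
            for g in range(n_bits)]

rng = np.random.default_rng(4)
coeffs = rng.integers(-30, 30, size=40)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(coeffs, msg)
assert extract(stego, len(msg)) == msg
print("message recovered; coefficients changed:", int(np.sum(stego != coeffs)))
```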
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weight of the state quantity to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on a modified IEEE 14 bus system.
NASA Astrophysics Data System (ADS)
Hertono, G. F.; Ubadah; Handari, B. D.
2018-03-01
The traveling salesman problem (TSP) is a famous problem of finding the shortest tour that visits every vertex in a given set exactly once, except the first vertex. This paper discusses three modification methods to solve the TSP by combining Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and the 3-Opt algorithm. The ACO is used to find the solution of the TSP, while the PSO is implemented to find the best values of the parameters α and β used in the ACO. The 3-Opt is then applied to reduce the total tour length of the feasible solution obtained by the ACO. In the first modification, 3-Opt is used to reduce the total tour length of the feasible solutions obtained at each iteration; in the second modification, 3-Opt is used to reduce the total tour length of the entire solution obtained at every iteration; and in the third modification, 3-Opt is used to reduce the total tour length of different solutions obtained at each iteration. Results are tested using 6 benchmark problems taken from TSPLIB by calculating the relative error to the best known solution as well as the running time. Among these modifications, only the second and third give satisfactory results, though the second one needs more execution time compared to the third.
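The ACO core that the PSO tunes can be sketched compactly: ants build tours city by city with probability proportional to pheromone^α times (1/distance)^β, and the best tour reinforces the trail. The parameter values, ant counts and reinforcement rule below are illustrative, and the PSO search over (α, β) and the 3-Opt refinement described in the paper are omitted.

```python
# Compact Ant Colony Optimization for the TSP.  Parameters are illustrative; the
# PSO tuning of alpha/beta and the 3-Opt local search of the paper are omitted.
import numpy as np

def tour_length(tour, D):
    return sum(D[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(D, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    tau = np.ones((n, n))                   # pheromone
    eta = 1.0 / (D + np.eye(n))             # heuristic desirability
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                cur = tour[-1]
                cand = np.array(sorted(unvisited))
                w = (tau[cur, cand] ** alpha) * (eta[cur, cand] ** beta)
                tour.append(rng.choice(cand, p=w / w.sum()))
                unvisited.remove(tour[-1])
            length = tour_length(tour, D)
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                  # evaporation
        for i in range(n):                  # reinforce the best tour found so far
            a, b = best_tour[i], best_tour[(i + 1) % n]
            tau[a, b] += 1.0 / best_len
            tau[b, a] += 1.0 / best_len
    return best_tour, best_len

pts = np.random.default_rng(5).random((12, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(aco_tsp(D))
```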
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilaly, A.K.; Sikdar, S.K.
In this study, the authors introduced several modifications to the WAR (waste reduction) algorithm developed earlier. These modifications were made for systematically handling sensitivity analysis and various tasks of waste minimization. A design hierarchy was formulated to promote appropriate waste reduction tasks at designated levels of the hierarchy. A sensitivity coefficient was used to measure the relative impacts of process variables on the pollution index of a process. The use of the WAR algorithm was demonstrated by a fermentation process for making penicillin.
Speaker Recognition Through NLP and CWT Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown-VanHoozer, S.A.; Kercel, S.W.; Tucker, R.W.
The objective of this research is to develop a system capable of identifying speakers on wiretaps from a large database (>500 speakers) with a short search time (<30 seconds) and with better than 90% accuracy. Much previous research in speaker recognition has led to algorithms that produced encouraging preliminary results but were overwhelmed when applied to populations of more than a dozen or so different speakers. The authors are investigating a solution to the "large population" problem by seeking two completely different kinds of characterizing features. These features are extracted using the techniques of Neuro-Linguistic Programming (NLP) and the continuous wavelet transform (CWT). NLP extracts precise neurological, verbal and non-verbal information, and assimilates the information into useful patterns. These patterns are based on specific cues demonstrated by each individual, and provide ways of determining congruency between verbal and non-verbal cues. The primary NLP modalities are characterized through word spotting (or verbal predicate cues, e.g., see, sound, feel, etc.), while the secondary modalities would be characterized through the speech transcription used by the individual. This has the practical effect of reducing the size of the search space, and greatly speeding up the process of identifying an unknown speaker. The wavelet-based line of investigation concentrates on using vowel phonemes and non-verbal cues, such as tempo. The rationale for concentrating on vowels is that there are a limited number of vowel phonemes, and at least one of them usually appears in even the shortest of speech segments. Using the fast CWT algorithm, the details of both the formant frequency and the glottal excitation characteristics can be easily extracted from voice waveforms. The differences in the glottal excitation waveforms as well as the formant frequency are evident in the CWT output. More significantly, the CWT reveals significant detail of the glottal excitation waveform.
Speaker recognition through NLP and CWT modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown-VanHoozer, A.; Kercel, S. W.; Tucker, R. W.
The objective of this research is to develop a system capable of identifying speakers on wiretaps from a large database (>500 speakers) with a short search time (<30 seconds) and with better than 90% accuracy. Much previous research in speaker recognition has led to algorithms that produced encouraging preliminary results but were overwhelmed when applied to populations of more than a dozen or so different speakers. The authors are investigating a solution to the "huge population" problem by seeking two completely different kinds of characterizing features. These features are extracted using the techniques of Neuro-Linguistic Programming (NLP) and the continuous wavelet transform (CWT). NLP extracts precise neurological, verbal and non-verbal information, and assimilates the information into useful patterns. These patterns are based on specific cues demonstrated by each individual, and provide ways of determining congruency between verbal and non-verbal cues. The primary NLP modalities are characterized through word spotting (or verbal predicate cues, e.g., see, sound, feel, etc.), while the secondary modalities would be characterized through the speech transcription used by the individual. This has the practical effect of reducing the size of the search space, and greatly speeding up the process of identifying an unknown speaker. The wavelet-based line of investigation concentrates on using vowel phonemes and non-verbal cues, such as tempo. The rationale for concentrating on vowels is that there are a limited number of vowel phonemes, and at least one of them usually appears in even the shortest of speech segments. Using the fast CWT algorithm, the details of both the formant frequency and the glottal excitation characteristics can be easily extracted from voice waveforms. The differences in the glottal excitation waveforms as well as the formant frequency are evident in the CWT output. More significantly, the CWT reveals significant detail of the glottal excitation waveform.
Place learning overrides innate behaviors in Drosophila.
Baggett, Vincent; Mishra, Aditi; Kehrer, Abigail L; Robinson, Abbey O; Shaw, Paul; Zars, Troy
2018-03-01
Animals in a natural environment confront many sensory cues. Some of these cues bias behavioral decisions independent of experience, and action selection can reveal a stimulus-response (S-R) connection. However, in a changing environment it would be a benefit for an animal to update behavioral action selection based on experience, and learning might modify even strong S-R relationships. How animals use learning to modify S-R relationships is a largely open question. Three sensory stimuli, air, light, and gravity sources, were presented to individual Drosophila melanogaster in both naïve and place conditioning situations. Flies were tested for a potential modification of the S-R relationships of anemotaxis, phototaxis, and negative gravitaxis by a contingency that associated place with high temperature. With two stimuli, significant S-R relationships were abandoned when the cue was in conflict with the place learning contingency. The roles of the dunce (dnc) cAMP-phosphodiesterase and the rutabaga (rut) adenylyl cyclase were examined in all conditions. Both dnc1 and rut2080 mutant flies failed to display significant S-R relationships with two attractive cues, and have characteristically lower conditioning scores under most conditions. Thus, learning can have profound effects on separate native S-R relationships in multiple contexts, and mutation of the dnc and rut genes reveals complex effects on behavior. © 2018 Baggett et al.; Published by Cold Spring Harbor Laboratory Press.
Petit, Christophe; Le Ru, Bruno; Dupas, Stéphane; Frérot, Brigitte; Ahuya, Peter; Kaiser-Arnauld, Laure; Harry, Myriam; Calatayud, Paul-André
2015-01-01
In Lepidoptera, host plant selection is first conditioned by the oviposition site preference of adult females, followed by the feeding site preference of larvae. Dietary experience of plant volatile cues can induce larval and adult host plant preference. We investigated how the parents' and self-experience induce host preference in adult females and larvae of three lepidopteran stem borer species with different host plant ranges, namely the polyphagous Sesamia nonagrioides, the oligophagous Busseola fusca and the monophagous Busseola nairobica, and whether this induction can be linked to a neurophysiological phenotypic plasticity. The three species were conditioned on an artificial diet enriched with vanillin from the neonate larval to the adult stage during two generations. Thereafter, two-choice tests on both larvae and adults using a Y-tube olfactometer and electrophysiological (electroantennography [EAG] recordings) experiments on adults were carried out. In the polyphagous species, the induction of preference for a new olfactory cue (vanillin) by females and 3rd instar larvae was determined by parents' and self-experience, without any modification of the sensitivity of the females' antennae. No preference induction was found in the oligophagous and monophagous species. Our results suggest that lepidopteran stem borers may acquire preferences for new olfactory cues from the larval to the adult stage, as described by Hopkins' host selection principle (HHSP), the neo-Hopkins' principle, and the concept of 'chemical legacy.' PMID:26288070
MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION
In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...
Kastberger, Gerald; Maurer, Michael; Weihmann, Frank; Ruether, Matthias; Hoetzl, Thomas; Kranner, Ilse; Bischof, Horst
2011-02-08
The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study aspects of other mass phenomena that involve active and passive movements of individual agents in densely packed clusters.
2011-01-01
Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study aspects of other mass phenomena that involve active and passive movements of individual agents in densely packed clusters. PMID:21303539
Residuals-Based Subgraph Detection with Cue Vertices
2015-11-30
Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C
2011-05-01
A large number of single-channel noise-reduction algorithms have been proposed based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages, including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. A comparative evaluation and investigation of various single-channel noise-reduction algorithms applied to noisy speech taken from three languages (Chinese, Japanese, and English) is undertaken in this study. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. Intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of noise-reduction algorithms across the three languages were observed.
González, Felisa; Quinn, Jennifer J; Fanselow, Michael S
2003-01-01
Rats were conditioned across 2 consecutive days where a single unsignaled footshock was presented in the presence of specific contextual cues. Rats were tested with contexts that had additional stimulus components either added or subtracted. Using freezing as a measure of conditioning, removal but not addition of a cue from the training context produced significant generalization decrement. The results are discussed in relation to the R. A. Rescorla and A. R. Wagner (1972), J. M. Pearce (1994), and A. R. Wagner and S. E. Brandon (2001) accounts of generalization. Although the present data are most consistent with elemental models such as Rescorla and Wagner, a slight modification of the Wagner-Brandon replaced-elements model that can account for differences in the pattern of generalization obtained with contexts and discrete conditional stimuli is proposed.
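The asymmetry reported here (removal of a trained cue hurts responding, addition of a novel cue does not) follows directly from an elemental summation rule such as Rescorla-Wagner. The few lines below illustrate that prediction; the learning rate and trial count are arbitrary, and configural processes that real context conditioning involves are deliberately ignored.

```python
# Minimal Rescorla-Wagner style illustration: if responding sums over the elements
# present at test, deleting a trained contextual element removes acquired strength,
# whereas adding a novel element (zero strength) changes nothing.  Parameters are
# illustrative only.
def train(elements, n_trials=10, alpha_beta=0.2, lam=1.0):
    V = {e: 0.0 for e in elements}
    for _ in range(n_trials):
        total = sum(V.values())
        for e in elements:
            V[e] += alpha_beta * (lam - total)    # shared prediction error
    return V

V = train(["A", "B", "C"])                        # trained context = A+B+C
respond = lambda cues: sum(V.get(c, 0.0) for c in cues)
print("trained context :", round(respond(["A", "B", "C"]), 3))
print("element removed :", round(respond(["A", "B"]), 3))           # decrement
print("element added   :", round(respond(["A", "B", "C", "D"]), 3))  # unchanged
```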
Using Mathematical Algorithms to Modify Glomerular Filtration Rate Estimation Equations
Zhu, Bei; Wu, Jianqing; Zhu, Jin; Zhao, Weihong
2013-01-01
Background The equations provide a rapid and low-cost method of evaluating glomerular filtration rate (GFR). Previous studies indicated that the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease-Epidemiology (CKD-EPI) and MacIsaac equations need further modification for application in the Chinese population. Thus, this study was designed to modify the three equations and to compare the diagnostic accuracy of the equations before and after modification. Methodology With the use of 99mTc-DTPA renal dynamic imaging as the reference GFR (rGFR), the MDRD, CKD-EPI and MacIsaac equations were modified by two mathematical algorithms: the hill-climbing and the simulated-annealing algorithms. Results A total of 703 Chinese subjects were recruited, with an average rGFR of 77.14±25.93 ml/min. The entire modification process was based on a random sample of 80% of subjects in each GFR level as a training sample set, with the remaining 20% of subjects as a validation sample set. After modification, the three equations showed significant improvement in slope, intercept, correlation coefficient, root mean square error (RMSE), total deviation index (TDI), and the proportion of estimated GFR (eGFR) within 10% and 30% deviation of rGFR (P10 and P30). Of the three modified equations, the modified CKD-EPI equation showed the best accuracy. Conclusions Mathematical algorithms could be a considerable tool for modifying GFR equations. The accuracy of all three modified equations was significantly improved, with the modified CKD-EPI equation being the optimal one. PMID:23472113
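The simulated-annealing side of such a modification procedure amounts to perturbing the equation coefficients and accepting worse fits with a temperature-controlled probability. The sketch below re-fits an MDRD-style power law eGFR = a·Scr^b·age^c against a reference GFR; the synthetic data, starting coefficients and cooling schedule are illustrative assumptions, not the study's actual values.

```python
# Generic simulated annealing for re-fitting the coefficients of an MDRD-style
# power-law equation against a reference GFR.  Data and schedule are illustrative.
import numpy as np

rng = np.random.default_rng(6)
scr = rng.uniform(0.5, 3.0, 300)                     # serum creatinine (mg/dL)
age = rng.uniform(20, 85, 300)
rgfr = 170 * scr ** -1.0 * age ** -0.2 * np.exp(rng.normal(0, 0.05, 300))

def rmse(params):
    a, b, c = params
    return np.sqrt(np.mean((a * scr ** b * age ** c - rgfr) ** 2))

def anneal(x0, step=0.05, t0=5.0, cooling=0.995, n_iter=5000):
    x, best = np.array(x0, float), np.array(x0, float)
    fx, fbest, t = rmse(x), rmse(x), t0
    for _ in range(n_iter):
        cand = x + rng.normal(scale=step, size=x.size) * np.array([10.0, 0.1, 0.1])
        fc = rmse(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):   # Metropolis acceptance
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand.copy(), fc
        t *= cooling
    return best, fbest

print(anneal([186.0, -1.154, -0.203]))   # MDRD-like starting point
```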
Prediction Of The Expected Safety Performance Of Rural Two-Lane Highways
DOT National Transportation Integrated Search
2000-12-01
This report presents an algorithm for predicting the safety performance of a rural two-lane highway. The accident prediction algorithm consists of base models and accident modification factors for both roadway segments and at-grade intersections on r...
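The structure of that prediction algorithm (a base model scaled by accident modification factors) fits in a few lines. The exponential base-model form and the AMF values below are placeholders rather than the report's calibrated numbers.

```python
# The prediction algorithm's structure in miniature: an expected accident frequency
# from a base model is multiplied by accident modification factors (AMFs), one per
# feature that departs from base conditions.  The base-model form and AMF values
# here are placeholders, not the report's calibrated numbers.
import math

def base_segment_accidents(aadt, length_mi):
    """Illustrative base model: accidents/year for a rural two-lane segment."""
    return aadt * length_mi * 365e-6 * math.exp(-0.5)

def predicted_accidents(aadt, length_mi, amfs):
    n = base_segment_accidents(aadt, length_mi)
    for amf in amfs:          # e.g., lane width, shoulder width, grade, ...
        n *= amf
    return n

# 2 mi segment, 5000 veh/day, narrow lanes (AMF 1.05) and good shoulders (AMF 0.98)
print(round(predicted_accidents(5000, 2.0, [1.05, 0.98]), 3))
```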
Enhancements to AERMOD's building downwash algorithms based on wind-tunnel and Embedded-LES modeling
NASA Astrophysics Data System (ADS)
Monbureau, E. M.; Heist, D. K.; Perry, S. G.; Brouwer, L. H.; Foroutan, H.; Tang, W.
2018-04-01
Knowing the fate of effluent from an industrial stack is important for assessing its impact on human health. AERMOD is one of several Gaussian plume models containing algorithms to evaluate the effect of buildings on the movement of the effluent from a stack. The goal of this study is to improve AERMOD's ability to accurately model important and complex building downwash scenarios by incorporating knowledge gained from a recently completed series of wind tunnel studies and complementary large eddy simulations of flow and dispersion around simple structures for a variety of building dimensions, stack locations, stack heights, and wind angles. This study presents three modifications to the building downwash algorithm in AERMOD that improve the physical basis and internal consistency of the model, and one modification to AERMOD's building pre-processor to better represent elongated buildings in oblique winds. These modifications are demonstrated to improve the ability of AERMOD to model observed ground-level concentrations in the vicinity of a building for the variety of conditions examined in the wind tunnel and numerical studies.
Doubling down on peptide phosphorylation as a variable mass modification
USDA-ARS?s Scientific Manuscript database
Some mass spectrometrists believe that searching for variable post-translational modifications like phosphorylation of serine or threonine when using database-search algorithms to interpret peptide tandem mass spectra will increase false positive rates. The basis for this is the premise that the al...
NASA Astrophysics Data System (ADS)
Ohn-Bar, Eshed; Martin, Sujitha; Trivedi, Mohan Manubhai
2013-10-01
We focus on vision-based hand activity analysis in the vehicular domain. The study is motivated by the overarching goal of understanding driver behavior, in particular as it relates to attentiveness and risk. First, the unique advantages and challenges for a nonintrusive, vision-based solution are reviewed. Next, two approaches for hand activity analysis, one relying on static (appearance only) cues and another on dynamic (motion) cues, are compared. The motion-cue-based hand detection uses temporally accumulated edges in order to maintain the most reliable and relevant motion information. The accumulated image is fitted with ellipses in order to produce the location of the hands. The method is used to identify three hand activity classes: (1) two hands on the wheel, (2) hand on the instrument panel, (3) hand on the gear shift. The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions. A second-stage classifier (linear support vector machine) produces the final activity classification. Experimental evaluation with different users and environmental variations under real-world driving shows the promise of applying the proposed systems for both postanalysis of captured driving data as well as for real-time driver assistance.
Inspection design using 2D phased array, TFM and cueMAP software
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGilp, Ailidh; Dziewierz, Jerzy; Lardner, Tim
2014-02-18
A simulation suite, cueMAP, has been developed to facilitate the design of inspection processes and sparse 2D array configurations. At the core of cueMAP is a Total Focusing Method (TFM) imaging algorithm that enables computer-assisted design of ultrasonic inspection scenarios, including the design of bespoke array configurations to match the inspection criteria. This in-house developed TFM code allows for interactive evaluation of image quality indicators of ultrasonic imaging performance when utilizing a 2D phased array working in FMC/TFM mode. The cueMAP software uses a series of TFM images to build a map of the resolution, contrast and sensitivity of imaging performance for a simulated reflector swept across the inspection volume. The software takes into account probe properties, wedge or water standoff, and effects of specimen curvature. In the validation process of this new software package, two 2D arrays have been evaluated on 304n stainless steel samples, typical of the primary circuit in nuclear plants. Thick section samples have been inspected using a 1MHz 2D matrix array. Due to the processing efficiency of the software, the data collected from these array configurations has been used to investigate the influence of sub-aperture operation on inspection performance.
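The TFM kernel at the heart of such a tool is a delay-and-sum over the full matrix capture (FMC) data: each pixel accumulates the recorded amplitude at the transmitter-to-pixel-to-receiver time of flight for every transmit/receive pair. The sketch below assumes a linear array, a sound speed and a sampling rate purely for illustration; it is a minimal 2D kernel, not the cueMAP implementation.

```python
# Delay-and-sum Total Focusing Method (TFM) kernel on FMC data A[tx, rx, t].
# Array geometry, sampling rate and sound speed are assumed values.
import numpy as np

C = 5900.0        # longitudinal sound speed in steel [m/s] (assumed)
FS = 50e6         # sampling rate [Hz] (assumed)

def tfm(fmc, elem_x, grid_x, grid_z):
    """fmc: (n_el, n_el, n_t) FMC data; returns image (len(grid_z), len(grid_x))."""
    n_el, _, n_t = fmc.shape
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.sqrt((elem_x - x) ** 2 + z ** 2)      # element-to-pixel distances
            for tx in range(n_el):
                t = np.rint((d[tx] + d) / C * FS).astype(int)   # per-receiver sample
                valid = t < n_t
                img[iz, ix] += np.abs(fmc[tx, np.arange(n_el)[valid], t[valid]]).sum()
    return img

# Tiny synthetic example: 8-element array, random data stands in for real FMC.
elem_x = np.arange(8) * 0.6e-3
fmc = np.random.default_rng(7).normal(size=(8, 8, 2000))
image = tfm(fmc, elem_x,
            grid_x=np.linspace(-5e-3, 5e-3, 21),
            grid_z=np.linspace(2e-3, 20e-3, 37))
print(image.shape)
```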
Application of modified Martinez-Silva algorithm in determination of net cover
NASA Astrophysics Data System (ADS)
Stefanowicz, Łukasz; Grobelna, Iwona
2016-12-01
In this article we present modifications of the Martinez-Silva algorithm, which allows for the determination of place invariants (p-invariants) of a Petri net. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows for the separation of smaller sequential parts. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and focus on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.
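For context, a p-invariant is a vector x satisfying xᵀC = 0, where C is the place-by-transition incidence matrix. The sketch below only computes a rational basis of that left null space for a toy net; the Martinez-Silva family of algorithms (and the modifications discussed here) instead enumerate nonnegative integer invariants with minimal support, which is the computationally hard part.

```python
# A p-invariant of a Petri net is a vector x with x^T C = 0, where C is the
# place x transition incidence matrix.  This only computes a rational basis of the
# left null space for a small assumed net; it is not the Martinez-Silva procedure.
import sympy as sp
from math import lcm

C = sp.Matrix([[-1,  1],
               [ 1, -1],
               [ 0,  0]])          # 3 places x 2 transitions (example net, assumed)

for v in C.T.nullspace():          # vectors x with C^T x = 0, i.e. x^T C = 0
    scale = lcm(*[int(entry.q) for entry in v])   # clear denominators
    print(list(v * scale))
```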
In this study, we introduced several modifications to the WAR (waste reduction) algorithm developed earlier. These modifications were made for systematically handling sensitivity analysis and various tasks of waste minimization. A design hierarchy was formulated to promote appro...
ERIC Educational Resources Information Center
Ural, A. Engin; Yuret, Deniz; Ketrez, F. Nihan; Kocbas, Dilara; Kuntay, Aylin C.
2009-01-01
The syntactic bootstrapping mechanism of verb learning was evaluated against child-directed speech in Turkish, a language with rich morphology, nominal ellipsis and free word order. Machine-learning algorithms were run on transcribed caregiver speech directed to two Turkish learners (one hour every two weeks between 0;9 to 1;10) of different…
Testing algorithms for critical slowing down
NASA Astrophysics Data System (ADS)
Cossu, Guido; Boyle, Peter; Christ, Norman; Jung, Chulwoo; Jüttner, Andreas; Sanfilippo, Francesco
2018-03-01
We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space within each trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.
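The baseline that such modifications are compared against is standard HMC with a leapfrog integrator and a Metropolis accept/reject step. The sketch below runs that baseline on a toy Gaussian target; step size, trajectory length and the target are illustrative, and nothing gauge-theoretic is attempted.

```python
# Baseline Hybrid/Hamiltonian Monte Carlo with a leapfrog integrator on a toy
# Gaussian target.  Parameters and target are illustrative only.
import numpy as np

rng = np.random.default_rng(8)

def neg_log_prob(q):        # standard 2D Gaussian target
    return 0.5 * np.dot(q, q)

def grad(q):
    return q

def hmc_step(q, eps=0.15, n_leapfrog=20):
    p = rng.normal(size=q.size)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad(q_new)                 # half kick
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new                         # drift
        p_new -= eps * grad(q_new)                   # kick
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad(q_new)                 # final half kick
    h_old = neg_log_prob(q) + 0.5 * np.dot(p, p)
    h_new = neg_log_prob(q_new) + 0.5 * np.dot(p_new, p_new)
    accept = rng.random() < np.exp(h_old - h_new)    # Metropolis test
    return (q_new if accept else q), accept

q, acc, samples = np.zeros(2), 0, []
for _ in range(2000):
    q, a = hmc_step(q)
    acc += a
    samples.append(q)
print("acceptance:", acc / 2000, "sample var:", np.var(np.array(samples), axis=0))
```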
Yun, Hee Young; Engelen, Aschwin H.; Santos, Rui O.; Molis, Markus
2012-01-01
Plants optimise their resistance to herbivores by regulating deterrent responses on demand. Induction of anti-herbivory defences can occur directly in grazed plants or from emission of risk cues to the environment, which modifies interactions of adjacent plants with, for instance, their consumers. This study confirmed the induction of anti-herbivory responses by water-borne risk cues between adjoining con-specific seaweeds and firstly examined whether plant-plant signalling also exists among adjacent hetero-specific seaweeds. Furthermore, differential abilities and geographic variation in plant-plant signalling by a non-indigenous seaweed as well as native seaweeds were assessed. Twelve-day induction experiments using the non-indigenous seaweed Sargassum muticum were conducted in the laboratory in Portugal and Germany with one local con-familiar (Portugal: Cystoseira humilis, Germany: Halidrys siliquosa) and hetero-familiar native species (Portugal: Fucus spiralis, Germany: F. vesiculosus). All seaweeds were grazed by a local isopod species (Portugal: Stenosoma nadejda, Germany: Idotea baltica) and were positioned upstream of con- and hetero-specific seaweeds. Grazing-induced modification in seaweed traits were tested in three-day feeding assays between cue-exposed and cue-free ( = control) pieces of both fresh and reconstituted seaweeds. Both Fucus species reduced their palatability when positioned downstream of isopod-grazed con-specifics. Yet, the palatability of non-indigenous S. muticum remained constant in the presence of upstream grazed con-specifics and native hetero-specifics. In contrast, both con-familiar (but neither hetero-familiar) native species reduced palatability when located downstream of grazed S. muticum. Similar patterns of grazer-deterrent responses to water-borne cues were observed on both European shores, and were almost identical between assays using fresh and reconstituted seaweeds. Hence, seaweeds may use plant-plant signalling to optimise chemical resistance to consumers, though this ability appeared to be species-specific. Furthermore, this study suggests that native species may benefit more than a non-indigenous species from water-borne cue mediated reduction in consumption as only natives responded to signals emitted by hetero-specifics. PMID:22701715
Yun, Hee Young; Engelen, Aschwin H; Santos, Rui O; Molis, Markus
2012-01-01
Plants optimise their resistance to herbivores by regulating deterrent responses on demand. Induction of anti-herbivory defences can occur directly in grazed plants or from emission of risk cues to the environment, which modifies interactions of adjacent plants with, for instance, their consumers. This study confirmed the induction of anti-herbivory responses by water-borne risk cues between adjoining con-specific seaweeds and firstly examined whether plant-plant signalling also exists among adjacent hetero-specific seaweeds. Furthermore, differential abilities and geographic variation in plant-plant signalling by a non-indigenous seaweed as well as native seaweeds were assessed. Twelve-day induction experiments using the non-indigenous seaweed Sargassum muticum were conducted in the laboratory in Portugal and Germany with one local con-familiar (Portugal: Cystoseira humilis, Germany: Halidrys siliquosa) and hetero-familiar native species (Portugal: Fucus spiralis, Germany: F. vesiculosus). All seaweeds were grazed by a local isopod species (Portugal: Stenosoma nadejda, Germany: Idotea baltica) and were positioned upstream of con- and hetero-specific seaweeds. Grazing-induced modification in seaweed traits were tested in three-day feeding assays between cue-exposed and cue-free ( = control) pieces of both fresh and reconstituted seaweeds. Both Fucus species reduced their palatability when positioned downstream of isopod-grazed con-specifics. Yet, the palatability of non-indigenous S. muticum remained constant in the presence of upstream grazed con-specifics and native hetero-specifics. In contrast, both con-familiar (but neither hetero-familiar) native species reduced palatability when located downstream of grazed S. muticum. Similar patterns of grazer-deterrent responses to water-borne cues were observed on both European shores, and were almost identical between assays using fresh and reconstituted seaweeds. Hence, seaweeds may use plant-plant signalling to optimise chemical resistance to consumers, though this ability appeared to be species-specific. Furthermore, this study suggests that native species may benefit more than a non-indigenous species from water-borne cue mediated reduction in consumption as only natives responded to signals emitted by hetero-specifics.
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet under-addressed task in the computer vision and image processing fields. Conventional approaches are frustrated by the following two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics. However, existing rules, e.g., visual balance, heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning the image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of the image local regions, we construct graphlets (small connected graphs) by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets distribute closely in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo global spatial layout into graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
Improvements to Busquet's Non LTE algorithm in NRL's Hydro code
NASA Astrophysics Data System (ADS)
Klapisch, M.; Colombant, D.
1996-11-01
Implementation of the Non-LTE model RADIOM (M. Busquet, Phys. Fluids B, 5, 4191 (1993)) in NRL's RAD2D hydro code in conservative form was reported previously (M. Klapisch et al., Bull. Am. Phys. Soc., 40, 1806 (1995)). While the results were satisfactory, the algorithm was slow and did not always converge. We describe here modifications that address the latter two shortcomings. The new method is quicker and more stable than the original. It also gives information about the validity of the fitting. It turns out that the number and distribution of groups in the multigroup diffusion opacity tables, which are a basis for the computation of radiation effects on the ionization balance in RADIOM, have a large influence on the robustness of the algorithm. These modifications give insight into the algorithm and allow one to check that the obtained average charge state is the true average. In addition, code optimization greatly reduced the computing time: the ratio of Non-LTE to LTE computing times is now between 1.5 and 2.
NASA Astrophysics Data System (ADS)
Wu, Chong; Liu, Liping; Wei, Ming; Xi, Baozhu; Yu, Minghui
2018-03-01
A modified hydrometeor classification algorithm (HCA) is developed in this study for Chinese polarimetric radars, based on the U.S. operational HCA. A methodology of statistics-based optimization is proposed, including calibration checking, dataset selection, membership function modification, computation threshold modification, and effect verification. These procedures are applied to the Zhuhai radar, the first operational polarimetric radar in South China. The systematic bias of calibration is corrected; the reliability of radar measurements is found to deteriorate when the signal-to-noise ratio is low; and the correlation coefficient within the melting layer is usually lower than that of the U.S. WSR-88D radar. Through modification based on statistical analysis of polarimetric variables, an HCA localized specifically for Zhuhai is obtained, and it performs well over a one-month test through comparison with sounding and surface observations. The algorithm is then utilized for analysis of a squall line process on 11 May 2014 and is found to provide reasonable detail with respect to horizontal and vertical structures, and the HCA results—especially in the mixed rain-hail region—can reflect the life cycle of the squall line. In addition, the kinematic and microphysical processes of cloud evolution and the differences between radar-detected hail and surface observations are also analyzed. The results of this study provide evidence for the improvement of this HCA developed specifically for China.
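The membership-function machinery behind such an HCA can be illustrated with a minimal fuzzy-logic sketch like the one below; the hydrometeor classes, the polarimetric variables used, and the trapezoid breakpoints are illustrative assumptions, not the localized values derived in the paper.

```python
# Illustrative sketch of a fuzzy-logic hydrometeor classifier: trapezoidal
# membership functions over polarimetric variables are aggregated per class.
# The class list and breakpoints below are placeholders, NOT the localized
# values from the paper.
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps to 1 on [b, c], 0 above d."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (d - x) / (d - c + 1e-9)), 0.0, 1.0)

# (a, b, c, d) breakpoints for reflectivity ZH [dBZ] and differential reflectivity ZDR [dB].
RULES = {
    "rain": {"ZH": (20, 30, 50, 55), "ZDR": (0.5, 1.0, 3.0, 4.0)},
    "hail": {"ZH": (45, 55, 70, 75), "ZDR": (-1.0, -0.5, 0.5, 1.0)},
}

def classify(zh, zdr):
    scores = {cls: trapezoid(zh, *r["ZH"]) * trapezoid(zdr, *r["ZDR"])
              for cls, r in RULES.items()}
    return max(scores, key=scores.get), scores

print(classify(60.0, 0.2))   # likely "hail" with these toy breakpoints
```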
Okon-Singer, Hadas; Alyagon, Uri; Kofman, Ora; Tzelgov, Joseph; Henik, Avishai
2011-03-01
Despite research regarding emotional processing, it is still unclear whether fear-evoking stimuli are processed when they are irrelevant and when attention is oriented elsewhere. In this study, 63 healthy university students with high fear of snakes or spiders participated in two different experiments. In an emotional modification of the spatial cueing task, 31 subjects (5 males) were asked to detect a target letter while ignoring a neutral or fear-related distracting picture. The distribution of attention was independently manipulated by a spatial cue that preceded the appearance of the picture and the target letter. In an emotional modification of the cognitive load paradigm, 32 subjects (4 males) were asked to discriminate between two target letters while ignoring a central neutral or fear-related picture and an additional 1, 3, or 5 distracting letters that created a varied attentional load. Fear-related pictures interfered with the performance of highly fearful participants, even when the pictures were presented outside the focus of attention and when the task taxed attentional resources. We suggest that highly fearful individuals process fear-related information automatically, either inattentively or with prioritized attention capture over competing items, leading to deteriorated cognitive performance. Different results have been shown for healthy individuals processing negative--but not phobic--pictures, suggesting that emotional processing depends on the fear value of the stimulus for a specific observer.
Parvez, Saba; Fu, Yuan; Li, Jiayang; Long, Marcus J C; Lin, Hong-Yu; Lee, Dustin K; Hu, Gene S; Aye, Yimon
2015-01-14
Lipid-derived electrophiles (LDEs) that can directly modify proteins have emerged as important small-molecule cues in cellular decision-making. However, because these diffusible LDEs can modify many targets [e.g., >700 cysteines are modified by the well-known LDE 4-hydroxynonenal (HNE)], establishing the functional consequences of LDE modification on individual targets remains devilishly difficult. Whether LDE modifications on a single protein are biologically sufficient to activate a discrete redox signaling response downstream also remains untested. Herein, using T-REX (targetable reactive electrophiles and oxidants), an approach aimed at selectively flipping a single redox switch in cells at a precise time, we show that a modest level (∼34%) of HNEylation on a single target is sufficient to elicit the pharmaceutically important antioxidant response element (ARE) activation, and that the resultant strength of ARE induction recapitulates that observed with whole-cell electrophilic perturbation. These data provide the first evidence that single-target LDE modifications are important individual events in mammalian physiology.
DESIGNING SUSTAINABLE PROCESSES WITH SIMULATION: THE WASTE REDUCTION (WAR) ALGORITHM
The WAR Algorithm, a methodology for determining the potential environmental impact (PEI) of a chemical process, is presented with modifications that account for the PEI of the energy consumed within that process. From this theory, four PEI indexes are used to evaluate the envir...
Dill: an algorithm and a symbolic software package for doing classical supersymmetry calculations
NASA Astrophysics Data System (ADS)
Lučić, Vladan
1995-11-01
An algorithm is presented that formalizes the different steps in a classical Supersymmetric (SUSY) calculation. Based on this algorithm, Dill, a symbolic software package that can perform the calculations, is developed in the Mathematica programming language. While the algorithm is quite general, the package is created for the 4-D, N = 1 model. Nevertheless, with little modification, the package could be used for other SUSY models. The package has been tested and some of the results are presented.
Intelligent Use of CFAR Algorithms
1993-05-01
The reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that... (Kaman Sciences Corporation interim report, May 1993, covering Jan 1992 - Sep 1992; contract F30602-91-C-0017.)
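For context on the masking problem mentioned in the excerpt, a minimal cell-averaging CFAR (CA-CFAR) sketch is shown below; it is not the report's CMLD/GCMLD detector, and the window sizes and scale factor are illustrative assumptions.

```python
# Minimal cell-averaging CFAR (CA-CFAR) sketch illustrating how reference
# windows set the detection threshold; a strong return inside the window
# inflates the threshold and can mask nearby targets, the problem that
# CMLD/GCMLD variants address. Parameters are illustrative only.
import numpy as np

def ca_cfar(power, num_ref=8, num_guard=2, scale=4.0):
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - num_guard - num_ref)
        hi = min(n, i + num_guard + num_ref + 1)
        window = np.r_[power[lo:max(0, i - num_guard)],
                       power[min(n, i + num_guard + 1):hi]]
        if window.size:
            detections[i] = power[i] > scale * window.mean()
    return detections

rng = np.random.default_rng(0)
noise = rng.exponential(1.0, 200)
noise[100] += 50.0            # target
noise[104] += 60.0            # nearby strong return that can mask the first
print(np.flatnonzero(ca_cfar(noise)))
```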
Using Collision Cones to Assess Biological Deconfliction Methods
NASA Astrophysics Data System (ADS)
Brace, Natalie
For autonomous vehicles to navigate the world as efficiently and effectively as biological species, improvements are needed in terms of control strategies and estimation algorithms. Reactive collision avoidance is one specific area where biological systems outperform engineered algorithms. To better understand the discrepancy between engineered and biological systems, a collision avoidance algorithm was applied to frames of trajectory data from three biological species (Myotis velifer, Hirundo rustica, and Danio aequipinnatus). The algorithm uses information that can be sensed through visual cues (relative position and velocity) to define collision cones which are used to determine if agents are on a collision course and if so, to find a safe velocity that requires minimal deviation from the original velocity for each individual agent. Two- and three-dimensional versions of the algorithm with constant speed and maximum speed velocity requirements were considered. The obstacles provided to the algorithm were determined by the sensing range in terms of either metric or topological distance. The calculated velocities showed good correlation with observed velocities over the range of sensing parameters, indicating that the algorithm is a good basis for comparison and could potentially be improved with further study.
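A minimal two-dimensional version of the collision-cone test described above might look like the following sketch; the agent radius and the toy states are assumptions, and the velocity-selection step that finds a minimally deviating safe velocity is not included.

```python
# Hedged 2-D sketch of a collision-cone test: two agents are on a collision
# course if the relative velocity points inside the cone of directions
# subtended by the (inflated) other agent. Radii and states are illustrative.
import numpy as np

def on_collision_course(p_a, v_a, p_b, v_b, radius=0.5):
    r = np.asarray(p_b, float) - np.asarray(p_a, float)   # relative position
    v = np.asarray(v_a, float) - np.asarray(v_b, float)   # relative velocity
    dist = np.linalg.norm(r)
    if dist <= radius:                                     # already overlapping
        return True
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return False
    # Half-angle of the collision cone and angle between v and the line of sight.
    half_angle = np.arcsin(min(1.0, radius / dist))
    angle_to_los = np.arccos(np.clip(np.dot(v, r) / (speed * dist), -1.0, 1.0))
    return angle_to_los < half_angle

print(on_collision_course([0, 0], [1.0, 0.0], [5, 0.2], [0.0, 0.0]))  # True
```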
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
Direction Finding in the Presence of Complex Electro-Magnetic Environment.
1995-06-29
...coupling adversely affects the resolution capabilities of the MUSIC algorithm. A technique utilizing the terminal impedance matrix is devised to... performance of the MUSIC algorithm is also investigated. Interference power as little as 15 dB below the signal power from the near-field scatterer greatly... reduces the resolution capabilities of the MUSIC algorithm. A new array configuration is devised to suppress the interference. Modification of the MUSIC...
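As background for the excerpt, a textbook MUSIC pseudospectrum for a uniform linear array is sketched below; the array geometry, noise level, and source angles are synthetic assumptions and do not reflect the report's modified array configuration or its terminal-impedance-matrix technique.

```python
# Textbook MUSIC direction-finding sketch for a uniform linear array (ULA);
# the array geometry, SNR, and source angles are synthetic illustrations.
import numpy as np

def music_spectrum(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """snapshots: (n_elements, n_snapshots) complex array; d in wavelengths."""
    n_el = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
    En = eigvecs[:, : n_el - n_sources]                       # noise subspace
    theta = np.deg2rad(angles)
    k = np.arange(n_el)[:, None]
    A = np.exp(2j * np.pi * d * k * np.sin(theta)[None, :])   # steering matrix
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return angles, 1.0 / denom

# Synthetic two-source example at -20 and 30 degrees, 8-element ULA.
rng = np.random.default_rng(1)
n_el, n_snap = 8, 500
true_deg = np.array([-20.0, 30.0])
k = np.arange(n_el)[:, None]
A = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_deg))[None, :])
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = A @ S + 0.1 * (rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap)))
ang, P = music_spectrum(X, n_sources=2)
print(ang[np.argmax(P)])   # the largest pseudospectrum peak lies near one of the true angles
```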
An optimal modification of a Kalman filter for time scales
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2003-01-01
The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes of correlations in the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to the neurons. The algorithm is obtained by modification of one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, a PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (or why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
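For reference, the base rule being modified, Oja's Subspace Learning Algorithm, can be sketched in a few lines; the TOHM two-time-scale rotation toward individual eigenvectors is not reproduced here, and the learning rate and data are illustrative assumptions.

```python
# Minimal sketch of the Subspace Learning Algorithm (SLA, Oja's subspace rule),
# the base algorithm the paper modifies; the TOHM two-time-scale modification
# itself is not reproduced here.
import numpy as np

def sla_step(W, x, lr=0.01):
    """One SLA update: W is (dim, n_components), x is (dim,)."""
    y = W.T @ x                       # outputs of the linear neurons
    return W + lr * (np.outer(x, y) - W @ np.outer(y, y))

rng = np.random.default_rng(0)
# Data with most variance along the first two axes.
X = rng.standard_normal((2000, 5)) * np.array([3.0, 2.0, 0.3, 0.2, 0.1])
W = rng.standard_normal((5, 2)) * 0.1
for x in X:
    W = sla_step(W, x)
print(np.round(W.T @ W, 2))           # columns become approximately orthonormal
```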
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
NASA is scheduled to launch the Orion spacecraft atop the Space Launch System on Exploration Mission 1 in late 2018. When Orion returns from its lunar sortie, it will encounter Earth's atmosphere with speeds in excess of 11 kilometers per second, and Orion will attempt its first precision-guided skip entry. A suite of flight software algorithms collectively called the Entry Monitor has been developed in order to enhance crew situational awareness and enable high levels of onboard autonomy. The Entry Monitor determines the vehicle capability footprint in real time, provides manual piloting cues, evaluates landing target feasibility, predicts the ballistic instantaneous impact point, and provides intelligent recommendations for alternative landing sites if the primary landing site is not achievable. The primary engineering challenge of the Entry Monitor is the algorithmic implementation: making a highly reliable, efficient set of algorithms suitable for onboard applications.
An improved reversible data hiding algorithm based on modification of prediction errors
NASA Astrophysics Data System (ADS)
Jafar, Iyad F.; Hiary, Sawsan A.; Darabkh, Khalid A.
2014-04-01
Reversible data hiding algorithms are concerned with the ability to hide data and to recover the original digital image upon extraction. This issue is of interest in medical and military imaging applications. One particular class of such algorithms relies on the idea of histogram shifting of prediction errors. In this paper, we propose an improvement over one popular algorithm in this class. The improvement is achieved by employing a different predictor, using more bins in the prediction error histogram, and adding multilevel embedding. The proposed extension shows significant improvement over the original algorithm and its variations.
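A highly simplified prediction-error histogram-shifting embedder (hiding side only) is sketched below to illustrate the class of algorithms being improved; the trivial left-neighbour predictor, single peak bin, and single embedding level are assumptions and do not represent the paper's improved predictor or multilevel scheme.

```python
# Simplified prediction-error histogram-shifting embedder (hiding side only),
# illustrating the class of algorithms the paper improves. The predictor is a
# trivial left-neighbour predictor and only a single peak bin is used; the
# paper's improved predictor, extra bins, and multilevel embedding are omitted.
import numpy as np

def embed(row, bits):
    row = row.astype(np.int32).copy()
    errors = np.diff(row)                     # prediction error vs left neighbour
    peak = np.bincount(errors - errors.min()).argmax() + errors.min()
    bits = list(bits)
    for i, e in enumerate(errors):
        if e > peak:
            errors[i] += 1                    # shift to free the bin next to the peak
        elif e == peak and bits:
            errors[i] += bits.pop(0)          # embed one bit in the peak bin
    row[1:] = row[0] + np.cumsum(errors)      # rebuild pixel values
    return row

pixels = np.array([100, 100, 101, 101, 102, 102, 103], dtype=np.int32)
print(embed(pixels, [1, 0, 1]))
```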
Fast perceptual image hash based on cascade algorithm
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya
2017-09-01
In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied in image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception, remaining robust to distortions caused by compression, noise, common signal processing, and geometrical modifications. The main disadvantage of perceptual hashing is its high computational cost. The proposed cascade algorithm initializes image retrieval with short hashes and then applies a full hash to the processed results. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and computational cost.
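The cascade idea can be illustrated with a short sketch in which a cheap, short average hash prunes the database before a longer hash is compared; the hash sizes and distance thresholds are assumptions, not the paper's parameters.

```python
# Illustrative cascade retrieval with perceptual hashes: a cheap short average
# hash prunes the database, and a longer hash is compared only for survivors.
# Hash sizes and thresholds are assumptions, not the paper's parameters.
import numpy as np

def average_hash(gray, size):
    """gray: 2-D float array; returns a flat boolean hash of size*size bits."""
    h, w = gray.shape
    small = gray[: h - h % size, : w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def cascade_search(query, database, short=4, long=16, t_short=4, t_long=40):
    q_short = average_hash(query, short)
    survivors = [i for i, img in enumerate(database)
                 if hamming(q_short, average_hash(img, short)) <= t_short]
    q_long = average_hash(query, long)
    return [i for i in survivors
            if hamming(q_long, average_hash(database[i], long)) <= t_long]

rng = np.random.default_rng(0)
db = [rng.random((64, 64)) for _ in range(5)]
query = db[2] + 0.01 * rng.random((64, 64))     # slightly distorted copy
print(cascade_search(query, db))                 # expected to contain index 2
```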
Modifications of the PCPT method for HJB equations
NASA Astrophysics Data System (ADS)
Kossaczký, I.; Ehrhardt, M.; Günther, M.
2016-10-01
In this paper we will revisit the modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification is called piecewise predicted policy timestepping (PPPT) method and if properly used, it may be significantly faster. We will quickly recapitulate the algorithms of PCPT, PPPT methods and of the classical implicit method and apply them on a passport option pricing problem with non-standard payoff. We will present modifications needed to solve this problem effectively with the PPPT method and compare the performance with the PCPT method and the classical implicit method.
ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
Content modification attacks on consensus seeking multi-agent system with double-integrator dynamics
NASA Astrophysics Data System (ADS)
Dong, Yimeng; Gupta, Nirupam; Chopra, Nikhil
2016-11-01
In this paper, vulnerability of a distributed consensus seeking multi-agent system (MAS) with double-integrator dynamics against edge-bound content modification cyber attacks is studied. In particular, we define a specific edge-bound content modification cyber attack called malignant content modification attack (MCoMA), which results in unbounded growth of an appropriately defined group disagreement vector. Properties of MCoMA are utilized to design detection and mitigation algorithms so as to impart resilience in the considered MAS against MCoMA. Additionally, the proposed detection mechanism is extended to detect the general edge-bound content modification attacks (not just MCoMA). Finally, the efficacies of the proposed results are illustrated through numerical simulations.
Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm.
Sun, Peng; Chang, Shengqian; Liu, Siqi; Tao, Xiao; Wang, Chang; Zheng, Zhenrong
2018-04-16
In this paper, a method is proposed to implement noise-reduced three-dimensional (3D) holographic near-eye display with a phase-only computer-generated hologram (CGH). The CGH is calculated with a double-convergence-light Gerchberg-Saxton (GS) algorithm, in which the phases of two virtual convergence lights are introduced into the GS algorithm simultaneously. The phase of the first convergence light replaces the random phase as the iterative initial value, and the phase of the second convergence light modulates the phase distribution calculated by the GS algorithm. Both simulations and experiments are carried out to verify the feasibility of the proposed method. The results indicate that this method can effectively reduce noise in the reconstruction. The field of view (FOV) of the reconstructed image reaches 40 degrees and the experimental light path in the 4-f system is shortened. As for the 3D experiments, the results demonstrate that the proposed algorithm can present 3D images with a 180 cm zooming range and continuous depth cues. This method may provide a promising solution for future 3D augmented reality (AR) realization.
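For orientation, a standard Gerchberg-Saxton iteration for a phase-only CGH is sketched below; the paper's double-convergence-light scheme is only hinted at by a lens-like quadratic initial phase, and all sizes and scales are illustrative assumptions.

```python
# Standard Gerchberg-Saxton iteration for a phase-only CGH. The paper's
# double-convergence-light refinement is only hinted at by using a quadratic
# (lens-like) initial phase instead of a random one; everything else is a
# generic textbook sketch.
import numpy as np

def gs_phase_hologram(target_amplitude, iterations=50, focal_phase_scale=0.001):
    ny, nx = target_amplitude.shape
    y, x = np.mgrid[-ny // 2: ny // 2, -nx // 2: nx // 2]
    phase = focal_phase_scale * (x**2 + y**2)             # convergence-light-like start
    field = np.exp(1j * phase)
    for _ in range(iterations):
        img = np.fft.fft2(field)
        img = target_amplitude * np.exp(1j * np.angle(img))   # impose target amplitude
        field = np.fft.ifft2(img)
        field = np.exp(1j * np.angle(field))                   # keep phase-only hologram
    return np.angle(field)

target = np.zeros((128, 128))
target[40:90, 40:90] = 1.0                                      # simple square target
hologram_phase = gs_phase_hologram(target)
recon = np.abs(np.fft.fft2(np.exp(1j * hologram_phase)))
print(float(recon[64, 64]), float(recon[0, 0]))   # amplitude inside the square dominates outside
```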
Automatic measurement of voice onset time using discriminative structured prediction.
Sonderegger, Morgan; Keshet, Joseph
2012-12-01
A discriminative large-margin algorithm for automatic measurement of voice onset time (VOT) is described, considered as a case of predicting structured output from speech. Manually labeled data are used to train a function that takes as input a speech segment of an arbitrary length containing a voiceless stop, and outputs its VOT. The function is explicitly trained to minimize the difference between predicted and manually measured VOT; it operates on a set of acoustic feature functions designed based on spectral and temporal cues used by human VOT annotators. The algorithm is applied to initial voiceless stops from four corpora, representing different types of speech. Using several evaluation methods, the algorithm's performance is near human intertranscriber reliability, and compares favorably with previous work. Furthermore, the algorithm's performance is minimally affected by training and testing on different corpora, and remains essentially constant as the amount of training data is reduced to 50-250 manually labeled examples, demonstrating the method's practical applicability to new datasets.
NASA Astrophysics Data System (ADS)
Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando
2017-06-01
Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, within a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag sets as if they were really independent. We believe that putting the whole patch, rather than its pixels/objects, in one or the other set would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm that splits training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel- and object-based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
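The proposed patch-level splitting can be approximated by bagging whole training patches (groups) rather than individual pixels, as in the sketch below; the use of scikit-learn decision trees and the toy data are assumptions standing in for the authors' implementation.

```python
# Sketch of patch-level bagging: each tree is trained on a bootstrap sample of
# whole training patches (groups) rather than individual pixels, in the spirit
# of the modification proposed in the paper. scikit-learn decision trees are
# used here as an illustrative stand-in for the authors' code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_patch_bagged_forest(X, y, groups, n_trees=50, seed=0):
    rng = np.random.default_rng(seed)
    unique_groups = np.unique(groups)
    forest = []
    for _ in range(n_trees):
        sampled = rng.choice(unique_groups, size=len(unique_groups), replace=True)
        idx = np.concatenate([np.flatnonzero(groups == g) for g in sampled])
        tree = DecisionTreeClassifier(max_features="sqrt", random_state=rng.integers(1 << 31))
        forest.append(tree.fit(X[idx], y[idx]))
    return forest

def predict(forest, X):
    votes = np.stack([t.predict(X) for t in forest])
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)

# Toy data: 20 patches of 30 correlated pixels each, two classes.
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(20), 30)
y = np.repeat(rng.integers(0, 2, 20), 30)
X = rng.standard_normal((600, 5)) + y[:, None]
forest = fit_patch_bagged_forest(X, y, groups)
print((predict(forest, X) == y).mean())
```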
2010-01-01
Background: Growing interest and burgeoning technology for discovering genetic mechanisms that influence disease processes have ushered in a flood of genetic association studies over the last decade, yet little heritability in highly studied complex traits has been explained by genetic variation. Non-additive gene-gene interactions, which are not often explored, are thought to be one source of this "missing" heritability. Methods: Stochastic methods employing evolutionary algorithms have demonstrated promise in being able to detect and model gene-gene and gene-environment interactions that influence human traits. Here we demonstrate modifications to a neural network algorithm in ATHENA (the Analysis Tool for Heritable and Environmental Network Associations) resulting in clear performance improvements for discovering gene-gene interactions that influence human traits. We employed an alternative tree-based crossover, backpropagation for locally fitting neural network weights, and incorporation of domain knowledge obtainable from publicly accessible biological databases for initializing the search for gene-gene interactions. We tested these modifications in silico using simulated datasets. Results: We show that the alternative tree-based crossover modification resulted in a modest increase in the sensitivity of the ATHENA algorithm for discovering gene-gene interactions. The performance increase was highly statistically significant when backpropagation was used to locally fit NN weights. We also demonstrate that using domain knowledge to initialize the search for gene-gene interactions results in a large performance increase, especially when the search space is larger than the search coverage. Conclusions: We show that a hybrid optimization procedure, alternative crossover strategies, and incorporation of domain knowledge from publicly available biological databases can result in marked increases in sensitivity and performance of the ATHENA algorithm for detecting and modelling gene-gene interactions that influence a complex human trait. PMID:20875103
Research highlights: Microtechnologies for engineering the cellular environment.
Tseng, Peter; Kunze, Anja; Kittur, Harsha; Di Carlo, Dino
2014-04-07
In this issue we highlight recent microtechnology-enabled approaches to control the physical and biomolecular environment around cells: (1) developing micropatterned surfaces to quantify cell affinity choices between two adhesive patterns, (2) controlling topographical cues to align cells and improve reprogramming to a pluripotent state, and (3) controlling gradients of biomolecules to maintain pluripotency in embryonic stem cells. Quantitative readouts of cell-surface affinity in environments with several cues should open up avenues in tissue engineering where self-assembly of complex multi-cellular structures is possible by precisely engineering relative adhesive cues in three dimensional constructs. Methods of simple and local epigenetic modification of chromatin structure with microtopography and biomolecular gradients should also be of use in regenerative medicine, as well as in high-throughput quantitative analysis of external signals that impact and can be used to control cells. Overall, approaches to engineer the cellular environment will continue to be an area of further growth in the microfluidic and lab on a chip community, as the scale of the technologies seamlessly matches that of biological systems. However, because of regulations and other complexities with tissue engineered therapies, these micro-engineering approaches will likely first impact organ-on-a-chip technologies that are poised to improve drug discovery pipelines.
Unified algorithm of cone optics to compute solar flux on central receiver
NASA Astrophysics Data System (ADS)
Grigoriev, Victor; Corsi, Clotilde
2017-06-01
Analytical algorithms to compute the flux distribution on a central receiver are considered as a faster alternative to ray tracing. Many modifications of these algorithms exist, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary sun shape of radial symmetry. Heliostat mirrors can have a non-rectangular profile, and the effects of shading and blocking, strong defocusing and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.
Global motion compensated visual attention-based video watermarking
NASA Astrophysics Data System (ADS)
Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
ERIC Educational Resources Information Center
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is becoming increasingly necessary in program design courses in college education. However, plagiarism with minor modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…
Answer Markup Algorithms for Southeast Asian Languages.
ERIC Educational Resources Information Center
Henry, George M.
1991-01-01
Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
USDA-ARS?s Scientific Manuscript database
Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...
Automatic Debugging Support for UML Designs
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts are usually subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Conflicts detected by our algorithm are fed back to the user and form the basis for deductive debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMU) which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with that of a combined IMU-Standard Orbit Determination approach. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithms will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro-platform alignment of 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.
1985-01-01
This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse exactly would itself require iteration; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
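The division-to-multiplication substitution can be illustrated with a short fixed-point sketch; the Q24 scaling and the linear/quadratic Taylor fits around a reference value are illustrative assumptions, not the dissertation's actual bit widths or fitting procedure.

```python
# Illustrative fixed-point substitution: replace a division x / z with an
# integer multiply by an approximation of 1/z held in fixed point. The bit
# width, scale factor, and linear/quadratic fits below are illustrative, not
# the actual implementation described in the dissertation.
SCALE = 1 << 24          # Q24 fixed-point scale

def inv_linear_q24(z, z0=1000):
    """Linear approximation of 1/z around z0: 1/z ~ 2/z0 - z/z0**2 (in Q24)."""
    return (2 * SCALE) // z0 - (z * SCALE) // (z0 * z0)

def inv_quadratic_q24(z, z0=1000):
    """Quadratic approximation: 1/z ~ 3/z0 - 3*z/z0**2 + z**2/z0**3 (in Q24)."""
    return (3 * SCALE) // z0 - (3 * z * SCALE) // (z0 * z0) + (z * z * SCALE) // (z0 ** 3)

x, z = 5_000_000, 1100                       # want x / z without a divide
exact = x / z
lin = (x * inv_linear_q24(z)) >> 24          # integer multiply + shift
quad = (x * inv_quadratic_q24(z)) >> 24
print(exact, lin, quad)                      # the quadratic fit lands closer to exact
```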
NASA Astrophysics Data System (ADS)
Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min
2016-01-01
This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Non-rigid skaters move at high speed, and severe occlusions happen frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is therefore a challenging object tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of the skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.
Implicit timing activates the left inferior parietal cortex.
Wiener, Martin; Turkeltaub, Peter E; Coslett, H Branch
2010-11-01
Coull and Nobre (2008) suggested that tasks that employ temporal cues might be divided on the basis of whether these cues are explicitly or implicitly processed. Furthermore, they suggested that implicit timing preferentially engages the left cerebral hemisphere. We tested this hypothesis by conducting a quantitative meta-analysis of eleven neuroimaging studies of implicit timing using the activation-likelihood estimation (ALE) algorithm (Turkeltaub, Eden, Jones, & Zeffiro, 2002). Our analysis revealed a single but robust cluster of activation-likelihood in the left inferior parietal cortex (supramarginal gyrus). This result is in accord with the hypothesis that the left hemisphere subserves implicit timing mechanisms. Furthermore, in conjunction with a previously reported meta-analysis of explicit timing tasks, our data support the claim that implicit and explicit timing are supported by at least partially distinct neural structures. Copyright © 2010 Elsevier Ltd. All rights reserved.
Human-like object tracking and gaze estimation with PKD android
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K; Bugnariu, Nicoleta L.; Popa, Dan O.
2018-01-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193
Human-like object tracking and gaze estimation with PKD android
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2016-05-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
A novel mechanism for mechanosensory-based rheotaxis in larval zebrafish.
Oteiza, Pablo; Odstrcil, Iris; Lauder, George; Portugues, Ruben; Engert, Florian
2017-07-27
When flying or swimming, animals must adjust their own movement to compensate for displacements induced by the flow of the surrounding air or water. These flow-induced displacements can most easily be detected as visual whole-field motion with respect to the animal's frame of reference. Despite this, many aquatic animals consistently orient and swim against oncoming flows (a behaviour known as rheotaxis) even in the absence of visual cues. How animals achieve this task, and its underlying sensory basis, is still unknown. Here we show that, in the absence of visual information, larval zebrafish (Danio rerio) perform rheotaxis by using flow velocity gradients as navigational cues. We present behavioural data that support a novel algorithm based on such local velocity gradients that fish use to avoid getting dragged by flowing water. Specifically, we show that fish use their mechanosensory lateral line to first sense the curl (or vorticity) of the local velocity vector field to detect the presence of flow and, second, to measure its temporal change after swim bouts to deduce flow direction. These results reveal an elegant navigational strategy based on the sensing of flow velocity gradients and provide a comprehensive behavioural algorithm, also applicable for robotic design, that generalizes to a wide range of animal behaviours in moving fluids.
Unsupervised tattoo segmentation combining bottom-up and top-down cues
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen
2011-06-01
Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the remaining skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.
Leakey, Tatiana I; Zielinski, Jerzy; Siegfried, Rachel N; Siegel, Eric R; Fan, Chun-Yang; Cooney, Craig A
2008-06-01
DNA methylation at cytosines is a widely studied epigenetic modification. Methylation is commonly detected using bisulfite modification of DNA followed by PCR and additional techniques such as restriction digestion or sequencing. These additional techniques are either laborious, require specialized equipment, or are not quantitative. Here we describe a simple algorithm that yields quantitative results from analysis of conventional four-dye-trace sequencing. We call this method Mquant and we compare it with the established laboratory method of combined bisulfite restriction assay (COBRA). This analysis of sequencing electropherograms provides a simple, easily applied method to quantify DNA methylation at specific CpG sites.
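The peak-ratio idea behind such quantification can be sketched in a few lines; the example peak heights are invented, and the published Mquant normalization may differ from this simple C/(C+T) ratio.

```python
# Minimal peak-ratio sketch for bisulfite-sequencing electropherograms: at a
# CpG cytosine position, the methylation fraction is estimated from the
# relative C and T peak heights. This conveys the general idea behind
# peak-ratio methods such as Mquant; the published normalization may differ.
def methylation_fraction(c_peak_height, t_peak_height):
    """After bisulfite conversion, unmethylated C reads as T; methylated C stays C."""
    total = c_peak_height + t_peak_height
    return c_peak_height / total if total > 0 else float("nan")

# Example CpG sites with (C peak, T peak) heights taken from a four-dye trace.
sites = [(820, 190), (310, 700), (505, 498)]
print([round(methylation_fraction(c, t), 2) for c, t in sites])  # ~0.81, 0.31, 0.50
```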
NASA Astrophysics Data System (ADS)
Mao, Heng; Wang, Xiao; Zhao, Dazun
2009-05-01
As a wavefront sensing (WFS) tool, the Baseline algorithm, which is classified as an iterative-transform phase retrieval algorithm, estimates the phase distribution at the pupil from known PSFs at defocus planes. By using multiple phase diversities and appropriate phase unwrapping methods, this algorithm can achieve a reliable unique solution and high-dynamic-range phase measurement. In this paper, a Baseline-algorithm-based wavefront sensing experiment with a modification of the phase unwrapping has been implemented, and corresponding graphical user interface (GUI) software has also been developed. The adaptability and repeatability of the Baseline algorithm have been validated in experiments. Moreover, the WFS accuracy of this algorithm has been calibrated against ZYGO interferometric results.
Plasmid mapping computer program.
Nolan, G P; Maina, C V; Szalay, A A
1984-01-01
Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA which has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adjuncts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
Attention bias for chocolate increases chocolate consumption--an attention bias modification study.
Werthmann, Jessica; Field, Matt; Roefs, Anne; Nederkoorn, Chantal; Jansen, Anita
2014-03-01
The current study examined experimentally whether a manipulated attention bias for food cues increases craving, chocolate intake and motivation to search for hidden chocolates. To test the effect of attention for food on subsequent chocolate intake, attention for chocolate was experimentally modified by instructing participants to look at chocolate stimuli ("attend chocolate" group) or at non-food stimuli ("attend shoes" group) during a novel attention bias modification task (antisaccade task). Chocolate consumption, changes in craving and search time for hidden chocolates were assessed. Eye-movement recordings were used to monitor the accuracy during the experimental attention modification task as possible moderator of effects. Regression analyses were conducted to test the effect of attention modification and modification accuracy on chocolate intake, craving and motivation to search for hidden chocolates. Results showed that participants with higher accuracy (+1 SD), ate more chocolate when they had to attend to chocolate and ate less chocolate when they had to attend to non-food stimuli. In contrast, for participants with lower accuracy (-1 SD), the results were exactly reversed. No effects of the experimental attention modification on craving or search time for hidden chocolates were found. We used chocolate as food stimuli so it remains unclear how our findings generalize to other types of food. These findings demonstrate further evidence for a link between attention for food and food intake, and provide an indication about the direction of this relationship. Copyright © 2013 Elsevier Ltd. All rights reserved.
A little sugar goes a long way: The cell biology of O-GlcNAc
2015-01-01
Unlike the complex glycans decorating the cell surface, the O-linked β-N-acetyl glucosamine (O-GlcNAc) modification is a simple intracellular Ser/Thr-linked monosaccharide that is important for disease-relevant signaling and enzyme regulation. O-GlcNAcylation requires uridine diphosphate–GlcNAc, a precursor responsive to nutrient status and other environmental cues. Alternative splicing of the genes encoding the O-GlcNAc cycling enzymes O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA) yields isoforms targeted to discrete sites in the nucleus, cytoplasm, and mitochondria. OGT and OGA also partner with cellular effectors and act in tandem with other posttranslational modifications. The enzymes of O-GlcNAc cycling act preferentially on intrinsically disordered domains of target proteins impacting transcription, metabolism, apoptosis, organelle biogenesis, and transport. PMID:25825515
Ono, Yumie; Nomoto, Yasunori; Tanaka, Shohei; Sato, Keisuke; Shimada, Sotaro; Tachibana, Atsumichi; Bronner, Shaw; Noah, J Adam
2014-01-15
We utilized the high temporal resolution of functional near-infrared spectroscopy to explore how sensory input (visual and rhythmic auditory cues) are processed in the cortical areas of multimodal integration to achieve coordinated motor output during unrestricted dance simulation gameplay. Using an open source clone of the dance simulation video game, Dance Dance Revolution, two cortical regions of interest were selected for study, the middle temporal gyrus (MTG) and the frontopolar cortex (FPC). We hypothesized that activity in the FPC would indicate top-down regulatory mechanisms of motor behavior; while that in the MTG would be sustained due to bottom-up integration of visual and auditory cues throughout the task. We also hypothesized that a correlation would exist between behavioral performance and the temporal patterns of the hemodynamic responses in these regions of interest. Results indicated that greater temporal accuracy of dance steps positively correlated with persistent activation of the MTG and with cumulative suppression of the FPC. When auditory cues were eliminated from the simulation, modifications in cortical responses were found depending on the gameplay performance. In the MTG, high-performance players showed an increase but low-performance players displayed a decrease in cumulative amount of the oxygenated hemoglobin response in the no music condition compared to that in the music condition. In the FPC, high-performance players showed relatively small variance in the activity regardless of the presence of auditory cues, while low-performance players showed larger differences in the activity between the no music and music conditions. These results suggest that the MTG plays an important role in the successful integration of visual and rhythmic cues and the FPC may work as top-down control to compensate for insufficient integrative ability of visual and rhythmic cues in the MTG. The relative relationships between these cortical areas indicated high- to low-performance levels when performing cued motor tasks. We propose that changes in these relationships can be monitored to gauge performance increases in motor learning and rehabilitation programs. Copyright © 2013 Elsevier Inc. All rights reserved.
Enhanced object-based tracking algorithm for convective rain storms and cells
NASA Astrophysics Data System (ADS)
Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick
2018-03-01
This paper proposes a new object-based storm tracking algorithm, based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely used convective storm tracking algorithm but has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulty effectively tracking fast-moving storms because its matching approach relies largely on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing a storm's physical features, an enhanced matching technique in synergy with an optical-flow storm field tracker, and a correspondingly more complex merging and splitting scheme. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities, as well as provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
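The two-stage multi-threshold identification can be illustrated with a small sketch using connected-component labelling; the reflectivity thresholds and minimum area are illustrative assumptions rather than the calibrated values used in the paper.

```python
# Sketch of two-stage, multi-threshold storm identification on a reflectivity
# field: a low threshold delineates storm envelopes and a higher threshold
# isolates intense cells embedded in them, in the spirit of the proposed
# modification to TITAN's single-threshold identification. Thresholds are
# illustrative, not the calibrated values from the paper.
import numpy as np
from scipy import ndimage

def identify_storms(dbz, low=35.0, high=45.0, min_pixels=4):
    envelopes, n_env = ndimage.label(dbz >= low)            # stage 1: storm envelopes
    cells, n_cells = ndimage.label(dbz >= high)              # stage 2: embedded cells
    storms = []
    for env_id in range(1, n_env + 1):
        mask = envelopes == env_id
        if mask.sum() < min_pixels:
            continue
        embedded = sorted(set(cells[mask & (cells > 0)].tolist()))
        storms.append({"envelope": env_id, "area_px": int(mask.sum()),
                       "embedded_cells": embedded})
    return storms

rng = np.random.default_rng(0)
dbz = rng.uniform(0, 30, (60, 60))
dbz[10:25, 10:30] = 40.0          # storm envelope
dbz[14:18, 20:24] = 50.0          # intense cell inside it
print(identify_storms(dbz))
```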
Advanced Avionics Verification and Validation Phase II (AAV&V-II)
1999-01-01
(Excerpted section headings: The Weak Control Dependence Algorithm; The Indirect Dependence Algorithms; Improvements to the Pleiades Object Management System; The Interprocedural Control Flow Graph.) The report describes modifications made to the Pleiades object management system to increase the speed of the analysis; insertion became slow as the number of edges in the graph increased, and the time to insert edges was addressed by enhancements to the Pleiades object management system.
Algorithms for Zonal Methods and Development of Three Dimensional Mesh Generation Procedures.
1984-02-01
...a more complete set of equations is used, but their effect is imposed by means of a right-hand-side forcing function, not by means of a zonal boundary... Modifications of flow-simulation algorithms are discussed; the explicit finite-difference code of Magnus and... Computational tests in two dimensions... used to simplify the task of grid generation without an adverse effect on flow-field algorithms and to achieve computational efficiency.
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Hu, Yi
2010-05-01
Recent research results show that combined electric and acoustic stimulation (EAS) significantly improves speech recognition in noise, and it is generally established that access to the improved F0 representation of target speech, along with the glimpse cues, provide the EAS benefits. Under noisy listening conditions, noise signals degrade these important cues by introducing undesired temporal-frequency components and corrupting harmonics structure. In this study, the potential of combining noise reduction and harmonics regeneration techniques was investigated to further improve speech intelligibility in noise by providing improved beneficial cues for EAS. Three hypotheses were tested: (1) noise reduction methods can improve speech intelligibility in noise for EAS; (2) harmonics regeneration after noise reduction can further improve speech intelligibility in noise for EAS; and (3) harmonics sideband constraints in frequency domain (or equivalently, amplitude modulation in temporal domain), even deterministic ones, can provide additional benefits. Test results demonstrate that combining noise reduction and harmonics regeneration can significantly improve speech recognition in noise for EAS, and it is also beneficial to preserve the harmonics sidebands under adverse listening conditions. This finding warrants further work into the development of algorithms that regenerate harmonics and the related sidebands for EAS processing under noisy conditions.
NASA Astrophysics Data System (ADS)
Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae
2008-03-01
Knee osteoarthritis is the most common debilitating health condition affecting the elderly population. MR imaging of the knee is highly sensitive for diagnosis and evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage has a complex geometry and is thin and small. To improve the precision and efficiency of cartilage segmentation, we have applied a semi-automated segmentation method that is based on an s/t graph cut algorithm. The cost function was defined by integrating regional and boundary cues. While regional cues can encode any intensity distributions of the two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in a 3-D manner, facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved with improved efficiency, compared to a manual segmentation approach.
NASA Technical Reports Server (NTRS)
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt; Fill, Thomas
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. NASA is also currently designing the next evolution of SLS, the Block-1B. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm (of Space Shuttle heritage) for closed-loop guidance. To accommodate vehicle capabilities and design for future evolutions of SLS, modifications were made to PEG for Block-1 to handle multi-phase burns, provide PEG with updated propulsion information, and react to a core stage engine out. In addition, due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) and the EUS carrying out Lunar Vicinity and Earth Escape missions, certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions, to account for long burn arcs, and to target translunar and hyperbolic orbits. This paper describes the design and implementation of modifications to the Block-1 PEG algorithm as compared to Space Shuttle. Furthermore, this paper illustrates challenges posed by the Block-1B vehicle and the required PEG enhancements. These improvements make PEG suitable for use on the SLS Block-1B vehicle as part of the Guidance, Navigation, and Control (GN&C) System.
The role of optical flow in automated quality assessment of full-motion video
NASA Astrophysics Data System (ADS)
Harguess, Josh; Shafer, Scott; Marez, Diego
2017-09-01
In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions of the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines, and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art methods) on motion-based automated video quality assessment algorithms.
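As one concrete way such a motion cue could be computed, the sketch below extracts dense Farneback optical flow between consecutive frames and summarizes the motion field with simple statistics. The choice of the Farneback algorithm and of these particular features is an assumption made for illustration, not the specific assessment model examined in the paper.

```python
# Minimal sketch: dense optical-flow statistics as candidate features for a quality model.
import cv2
import numpy as np

def flow_features(prev_gray, next_gray):
    # Farneback parameters (positional): pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Simple summary statistics of the motion field.
    return {
        "mean_magnitude": float(np.mean(mag)),
        "std_magnitude": float(np.std(mag)),
        "mean_angle": float(np.mean(ang)),
    }

# Usage (frames read with cv2.VideoCapture and converted to grayscale):
# feats = flow_features(gray_t, gray_t_plus_1)
```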
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.
Using ant-behavior-based simulation model AntWeb to improve website organization
NASA Astrophysics Data System (ADS)
Li, Weigang; Pinheiro Dib, Marcos V.; Teles, Wesley M.; Morais de Andrade, Vlaudemir; Alves de Melo, Alba C. M.; Cariolano, Judas T.
2002-03-01
Some web usage mining algorithms have shown potential for finding differences between the organization of a website and the organization expected by its visitors. However, there is still no efficient method or criterion by which a web administrator can measure the performance of a modification. In this paper, we developed AntWeb, a model inspired by ants' behavior that simulates the sequence of visits to a website, in order to measure the efficiency of the web structure. We implemented a web usage mining algorithm using backtracking for the intranet website of Politec Informatic Ltd., Brazil. We defined throughput (the number of visitors who reach their target pages per unit time, relative to the total number of visitors) as an index of the website's performance. We also used the links in a web page to represent the effect of visitors' pheromone trails. For every modification of the website organization, for example putting a link from the expected location to the target object, the simulation reported the throughput value as a quick answer about that modification. The experiment showed the stability of our simulation model and indicated a positive modification to the Politec intranet website.
Mapping Base Modifications in DNA by Transverse-Current Sequencing
NASA Astrophysics Data System (ADS)
Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.
2018-02-01
Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.
A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems
NASA Astrophysics Data System (ADS)
Thammano, Arit; Teekeng, Wannaporn
2015-05-01
The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
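A minimal sketch of the fuzzy roulette wheel idea is given below: selection probabilities are derived from a fuzzy membership of normalized fitness rather than from raw fitness. The membership function shown is a placeholder; the paper's actual fuzzy rules and the tabu-list mutation are not reproduced here.

```python
# Minimal sketch: roulette-wheel selection with wheel slots sized by a fuzzy
# membership of normalized fitness (toy membership function, not the paper's rules).
import random

def fuzzy_membership(norm_fitness):
    """Toy membership: emphasize mid-to-high fitness, never drop to zero."""
    return max(0.05, min(1.0, 1.5 * norm_fitness - 0.25))

def fuzzy_roulette_select(population, fitnesses):
    f_min, f_max = min(fitnesses), max(fitnesses)
    span = (f_max - f_min) or 1.0
    weights = [fuzzy_membership((f - f_min) / span) for f in fitnesses]
    pick = random.uniform(0.0, sum(weights))
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if acc >= pick:
            return individual
    return population[-1]

# parents = [fuzzy_roulette_select(pop, fits) for _ in range(2)]
```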
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution. PMID:18094468
Therapeutic home adaptations for older adults with disabilities.
Unwin, Brian K; Andrews, Christopher M; Andrews, Patrick M; Hanson, Janice L
2009-11-01
Family physicians commonly care for older patients with disabilities. Many of these patients need help maintaining a therapeutic home environment to preserve their comfort and independence. Patients often have little time to decide how to address the limitations of newly-acquired disabilities. Physicians can provide patients with general recommendations in home modification after careful history and assessment. Universal design features, such as one-story living, no-step entries, and wide hallways and doors, are key adaptations for patients with physical disabilities. Home adaptations for patients with dementia include general safety measures such as grab bars and door alarms, and securing potentially hazardous items, such as cleaning supplies and medications. Improved lighting and color contrast, enlarged print materials, and vision aids can assist patients with limited vision. Patients with hearing impairments may benefit from interventions that provide supplemental visual and vibratory cues and alarms. Although funding sources are available, home modification is often a nonreimbursed expense. However, sufficient home modifications may allow the patient and caregivers to safely remain in the home without transitioning to a long-term care facility.
Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects.
Amaro, Edson; Williams, Steve C R; Shergill, Sukhi S; Fu, Cynthia H Y; MacSweeney, Mairead; Picchioni, Marco M; Brammer, Michael J; McGuire, Philip K
2002-11-01
Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem for studies of the auditory pathway and of language in general. The scanner acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with the stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the source of the acoustic noise would be ideal, substantial hardware modifications to the current base of installed MRI systems would be required. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisitions: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate a higher blood oxygen level-dependent (BOLD) response to a simple auditory cue when compared to a conventional image acquisition. Copyright 2002 Wiley-Liss, Inc.
Bikard, Yann; Chen, Wei; Liu, Tong; Li, Hong; Jendrossek, Dieter; Cohen, Alejandro; Pavlov, Evgeny; Rohacs, Tibor; Zakharian, Eleonora
2013-01-01
The TRPM8 ion channel is expressed in sensory neurons and is responsible for sensing environmental cues such as cold temperatures and chemical compounds, including menthol and icilin. The channel's functional activity is regulated by various physical and chemical factors and is likely to be pre-conditioned by its molecular composition. Our studies indicate that the TRPM8 channel forms a structural-functional complex with the polyester poly-(R)-3-hydroxybutyrate (PHB). We identified by mass spectrometry a number of PHB-modified peptides in the N-terminus of the TRPM8 protein and in its extracellular S3–S4 linker. Removal of PHB by enzymatic hydrolysis, and site-directed mutagenesis of both the serine residues that serve as covalent anchors for PHB and adjacent hydrophobic residues that interact with the methyl groups of the polymer, resulted in significant inhibition of TRPM8 channel activity. We conclude that the TRPM8 channel undergoes post-translational modification by PHB and that this modification is required for its normal function. PMID:23850286
Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver’s Gift
2004-05-01
We propose a real-time algorithm to infer depth and build 3-dimensional coarse maps for objects through the analysis of cues provided by the system. A time-domain analysis is presented for a piecewise-linear system that is well defined at the boundary of these regions (although the derivatives are not).
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.
2003-01-01
This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are making the model calculations to incorporate various absorbing aerosol models into tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed the algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.
Autonomous sensor manager agents (ASMA)
NASA Astrophysics Data System (ADS)
Osadciw, Lisa A.
2004-04-01
Autonomous sensor manager agents are presented as an algorithm to perform sensor management within a multisensor fusion network. The design of the hybrid ant system/particle swarm agents is described in detail with some insight into their performance. Although the algorithm is designed for the general sensor management problem, a simulation example involving two radar systems is presented. Algorithmic parameters are determined by the size of the region covered by the sensor network, the number of sensors, and the number of parameters to be selected. With straightforward modifications, this algorithm can be adapted for most sensor management problems.
Methods of extending crop signatures from one area to another
NASA Technical Reports Server (NTRS)
Minter, T. C. (Principal Investigator)
1979-01-01
Efforts to develop a technology for signature extension during LACIE phases 1 and 2 are described. A number of haze and Sun angle correction procedures were developed and tested. These included the ROOSTER and OSCAR cluster-matching algorithms and their modifications, the MLEST and UHMLE maximum likelihood estimation procedures, and the ATCOR procedure. All these algorithms were tested on simulated data and consecutive-day LANDSAT imagery. The ATCOR, OSCAR, and MLEST algorithms were also tested for their capability to geographically extend signatures using LANDSAT imagery.
Wehmeyer, Christoph; Falk von Rudorff, Guido; Wolf, Sebastian; Kabbe, Gabriel; Schärf, Daniel; Kühne, Thomas D; Sebastiani, Daniel
2012-11-21
We present a stochastic, swarm intelligence-based optimization algorithm for the prediction of global minima on potential energy surfaces of molecular cluster structures. Our optimization approach is a modification of the artificial bee colony (ABC) algorithm which is inspired by the foraging behavior of honey bees. We apply our modified ABC algorithm to the problem of global geometry optimization of molecular cluster structures and show its performance for clusters with 2-57 particles and different interatomic interaction potentials.
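For illustration, the sketch below applies a plain artificial bee colony loop (employed, onlooker and scout phases) to the geometry optimization of a small Lennard-Jones cluster. The potential, parameter values and the absence of the authors' modifications are all simplifying assumptions.

```python
# Minimal sketch: artificial bee colony (ABC) minimization of a Lennard-Jones cluster energy.
import numpy as np

rng = np.random.default_rng(0)

def lj_energy(flat_coords, n_atoms):
    x = flat_coords.reshape(n_atoms, 3)
    e = 0.0
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            r = np.linalg.norm(x[i] - x[j])
            e += 4.0 * (r ** -12 - r ** -6)
    return e

def abc_minimize(n_atoms=4, n_sources=20, limit=30, iters=500, box=2.0):
    dim = 3 * n_atoms
    sources = rng.uniform(-box, box, size=(n_sources, dim))
    energies = np.array([lj_energy(s, n_atoms) for s in sources])
    trials = np.zeros(n_sources, dtype=int)

    def neighbor_search(i):
        k = rng.integers(n_sources - 1)
        k = k if k < i else k + 1            # random partner != i
        j = rng.integers(dim)                # random coordinate to perturb
        cand = sources[i].copy()
        cand[j] += rng.uniform(-1, 1) * (sources[i][j] - sources[k][j])
        e = lj_energy(cand, n_atoms)
        if e < energies[i]:                  # greedy replacement
            sources[i], energies[i], trials[i] = cand, e, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                                   # employed bees
            neighbor_search(i)
        fit = np.where(energies >= 0, 1.0 / (1.0 + energies), 1.0 + np.abs(energies))
        probs = fit / fit.sum()
        for i in rng.choice(n_sources, size=n_sources, p=probs):     # onlooker bees
            neighbor_search(i)
        for i in np.where(trials > limit)[0]:                        # scout bees
            sources[i] = rng.uniform(-box, box, size=dim)
            energies[i] = lj_energy(sources[i], n_atoms)
            trials[i] = 0

    best = int(np.argmin(energies))
    return sources[best].reshape(n_atoms, 3), energies[best]

# coords, e_min = abc_minimize()   # the LJ4 minimum is about -6.0 in reduced units
```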
NASA Technical Reports Server (NTRS)
Velden, Christopher
1995-01-01
The research objectives in this proposal were part of a continuing program at UW-CIMSS to develop and refine an automated geostationary satellite winds processing system which can be utilized in both research and operational environments. The majority of the originally proposed tasks were successfully accomplished, and in some cases the progress exceeded the original goals. Much of the research and development supported by this grant resulted in upgrades and modifications to the existing automated satellite winds tracking algorithm. These modifications were put to the test through case study demonstrations and numerical model impact studies. After being successfully demonstrated, the modifications and upgrades were implemented into the NESDIS algorithms in Washington DC, and have become part of the operational support. A major focus of the research supported under this grant was the continued development of water-vapor-tracked winds from geostationary observations. The fully automated UW-CIMSS tracking algorithm has been tuned to provide complete upper-tropospheric coverage from this data source, with data set quality close to that of operational cloud motion winds. Multispectral water vapor observations were collected and processed from several different geostationary satellites. The tracking and quality control algorithms were tuned and refined based on ground-truth comparisons and case studies involving impact on numerical model analyses and forecasts. The results have shown that the water vapor motion winds are of good quality, complement the cloud motion wind data, and can have a positive impact in NWP on many meteorological scales.
The pseudo-Boolean optimization approach to form the N-version software structure
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.
2015-10-01
The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design. These algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of using these algorithm modifications, because they reduce the search space.
ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
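For reference, the objective the abstract refers to can be written in its standard form as the negative log partial likelihood plus the elastic net penalty (the notation below is assumed, not taken from the paper):

```latex
% Elastic-net-penalized Cox log partial likelihood (standard form; notation assumed).
% beta: coefficients, x_i: covariates, delta_i: event indicator,
% R(t_i): risk set at event time t_i, lambda >= 0, 0 <= alpha <= 1.
\[
\hat{\beta} \;=\; \arg\min_{\beta}\;
  -\sum_{i:\,\delta_i = 1}\Bigl[ x_i^{\top}\beta
     \;-\; \log \sum_{j \in R(t_i)} \exp\bigl(x_j^{\top}\beta\bigr) \Bigr]
  \;+\; \lambda \Bigl( \alpha \lVert \beta \rVert_{1}
     \;+\; \tfrac{1-\alpha}{2}\, \lVert \beta \rVert_{2}^{2} \Bigr)
\]
```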
Modification of Gaussian mixture models for data classification in high energy physics
NASA Astrophysics Data System (ADS)
Štěpánek, Michal; Franc, Jiří; Kůs, Václav
2015-01-01
In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role in the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of the top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data set (9.7 fb-1).
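A minimal sketch of the underlying model-based classification step, without the paper's modifications (weighting, missing-data handling, genetic initialization, overtraining control), might look as follows, using scikit-learn Gaussian mixtures and Bayes' rule:

```python
# Minimal sketch: fit one Gaussian mixture to signal and one to background
# training events, then classify test events via Bayes' rule on the densities.
import numpy as np
from sklearn.mixture import GaussianMixture

def train(signal, background, n_components=3, seed=0):
    gm_s = GaussianMixture(n_components=n_components, random_state=seed).fit(signal)
    gm_b = GaussianMixture(n_components=n_components, random_state=seed).fit(background)
    prior_s = len(signal) / (len(signal) + len(background))
    return gm_s, gm_b, prior_s

def signal_probability(x, gm_s, gm_b, prior_s):
    # Bayes' rule on the mixture densities, evaluated in log space for stability.
    log_ps = gm_s.score_samples(x) + np.log(prior_s)
    log_pb = gm_b.score_samples(x) + np.log(1.0 - prior_s)
    return 1.0 / (1.0 + np.exp(log_pb - log_ps))

# gm_s, gm_b, prior = train(sig_train, bkg_train)
# p_sig = signal_probability(test_events, gm_s, gm_b, prior)
```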
Transfer of perceptual-motor training and the space adaptation syndrome
NASA Technical Reports Server (NTRS)
Kennedy, R. S.; Berbaum, K. S.; Williams, M. C.; Brannan, J.; Welch, R. B.
1987-01-01
Perceptual cue conflict may be the basis for the symptoms which are experienced by space travelers in microgravity conditions. Recovery has been suggested to take place after perceptual modification or reinterpretation. To elucidate this process, 10 subjects who repeatedly experienced a visual/vestibular conflict over trials and days, were tested in a similar but not identical perceptual situation (pseudo-Coriolis) to determine whether any savings in perceptual adaptation had occurred as compared to an unpracticed control group (N = 40). The practiced subjects experienced lessening dizziness and ataxia within and over sessions.
Effects of nanotopography on stem cell phenotypes.
Ravichandran, Rajeswari; Liao, Susan; Ng, Clarisse Ch; Chan, Casey K; Raghunath, Michael; Ramakrishna, Seeram
2009-12-31
Stem cells are unspecialized cells that can self-renew indefinitely and differentiate into several somatic cell types given the correct environmental cues. In the stem cell niche, stem cell-extracellular matrix (ECM) interactions are crucial for different cellular functions, such as adhesion, proliferation, and differentiation. Recently, in addition to chemical surface modifications, the importance of the nanometric-scale surface topography and roughness of biomaterials has increasingly been recognized as a crucial factor for cell survival and host tissue acceptance in synthetic ECMs. This review describes the influence of nanotopography on stem cell phenotypes.
Improved Bat Algorithm Applied to Multilevel Image Thresholding
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
Techniques for shuttle trajectory optimization
NASA Technical Reports Server (NTRS)
Edge, E. R.; Shieh, C. J.; Powers, W. F.
1973-01-01
The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
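Both this work and the bat-algorithm study above maximize Otsu's between-class variance over candidate threshold sets. A minimal sketch of that objective, which any of these metaheuristics would call as its fitness function, is given below; the bin count and threshold handling are illustrative assumptions.

```python
# Minimal sketch: multilevel Otsu objective (between-class variance of a gray-level histogram).
import numpy as np

def between_class_variance(hist, thresholds):
    """hist: 256-bin gray-level histogram; thresholds: integer thresholds in (0, 255)."""
    p = hist.astype(float) / hist.sum()          # gray-level probabilities
    levels = np.arange(len(p))
    mu_total = np.sum(levels * p)
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    sigma_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        omega = p[lo:hi].sum()                   # class probability
        if omega > 0:
            mu = np.sum(levels[lo:hi] * p[lo:hi]) / omega
            sigma_b += omega * (mu - mu_total) ** 2
    return sigma_b

# hist, _ = np.histogram(image, bins=256, range=(0, 256))
# fitness = between_class_variance(hist, [85, 170])   # larger is better
```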
Bilayer segmentation of webcam videos using tree-based classifiers.
Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan
2011-01-01
This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.
Computerized scoring algorithms for the Autobiographical Memory Test.
Takano, Keisuke; Gutenbrunner, Charlotte; Martens, Kris; Salmon, Karen; Raes, Filip
2018-02-01
Reduced specificity of autobiographical memories is a hallmark of depressive cognition. Autobiographical memory (AM) specificity is typically measured by the Autobiographical Memory Test (AMT), in which respondents are asked to describe personal memories in response to emotional cue words. Due to this free descriptive responding format, the AMT relies on experts' hand scoring for subsequent statistical analyses. This manual coding potentially impedes research activities in big data analytics such as large epidemiological studies. Here, we propose computerized algorithms to automatically score AM specificity for the Dutch (adult participants) and English (youth participants) versions of the AMT by using natural language processing and machine learning techniques. The algorithms showed reliable performances in discriminating specific and nonspecific (e.g., overgeneralized) autobiographical memories in independent testing data sets (area under the receiver operating characteristic curve > .90). Furthermore, outcome values of the algorithms (i.e., decision values of support vector machines) showed a gradient across similar (e.g., specific and extended memories) and different (e.g., specific memory and semantic associates) categories of AMT responses, suggesting that, for both adults and youth, the algorithms well capture the extent to which a memory has features of specific memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
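As a rough sketch of the kind of supervised pipeline described above, the example below grades response specificity with bag-of-words features and a linear support vector machine whose decision values serve as graded scores. The feature set, labels and pipeline are illustrative assumptions rather than the published models.

```python
# Minimal sketch: text-based specificity scoring with TF-IDF features and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def train_amt_scorer(responses, labels):
    """responses: list of AMT memory descriptions; labels: 1 = specific, 0 = nonspecific."""
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                          LinearSVC())
    model.fit(responses, labels)
    return model

# model = train_amt_scorer(train_texts, train_labels)
# graded = model.decision_function(test_texts)   # larger = more "specific"
# predicted = model.predict(test_texts)
```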
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as is also demonstrated by the comprehensive experimental results in this paper. PMID:27044001
The Die Is Cast: Precision Electrophilic Modifications Contribute to Cellular Decision Making.
Long, Marcus J C; Aye, Yimon
2016-10-02
This perspective sets out to critically evaluate the scope of reactive electrophilic small molecules as unique chemical signal carriers in biological information transfer cascades. We consider these electrophilic cues as a new volatile cellular currency and compare them to canonical signaling circulation such as phosphate in terms of chemical properties, biological specificity, sufficiency, and necessity. The fact that nonenzymatic redox sensing properties are found in proteins undertaking varied cellular tasks suggests that electrophile signaling is a moonlighting phenomenon manifested within a privileged set of sensor proteins. The latest interrogations into these on-target electrophilic responses set forth a new horizon in the molecular mechanism of redox signal propagation wherein direct low-occupancy electrophilic modifications on a single sensor target are biologically sufficient to drive functional redox responses with precision timing. We detail how the various mechanisms through which redox signals function could contribute to their interesting phenotypic responses, including hormesis.
Gazzola, Andrea; Brandalise, Federico; Rubolini, Diego; Rossi, Paola; Galeotti, Paolo
2015-12-01
Neurophysiological modifications associated with phenotypic plasticity in response to predators are largely unexplored, and there is a gap in knowledge of how the information encoded in predator cues is processed by prey sensory systems. To explore these issues, we exposed Rana dalmatina embryos to dragonfly chemical cues (kairomones) up to hatching. At different times after hatching (up to 40 days), we recorded the morphology and anti-predator behaviour of tadpoles from control and kairomone-treated embryo groups as well as their neural olfactory responses, by recording the activity of their mitral neurons before and after exposure to a kairomone solution. Treated embryos hatched later and hatchlings were smaller than control siblings. In addition, the tadpoles from the treated group showed a stronger anti-predator response than controls at 10 days (but not at 30 days) post-hatching, though the intensity of the contextual response to the kairomone stimulus did not differ between the two groups. Baseline neuronal activity at 30 days post-hatching, as assessed by the frequency of spontaneous excitatory postsynaptic events and by the firing rate of mitral cells, was higher among tadpoles from the treated versus the control embryo groups. At the same time, neuronal activity showed a stronger increase among tadpoles from the treated versus the control group after a local kairomone perfusion. Hence, a different contextual plasticity between treatments at the neuronal level was not mirrored by the anti-predator behavioural response. In conclusion, our experiments demonstrate ontogenetic plasticity in tadpole neuronal activity after embryonic exposure to predator cues, corroborating the evidence that early-life experience contributes to shaping the phenotype at later life stages. © 2015. Published by The Company of Biologists Ltd.
Shang, Andrea; Bylipudi, Sooraz; Bieszczad, Kasia M
2018-05-31
Epigenetic mechanisms are key for regulating long-term memory (LTM) and are known to exert control on memory formation in multiple systems of the adult brain, including the sensory cortex. One epigenetic mechanism is chromatin modification by histone acetylation. Blocking the action of histone de-acetylases (HDACs) that normally negatively regulate LTM by repressing transcription has been shown to enable memory formation. Indeed, HDAC inhibition appears to facilitate memory by altering the dynamics of gene expression events important for memory consolidation. However, less understood are the ways in which molecular-level consolidation processes alter subsequent memory to enhance storage or facilitate retrieval. Here we used a sensory perspective to investigate whether the characteristics of memory formed with HDAC inhibitors are different from naturally-formed memory. One possibility is that HDAC inhibition enables memory to form with greater sensory detail than normal. Because the auditory system undergoes learning-induced remodeling that provides substrates for sound-specific LTM, we aimed to identify behavioral effects of HDAC inhibition on memory for specific sound features using a standard model of auditory associative cue-reward learning, memory, and cortical plasticity. We found that three systemic post-training treatments of an HDAC3-inhibitor (RGPF966, Abcam Inc.) in rats in the early phase of training facilitated auditory discriminative learning, changed auditory cortical tuning, and increased the specificity for acoustic frequency formed in memory of both excitatory (S+) and inhibitory (S-) associations for at least 2 weeks. The findings support that epigenetic mechanisms act on neural and behavioral sensory acuity to increase the precision of associative cue memory, which can be revealed by studying the sensory characteristics of long-term associative memory formation with HDAC inhibitors. Published by Elsevier B.V.
Delay, Christina; Imin, Nijat; Djordjevic, Michael A
2013-12-01
The manifestation of repetitive developmental programmes during plant growth can be adjusted in response to various environmental cues. During root development, this means being able to precisely control root growth and lateral root development. Small signalling peptides have been found to play roles in many aspects of root development. One member of the CEP (C-TERMINALLY ENCODED PEPTIDE) gene family has been shown to arrest root growth. Here we report that CEP genes are widespread among seed plants but are not present in land plants that lack true branching roots or root vasculature. We have identified 10 additional CEP genes in Arabidopsis. Expression analysis revealed that CEP genes are regulated by environmental cues such as nitrogen limitation, increased salt levels, increased osmotic strength, and increased CO2 levels in both roots and shoots. Analysis of synthetic CEP variants showed that both peptide sequence and modifications of key amino acids affect CEP biological activity. Analysis of several CEP over-expression lines revealed distinct roles for CEP genes in root and shoot development. A cep3 knockout mutant showed increased root and shoot growth under a range of abiotic stress, nutrient, and light conditions. We demonstrate that CEPs are negative regulators of root development, slowing primary root growth and reducing lateral root formation. We propose that CEPs are negative regulators that mediate environmental influences on plant development.
Razmara, Asghar; Aghamolaei, Teamur; Madani, Abdoulhossain; Hosseini, Zahra; Zare, Shahram
2018-03-20
Road accidents are among the main causes of mortality. As safe and secure driving is a key strategy to reduce car injuries and offenses, the present research aimed to explore safe driving behaviours among taxi drivers based on the Health Belief Model (HBM). This study was conducted on 184 taxi drivers in Bandar Abbas who were selected based on a multiple stratified sampling method. Data were collected by a questionnaire comprising a demographic information section along with the constructs of the HBM. Data were analysed in SPSS ver. 19 using Pearson's correlation coefficients and multiple regression. The mean age of the participants was 45.1 years (SD = 11.1). They had, on average, 10.3 (SD = 7.5) years of taxi driving experience. Among the HBM components, cues to action and perceived benefits were shown to be positively correlated with safe driving behaviours, while perceived barriers were negatively correlated. Cues to action, perceived barriers and perceived benefits were shown to be the strongest predictors of safe driving behaviour. Based on the results of this study, cues to action, perceived benefits and perceived barriers are important when designing health promotion programmes to improve safe driving behaviours among taxi drivers. Therefore, advertising, the design of information campaigns, emphasis on the benefits of safe driving behaviours, and modification of barriers are recommended.
NASA Astrophysics Data System (ADS)
Gupta, Navarun
2003-10-01
One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by the acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head and pinnae, or outer ears. As such, HRTFs are somewhat different for each listener. For a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs. One can measure individual HRTFs using specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system. HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive and resonant phenomena, which determine the HRTF. Using signal processing tools, such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of the pinna features obtained from 3D images of the 15 experimental subjects modeled in the project. These equations were used to yield "Model HRTFs" that can create elevation effects. Listening tests conducted on 10 subjects show that these model HRTFs are 5% more effective than generic HRTFs when it comes to localizing sounds in the frontal plane. The number of reversals (perception of a sound source above the horizontal plane when it is actually below the plane, and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple, yet versatile, because it relies on easy-to-measure parameters to create an individualized HRTF. This low-order parameterized model also reduces the computational and storage demands, while maintaining a sufficient number of perceptually relevant spectral cues.
Mishra, Ajay; Aloimonos, Yiannis
2009-01-01
The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring the visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M.
Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach in the algorithm is to identify all possible coherent motion regions, then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of the object, a count, and derived statistics (count over time) from input video streams. The software can directly process videos streamed over the internet or directly from a hardware device (camera).
A novel mechanism for mechanosensory-based rheotaxis in larval zebrafish
Oteiza, Pablo; Odstrcil, Iris; Lauder, George; Portugues, Ruben; Engert, Florian
2017-01-01
When flying or swimming, animals must adjust their own movement to compensate for displacements induced by the flow of the surrounding air or water [1]. These flow-induced displacements can most easily be detected as visual whole-field motion with respect to the animal's frame of reference [2]. In spite of this, many aquatic animals consistently orient and swim against oncoming flows (a behavior known as rheotaxis) even in the absence of visual cues [3,4]. How animals achieve this task, and its underlying sensory basis, is still unknown. Here we show that in the absence of visual information, larval zebrafish (Danio rerio) perform rheotaxis by using flow velocity gradients as navigational cues. We present behavioral data that support a novel algorithm based on such local velocity gradients that fish use to efficiently avoid getting dragged by flowing water. Specifically, we show that fish use their mechanosensory lateral line to first sense the curl (or vorticity) of the local velocity vector field to detect the presence of flow and, second, measure its temporal change following swim bouts to deduce flow direction. These results reveal an elegant navigational strategy based on the sensing of flow velocity gradients and provide a comprehensive behavioral algorithm, also applicable for robotic design, that generalizes to a wide range of animal behaviors in moving fluids. PMID:28700578
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, J; Bradshaw, B; Godette, K
Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point-by-point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded. Any subsequent modifications were recorded that required input from a treatment planner to achieve an acceptable plan. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to prior cases, and thus are inherently tailored to physician preferences.
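A minimal sketch of the case-matching step described in the Methods, assuming flattened, registered contour coordinates as input, could look like the following; data loading, registration and the seed-arrangement transfer are omitted, and array shapes are assumptions.

```python
# Minimal sketch: PCA projection of registered contours plus Mahalanobis-distance matching.
import numpy as np
from sklearn.decomposition import PCA

def build_library_space(library_contours, n_components=10):
    """library_contours: (n_cases, n_points) flattened, registered contour coordinates."""
    pca = PCA(n_components=n_components).fit(library_contours)
    scores = pca.transform(library_contours)
    inv_cov = np.linalg.inv(np.cov(scores, rowvar=False))
    return pca, scores, inv_cov

def best_match(test_contour, pca, library_scores, inv_cov):
    t = pca.transform(test_contour.reshape(1, -1))[0]
    d = [np.sqrt((t - s) @ inv_cov @ (t - s)) for s in library_scores]
    return int(np.argmin(d))          # index of the most similar library case

# pca, scores, inv_cov = build_library_space(library)
# idx = best_match(new_case, pca, scores, inv_cov)
```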
NASA Astrophysics Data System (ADS)
Akinin, M. V.; Akinina, N. V.; Klochkov, A. Y.; Nikiforov, M. B.; Sokolova, A. V.
2015-05-01
The report reviews the fuzzy c-means algorithm, which performs image segmentation; assesses the quality of its results using the Xie-Beni criterion; and presents the results of experimental studies of the algorithm in the context of producing detailed two-dimensional maps with unmanned aerial vehicles. The experimental results support the applicability of the algorithm to the decoding of images obtained by aerial photography. The algorithm can partition the original image into a large number of segments (clusters) in a relatively short time, which is achieved by modifying the original k-means algorithm to operate with fuzzy memberships.
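For reference, a compact sketch of the fuzzy c-means iteration applied to image pixels is shown below; the fuzzifier, cluster count and stopping rule are illustrative choices, not those of the report.

```python
# Minimal sketch: fuzzy c-means clustering of pixels treated as feature vectors.
import numpy as np

def fuzzy_c_means(X, n_clusters=5, m=2.0, max_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# pixels = image.reshape(-1, 3).astype(float)
# centers, memberships = fuzzy_c_means(pixels, n_clusters=6)
# labels = memberships.argmax(axis=1).reshape(image.shape[:2])
```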
Refinement of Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks
NASA Technical Reports Server (NTRS)
Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.
2017-01-01
The objective of this paper is to refine objective motion cueing criteria for commercial transport simulators based on pilots' performance in three flying tasks. Actuator hardware and software algorithms determine motion cues. Today, during a simulator qualification, engineers objectively evaluate only the hardware. Pilot inspectors subjectively assess the overall motion cueing system (i.e., hardware plus software); however, it is acknowledged that pinpointing any deficiencies that might arise to either hardware or software is challenging. ICAO 9625 has an Objective Motion Cueing Test (OMCT), which is now a required test in the FAA's part 60 regulations for new devices, evaluating the software and hardware together; however, it lacks accompanying fidelity criteria. Hosman has documented OMCT results for a statistical sample of eight simulators which is useful, but having validated criteria would be an improvement. In a previous experiment, we developed initial objective motion cueing criteria that this paper is trying to refine. Sinacori suggested simple criteria which are in reasonable agreement with much of the literature. These criteria often necessitate motion displacements greater than most training simulators can provide. While some of the previous work has used transport aircraft in their studies, the majority used fighter aircraft or helicopters. Those that used transport aircraft considered degraded flight characteristics. As a result, earlier criteria lean more towards being sufficient, rather than necessary, criteria for typical transport aircraft training applications. Considering the prevalence of 60-inch, six-legged hexapod training simulators, a relevant question is "what are the necessary criteria that can be used with the ICAO 9625 diagnostic?" This study adds to the literature as follows. First, it examines well-behaved transport aircraft characteristics, but in three challenging tasks. The tasks are equivalent to the ones used in our previous experiment, allowing us to directly compare the results and add to the previous data. Second, it uses the Vertical Motion Simulator (VMS), the world's largest vertical displacement simulator. This allows inclusion of relatively large motion conditions, much larger than a typical training simulator can provide. Six new motion configurations were used that explore the motion responses between the initial objective motion cueing boundaries found in a previous experiment and what current hexapod simulators typically provide. Finally, a sufficiently large pilot pool added statistical reliability to the results.
Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun
2018-05-01
While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts at using AR technology in monocular MIS surgical scenes have mainly focused on the information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real-time processing of the point cloud data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration. We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluations of the proposed framework consist of two steps. First, we have created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) of surface vertices of the reconstructed mesh against the ground-truth 3D models. An error of 1.24 mm for the camera trajectories has been obtained, and the RMSD for surface reconstruction is 2.54 mm, both of which compare favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement, and the creation of depth cues. These results show the potential promise of our geometry-aware AR technology to be used in MIS surgical scenes. The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscopy camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time.
This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes. Copyright © 2018 Elsevier B.V. All rights reserved.
Limitations and requirements of content-based multimedia authentication systems
NASA Astrophysics Data System (ADS)
Wu, Chai W.
2001-08-01
Recently, a number of authentication schemes have been proposed for multimedia data such as images and sound data. They include both label based systems and semifragile watermarks. The main requirement for such authentication systems is that minor modifications such as lossy compression which do not alter the content of the data preserve the authenticity of the data, whereas modifications which do modify the content render the data not authentic. These schemes can be classified into two main classes depending on the model of image authentication they are based on. One of the purposes of this paper is to look at some of the advantages and disadvantages of these image authentication schemes and their relationship with fundamental limitations of the underlying model of image authentication. In particular, we study feature-based algorithms which generate an authentication tag based on some inherent features in the image such as the location of edges. The main disadvantage of most proposed feature-based algorithms is that similar images generate similar features, and therefore it is possible for a forger to generate dissimilar images that have the same features. On the other hand, the class of hash-based algorithms utilizes a cryptographic hash function or a digital signature scheme to reduce the data and generate an authentication tag. It inherits the security of digital signatures to thwart forgery attacks. The main disadvantage of hash-based algorithms is that the image needs to be modified in order to be made authenticatable. The amount of modification is on the order of the noise the image can tolerate before it is rendered inauthentic. The other purpose of this paper is to propose a multimedia authentication scheme which combines some of the best features of both classes of algorithms. The proposed scheme utilizes cryptographic hash functions and digital signature schemes and the data does not need to be modified in order to be made authenticatable. Several applications including the authentication of images on CD-ROM and handwritten documents will be discussed.
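As an illustration of the hash-based class described above, the following Python sketch reduces the image bytes with a cryptographic hash and protects the digest with a keyed signature. The function names are illustrative, and the HMAC stands in for a true public-key digital signature (e.g., RSA or ECDSA); this is not the specific scheme proposed in the paper.

```python
# Minimal sketch of a hash-based authentication tag: reduce the data with a
# cryptographic hash, then protect the digest with a keyed signature.
# HMAC is used here only as a stand-in for a real digital-signature scheme.
import hashlib
import hmac

def make_tag(image_bytes: bytes, key: bytes) -> bytes:
    digest = hashlib.sha256(image_bytes).digest()           # reduce the data
    return hmac.new(key, digest, hashlib.sha256).digest()   # "sign" the digest

def verify(image_bytes: bytes, key: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(make_tag(image_bytes, key), tag)

# Any single-bit change to image_bytes invalidates the tag, which is exactly
# the fragility the paper contrasts with content-based (feature) schemes.
```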
NASA Astrophysics Data System (ADS)
Weerts, A.; Wood, A. W.; Clark, M. P.; Carney, S.; Day, G. N.; Lemans, M.; Sumihar, J.; Newman, A. J.
2014-12-01
In the US, the forecasting approach used by the NWS River Forecast Centers and other regional organizations such as the Bonneville Power Administration (BPA) or the Tennessee Valley Authority (TVA) has traditionally involved manual model input and state modifications made by forecasters in real time. This process is time consuming and requires expert knowledge and experience. The benefits of automated data assimilation (DA) as a strategy for avoiding manual modification approaches have been demonstrated in research studies (e.g., Seo et al., 2009). This study explores the use of various ensemble DA algorithms within the operational platform used by TVA. The final goal is to identify a DA algorithm that will guide the manual modification process used by TVA forecasters and realize considerable time savings within the forecast process, without loss of quality or even with enhanced quality. We evaluate the usability of various popular DA algorithms that have been applied on a limited basis in operational hydrology. To this end, Delft-FEWS was wrapped (via piwebservice) in OpenDA to enable execution of FEWS workflows (and the chained models within these workflows, including SACSMA, UNITHG and LAGK) in a DA framework. Within OpenDA, several filter methods are available. We considered four algorithms: the particle filter (RRF), the Ensemble Kalman Filter, and the Asynchronous Ensemble Kalman and Particle filters. Retrospective simulation results for one location and algorithm (AEnKF) are illustrated in Figure 1. The initial results are promising. We will present verification results for these methods (and possibly more) for a variety of sub-basins in the Tennessee River basin. Finally, we will offer recommendations for guided DA based on our results. References: Seo, D.-J., L. Cajina, R. Corby and T. Howieson, 2009: Automatic State Updating for Operational Streamflow Forecasting via Variational Data Assimilation, Journal of Hydrology, 367, 255-275. Figure 1. Retrospectively simulated streamflow for the headwater basin above Powell River at Jonesville (red is observed flow, blue is simulated flow without DA, black is simulated flow with DA).
NASA Technical Reports Server (NTRS)
Metcalfe, A. G.; Bodenheimer, R. E.
1976-01-01
A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, Tse instruction set, and software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.
1989-01-01
The generalized eigenvalue problem, Kx = λMx, is of significant practical importance, especially in structural engineering where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
New vision system and navigation algorithm for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Tann, Hokchhay; Shakya, Bicky; Merchen, Alex C.; Williams, Benjamin C.; Khanal, Abhishek; Zhao, Jiajia; Ahlgren, David J.
2013-12-01
Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and pass through a final obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm adopted the vision system, creating smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.
Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard
2015-02-01
Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for the development of novel, much more efficient algorithms is still high. Therefore, the new algorithm designed in 2007 and called interaction sorting (IS) clearly attracted interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional gain in performance, 9% to 45% in comparison to the original IS method.
Reversible Data Hiding Based on DNA Computing
Xie, Yingjie
2017-01-01
Biocomputing, especially DNA computing, has developed rapidly and is widely used in information security. In this paper, a novel algorithm for reversible data hiding based on DNA computing is proposed. Inspired by histogram modification, a classical algorithm for reversible data hiding, we combine it with DNA computing to realize the algorithm using biological techniques. Compared with previous results, our experiments show a significantly improved ER (Embedding Rate). Furthermore, the PSNR (peak signal-to-noise ratio) of some test images is also improved. Experimental results show that the algorithm is suitable for protecting the copyright of the cover image in DNA-based information security. PMID:28280504
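For context, the classical histogram-modification (histogram-shifting) embedding that the paper builds on can be sketched as follows. This is a generic illustration of the baseline technique, not the DNA-based scheme itself; boundary cases (a saturated peak bin, a zero bin that is not truly empty) are ignored here and would need the usual overflow bookkeeping.

```python
import numpy as np

def embed_histogram_shift(img: np.ndarray, bits):
    """Classical histogram-shifting embedding (sketch).

    img  : 2-D uint8 array (assumes the peak grey level is below 255)
    bits : iterable of 0/1 payload bits
    Returns the marked image and the (peak, zero) pair needed for extraction.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                            # most populated grey level
    zero = int(hist[peak + 1:].argmin()) + peak + 1      # empty (or minimal) level right of the peak
    out = img.astype(np.int32).copy()

    # Shift the levels strictly between peak and zero one step right,
    # freeing the bin at peak + 1 for embedding.
    out[(out > peak) & (out < zero)] += 1

    # Embed: a peak pixel stays at `peak` for bit 0, moves to peak + 1 for bit 1.
    bit_iter = iter(bits)
    flat = out.ravel()
    for i, v in enumerate(flat):
        if v == peak:
            try:
                flat[i] = peak + next(bit_iter)
            except StopIteration:
                break
    return flat.reshape(img.shape).astype(np.uint8), (peak, zero)
```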
Adaptive Metropolis Sampling with Product Distributions
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Lee, Chiu Fan
2005-01-01
The Metropolis-Hastings (MH) algorithm is a way to sample a provided target distribution pi(z). It works by repeatedly sampling a separate proposal distribution T(x,x') to generate a random walk {x(t)}. We consider a modification of the MH algorithm in which T is dynamically updated during the walk. The update at time t uses the samples {x(t') : t' < t} to estimate the product distribution that has the least Kullback-Leibler distance to pi. That estimate is the information-theoretically optimal mean-field approximation to pi. We demonstrate through computer experiments that our algorithm produces samples that are superior to those of the conventional MH algorithm.
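The following sketch illustrates the general idea under simplifying assumptions: an independence Metropolis-Hastings sampler whose product proposal is a factorized Gaussian, periodically refit to the walk history by moment matching. The authors' estimator of the KL-optimal product distribution is more sophisticated; this is only a mean-field Gaussian analogue, and the adaptation schedule shown is illustrative.

```python
import numpy as np

def adaptive_product_mh(log_pi, x0, n_steps, adapt_every=200, rng=None):
    """Independence MH with a product (factorized Gaussian) proposal that is
    periodically refit to the walk history. Sketch only: moment matching is
    used in place of an explicit KL minimization, and the naive adaptation
    here ignores the usual diminishing-adaptation conditions."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    mu, sig = np.zeros_like(x), np.ones_like(x)

    def log_q(z):  # log density of the current product proposal
        return -0.5 * np.sum(((z - mu) / sig) ** 2 + np.log(2 * np.pi * sig ** 2))

    chain = [x.copy()]
    for t in range(1, n_steps):
        z = mu + sig * rng.standard_normal(x.shape)        # propose from T
        log_a = (log_pi(z) - log_pi(x)) + (log_q(x) - log_q(z))
        if np.log(rng.random()) < log_a:                   # MH acceptance
            x = z
        chain.append(x.copy())
        if t % adapt_every == 0:                           # update T from the history
            hist = np.asarray(chain)
            mu, sig = hist.mean(axis=0), hist.std(axis=0) + 1e-6
    return np.asarray(chain)
```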
Bordnick, Patrick S; Carter, Brian L; Traylor, Amy C
2011-01-01
Virtual reality (VR), a system of human–computer interaction that allows researchers and clinicians to immerse people in virtual worlds, is gaining considerable traction as a research, education, and treatment tool. Virtual reality has been used successfully to treat anxiety disorders such as fear of flying and post-traumatic stress disorder, as an aid in stroke rehabilitation, and as a behavior modification aid in the treatment of attention deficit disorder. Virtual reality has also been employed in research on addictive disorders. Given the strong evidence that drug-dependent people are highly prone to use and relapse in the presence of environmental stimuli associated with drug use, VR is an ideal platform from which to study this relationship. Research using VR has shown that drug-dependent people react with strong craving to specific cues (e.g., cigarette packs, liquor bottles) as well as environments or settings (e.g., bar, party) associated with drug use. Virtual reality has also been used to enhance learning and generalization of relapse prevention skills in smokers by reinforcing these skills in lifelike environments. Obesity researchers and treatment professionals, building on the lessons learned from VR research in substance abuse, have the opportunity to adapt these methods for investigating their own research and treatment questions. Virtual reality is ideally suited to investigate the link between food cues and environmental settings with eating behaviors and self-report of hunger. In addition, VR can be used as a treatment tool for enhancing behavior modification goals to support healthy eating habits by reinforcing these goals in lifelike situations. PMID:21527092
1989-06-23
The most recent changes are: (a) development of the VSTS (velocity space topology search) algorithm for calculating particle densities; (b) extension ... with simple analytic models. The largest modification of the MACH code was the implementation of the VSTS procedure, which constituted a complete ...
A Systolic VLSI Design of a Pipeline Reed-solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1984-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
A VLSI design of a pipeline Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1985-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
Supervised learning of probability distributions by neural networks
NASA Technical Reports Server (NTRS)
Baum, Eric B.; Wilczek, Frank
1988-01-01
Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
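A minimal single-layer sketch of the key modification, assuming sigmoid outputs interpreted as Bernoulli probabilities: the weights follow the gradient of the log-likelihood (cross-entropy) rather than of the squared error. This is not the authors' implementation; the network size, learning rate, and function name are placeholders.

```python
import numpy as np

def train_log_likelihood(X, T, epochs=500, lr=0.1, rng=None):
    """Single-layer sketch: outputs are probabilities and the weights follow
    the gradient of the log-likelihood rather than the squared error.
    X: (n, d) inputs, T: (n, k) target probabilities in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    W = 0.01 * rng.standard_normal((X.shape[1], T.shape[1]))
    b = np.zeros(T.shape[1])
    for _ in range(epochs):
        Y = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # output probabilities
        # For a Bernoulli likelihood with sigmoid outputs, the gradient of the
        # negative log-likelihood w.r.t. the pre-activation is simply (Y - T).
        grad = Y - T
        W -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean(axis=0)
    return W, b
```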
Incoherent beam combining based on the momentum SPGD algorithm
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical approach is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration, but also maintains the stability of the combining process. The feasibility of the proposed algorithm in the beam combining system is therefore verified.
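A sketch of SPGD with the momentum term described above, under the assumption that the metric J (for example, the combined power collected in a target bucket) can be evaluated for any control vector. The gain, perturbation amplitude, and momentum coefficient are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def spgd_momentum(J, u0, gain=0.5, delta=0.05, beta=0.9, n_iter=500, rng=None):
    """Stochastic parallel gradient descent with a momentum term (sketch).

    J     : callable returning the metric to maximize (e.g. combined power)
    u0    : initial control vector (e.g. tilt/phase commands of each beam)
    delta : amplitude of the random +/- perturbation applied in parallel
    beta  : momentum coefficient (beta = 0 recovers plain SPGD)
    """
    rng = np.random.default_rng() if rng is None else rng
    u = np.asarray(u0, dtype=float).copy()
    v = np.zeros_like(u)
    for _ in range(n_iter):
        d = delta * rng.choice([-1.0, 1.0], size=u.shape)   # parallel perturbation
        dJ = J(u + d) - J(u - d)                            # two-sided metric difference
        v = beta * v + gain * dJ * d                        # momentum accumulation
        u = u + v                                           # ascend the estimated gradient
    return u
```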
NASA Astrophysics Data System (ADS)
Bostrom, G.; Atkinson, D.; Rice, A.
2015-04-01
Cavity ringdown spectroscopy (CRDS) uses the exponential decay constant of light exiting a high-finesse resonance cavity to determine analyte concentration, typically via absorption. We present a high-throughput data acquisition system that determines the decay constant in near real time using the discrete Fourier transform algorithm on a field programmable gate array (FPGA). A commercially available, high-speed, high-resolution, analog-to-digital converter evaluation board is used as the platform for the system, after minor hardware and software modifications. The system outputs decay constants at a maximum rate of 4.4 kHz using an 8192-point fast Fourier transform by processing the intensity decay signal between ringdown events. We present the details of the system, including the modifications required to adapt the evaluation board to accurately process the exponential waveform. We also demonstrate the performance of the system, both stand-alone and incorporated into our existing CRDS system. Details of the FPGA, microcontroller, and circuitry modifications are provided in the Appendix, and computer code is available upon request from the authors.
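The numerical idea behind this kind of Fourier-transform processing can be sketched offline as follows: for an exponential decay A·exp(-t/τ), the Fourier transform is Aτ/(1 + iωτ), so a single DFT bin yields τ = -Im/(ω·Re), provided the record is many τ long and the sampling is fast compared to τ. This is only a hedged illustration of the approach, not the FPGA firmware described in the paper.

```python
import numpy as np

def tau_from_dft(signal, dt, k=1):
    """Estimate the ringdown time constant from a single DFT bin.

    For s(t) = A*exp(-t/tau), the continuous Fourier transform is
    A*tau / (1 + i*omega*tau), so tau = -Im(S) / (omega * Re(S)).
    The DFT bin k approximates this when the record is many tau long."""
    S = np.fft.rfft(signal)
    omega = 2 * np.pi * k / (len(signal) * dt)
    return -S[k].imag / (omega * S[k].real)

# quick self-check on a synthetic ringdown
dt, tau = 1e-7, 20e-6
t = np.arange(0, 50 * tau, dt)
print(tau_from_dft(np.exp(-t / tau), dt))   # ~2e-05 s
```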
Christodoulou, Asterios; Mikrogeorgis, Georgios; Vouzara, Triantafillia; Papachristou, Konstantinos; Angelopoulos, Christos; Nikolaidis, Nikolaos; Pitas, Ioannis; Lyroudia, Kleoniki
2018-02-15
In this study, the three-dimensional (3D) modification of root canal curvature after application of the Reciproc instrumentation technique was measured by using cone beam computed tomography (CBCT) imaging and a special algorithm developed for 3D measurement of root canal curvature. Thirty extracted upper molars were selected and digital radiographs of each tooth were taken. Root curvature was measured using the Schneider method, and the roots were divided into three groups of 10 each according to their curvature: Group 1 (0°-20°), Group 2 (21°-40°), Group 3 (41°-60°). CBCT imaging was applied to each tooth before and after its instrumentation, and the data were examined by using a specially developed CBCT image analysis algorithm. The instrumentation with Reciproc led to a decrease of the curvature by 30.23% (on average) in all groups. The proposed methodology proved able to measure the curvature of the root canal and its 3D modification after instrumentation.
An improved genetic algorithm for designing optimal temporal patterns of neural stimulation
NASA Astrophysics Data System (ADS)
Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.
2017-12-01
Objective. Electrical neuromodulation therapies typically apply constant-frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
Source term evaluation for combustion modeling
NASA Technical Reports Server (NTRS)
Sussman, Myles A.
1993-01-01
A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-07-18
The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper, based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
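The screening metric named above, the modified Hausdorff distance, can be sketched in its usual Dubuisson-Jain form as follows; the authors' exact variant may differ.

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance (Dubuisson-Jain form) between two point
    sets A and B, each given as an (n, d) array. Shown only to illustrate the
    screening metric named in the abstract."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    d_ab = d.min(axis=1).mean()   # mean nearest-neighbour distance A -> B
    d_ba = d.min(axis=0).mean()   # mean nearest-neighbour distance B -> A
    return max(d_ab, d_ba)
```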
Improved Artificial Bee Colony Algorithm Based Gravity Matching Navigation Method
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-01-01
The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper, based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position. PMID:25046019
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
Modification of Motion Perception and Manual Control Following Short-Duration Spaceflight
NASA Technical Reports Server (NTRS)
Wood, S. J.; Vanya, R. D.; Esteves, J. T.; Rupert, A. H.; Clement, G.
2011-01-01
Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination and spatial disorientation following G-transitions. This ESA-NASA study was designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances following short-duration spaceflights. The goals of this study were to (1) examine the effects of stimulus frequency on adaptive changes in motion perception during passive tilt and translation motion, (2) quantify decrements in manual control of tilt motion, and (3) evaluate vibrotactile feedback as a sensorimotor countermeasure.
NASA Astrophysics Data System (ADS)
Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.
2018-06-01
This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
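For reference, the basic controlled random search that serves as the starting point can be sketched as below, in the spirit of Price (1977): a population of random points is maintained, and a reflected trial point replaces the current worst point whenever it improves on it. The modified versions compared in the article add further mechanisms not shown here, and all settings below are illustrative.

```python
import numpy as np

def crs_minimize(f, lower, upper, pop_size=50, n_iter=5000, rng=None):
    """Basic controlled random search sketch (Price-style).

    f            : objective to minimize
    lower, upper : 1-D arrays defining the box-constrained search domain
    """
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = lower + (upper - lower) * rng.random((pop_size, dim))
    vals = np.array([f(x) for x in pop])

    for _ in range(n_iter):
        idx = rng.choice(pop_size, dim + 1, replace=False)   # pick n+1 points
        centroid = pop[idx[:-1]].mean(axis=0)                # centroid of the first n
        trial = 2.0 * centroid - pop[idx[-1]]                # reflect the last one
        if np.all(trial >= lower) and np.all(trial <= upper):
            ft = f(trial)
            worst = vals.argmax()
            if ft < vals[worst]:                             # replace the current worst
                pop[worst], vals[worst] = trial, ft
    best = vals.argmin()
    return pop[best], vals[best]
```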
Distributed genetic algorithms for the floorplan design problem
NASA Technical Reports Server (NTRS)
Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.
1991-01-01
Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to the traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate the advantages of this method over other methods, such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.
Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping
NASA Astrophysics Data System (ADS)
Ignakov, Dmitri
A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.
Overview of fast algorithm in 3D dynamic holographic display
NASA Astrophysics Data System (ADS)
Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian
2013-08-01
3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, huge amounts of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing the memory usage, such as: the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) based on the point-based method, and full analytical polygon-based methods and the one-step polygon-based method based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based method and the polygon-based method, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method using the 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
Quantum Transmemetic Intelligence
NASA Astrophysics Data System (ADS)
Piotrowski, Edward W.; Sładkowski, Jan
The following sections are included: * Introduction * A Quantum Model of Free Will * Quantum Acquisition of Knowledge * Thinking as a Quantum Algorithm * Counterfactual Measurement as a Model of Intuition * Quantum Modification of Freud's Model of Consciousness * Conclusion * Acknowledgements * References
Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis
Peng, Zhenyun; Zhang, Yaohui
2014-01-01
Hair is a salient feature of the human face region and is one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricature. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are then estimated efficiently from these labels. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and the hair color likelihood. This energy function is optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm was applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms
NASA Technical Reports Server (NTRS)
Liechty, Derek S.
2008-01-01
Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.
Act-and-wait time-delayed feedback control of autonomous systems
NASA Astrophysics Data System (ADS)
Pyragas, Viktoras; Pyragas, Kestutis
2018-02-01
Recently an act-and-wait modification of time-delayed feedback control has been proposed for the stabilization of unstable periodic orbits in nonautonomous dynamical systems (Pyragas and Pyragas, 2016 [30]). The modification implies a periodic switching of the feedback gain and makes the closed-loop system finite-dimensional. Here we extend this modification to autonomous systems. In order to keep constant the phase difference between the controlled orbit and the act-and-wait switching function an additional small-amplitude periodic perturbation is introduced. The algorithm can stabilize periodic orbits with an odd number of real unstable Floquet exponents using a simple single-input single-output constraint control.
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710
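The two ingredients named above can be sketched as follows, assuming a minimization problem over box constraints: HMCR and PAR are varied with the iteration count instead of being fixed, and Deb's polynomial mutation is applied in the improvisation step. The linear schedules and the way the mutation is attached are illustrative simplifications, not the paper's exact update rules.

```python
import numpy as np

def dhspm_minimize(f, lower, upper, hms=20, n_iter=2000, eta=20.0, rng=None):
    """Harmony search with dynamically varied HMCR/PAR and polynomial
    mutation (sketch). Schedules and coefficients are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    hm = lower + (upper - lower) * rng.random((hms, dim))     # harmony memory
    fit = np.array([f(x) for x in hm])

    def poly_mutate(x):                                       # Deb's polynomial mutation
        u = rng.random(dim)
        d = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta + 1)))
        return np.clip(x + d * (upper - lower), lower, upper)

    for t in range(n_iter):
        hmcr = 0.70 + 0.25 * t / n_iter          # dynamic memory-considering rate
        par = 0.45 - 0.35 * t / n_iter           # dynamic pitch-adjusting rate
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:
                new[j] = hm[rng.integers(hms), j]             # take from memory
            else:
                new[j] = lower[j] + (upper[j] - lower[j]) * rng.random()
        if rng.random() < par:
            new = poly_mutate(new)                            # mutation in the update step
        fn = f(new)
        worst = fit.argmax()
        if fn < fit[worst]:
            hm[worst], fit[worst] = new, fn
    best = fit.argmin()
    return hm[best], fit[best]
```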
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.
Karthikeyan, M; Raja, T Sree Ranga
2015-01-01
Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.
NASA Astrophysics Data System (ADS)
Park, Jun Kwon; Kang, Kwan Hyoung
2012-04-01
Contact angle (CA) hysteresis is important in many natural and engineering wetting processes, but predicting it numerically is difficult. We developed an algorithm that considers CA hysteresis when analyzing the motion of the contact line (CL). This algorithm employs feedback control of CA which decelerates CL speed to make the CL stationary in the hysteretic range of CA, and one control coefficient should be heuristically determined depending on characteristic time of the simulated system. The algorithm requires embedding only a simple additional routine with little modification of a code which considers the dynamic CA. The method is non-iterative and explicit, and also has less computational load than other algorithms. For a drop hanging on a wire, the proposed algorithm accurately predicts the theoretical equilibrium CA. For the drop impacting on a dry surface, the results of the proposed algorithm agree well with experimental results including the intermittent occurrence of the pinning of CL. The proposed algorithm is as accurate as other algorithms, but faster.
A Discussion of Using a Reconfigurable Processor to Implement the Discrete Fourier Transform
NASA Technical Reports Server (NTRS)
White, Michael J.
2004-01-01
This paper presents the design and implementation of the Discrete Fourier Transform (DFT) algorithm on a reconfigurable processor system. While highly applicable to many engineering problems, the DFT is an extremely computationally intensive algorithm. Consequently, the eventual goal of this work is to enhance the execution of a floating-point precision DFT algorithm by offloading the algorithm from the computing system. This computing system, within the context of this research, is a typical high-performance desktop computer with an array of field programmable gate arrays (FPGAs). FPGAs are hardware devices that are configured by software to execute an algorithm. If it is desired to change the algorithm, the software is changed to reflect the modification and then downloaded to the FPGA, which is then itself modified. This paper will discuss the methodology for developing the DFT algorithm to be implemented on the FPGA. We will discuss the algorithm, the FPGA code effort, and the results to date.
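For reference, the computation being accelerated is the direct DFT, X[k] = Σ_n x[n]·exp(-2πikn/N). The sketch below is a plain software evaluation used only to define the algorithm; it is not the FPGA implementation discussed in the paper, and production code would use an FFT.

```python
import numpy as np

def dft_direct(x):
    """Direct O(N^2) evaluation of X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    W = np.exp(-2j * np.pi * np.outer(n, n) / x.size)   # DFT matrix
    return W @ x

# sanity check against numpy's FFT
sig = np.random.default_rng(0).standard_normal(64)
assert np.allclose(dft_direct(sig), np.fft.fft(sig))
```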
Luo, Jiebo; Boutell, Matthew
2005-05-01
Automatic image orientation detection for natural images is a useful, yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. However, it is difficult for a computer to perform the task in the same way because current object recognition algorithms are extremely limited in their scope and robustness. As a result, existing orientation detection methods were built upon low-level vision features such as spatial distributions of color and texture. Discrepant detection rates have been reported for these methods in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent for unconstrained consumer photos, impressive given the findings of a psychophysical study conducted recently. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
Comparison of simulator fidelity model predictions with in-simulator evaluation data
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.
1983-01-01
A full factorial in simulator experiment of a single axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed loop model of a real time digital simulation facility. The results of the experiment encompassing various simulation fidelity factors, such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than were predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.
Roll-Out and Turn-Off Display Software for Integrated Display System
NASA Technical Reports Server (NTRS)
Johnson, Edward J., Jr.; Hyer, Paul V.
1999-01-01
This report describes the software products, system architectures and operational procedures developed by Lockheed-Martin in support of the Roll-Out and Turn-Off (ROTO) sub-element of the Low Visibility Landing and Surface Operations (LVLASO) program at the NASA Langley Research Center. The ROTO portion of this program focuses on developing technologies that aid pilots in the task of managing the deceleration of an aircraft to a pre-selected exit taxiway. This report focuses on software that produces a system of redundant deceleration cues for a pilot during the landing roll-out, and presents these cues on a head up display (HUD). The software also produces symbology for aircraft operational phases involving cruise flight, approach, takeoff, and go-around. The algorithms and data sources used to compute the deceleration guidance and generate the displays are discussed. Examples of the display formats and symbology options are presented. Logic diagrams describing the design of the ROTO software module are also given.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de
A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
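The idea can be illustrated with a toy one-dimensional sketch: centroid phases are kept for most reflections, while a genetic algorithm searches the phases of a few strong reflections, scoring each candidate map by its skewness. All genetic-algorithm settings below are illustrative and are not those of the SISA program.

```python
import numpy as np

def skewness(rho):
    d = rho - rho.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

def optimize_strong_phases(amps, centroid_phases, strong_idx,
                           pop_size=40, n_gen=200, rng=None):
    """Toy 1-D sketch: fix centroid phases for most reflections and let a GA
    search the phases of the reflections in `strong_idx`, maximizing the
    skewness of the resulting density."""
    rng = np.random.default_rng() if rng is None else rng
    n_strong = len(strong_idx)

    def density(strong_phases):
        phases = centroid_phases.copy()
        phases[strong_idx] = strong_phases
        F = amps * np.exp(1j * phases)          # structure factors (half spectrum)
        return np.fft.irfft(F)                  # real-space "map"

    pop = rng.uniform(0, 2 * np.pi, (pop_size, n_strong))
    for _ in range(n_gen):
        fit = np.array([skewness(density(ind)) for ind in pop])
        parents = pop[np.argsort(-fit)[:pop_size // 2]]      # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_strong) < 0.5                 # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, 0.3, n_strong)             # Gaussian phase mutation
            children.append(np.mod(child, 2 * np.pi))
        pop = np.vstack([parents, children])
    fit = np.array([skewness(density(ind)) for ind in pop])
    return pop[fit.argmax()]
```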
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm called QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP, resp. the FBP algorithm are defined and proved for: (1) single output neural networks in case of training patterns with different targets; and (2) multiple output neural networks in case of training patterns with equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning) establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP rather than the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas of adaptive and adaptable interactive systems, data mining, etc. applications.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC, and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and the basic ABC algorithm for the parameter extraction of the MOSFET model; also, the implementation of the ABC algorithm is shown to be simpler than that of the PSO algorithm.
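A plain global-best PSO sketch of the parameter-extraction loop is shown below, assuming only that an error function returning the misfit between measured and modelled device characteristics is available. The inertia and acceleration coefficients are common textbook values, not the settings tuned in the study.

```python
import numpy as np

def pso_extract(error, lower, upper, n_particles=30, n_iter=300,
                w=0.7, c1=1.5, c2=1.5, rng=None):
    """Global-best PSO sketch for model parameter extraction: `error` returns
    the misfit between measured and modelled characteristics for a candidate
    parameter vector."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = lower + (upper - lower) * rng.random((n_particles, lower.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([error(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lower, upper)                        # position update
        vals = np.array([error(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```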
NASA Astrophysics Data System (ADS)
Wixson, Steve E.
1990-07-01
Transparent Volume Imaging began with the stereo X-ray in 1895 and ended for most investigators when radiation safety concerns eliminated the second view. Today, similar images can be generated by the computer without safety hazards, providing improved perception and new means of image quantification. A volumetric workstation is under development based on an operational prototype. The workstation consists of multiple symbolic and numeric processors, a binocular stereo color display generator with large image memory and liquid crystal shutter, voice input and output, a 3D pointer that uses projection lenses so that structures in 3-space can be touched directly, 3D hard copy using vectograph and lenticular printing, and presentation facilities using stereo 35mm slide and stereo video tape projection. Volumetric software includes a volume window manager, Mayo Clinic's Analyze program and our Digital Stereo Microscope (DSM) algorithms. The DSM uses stereo X-ray-like projections, rapidly oscillating motion and focal depth cues such that detail can be studied in the spatial context of the entire set of data. Focal depth cues are generated with a lens and aperture algorithm that generates a plane of sharp focus, and multiple stereo pairs, each with a different plane of sharp focus, are generated and stored in the large memory for interactive selection using a physical or symbolic depth selector. More recent work is studying non-linear focussing. Psychophysical studies are underway to understand how people perceive images on a volumetric display and how accurately 3-dimensional structures can be quantitated from these displays.
GPS-Lipid: a robust tool for the prediction of multiple lipid modification sites.
Xie, Yubin; Zheng, Yueyuan; Li, Hongyu; Luo, Xiaotong; He, Zhihao; Cao, Shuo; Shi, Yi; Zhao, Qi; Xue, Yu; Zuo, Zhixiang; Ren, Jian
2016-06-16
As one of the most common post-translational modifications in eukaryotic cells, lipid modification is an important mechanism for the regulation of a variety of aspects of protein function. Over the last decades, three classes of lipid modifications have been increasingly studied. The co-regulation of these different lipid modifications is beginning to be noticed. However, due to the lack of integrated bioinformatics resources, studies of co-regulatory mechanisms are still very limited. In this work, we developed a tool called GPS-Lipid for the prediction of four classes of lipid modifications by integrating the Particle Swarm Optimization with an aging leader and challengers (ALC-PSO) algorithm. GPS-Lipid proved to be markedly superior to other similar tools. To facilitate research on lipid modification, we host a publicly available web server at http://lipid.biocuckoo.org with not only the implementation of GPS-Lipid, but also an integrative database and visualization tool. We performed a systematic analysis of the co-regulatory mechanism between different lipid modifications with GPS-Lipid. The results demonstrated that proximal dual-lipid modifications among palmitoylation, myristoylation and prenylation are a key mechanism for regulating various protein functions. In conclusion, GPS-Lipid is expected to serve as a useful resource for research on lipid modifications, especially on their co-regulation.
Shilov, Ignat V; Seymour, Sean L; Patel, Alpesh A; Loboda, Alex; Tang, Wilfred H; Keating, Sean P; Hunter, Christie L; Nuwaysir, Lydia M; Schaeffer, Daniel A
2007-09-01
The Paragon Algorithm, a novel database search engine for the identification of peptides from tandem mass spectrometry data, is presented. Sequence Temperature Values are computed using a sequence tag algorithm, allowing the degree of implication by an MS/MS spectrum of each region of a database to be determined on a continuum. Counter to conventional approaches, features such as modifications, substitutions, and cleavage events are modeled with probabilities rather than by discrete user-controlled settings to consider or not consider a feature. The use of feature probabilities in conjunction with Sequence Temperature Values allows for a very large increase in the effective search space with only a very small increase in the actual number of hypotheses that must be scored. The algorithm has a new kind of user interface that removes the user expertise requirement, presenting control settings in the language of the laboratory that are translated to optimal algorithmic settings. To validate this new algorithm, a comparison with Mascot is presented for a series of analogous searches to explore the relative impact of increasing search space probed with Mascot by relaxing the tryptic digestion conformance requirements from trypsin to semitrypsin to no enzyme and with the Paragon Algorithm using its Rapid mode and Thorough mode with and without tryptic specificity. Although they performed similarly for small search space, dramatic differences were observed in large search space. With the Paragon Algorithm, hundreds of biological and artifact modifications, all possible substitutions, and all levels of conformance to the expected digestion pattern can be searched in a single search step, yet the typical cost in search time is only 2-5 times that of conventional small search space. Despite this large increase in effective search space, there is no drastic loss of discrimination that typically accompanies the exploration of large search space.
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
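The geometric mapping underlying IPM can be sketched as follows, assuming a flat floor, a calibrated pinhole camera at a known height, and a pure downward pitch. The frame conventions and the function name are illustrative; a deployed system would use its own calibration and conventions.

```python
import numpy as np

def ipm_ground_point(u, v, K, cam_height, pitch):
    """Map an image pixel (u, v) to floor coordinates via inverse perspective
    mapping, assuming a flat floor, a camera `cam_height` metres above it
    pitched down by `pitch` radians, and intrinsic matrix K.

    Returns (x_lateral, y_forward) on the floor, or None if the ray does not
    hit the floor (pixel at or above the horizon)."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera coords (x right, y down, z forward)
    s, c = np.sin(pitch), np.cos(pitch)
    x_w = d[0]                                    # lateral component
    y_w = -d[1] * s + d[2] * c                    # forward component
    z_w = -d[1] * c - d[2] * s                    # vertical component (up)
    if z_w >= 0:
        return None                               # ray never reaches the floor
    t = cam_height / -z_w                         # scale the ray to hit z = 0 (the floor)
    return t * x_w, t * y_w
```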
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to be able to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
Binaural model-based dynamic-range compression.
Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D
2018-01-26
Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners, respectively, were aided individually with the algorithms for the two experimental parts. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
The Limits of Shape Recognition following Late Emergence from Blindness.
McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud
2015-09-21
Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Zheng, Bin
2017-01-01
The purpose of this study is to evaluate a new method to improve performance of computer-aided detection (CAD) schemes of screening mammograms with two approaches. In the first approach, we developed a new case based CAD scheme using a set of optimally selected global mammographic density, texture, spiculation, and structural similarity features computed from all four full-field digital mammography (FFDM) images of the craniocaudal (CC) and mediolateral oblique (MLO) views by using a modified fast and accurate sequential floating forward selection feature selection algorithm. Selected features were then applied to a “scoring fusion” artificial neural network (ANN) classification scheme to produce a final case based risk score. In the second approach, we combined the case based risk score with the lesion based scores of a conventional lesion based CAD scheme using a new adaptive cueing method. We evaluated our methods using a ten-fold cross-validation scheme on 924 cases (476 cancer and 448 recalled or negative), whereby each case had all four images from the CC and MLO views. The area under the receiver operating characteristic curve was AUC = 0.793±0.015 and the odds ratio monotonically increased from 1 to 37.21 as CAD-generated case based detection scores increased. Using the new adaptive cueing method, the region based and case based sensitivities of the conventional CAD scheme at a false positive rate of 0.71 per image increased by 2.4% and 0.8%, respectively. The study demonstrated that supplementary information can be derived by computing global mammographic density image features to improve CAD-cueing performance on the suspicious mammographic lesions. PMID:27997380
Mission Analysis Program for Solar Electric Propulsion (MAPSEP). Volume 3: Program manual
NASA Technical Reports Server (NTRS)
Huling, K. R.; Boain, R. J.; Wilson, T.; Hong, P. E.; Shults, G. L.
1974-01-01
The internal structure of MAPSEP is described. Topics discussed include: macrologic, variable definition, subroutines, and logical flow. Information is given to facilitate modifications to the models and algorithms of MAPSEP.
Bordnick, Patrick S; Carter, Brian L; Traylor, Amy C
2011-03-01
Virtual reality (VR), a system of human-computer interaction that allows researchers and clinicians to immerse people in virtual worlds, is gaining considerable traction as a research, education, and treatment tool. Virtual reality has been used successfully to treat anxiety disorders such as fear of flying and post-traumatic stress disorder, as an aid in stroke rehabilitation, and as a behavior modification aid in the treatment of attention deficit disorder. Virtual reality has also been employed in research on addictive disorders. Given the strong evidence that drug-dependent people are highly prone to use and relapse in the presence of environmental stimuli associated with drug use, VR is an ideal platform from which to study this relationship. Research using VR has shown that drug-dependent people react with strong craving to specific cues (e.g., cigarette packs, liquor bottles) as well as environments or settings (e.g., bar, party) associated with drug use. Virtual reality has also been used to enhance learning and generalization of relapse prevention skills in smokers by reinforcing these skills in lifelike environments. Obesity researchers and treatment professionals, building on the lessons learned from VR research in substance abuse, have the opportunity to adapt these methods for investigating their own research and treatment questions. Virtual reality is ideally suited to investigate the link between food cues and environmental settings with eating behaviors and self-report of hunger. In addition, VR can be used as a treatment tool for enhancing behavior modification goals to support healthy eating habits by reinforcing these goals in life-like situations. © 2011 Diabetes Technology Society.
Improved perception of music with a harmonic based algorithm for cochlear implants.
Li, Xing; Nie, Kaibao; Imennov, Nikita S; Rubinstein, Jay T; Atlas, Les E
2013-07-01
The lack of fine structure information in conventional cochlear implant (CI) encoding strategies presumably contributes to the generally poor music perception with CIs. To improve CI users' music perception, a harmonic-single-sideband-encoder (HSSE) strategy was developed, which explicitly tracks the harmonics of a single musical source and transforms them into modulators conveying both amplitude and temporal fine structure cues to electrodes. To investigate its effectiveness, vocoder simulations of HSSE and the conventional continuous-interleaved-sampling (CIS) strategy were implemented. Using these vocoders, five normal-hearing subjects' melody and timbre recognition performance was evaluated: a significant benefit of HSSE to both melody (p < 0.002) and timbre (p < 0.026) recognition was found. Additionally, HSSE was acutely tested in eight CI subjects. On timbre recognition, a significant advantage of HSSE over the subjects' clinical strategy was demonstrated: the largest improvement was 35% and the mean 17% (p < 0.013). On melody recognition, two subjects showed 20% improvement with HSSE; however, the mean improvement of 7% across subjects was not significant (p > 0.090). To quantify the temporal cues delivered to the auditory nerve, the neural spike patterns evoked by HSSE and CIS for one melody stimulus were simulated using an auditory nerve model. Quantitative analysis demonstrated that HSSE can convey temporal pitch cues better than CIS. The results suggest that HSSE is a promising strategy to enhance music perception with CIs.
NASA Astrophysics Data System (ADS)
Kachejian, Kerry C.; Vujcic, Doug
1998-08-01
The combat cueing (CBT-Q) research effort will develop and demonstrate a portable tactical information system that will enhance the effectiveness of small unit military operations by providing real-time target cueing information to individual warfighters and teams. CBT-Q consists of a network of portable radio frequency (RF) 'modules' and is controlled by a body-worn 'user station' utilizing a head mounted display. On the battlefield, CBT-Q modules will detect an enemy transmitter and instantly provide the warfighter with the emitter's location. During the 'fog of battle', CBT-Q would tell the warfighter, 'Look here, right now,' giving individuals insight into the RF spectrum, resulting in faster target engagement times, increased survivability, and a reduced potential for fratricide. CBT-Q technology can support both mounted and dismounted tactical forces involved in land, sea and air warfighting operations. The CBT-Q system combines robust geolocation and signal sorting algorithms with hardware and software modularity to offer maximum utility to the warfighter. A single CBT-Q module can provide threat RF detection. Three networked CBT-Q modules can provide emitter positions using a time difference of arrival (TDOA) technique. The TDOA approach relies on timing and positioning data derived from the global positioning system (GPS). The information will be displayed on a variety of displays, including a flat-panel head mounted display. The end result of the program will be the demonstration of the system with US Army Scouts in an operational environment.
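A minimal sketch of two-dimensional TDOA geolocation with three time-synchronized receivers, solved by nonlinear least squares; the receiver coordinates, propagation speed handling, and solver choice are illustrative assumptions rather than details of the CBT-Q implementation.

    import numpy as np
    from scipy.optimize import least_squares

    C = 3.0e8  # propagation speed (m/s)

    def tdoa_residuals(xy, rx, dt):
        """Residuals between measured and predicted time differences (receiver i vs. receiver 0)."""
        d = np.linalg.norm(rx - xy, axis=1)          # range from candidate position to each receiver
        predicted = (d[1:] - d[0]) / C               # predicted TDOA w.r.t. the reference receiver
        return predicted - dt

    def locate_emitter(rx, dt, guess=(0.0, 0.0)):
        return least_squares(tdoa_residuals, guess, args=(rx, dt)).x

    # three networked modules at known GPS-derived positions (metres, local frame)
    receivers = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 400.0]])
    emitter_true = np.array([350.0, 250.0])
    ranges = np.linalg.norm(receivers - emitter_true, axis=1)
    measured_dt = (ranges[1:] - ranges[0]) / C       # noiseless simulated measurements
    print(locate_emitter(receivers, measured_dt, guess=(100.0, 100.0)))

With noisy measurements, more than three modules and a robust loss in the solver would typically be used.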
Lalys, Florent; Riffaud, Laurent; Bouget, David; Jannin, Pierre
2012-01-01
The need for a better integration of the new generation of Computer-Assisted-Surgical (CAS) systems has been recently emphasized. One necessity to achieve this objective is to retrieve data from the Operating Room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high level surgical tasks using microscope videos analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted of the definition of several visual cues for extracting semantic information, therefore characterizing each frame of the video. Five different image-based classifiers were therefore implemented. A step of pupil segmentation was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data. Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) were tested. This association combined the advantages of all methods for better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%. PMID:22203700
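For reference, a minimal dynamic time warping distance in plain Python, of the kind used to align a visual-cue sequence against a phase template; the cue values and distance function are placeholders, and the HMM alternative evaluated in the paper is not shown.

    import math

    def dtw_distance(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
        """Classic O(len_a * len_b) DTW with unit step sizes."""
        n, m = len(seq_a), len(seq_b)
        cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = dist(seq_a[i - 1], seq_b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]

    # toy usage: compare a binary visual-cue trace of a new video against a reference phase template
    observed = [0, 0, 1, 1, 1, 0, 0, 1]
    template = [0, 1, 1, 0, 1]
    print(dtw_distance(observed, template))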
Approach Bias Modification in Food Craving-A Proof-of-Concept Study.
Brockmeyer, Timo; Hahn, Carolyn; Reetz, Christina; Schmidt, Ulrike; Friederich, Hans-Christoph
2015-09-01
The aim of the present proof-of-concept study was to test a novel cognitive bias modification (CBM) programme in an analogue sample of people with subclinical bulimic eating disorder (ED) psychopathology. Thirty participants with high levels of trait food craving were trained to make avoidance movements in response to visual food stimuli in an implicit learning paradigm. The intervention comprised ten 15-minute sessions over a 5-week course. At baseline, participants showed approach and attentional biases towards high-caloric palatable food that were both significantly reduced and turned into avoidance biases after the training. Participants also reported pronounced reductions in both trait and cue-elicited food craving as well as in ED symptoms. The overall evaluation of the training by the participants was positive. The specific CBM programme tested in this pilot trial promises to be an effective and feasible way to alter automatic action tendencies towards food in people suffering from bulimic ED psychopathology. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.
Circadian expression profiles of chromatin remodeling factor genes in Arabidopsis.
Lee, Hong Gil; Lee, Kyounghee; Jang, Kiyoung; Seo, Pil Joon
2015-01-01
The circadian clock is a biological time keeper mechanism that regulates biological rhythms to a period of approximately 24 h. The circadian clock enables organisms to anticipate environmental cycles and coordinates internal cellular physiology with external environmental cues. In plants, correct matching of the clock with the environment confers fitness advantages to plant survival and reproduction. Therefore, circadian clock components are regulated at multiple layers to fine-tune the circadian oscillation. Epigenetic regulation provides an additional layer of circadian control. However, little is known about which chromatin remodeling factors are responsible for circadian control. In this work, we analyzed circadian expression of 109 chromatin remodeling factor genes and identified 17 genes that display circadian oscillation. In addition, we also found that a candidate interacts with a core clock component, supporting that clock activity is regulated in part by chromatin modification. As an initial attempt to elucidate the relationship between chromatin modification and circadian oscillation, we identified novel regulatory candidates that provide a platform for future investigations of chromatin regulation of the circadian clock.
Flow Navigation by Smart Microswimmers via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Colabrese, Simona; Biferale, Luca; Celani, Antonio; Gustavsson, Kristian
2017-11-01
We have numerically modeled active particles which are able to acquire some limited knowledge of the fluid environment from simple mechanical cues and exert a control on their preferred steering direction. We show that those swimmers can learn effective strategies just by experience, using a reinforcement learning algorithm. As an example, we focus on smart gravitactic swimmers. These are active particles whose task is to reach the highest altitude within some time horizon, exploiting the underlying flow whenever possible. The reinforcement learning algorithm allows particles to learn effective strategies even in difficult situations when, in the absence of control, they would end up being trapped by flow structures. These strategies are highly nontrivial and cannot be easily guessed in advance. This work paves the way towards the engineering of smart microswimmers that solve difficult navigation problems. ERC AdG NewTURB 339032.
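A minimal sketch of tabular Q-learning of the kind such smart swimmers could use, with made-up state and action sets (a discretized mechanical cue as state, a preferred steering direction as action) and a dummy environment step standing in for the simulated flow; the reward shaping and flow model of the paper are not reproduced.

    import random

    ACTIONS = ["up", "left", "right"]                    # hypothetical preferred steering directions
    STATES = ["vort_neg", "vort_zero", "vort_pos"]       # hypothetical discretized mechanical cue

    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    def choose_action(state, epsilon=0.1):
        """Epsilon-greedy policy over the tabular Q-function."""
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
        """One-step Q-learning update; reward could be, e.g., the altitude gained during the step."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    def env_step(state, action):
        """Stand-in for the simulated flow: returns (next_state, reward)."""
        return random.choice(STATES), random.uniform(-1.0, 1.0)

    state = random.choice(STATES)
    for _ in range(10000):
        action = choose_action(state)
        next_state, reward = env_step(state, action)
        q_update(state, action, reward, next_state)
        state = next_state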
Biologically inspired computation and learning in Sensorimotor Systems
NASA Astrophysics Data System (ADS)
Lee, Daniel D.; Seung, H. S.
2001-11-01
Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the networks are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibular ocular reflex system.
Nickel, Katelin B; Wallace, Anna E; Warren, David K; Ball, Kelly E; Mines, Daniel; Fraser, Victoria J; Olsen, Margaret A
2016-08-16
Accurate identification of underlying health conditions is important to fully adjust for confounders in studies using insurer claims data. Our objective was to evaluate the ability of four modifications to a standard claims-based measure to estimate the prevalence of select comorbid conditions compared with national prevalence estimates. In a cohort of 11,973 privately insured women aged 18-64 years with mastectomy from 1/04-12/11 in the HealthCore Integrated Research Database, we identified diabetes, hypertension, deficiency anemia, smoking, and obesity from inpatient and outpatient claims for the year prior to surgery using four different algorithms. The standard comorbidity measure was compared to revised algorithms which included outpatient medications for diabetes, hypertension and smoking; an expanded timeframe encompassing the mastectomy admission; and an adjusted time interval and number of required outpatient claims. A χ2 test of proportions was used to compare prevalence estimates for 5 conditions in the mastectomy population to national health survey datasets (Behavioral Risk Factor Surveillance System and the National Health and Nutrition Examination Survey). Medical record review was conducted for a sample of women to validate the identification of smoking and obesity. Compared to the standard claims algorithm, use of the modified algorithms increased prevalence from 4.79 to 6.79 % for diabetes, 14.75 to 24.87 % for hypertension, 4.23 to 6.65 % for deficiency anemia, 1.78 to 12.87 % for smoking, and 1.14 to 6.31 % for obesity. The revised estimates were more similar, but not statistically equivalent, to nationally reported prevalence estimates. Medical record review revealed low sensitivity (17.86 %) to capture obesity in the claims, moderate negative predictive value (NPV, 71.78 %) and high specificity (99.15 %) and positive predictive value (PPV, 90.91 %); the claims algorithm for current smoking had relatively low sensitivity (62.50 %) and PPV (50.00 %), but high specificity (92.19 %) and NPV (95.16 %). Modifications to a standard comorbidity measure resulted in prevalence estimates that were closer to expected estimates for non-elderly women than the standard measure. Adjustment of the standard claims algorithm to identify underlying comorbid conditions should be considered depending on the specific conditions and the patient population studied.
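A sketch in pandas of one such modification: a condition (here diabetes) is flagged when either a qualifying diagnosis appears on claims in an expanded pre-surgery window or an outpatient medication is present. The code lists, column names, and claim-count rule are illustrative assumptions, not the study's exact algorithm.

    import pandas as pd

    DIABETES_DX_PREFIX = "250"                                   # illustrative diagnosis-code prefix
    DIABETES_DRUGS = {"metformin", "insulin", "glipizide"}       # illustrative drug list

    def flag_diabetes(claims, rx, surgery_dates, lookback_days=365):
        """Return a per-patient boolean Series: diabetic under the modified algorithm.

        claims: patient_id, claim_id, service_date, dx_code
        rx: patient_id, drug_name
        surgery_dates: patient_id, surgery_date (dates already parsed as datetimes)
        """
        merged = claims.merge(surgery_dates, on="patient_id")
        window = merged[
            (merged["service_date"] >= merged["surgery_date"] - pd.Timedelta(days=lookback_days))
            & (merged["service_date"] <= merged["surgery_date"])   # expanded to include the admission
        ]
        dx_hit = (window[window["dx_code"].str.startswith(DIABETES_DX_PREFIX)]
                  .groupby("patient_id")["claim_id"].nunique() >= 1)
        rx_hit = (rx[rx["drug_name"].str.lower().isin(DIABETES_DRUGS)]
                  .groupby("patient_id").size() >= 1)
        patients = surgery_dates.set_index("patient_id").index
        return (dx_hit.reindex(patients, fill_value=False)
                | rx_hit.reindex(patients, fill_value=False))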
Gamut extension for cinema: psychophysical evaluation of the state of the art and a new algorithm
NASA Astrophysics Data System (ADS)
Zamir, Syed Waqas; Vazquez-Corral, Javier; Bertalmío, Marcelo
2015-03-01
Wide gamut digital display technology, in order to show its full potential in terms of colors, is creating an opportunity to develop gamut extension algorithms (GEAs). To this end, in this work we present two contributions. First we report a psychophysical evaluation of GEAs specifically for cinema using a digital cinema projector under cinematic (low ambient light) conditions; to the best of our knowledge this is the first evaluation of this kind reported in the literature. Second, we propose a new GEA by introducing simple but key modifications to the algorithm of Zamir et al. This new algorithm performs well in terms of skin tones and memory colors, with results that look natural and which are free from artifacts.
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
An Invisible Text Watermarking Algorithm using Image Watermark
NASA Astrophysics Data System (ADS)
Jalil, Zunera; Mirza, Anwar M.
Copyright protection of digital contents is very necessary in today's digital world with efficient communication mediums such as the internet. Text is the dominant part of internet content, and there are very limited techniques available for text protection. This paper presents a novel algorithm for the protection of plain text, which embeds the logo image of the copyright owner in the text; this logo can later be extracted from the text to prove ownership. The algorithm is robust against content-preserving modifications and, at the same time, is capable of detecting malicious tampering. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks by calculating normalized Hamming distances. The results are also compared with a recent work in this domain.
Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.
2016-01-01
One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This will allow an aircraft to remain close to their OPD. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information and a series of studies were performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
Alien Genetic Algorithm for Exploration of Search Space
NASA Astrophysics Data System (ADS)
Patel, Narendra; Padhiyar, Nitin
2010-10-01
The Genetic Algorithm (GA) is a widely accepted population-based stochastic optimization technique used for single- and multi-objective optimization problems. Many modifications of GA have been proposed in the last three decades, mainly addressing two issues: increasing the convergence rate and increasing the probability of finding the global optimum. These two goals conflict: emphasizing fast convergence makes GA prone to premature convergence to a local optimum, while emphasizing the probability of finding the global optimum demands large computational effort. Thus, to reduce the contradictory effects of these two aspects, we propose a modification of GA that adds an alien member to the population at every generation. Adding an alien member to the current population at every generation increases the probability of obtaining the global optimum while maintaining a high convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparing it with the conventional GA.
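A minimal sketch of the proposed idea on a toy minimization problem: a standard real-coded GA in which a freshly generated random "alien" member is injected into the population every generation. The operators and parameters below are generic choices, not necessarily those used by the authors.

    import math
    import random

    def fitness(x):
        """Toy multimodal objective to minimize."""
        return x * x + 10.0 * math.sin(3.0 * x)

    def alien_ga(pop_size=30, generations=200, bounds=(-10.0, 10.0),
                 mut_sigma=0.3, crossover_rate=0.9):
        lo, hi = bounds
        pop = [random.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            new_pop = pop[:2]                                   # elitism: keep the two best members
            while len(new_pop) < pop_size - 1:
                a, b = random.sample(pop[:pop_size // 2], 2)    # parents drawn from the better half
                child = 0.5 * (a + b) if random.random() < crossover_rate else a
                child += random.gauss(0.0, mut_sigma)           # Gaussian mutation
                new_pop.append(min(hi, max(lo, child)))
            new_pop.append(random.uniform(lo, hi))              # the "alien": a fresh random member
            pop = new_pop
        return min(pop, key=fitness)

    print(alien_ga())

The alien member keeps injecting diversity even after the rest of the population has clustered around one basin, which is the stated mechanism for balancing convergence rate against the risk of premature convergence.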
NASA Astrophysics Data System (ADS)
Gramajo, German G.
This thesis presents an algorithm for a search and coverage mission that has increased autonomy in generating an ideal trajectory while explicitly considering the available energy in the optimization. Further, current algorithms used to generate trajectories depend on the operator providing a discrete set of turning rate requirements to obtain an optimal solution. This work proposes an additional modification to the algorithm so that it optimizes the trajectory for a range of turning rates instead of a discrete set of turning rates. This thesis conducts an evaluation of the algorithm with variation in turn duration, entry-heading angle, and entry point. Comparative studies of the algorithm with an existing method indicate improved autonomy in choosing the optimization parameters while producing trajectories with better coverage area and a final position closer to the desired terminal point.
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class was devised of fast wavelet based algorithms for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations, with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.
Optimization by nonhierarchical asynchronous decomposition
NASA Technical Reports Server (NTRS)
Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.
1992-01-01
Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
Adapting a Navier-Stokes code to the ICL-DAP
NASA Technical Reports Server (NTRS)
Grosch, C. E.
1985-01-01
The results of an experiment are reported, i.e., to adapt a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP). The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications of the algorithm so as to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.
Basic Research in Digital Stochastic Model Algorithmic Control.
1980-11-01
Contents include an IDCOM description, the basic control computation, the gradient algorithm, the simulation model, and model modifications. The control computation is organized around an internal model of the system, constraints, and control trajectory computation. The multivariable system to be controlled is represented by an internal model; the approach is flexible and adaptive, since the model, criteria, and sampling rates can be adjusted on-line. This flexibility comes from the use of the impulse response model.
Modifications to Improve Data Acquisition and Analysis for Camouflage Design
1983-01-01
terrains into facsimiles of the original scenes in 3, 4, or 5 colors in CIELAB notation. Tasks that were addressed included optimization of the... A histogram algorithm (HIST) was used as a first step in the clustering of the CIELAB values of the scene pixels. This algorithm is highly efficient... it is not, however, an optimal process, and the CIELAB coordinates of the final color domains can be influenced by the color coordinate increments used in the...
Rehman, Rizwana; Everhart, Amanda; Frontera, Alfred T; Kelly, Pamela R; Lopez, Maria; Riley, Denise; Sajan, Sheela; Schooff, David M; Tran, Tung T; Husain, Aatif M
2016-11-01
Identification of epilepsy patients from administrative data in large managed healthcare organizations is a challenging task. The objectives of this report are to describe the implementation of an established algorithm and different modifications for the estimation of epilepsy prevalence in the Veterans Health Administration (VHA). For the prevalence estimation during a given time period, patients prescribed anti-epileptic drugs and having seizure diagnoses on clinical encounters were identified. In contrast to the established algorithm, which required inclusion of diagnoses data from the time period of interest only, variants were tested that considered diagnoses data beyond the prevalence period to improve sensitivity. One variant excluded data from diagnostic EEG and LTM clinics to improve specificity. Another modification also required documentation of seizures on the problem list (the electronic list of patients' established diagnoses). Of the variants tested, the one excluding information from diagnostic clinics and extending the time for clinical encounters beyond the base period of interest was determined to be superior. It can be inferred that the number of patients receiving care for epilepsy in the VHA ranges between 74,000 and 87,000. In the wake of the recent implementation of ICD-10 codes in the VHA, only minor tweaks to the algorithms presented here will be needed for future prevalence estimation. This review is not only beneficial for researchers interested in VHA related data but can also be helpful for managed healthcare organizations involved in epilepsy care aiming at accurate identification of patients from large administrative databases. Published by Elsevier B.V.
A Standardized Relative Resource Cost Model for Medical Care: Application to Cancer Control Programs
2013-01-01
Medicare data represent 75% of aged and permanently disabled Medicare beneficiaries enrolled in the fee-for-service (FFS) indemnity option, but the data omit 25% of beneficiaries enrolled in Medicare Advantage health maintenance organizations (HMOs). Little research has examined how longitudinal patterns of utilization differ between HMOs and FFS. The Burden of Cancer Study developed and implemented an algorithm to assign standardized relative costs to HMO and Medicare FFS data consistently across time and place. Medicare uses 15 payment systems to reimburse FFS providers for covered services. The standardized relative resource cost algorithm (SRRCA) adapts these various payment systems to utilization data. We describe the rationale for modifications to the Medicare payment systems and discuss the implications of these modifications. We applied the SRRCA to data from four HMO sites and the linked Surveillance, Epidemiology, and End Results–Medicare data. Some modifications to Medicare payment systems were required, because data elements needed to categorize utilization were missing from both data sources. For example, data were not available to create episodes for home health services received, so we assigned costs per visit based on visit type (nurse, therapist, and aide). For inpatient utilization, we modified Medicare’s payment algorithm by changing it from a flat payment per diagnosis-related group to daily rates for diagnosis-related groups to differentiate shorter versus longer stays. The SRRCA can be used in multiple managed care plans and across multiple FFS delivery systems within the United States to create consistent relative cost data for economic analyses. Prior to international use of the SRRCA, data need to be standardized. PMID:23962514
An algorithm to compute the sequency ordered Walsh transform
NASA Technical Reports Server (NTRS)
Larsen, H.
1976-01-01
A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminated Gray code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering and computing logical autocorrelations, and selective bit reversing.
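For context, a minimal in-place fast Walsh-Hadamard transform in natural (Hadamard) order; converting its output to sequency order normally requires an index permutation (bit reversal plus a Gray-code conversion), which is the bookkeeping these algorithms fold into the transform structure instead. This is a generic textbook sketch, not the algorithm of the paper.

    def fwht(a):
        """In-place fast Walsh-Hadamard transform (natural/Hadamard order); len(a) must be a power of two."""
        n = len(a)
        h = 1
        while h < n:
            for start in range(0, n, 2 * h):
                for j in range(start, start + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a

    data = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]
    coeffs = fwht(data[:])        # Hadamard-ordered coefficients
    # A sequency-ordered result would additionally permute `coeffs` by a bit-reversal
    # and Gray-code index mapping, the reordering step the described algorithms avoid.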
Ocean observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1998-01-01
Significant accomplishments made during the present reporting period: (1) We expanded our "spectral-matching" algorithm (SMA), for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction and derivation of the ocean's bio-optical parameters, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) A modification to the SMA that does not require detailed aerosol models has been developed. This is important as the requirement for realistic aerosol models has been a weakness of the SMA; and (3) We successfully acquired micro pulse lidar data in a Saharan dust outbreak during ACE-2 in the Canary Islands.
Cochlear implant in Hong Kong Cantonese.
Tang, S O; Luk, W S; Lau, C C; So, K W; Wong, C M; Yiu, M L; Kwok, C L
1990-11-01
Cochlear implant surgery was performed in four Cantonese-speaking postlingually deaf Chinese adults, using the House/3M single channel device. This article outlines the methodology, including preoperative assessment and postoperative rehabilitation; and explains the necessary modifications in speech and audiologic work-up in Cantonese-speaking patients. Salient features of Cantonese phonetics, especially its tonal characteristics, are described. The findings of the study are presented. The results of the cochlear implant would suggest a performance superior to that of the hearing aid. Furthermore, the cochlear implant is able to detect tonal cues. This quality of the cochlear implant may prove to be a valuable asset to a tonal language-speaking cochlear implantee.
Technical Note: Improving the VMERGE treatment planning algorithm for rotational radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaddy, Melissa R., E-mail: mrgaddy@ncsu.edu; Papp,
2016-07-15
Purpose: The authors revisit the VMERGE treatment planning algorithm by Craft et al. [“Multicriteria VMAT optimization,” Med. Phys. 39, 686–696 (2012)] for arc therapy planning and propose two changes to the method that are aimed at improving the achieved trade-off between treatment time and plan quality at little additional planning time cost, while retaining other desirable properties of the original algorithm. Methods: The original VMERGE algorithm first computes an “ideal,” high quality but also highly time consuming treatment plan that irradiates the patient from all possible angles in a fine angular grid with a highly modulated beam, and then makes this plan deliverable within practical treatment time by an iterative fluence map merging and sequencing algorithm. We propose two changes to this method. First, we regularize the ideal plan obtained in the first step by adding an explicit constraint on treatment time. Second, we propose a different merging criterion that consists of identifying and merging adjacent maps whose merging results in the least degradation of radiation dose. Results: The effect of both suggested modifications is evaluated individually and jointly on clinical prostate and paraspinal cases. Details of the two cases are reported. Conclusions: In the authors’ computational study they found that both proposed modifications, especially the regularization, yield noticeably better treatment plans for the same treatment times than those obtained with the original VMERGE method. The resulting plans match the quality of 20-beam step-and-shoot IMRT plans with a delivery time of approximately 2 min.
A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion
NASA Astrophysics Data System (ADS)
Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen
2017-09-01
In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still scarce, so estimating depth information from video for 2D to 3D conversion is increasingly important. In this paper, we present a novel algorithm that estimates depth information from a video via scene classification. In order to obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape, close-up, and linear perspective. For landscape scenes, the image is divided into many blocks and depth values are assigned using the relative height cue. For close-up scenes, a saliency-based method is adopted to enhance the foreground, and the result is combined with a global depth gradient to generate the final depth map. For linear perspective scenes, vanishing line detection locates the vanishing point, which is regarded as the farthest point from the viewer and is assigned the deepest depth value; the rest of the image is assigned depth values according to the distance of each point from the vanishing point. Finally, after bilateral filtering, depth image-based rendering is employed to generate stereoscopic virtual views. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of the anaglyph images between 6.8 and 7.8.
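A minimal sketch of the linear-perspective branch: given an estimated vanishing point, every pixel receives a depth that is largest at the vanishing point (farthest from the viewer) and decreases with distance from it. The vanishing-line detection itself and the 0..255 normalization used here are assumptions.

    import numpy as np

    def linear_perspective_depth(height, width, vp, max_depth=255.0):
        """Depth map where the vanishing point vp=(row, col) is deepest and depth
        falls off with distance from it (values in 0..max_depth, illustrative scaling)."""
        rows, cols = np.mgrid[0:height, 0:width]
        dist = np.hypot(rows - vp[0], cols - vp[1])
        return max_depth * (1.0 - dist / dist.max())

    depth = linear_perspective_depth(480, 640, vp=(120, 320))

A bilateral filter over such a map, as described in the abstract, would then smooth the depth while preserving object edges before depth image-based rendering.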
Human-based percussion and self-similarity detection in electroacoustic music
NASA Astrophysics Data System (ADS)
Mills, John Anderson, III
Electroacoustic music is music that uses electronic technology for the compositional manipulation of sound, and is a unique genre of music for many reasons. Analyzing electroacoustic music requires special measures, some of which are integrated into the design of a preliminary percussion analysis tool set for electroacoustic music. This tool set is designed to incorporate the human processing of music and sound. Models of the human auditory periphery are used as a front end to the analysis algorithms. The audio properties of percussivity and self-similarity are chosen as the focus because these properties are computable and informative. A collection of human judgments about percussion was undertaken to acquire clearly specified, sound-event dimensions that humans use as a percussive cue. A total of 29 participants was asked to make judgments about the percussivity of 360 pairs of synthesized snare-drum sounds. The grouped results indicate that of the dimensions tested rise time is the strongest cue for percussivity. String resonance also has a strong effect, but because of the complex nature of string resonance, it is not a fundamental dimension of a sound event. Gross spectral filtering also has an effect on the judgment of percussivity but the effect is weaker than for rise time and string resonance. Gross spectral filtering also has less effect when the stronger cue of rise time is modified simultaneously. A percussivity-profile algorithm (PPA) is designed to identify those instants in pieces of music that humans also would identify as percussive. The PPA is implemented using a time-domain, channel-based approach and psychoacoustic models. The input parameters are tuned to maximize performance at matching participants' choices in the percussion-judgment collection. After the PPA is tuned, the PPA then is used to analyze pieces of electroacoustic music. Real electroacoustic music introduces new challenges for the PPA, though those same challenges might affect human judgment as well. A similarity matrix is combined with the PPA in order to find self-similarity in the percussive sounds of electroacoustic music. This percussive similarity matrix is then used to identify structural characteristics in two pieces of electroacoustic music.
Kumar, U A; Jayaram, M
2013-07-01
The purpose of this study was to evaluate the effect of lengthening of voice onset time and burst duration of selected speech stimuli on perception by individuals with auditory dys-synchrony. This is the second of a series of articles reporting the effect of signal enhancing strategies on speech perception by such individuals. Two experiments were conducted: (1) assessment of the 'just-noticeable difference' for voice onset time and burst duration of speech sounds; and (2) assessment of speech identification scores when speech sounds were modified by lengthening the voice onset time and the burst duration in units of one just-noticeable difference, both in isolation and in combination with each other plus transition duration modification. Lengthening of voice onset time as well as burst duration improved perception of voicing. However, the effect of voice onset time modification was greater than that of burst duration modification. Although combined lengthening of voice onset time, burst duration and transition duration resulted in improved speech perception, the improvement was less than that due to lengthening of transition duration alone. These results suggest that innovative speech processing strategies that enhance temporal cues may benefit individuals with auditory dys-synchrony.
Bruce, Carrie; Brush, Jennifer A; Sanford, Jon A; Calkins, Margaret P
2013-04-01
Communication dysfunction that results from dementia can be exacerbated by environmental barriers such as inadequate lighting, noisy conditions, poor or absent environmental cues, and visual clutter. Speech-language pathologists (SLPs) should address these environmental barriers as part of a comprehensive treatment plan for clients with dementia. The Environment and Communication Assessment Toolkit for Dementia Care (ECAT) was evaluated by SLPs to determine: (1) changes in awareness of environmental factors prior to and after training; (2) impact of the ECAT on practice as measured by changes in the number of environmental modifications recommended and made prior to and after training; (3) utility of the information as measured by the helpfulness, amount of new information, and usefulness of the ECAT; and (4) usability of the ECAT materials based on ease of use. The SLPs used the ECAT with clients with dementia who had functional limitations and required substantial assistance with daily activities. Results indicate that the ECAT is an effective tool for SLPs, providing information about the impact of the environment on communication and supplying sufficient resources to make recommendations and implement effective interventions. The ECAT successfully increased awareness of environmental modifications, influenced the practice of recommending environmental modifications, and had utility in diverse aspects of clinical practice.
Algorithm Updates for the Fourth SeaWiFS Data Reprocessing
NASA Technical Reports Server (NTRS)
Hooker, Stanford, B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Robinson, Wayne D.; Feldman, Gene Carl; Bailey, Sean W.
2003-01-01
The efforts to improve the data quality for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data products have continued, following the third reprocessing of the global data set in May 2000. Analyses have been ongoing to address all aspects of the processing algorithms, particularly the calibration methodologies, atmospheric correction, and data flagging and masking. All proposed changes were subjected to rigorous testing, evaluation and validation. The results of these activities culminated in the fourth reprocessing, which was completed in July 2002. The algorithm changes, which were implemented for this reprocessing, are described in the chapters of this volume. Chapter 1 presents an overview of the activities leading up to the fourth reprocessing, and summarizes the effects of the changes. Chapter 2 describes the modifications to the on-orbit calibration, specifically the focal plane temperature correction and the temporal dependence. Chapter 3 describes the changes to the vicarious calibration, including the stray light correction to the Marine Optical Buoy (MOBY) data and improved data screening procedures. Chapter 4 describes improvements to the near-infrared (NIR) band correction algorithm. Chapter 5 describes changes to the atmospheric correction and the oceanic property retrieval algorithms, including out-of-band corrections, NIR noise reduction, and handling of unusual conditions. Chapter 6 describes various changes to the flags and masks, to increase the number of valid retrievals, improve the detection of the flag conditions, and add new flags. Chapter 7 describes modifications to the level-1a and level-3 algorithms, to improve the navigation accuracy, correct certain types of spacecraft time anomalies, and correct a binning logic error. Chapter 8 describes the algorithm used to generate the SeaWiFS photosynthetically available radiation (PAR) product. Chapter 9 describes a coupled ocean-atmosphere model, which is used in one of the changes described in Chapter 4. Finally, Chapter 10 describes a comparison of results from the third and fourth reprocessings along the U.S. Northeast coast.
NASA Astrophysics Data System (ADS)
García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun
2016-10-01
This paper describes a new web platform dedicated to the classification of satellite images called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms like Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim in order to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of classes present in the images to use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm to perform image classification is widespread today thanks to its precision and ease of training. The actual implementation of Random Forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general purpose computing tasks such as image classification algorithms. As well as CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed in a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool in a concurrent way. The experimental results indicate that this new algorithm widely outperforms the previous unsupervised algorithms implemented in Hypergim, both in runtime and in the precision of the resulting classification of the images.
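A sketch of the supervised workflow using scikit-learn's random forest as a stand-in for the CUDA-based CURFIL implementation: pixel samples selected by the user train the classifier, which then labels every pixel of the image. The raw band values used as features and the parameters are illustrative.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_and_classify(image, sample_coords, sample_labels, n_trees=100, n_jobs=-1):
        """image: H x W x B array of band values; sample_coords: (row, col) pixels picked by the user."""
        h, w, b = image.shape
        X_train = np.array([image[r, c] for r, c in sample_coords])
        clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=n_jobs)  # n_jobs=-1: all CPU cores
        clf.fit(X_train, sample_labels)
        return clf.predict(image.reshape(-1, b)).reshape(h, w)

    # toy usage with a synthetic 3-band image and two classes marked by the user
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    coords = [(5, 5), (6, 7), (50, 50), (52, 48)]
    labels = [0, 0, 1, 1]
    class_map = train_and_classify(img, coords, labels)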
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Style-based classification of Chinese ink and wash paintings
NASA Astrophysics Data System (ADS)
Sheng, Jiachuan; Jiang, Jianmin
2013-09-01
Following the fact that a large collection of ink and wash paintings (IWP) is being digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate the local region and detect painting strokes to enable histogram-based feature extraction and capture of important cues to reflect the styles of different artists. Such features are then applied to drive a number of neural networks in parallel to complete the classification, and an information entropy balanced fusion is proposed to make an integrated decision for the multiple neural network classification results in which the entropy is used as a pointer to combine the global and local features. Evaluations via experiments support that the proposed algorithm achieves good performances, providing excellent potential for computerized analysis and management of IWPs.
Automatic attention-based prioritization of unconstrained video for compression
NASA Astrophysics Data System (ADS)
Itti, Laurent
2004-06-01
We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial reductions in compressed file size, by a factor of about 0.5 on average, are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
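A minimal sketch of the second variant (blur growing with distance from fixed foveation centers), blending a sharp and a blurred copy of each frame by a distance-based weight; the saliency model that selects the centers and the multi-resolution blur pyramid of the actual system are not reproduced, and the file name and parameters are hypothetical.

    import cv2
    import numpy as np

    def foveate(frame, centers, radius=80.0, max_ksize=31):
        """Keep regions near `centers` sharp and blur the rest, with blur growing with distance."""
        h, w = frame.shape[:2]
        rows, cols = np.mgrid[0:h, 0:w]
        dist = np.full((h, w), np.inf)
        for (cy, cx) in centers:
            dist = np.minimum(dist, np.hypot(rows - cy, cols - cx))
        weight = np.clip(dist / radius - 1.0, 0.0, 1.0)[..., None]   # 0 inside the fovea, 1 far away
        blurred = cv2.GaussianBlur(frame, (max_ksize, max_ksize), 0)
        return (frame * (1.0 - weight) + blurred * weight).astype(frame.dtype)

    frame = cv2.imread("frame.png")                   # hypothetical input frame
    out = foveate(frame, centers=[(120, 200), (300, 500)])

The blurred background carries little high-frequency content, which is what lets a standard video encoder spend fewer bits there and yields the smaller compressed files reported above.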
Lifetime Prediction of IGBT in a STATCOM Using Modified-Graphical Rainflow Counting Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak
Rainflow algorithms are one of the best counting methods used in fatigue and failure analysis [17]. There have been many approaches to the rainflow algorithm, some proposing modifications. The Graphical Rainflow Method (GRM) was proposed recently with a claim of faster execution times [10]. However, the steps of the graphical rainflow method, when implemented, do not generate the same output as the four-point or ASTM standard algorithm. A modified graphical method is presented and discussed in this paper to overcome the shortcomings of the graphical rainflow algorithm. A fast rainflow algorithm based on the four-point algorithm, but using point comparison rather than range comparison, is also presented. A comparison between the performances of the common rainflow algorithms [6-10], including the proposed methods, in terms of execution time, memory usage, efficiency, complexity, and load sequences is presented. Finally, the rainflow algorithm is applied to temperature data of an IGBT in assessing the lifetime of a STATCOM operating for power factor correction of the load. From the available 5-minute load profile data, the lifetime is estimated to be 3.4 years.
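For reference, a simplified three-point rainflow counter (reversal extraction plus a stack-based cycle rule); it omits the end-of-history half-cycle handling of the ASTM standard and is not the modified graphical or point-comparison method of the paper.

    def turning_points(series):
        """Keep only the reversals (local peaks and valleys) of the load/temperature history."""
        pts = []
        for x in series:
            if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) >= 0:
                pts[-1] = x                      # still moving in the same direction: extend
            else:
                pts.append(x)
        return pts

    def rainflow(series):
        """Return (range, mean, count) tuples using the three-point stack rule (simplified)."""
        cycles, stack = [], []
        for point in turning_points(series):
            stack.append(point)
            while len(stack) >= 3:
                x = abs(stack[-1] - stack[-2])
                y = abs(stack[-2] - stack[-3])
                if x < y:
                    break
                cycles.append((y, 0.5 * (stack[-2] + stack[-3]), 1.0))   # count a full cycle
                del stack[-3:-1]                                         # drop its two points
        for i in range(len(stack) - 1):                                  # residue: half cycles
            cycles.append((abs(stack[i + 1] - stack[i]), 0.5 * (stack[i + 1] + stack[i]), 0.5))
        return cycles

    temps = [25, 70, 40, 95, 30, 80, 25, 60, 35]     # toy IGBT junction-temperature history (degC)
    for rng_, mean_, n_ in rainflow(temps):
        print(f"range={rng_:5.1f}  mean={mean_:5.1f}  cycles={n_}")

The counted temperature ranges and means would then feed a cycles-to-failure model (e.g., a Coffin-Manson-type law) and Miner's rule to accumulate damage and estimate lifetime.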
Algorithm Engineering: Concepts and Practice
NASA Astrophysics Data System (ADS)
Chimani, Markus; Klein, Karsten
Over the last years, the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: It starts with the design of the algorithm, followed by the analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretic performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories, shortest path problems and text search, where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.
2016-08-15
A comparative reference study for the validation of HLA-matching algorithms in the search for allogeneic hematopoietic stem cell... from different international donor registries by challenging them with simulated input data and subsequently comparing the output. This experiment...
Image authentication using distributed source coding.
Lin, Yao-Chung; Varodayan, David; Girod, Bernd
2012-01-01
We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
Pietraszewski, David; Shaw, Alex
2015-03-01
The Asymmetric War of Attrition (AWA) model of animal conflict in evolutionary biology (Maynard Smith and Parker in Nature, 246, 15-18, 1976) suggests that an organism's decision to withdraw from a conflict is the result of adaptations designed to integrate the expected value of winning, discounted by the expected costs that would be incurred by continuing to compete, via sensitivity to proximate cues of how quickly each side can impose costs on the other (Resource Holding Potential), and how much each side will gain by winning. The current studies examine whether human conflict expectations follow the formalized logic of this model. Children aged 6-8 years were presented with third-party conflict vignettes and were then asked to predict the likely winner. Cues of ownership, hunger, size, strength, and alliance strength were systematically varied across conditions. Results demonstrate that children's expectations followed the logic of the AWA model, even in complex situations featuring multiple, competing cues, such that the actual relative costs and benefits that would accrue during such a conflict were reflected in children's expectations. Control conditions show that these modifications to conflict expectations could not have resulted from more general experimental artifacts or demand characteristics. To test the selectivity of these effects to conflict, expectations of search effort were also assessed. As predicted, they yielded a different pattern of results. These studies represent one of the first experimental tests of the AWA model in humans and suggest that future research on the psychology of ownership, conflict, and value may be aided by formalized models from evolutionary biology.
Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm
NASA Astrophysics Data System (ADS)
Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.
2018-05-01
A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
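To make the region-wise thresholding concrete, the following sketch binarizes a single spray region of interest with one threshold per axial region. It assumes the spray has already been located and oriented along the image rows; splitting into equal thirds and using Otsu's threshold per region are illustrative stand-ins for the paper's profile-based threshold selection.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_spray(spray_roi, n_regions=3):
    """Binarize a single spray ROI using one threshold per axial region.

    spray_roi : 2-D array whose rows are assumed to run along the spray axis.
    Returns a boolean mask of the same shape.
    """
    mask = np.zeros(spray_roi.shape, dtype=bool)
    bounds = np.linspace(0, spray_roi.shape[0], n_regions + 1, dtype=int)
    for start, stop in zip(bounds[:-1], bounds[1:]):
        region = spray_roi[start:stop]
        t = threshold_otsu(region)          # illustrative stand-in for the
        mask[start:stop] = region > t       # profile-derived threshold
    return mask
```

Macroscopic parameters such as spray penetration then follow from the mask, for example as the furthest row containing segmented pixels.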
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)
Li, Isaac TS; Shum, Warren; Truong, Kevin
2007-01-01
Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computational performance of genomic database searching. PMID:17555593
160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).
Li, Isaac T S; Shum, Warren; Truong, Kevin
2007-06-07
To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for improving the computational performance of genomic database searching.
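As a reference point for what the FPGA grid evaluates, the sketch below is a plain software version of the Smith-Waterman per-cell recurrence; the scoring parameters (match, mismatch, gap) are arbitrary illustrative values, and the hardware design's parallel evaluation of cells along anti-diagonals is not captured here.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Fill the Smith-Waterman matrix in software and return the best score.

    Each cell holds the best local-alignment score ending at (i, j); the FPGA
    design in the paper evaluates this same per-cell recurrence in hardware,
    with many cells updated in parallel.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match/mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

# Example: best local alignment score between two short sequences
print(smith_waterman_score("ACACACTA", "AGCACACA"))
```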
Novel trace chemical detection algorithms: a comparative study
NASA Astrophysics Data System (ADS)
Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros
2017-05-01
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to law enforcement and the intelligence communities. Performance of these methods is impacted by spectral signature variability due to the presence of contaminants, surface roughness, and nonlinear dependence on abundances, as well as by operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include the adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal, as well as their built-in limitations and failure modes.
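For readers unfamiliar with the baseline detector in that comparison, the adaptive cosine estimator has a standard closed form; below is a minimal NumPy sketch of it for a single pixel spectrum, with the background mean and covariance estimated from scene pixels. The regularization term and the simple covariance estimate are our own simplifications, not the modifications evaluated in the paper.

```python
import numpy as np

def ace_detector(x, s, background, eps=1e-6):
    """Adaptive cosine estimator (ACE) score for one pixel.

    x          : observed pixel spectrum, shape (n_bands,)
    s          : target/trace-chemical signature, shape (n_bands,)
    background : matrix of background spectra, shape (n_pixels, n_bands)
    Returns a score in [0, 1]; larger means more target-like.
    """
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False) + eps * np.eye(len(mu))
    cov_inv = np.linalg.inv(cov)
    xc, sc = x - mu, s - mu
    num = (sc @ cov_inv @ xc) ** 2
    den = (sc @ cov_inv @ sc) * (xc @ cov_inv @ xc)
    return float(num / den)
```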
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic, and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
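The inner machinery being adapted is the incremental Newton-Raphson loop; the sketch below shows that plain loop under a stepped load factor. The constraint-surface bounding, energy scaling, and convergence-quality checks that constitute the paper's contribution are only indicated by comments, and the function names are our own.

```python
import numpy as np

def incremental_newton_raphson(residual, tangent, u0, n_increments=10,
                               tol=1e-8, max_iter=25):
    """Solve R(u, lam) = 0 by stepping the load factor lam from 0 to 1.

    residual(u, lam) -> residual vector; tangent(u, lam) -> tangent stiffness.
    This is the plain INR core; the self-adaptive scheme described above would
    wrap the inner loop to bound, scale, and monitor each correction du.
    """
    u = np.array(u0, dtype=float)
    for lam in np.linspace(1.0 / n_increments, 1.0, n_increments):
        for _ in range(max_iter):
            r = residual(u, lam)
            if np.linalg.norm(r) < tol:
                break
            du = np.linalg.solve(tangent(u, lam), -r)
            u += du      # a self-adaptive variant would limit/scale du here
    return u
```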
Formal verification of a fault tolerant clock synchronization algorithm
NASA Technical Reports Server (NTRS)
Rushby, John; Vonhenke, Frieder
1989-01-01
A formal specification and mechanically assisted verification of the interactive convergence clock synchronization algorithm of Lamport and Melliar-Smith is described. Several technical flaws in the analysis given by Lamport and Melliar-Smith were discovered, even though their presentation is unusually precise and detailed. It seems that these flaws were not detected by informal peer scrutiny. The flaws are discussed and a revised presentation of the analysis is given that not only corrects the flaws but is also more precise and easier to follow. Some of the corrections to the flaws require slight modifications to the original assumptions underlying the algorithm and to the constraints on its parameters, and thus change the external specifications of the algorithm. The formal analysis of the interactive convergence clock synchronization algorithm was performed using the Enhanced Hierarchical Development Methodology (EHDM) formal specification and verification environment. This application of EHDM provides a demonstration of some of the capabilities of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel
Collimated streams of particles produced in high energy physics experiments are organized using clustering algorithms to form jets. To construct jets, the experimental collaborations based at the Large Hadron Collider (LHC) primarily use agglomerative hierarchical clustering schemes known as sequential recombination. We propose a new class of algorithms for clustering jets that use infrared- and collinear-safe mixture models. These new algorithms, known as fuzzy jets, are clustered using maximum likelihood techniques and can dynamically determine various properties of jets, such as their size. We show that the fuzzy jet size adds additional information to conventional jet tagging variables in boosted topologies. Furthermore, we study the impact of pileup and show that with some slight modifications to the algorithm, fuzzy jets can be stable up to high pileup interaction multiplicities.
NASA Technical Reports Server (NTRS)
Rajala, S. A.; Riddle, A. N.; Snyder, W. E.
1983-01-01
In Riddle and Rajala (1981), an algorithm was presented which operates on an image sequence to identify all sets of pixels having the same velocity. The algorithm operates by performing a transformation in which all pixels with the same two-dimensional velocity map to a peak in a transform space. The transform can be decomposed into applications of the one-dimensional Fourier transform and therefore can benefit from the computational advantages of the FFT. This paper is concerned with the fundamental limitations of that algorithm, particularly its sensitivity to image-disturbing factors such as noise, jitter, and clutter. A modification to the algorithm is then proposed which increases its robustness in the presence of these disturbances.
Mackey, Lester; Nachman, Benjamin; Schwartzman, Ariel; ...
2016-06-01
Collimated streams of particles produced in high energy physics experiments are organized using clustering algorithms to form jets. To construct jets, the experimental collaborations based at the Large Hadron Collider (LHC) primarily use agglomerative hierarchical clustering schemes known as sequential recombination. We propose a new class of algorithms for clustering jets that use infrared- and collinear-safe mixture models. These new algorithms, known as fuzzy jets, are clustered using maximum likelihood techniques and can dynamically determine various properties of jets, such as their size. We show that the fuzzy jet size adds additional information to conventional jet tagging variables in boosted topologies. Furthermore, we study the impact of pileup and show that with some slight modifications to the algorithm, fuzzy jets can be stable up to high pileup interaction multiplicities.
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
NASA Astrophysics Data System (ADS)
Iswanto; Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper presents a path-planning algorithm for multiple quadrotors that allows them to move towards the goal quickly while avoiding obstacles in a cluttered area. Path planning poses several problems, including how to reach the goal position quickly while avoiding static and dynamic obstacles. To overcome these problems, the paper combines a fuzzy logic algorithm with a fuzzy cell decomposition algorithm. The fuzzy logic algorithm is an artificial intelligence technique that can be applied to robot path planning and is able to handle static and dynamic obstacles. The cell decomposition algorithm is a graph-theoretic algorithm used to build a map of robot paths. Using these two algorithms, the robot is able to reach the goal position and avoid obstacles, but it takes considerable time because they are unable to find the shortest path. Therefore, this paper describes a modification of the algorithms that adds a potential field algorithm, used to assign weight values on the map for each quadrotor under decentralized control, so that each quadrotor can move to the goal position quickly along the shortest path. The simulations conducted show that multiple quadrotors can avoid various obstacles and find the shortest path using the proposed algorithms.
Estimation of contour motion and deformation for nonrigid object tracking
NASA Astrophysics Data System (ADS)
Shao, Jie; Porikli, Fatih; Chellappa, Rama
2007-08-01
We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.
Accurate object tracking system by integrating texture and depth cues
NASA Astrophysics Data System (ADS)
Chen, Ju-Chin; Lin, Yu-Hang
2016-03-01
A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important to distinguish the object from a complicated background, is integrated. We propose two depth-based models that can compensate texture information to cope with both appearance variations and background clutter. Moreover, to reduce the risk of drift, which increases for textureless depth templates, an update mechanism is proposed that selects more precise tracking results to avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
2003-01-01
Reliable detection of ordinary facial expressions (e.g., smiles) despite the variability among individuals as well as face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
Consistency functional map propagation for repetitive patterns
NASA Astrophysics Data System (ADS)
Wang, Hao
2017-09-01
Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns from an image is a challenging problem. We study repetitive pattern alignment by embedding segmentation cue with a functional map model. However, this model cannot tackle the repetitive patterns directly due to the large photometric and geometric variations. Thus, a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation is proposed to address this issue. This propagation model is acquired in two steps. The first one aligns the patterns from a local region, transferring segmentation functions among patterns. It can be cast as an L norm optimization problem. The latter step updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses have demonstrated an encouraging performance of the proposed algorithm in detection and segmentation of repetitive patterns.
The specificity of attentional biases by type of gambling: An eye-tracking study.
McGrath, Daniel S; Meitner, Amadeus; Sears, Christopher R
2018-01-01
A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers.
Biomolecular engineering of intracellular switches in eukaryotes
Pastuszka, M.K.; Mackay, J.A.
2010-01-01
Tools to selectively and reversibly control gene expression are useful to study and model cellular functions. When optimized, these cellular switches can turn a protein's function “on” and “off” based on cues designated by the researcher. These cues include small molecules, drugs, hormones, and even temperature variations. Here we review three distinct areas in gene expression that are commonly targeted when designing cellular switches. Transcriptional switches target gene expression at the level of mRNA polymerization, with examples including the tetracycline gene induction system as well as nuclear receptors. Translational switches target the process of turning the mRNA signal into protein, with examples including riboswitches and RNA interference. Post-translational switches control how proteins interact with one another to attenuate or relay signals. Examples of post-translational modification include dimerization and intein splicing. In general, the delay times between switch and effect decreases from transcription to translation to post-translation; furthermore, the fastest switches may offer the most elegant opportunities to influence and study cell behavior. We discuss the pros and cons of these strategies, which directly influence their usefulness to study and implement drug targeting at the tissue and cellular level. PMID:21209849
Impulsivity moderates the effect of approach bias modification on healthy food consumption.
Kakoschke, Naomi; Kemps, Eva; Tiggemann, Marika
2017-10-01
The study aimed to modify approach bias for healthy and unhealthy food and to determine its effect on subsequent food consumption. In addition, we investigated the potential moderating role of impulsivity in the effect of approach bias re-training on food consumption. Participants were 200 undergraduate women (17-26 years) who were randomly allocated to one of five conditions of an approach-avoidance task varying in the training of an approach bias for healthy food, unhealthy food, and non-food cues in a single session of 10 min. Outcome variables were approach bias for healthy and unhealthy food and the proportion of healthy relative to unhealthy snack food consumed. As predicted, approach bias for healthy food significantly increased in the 'avoid unhealthy food/approach healthy food' condition. Importantly, the effect of training on snack consumption was moderated by trait impulsivity. Participants high in impulsivity consumed a greater proportion of healthy snack food following the 'avoid unhealthy food/approach healthy food' training. This finding supports the suggestion that automatic processing of appetitive cues has a greater influence on consumption behaviour in individuals with poor self-regulatory control. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rey, Martin P.; Pontzen, Andrew
2018-02-01
Recent work has studied the interplay between a galaxy's history and its observable properties using `genetically modified' cosmological zoom simulations. The approach systematically generates alternative histories for a halo, while keeping its cosmological environment fixed. Applications to date altered linear properties of the initial conditions, such as the mean overdensity of specified regions; we extend the formulation to include quadratic features, such as local variance, that determines the overall importance of smooth accretion relative to mergers in a galaxy's history. We introduce an efficient algorithm for this new class of modification and demonstrate its ability to control the variance of a region in a one-dimensional toy model. Outcomes of this work are twofold: (i) a clarification of the formulation of genetic modifications and (ii) a proof of concept for quadratic modifications leading the way to a forthcoming implementation in cosmological simulations.
Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K
2011-02-01
The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
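For reference, the classical harmony search update that this analysis reasons about can be written compactly; the sketch below is a minimal Python version with illustrative parameter values (hmcr, par, bandwidth). The paper's proposed modification, which adjusts the improvisation step in light of the population-variance analysis, is not reproduced here.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=5000):
    """Minimal classical harmony search for real-parameter minimization.

    bounds : list of (low, high) per decision variable.
    hmcr   : probability of drawing a value from harmony memory.
    par    : probability of pitch-adjusting a memorized value.
    """
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                  # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:               # pitch adjustment
                    value += bandwidth * (hi - lo) * random.uniform(-1, 1)
            else:                                       # random selection
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        f = objective(new)
        if f < scores[worst]:                           # replace worst harmony
            memory[worst], scores[worst] = new, f
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Example: minimize the sphere function in 5 dimensions
print(harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 5))
```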
NASA Astrophysics Data System (ADS)
Gilbert, B. K.; Robb, R. A.; Chu, A.; Kenue, S. K.; Lent, A. H.; Swartzlander, E. E., Jr.
1981-02-01
Rapid advances during the past ten years of several forms of computer-assisted tomography (CT) have resulted in the development of numerous algorithms to convert raw projection data into cross-sectional images. These reconstruction algorithms are either 'iterative,' in which a large matrix algebraic equation is solved by successive approximation techniques; or 'closed form'. Continuing evolution of the closed form algorithms has allowed the newest versions to produce excellent reconstructed images in most applications. This paper will review several computer software and special-purpose digital hardware implementations of closed form algorithms, either proposed during the past several years by a number of workers or actually implemented in commercial or research CT scanners. The discussion will also cover a number of recently investigated algorithmic modifications which reduce the amount of computation required to execute the reconstruction process, as well as several new special-purpose digital hardware implementations under development in laboratories at the Mayo Clinic.
Ripoll, Guillermo; Alcalde, María J; Argüello, Anastasio; Córdoba, María G; Panea, Begoña
2018-05-01
The use of milk replacers to feed suckling kids could affect the shelf life and appearance of the meat. Leg chops were evaluated by consumers and the instrumental color was measured. A machine learning algorithm was used to relate them. The aim of this experiment was to study the shelf life of the meat of kids reared with dam's milk or milk replacers and to ascertain which illuminant and instrumental color variables are used by consumers as criteria to evaluate that visual appraisal. Meat from kids reared with milk replacers was more valuable and had a longer shelf life than meat from kids reared with natural milk. Consumers used the color of the whole surface of the leg chop to assess the appearance of meat. Lightness and hue angle were the prime cues used to evaluate the appearance of meat. Illuminant D65 was more useful for relating the visual appraisal with the instrumental color using a machine learning algorithm. The machine learning algorithms showed that the underlying rules used by consumers to evaluate the appearance of suckling kid meat are not at all linear and can be computationally schematized into a simple algorithm. © 2017 Society of Chemical Industry.
Automated kidney morphology measurements from ultrasound images using texture and edge analysis
NASA Astrophysics Data System (ADS)
Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin
2016-04-01
In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution would help to improve accuracy, repeatability, and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast of the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we then use textural information in a machine learning framework based on Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases, and the performance against ground truth is measured by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm performs successfully on 80% of cases.
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”
2016-01-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
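A quick sketch of the inverse perspective mapping step may help: the floor region of a camera frame is warped into a bird's-eye view, where floor texture keeps a consistent scale and anything that rises out of the floor plane is distorted. The four calibration points and the output size below are made-up values for illustration; the Markov random field segmentation and floor appearance model from the paper are not shown.

```python
import cv2
import numpy as np

# Image coordinates of a known floor rectangle (hypothetical calibration)
src = np.float32([[220, 480], [420, 480], [380, 300], [260, 300]])  # camera px
dst = np.float32([[0, 400], [200, 400], [200, 0], [0, 0]])          # top-view px

H = cv2.getPerspectiveTransform(src, dst)

def to_birds_eye(frame):
    """Warp a camera frame onto the floor plane (200x400 px top view)."""
    return cv2.warpPerspective(frame, H, (200, 400))

# Obstacle cue exploited by this family of methods: pixels that violate the
# flat-floor assumption look wrong in the warped view (inconsistent between
# motion-compensated frames, or deviating from the floor appearance model).
```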
Ma, Chun Wai Manson; Lam, Henry
2014-05-02
Discovering novel post-translational modifications (PTMs) to proteins and detecting specific modification sites on proteins is one of the last frontiers of proteomics. At present, hunting for post-translational modifications remains challenging in widely practiced shotgun proteomics workflows due to the typically low abundance of modified peptides and the greatly inflated search space as more potential mass shifts are considered by the search engines. Moreover, most popular search methods require that the user specifies the modification(s) for which to search; therefore, unexpected and novel PTMs will not be detected. Here a new algorithm is proposed to apply spectral library searching to the problem of open modification searches, namely, hunting for PTMs without prior knowledge of what PTMs are in the sample. The proposed tier-wise scoring method intelligently looks for unexpected PTMs by allowing mass-shifted peak matches but only when the number of matches found is deemed statistically significant. This allows the search engine to search for unexpected modifications while maintaining its ability to identify unmodified peptides effectively at the same time. The utility of the method is demonstrated using three different data sets, in which the numbers of spectrum identifications to both unmodified and modified peptides were substantially increased relative to a regular spectral library search as well as to another open modification spectral search method, pMatch.
Automated Re-Entry System using FNPEG
NASA Technical Reports Server (NTRS)
Johnson, Wyatt R.; Lu, Ping; Stachowiak, Susan J.
2017-01-01
This paper discusses the implementation and simulated performance of the FNPEG (Fully Numerical Predictor-corrector Entry Guidance) algorithm into GNC FSW (Guidance, Navigation, and Control Flight Software) for use in an autonomous re-entry vehicle. A few modifications to FNPEG are discussed that result in computational savings -- a change to the state propagator, and a modification to cross-range lateral logic. Finally, some Monte Carlo results are presented using a representative vehicle in both a high-fidelity 6-DOF (degree-of-freedom) sim as well as in a 3-DOF sim for independent validation.
Statistical Analysis of Online Eye and Face-tracking Applications in Marketing
NASA Astrophysics Data System (ADS)
Liu, Xuan
Eye-tracking and face-tracking technology have been widely adopted to study viewers' attention and emotional response. In the dissertation, we apply these two technologies to investigate effective online content that is designed to attract and direct attention and engage viewers' emotional responses. In the first part of the dissertation, we conduct a series of experiments that use eye-tracking technology to explore how online models' facial cues affect users' attention on static e-commerce websites. The joint effects of two facial cues, gaze direction and facial expression, on attention are estimated by Bayesian ANOVA, allowing various distributional assumptions. We also consider the similarities and differences in the effects of facial cues among American and Chinese consumers. This study offers insights on how to attract and retain customers' attention for advertisers that use static advertisements on various websites or ad networks. In the second part of the dissertation, we conduct a face-tracking study in which we investigate the relation between experiment participants' emotional responses while watching comedy movie trailers and their intentions to watch the actual movies. Viewers' facial expressions are collected in real time and converted to emotional responses with algorithms based on a facial coding system. To analyze the data, we propose a joint modeling method that links viewers' longitudinal emotion measurements and their watching intentions. This research provides recommendations to filmmakers on how to improve the effectiveness of movie trailers and how to boost audiences' desire to watch the movies.
Detecting natural occlusion boundaries using local cues
DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.
2012-01-01
Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731
Topology-changing shape optimization with the genetic algorithm
NASA Astrophysics Data System (ADS)
Lamberson, Steven E., Jr.
The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem, while improving the direct physical relevance of the results. This modification involves changing the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem proves the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which is something traditional methods cannot do. The proposed approach has a significantly higher computational burden, on the order of 100 times larger than SIMP, although the proposed approach is able to offset this with parallel computing.
David, Matthieu; Fertin, Guillaume; Rogniaux, Hélène; Tessier, Dominique
2017-08-04
The analysis of discovery proteomics experiments relies on algorithms that identify peptides from their tandem mass spectra. The almost exhaustive interpretation of these spectra remains an unresolved issue. At present, an important number of missing interpretations is probably due to peptides displaying post-translational modifications and variants that yield spectra that are particularly difficult to interpret. However, the emergence of a new generation of mass spectrometers that provide high fragment ion accuracy has paved the way for more efficient algorithms. We present a new software, SpecOMS, that can handle the computational complexity of pairwise comparisons of spectra in the context of large volumes. SpecOMS can compare a whole set of experimental spectra generated by a discovery proteomics experiment to a whole set of theoretical spectra deduced from a protein database in a few minutes on a standard workstation. SpecOMS can ingeniously exploit those capabilities to improve the peptide identification process, allowing strong competition between all possible peptides for spectrum interpretation. Remarkably, this software resolves the drawbacks (i.e., efficiency problems and decreased sensitivity) that usually accompany open modification searches. We highlight this promising approach using results obtained from the analysis of a public human data set downloaded from the PRIDE (PRoteomics IDEntification) database.
Cell-phone vs microphone recordings: Judging emotion in the voice.
Green, Joshua J; Eigsti, Inge-Marie
2017-09-01
Emotional states can be conveyed by vocal cues such as pitch and intensity. Despite the ubiquity of cellular telephones, there is limited information on how vocal emotional states are perceived during cell-phone transmissions. Emotional utterances (neutral, happy, angry) were elicited from two female talkers and simultaneously recorded via microphone and cell-phone. Ten-step continua (neutral to happy, neutral to angry) were generated using the STRAIGHT algorithm. Analyses compared reaction time (RT) and emotion judgment as a function of recording type (microphone vs cell-phone). Logistic regression revealed no judgment differences between recording types, though there were interactions with emotion type. Multi-level model analyses indicated that RT data were best fit by a quadratic model, with slower RT at the middle of each continuum, suggesting greater ambiguity, and slower RT for cell-phone stimuli across blocks. While preliminary, results suggest that critical acoustic cues to emotion are largely retained in cell-phone transmissions, though with effects of recording source on RT, and support the methodological utility of collecting speech samples by phone.
N, Sadhasivam; R, Balamurugan; M, Pandi
2018-01-27
Objective: Epigenetic modifications involving DNA methylation and histone status are responsible for the stable maintenance of cellular phenotypes. Abnormalities may be causally involved in cancer development and therefore could have diagnostic potential. The field of epigenomics refers to all epigenetic modifications implicated in the control of gene expression, with a focus on better understanding of human biology in both normal and pathological states. An epigenomics scientific workflow is essentially a data processing pipeline to automate the execution of various genome sequencing operations or tasks. The cloud is a popular computing platform for deploying large-scale epigenomics scientific workflows. Its dynamic environment provides various resources to scientific users on a pay-per-use billing model. Scheduling epigenomics scientific workflow tasks is a complicated problem on a cloud platform. We here focused on the application of an improved particle swarm optimization (IPSO) algorithm for this purpose. Methods: The IPSO algorithm was applied to find suitable resources and allocate epigenomics tasks so that the total cost was minimized for detection of epigenetic abnormalities of potential application for cancer diagnosis. Result: The results showed that IPSO-based task-to-resource mapping reduced total cost by 6.83 percent as compared to the traditional PSO algorithm. Conclusion: The results for various cancer diagnosis tasks showed that IPSO-based task-to-resource mapping can achieve better costs when compared to PSO-based mapping for epigenomics scientific application workflows.
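As background for the scheduling experiment, the sketch below shows the standard particle swarm update that IPSO builds on, written for a generic continuous cost function; the inertia and acceleration coefficients are common textbook values, and the paper's specific improvements and its task-to-resource encoding are not reproduced here.

```python
import random

def pso(cost, bounds, n_particles=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization over a continuous search space.

    In a workflow-scheduling setting, each particle's position would encode a
    candidate task-to-resource mapping and `cost` would return its total
    execution cost; those encodings are problem-specific and omitted here.
    """
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iterations):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:              # update personal best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:             # update global best
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Example: minimize a simple quadratic cost in 4 dimensions
print(pso(lambda x: sum(v * v for v in x), [(-10, 10)] * 4))
```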
Reduction of artifacts in computer simulation of breast Cooper's ligaments
NASA Astrophysics Data System (ADS)
Pokrajac, David D.; Kuperavage, Adam; Maidment, Andrew D. A.; Bakic, Predrag R.
2016-03-01
Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. The efficacy of the validation results depends on the realism of phantom images. The recursive partitioning algorithm based upon octree simulation has been demonstrated as versatile and capable of efficiently generating a large number of phantoms to support virtual clinical trials of breast imaging. Previously, we have observed specific artifacts (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we have demonstrated that these "dents" result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments selected based upon the functions that govern the shape of ligaments simulated in the subvolume. We have qualitatively and quantitatively demonstrated that the modified algorithm can lead to elimination or reduction of dent artifacts in software phantoms. In a proof-of-concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. After the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27 nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm leads to reduction of linear and star-like artifacts in simulated phantom projections, which can be attributed to dents. Analysis of a larger number of phantoms is ongoing.
Urbina-Villalba, German
2009-03-01
The first algorithm for Emulsion Stability Simulations (ESS) was presented at the V Conferencia Iberoamericana sobre Equilibrio de Fases y Diseño de Procesos [Luis, J.; García-Sucre, M.; Urbina-Villalba, G. Brownian Dynamics Simulation of Emulsion Stability In: Equifase 99. Libro de Actas, 1(st) Ed., Tojo J., Arce, A., Eds.; Solucion's: Vigo, Spain, 1999; Volume 2, pp. 364-369]. The former version of the program consisted of a minor modification of the Brownian Dynamics algorithm to account for the coalescence of drops. The present version of the program contains elaborate routines for time-dependent surfactant adsorption, average diffusion constants, and Ostwald ripening.
Chandrasekaran, Srinivas Niranj; Das, Jhuma; Dokholyan, Nikolay V.; Carter, Charles W.
2016-01-01
PATH rapidly computes a path and a transition state between crystal structures by minimizing the Onsager-Machlup action. It requires input parameters whose range of values can generate different transition-state structures that cannot be uniquely compared with those generated by other methods. We outline modifications to estimate these input parameters to circumvent these difficulties and validate the PATH transition states by showing consistency between transition-states derived by different algorithms for unrelated protein systems. Although functional protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. PMID:26958584
Numerically robust and efficient nonlocal electron transport in 2D DRACO simulations
NASA Astrophysics Data System (ADS)
Cao, Duc; Chenhall, Jeff; Moses, Greg; Delettrez, Jacques; Collins, Tim
2013-10-01
An improved implicit algorithm based on the Schurtz, Nicolai and Busquet (SNB) algorithm for nonlocal electron transport is presented. Validation with direct drive shock timing experiments and verification with the Goncharov nonlocal model in 1D LILAC simulations demonstrate the viability of this efficient algorithm for producing 2D Lagrangian radiation hydrodynamics direct drive simulations. Additionally, simulations provide strong incentive to further modify key parameters within the SNB theory, namely the "mean free path." An example 2D polar drive simulation to study 2D effects of the nonlocal flux as well as mean free path modifications will also be presented. This research was supported by the University of Rochester Laboratory for Laser Energetics.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
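To illustrate the binomial-model class of generators mentioned above, here is a small NumPy sketch that draws one event time by stepping through discrete intervals and performing a Bernoulli trial per interval, with the hazard depending on the current covariate value; the hazard form and parameter values are illustrative, and the permutational algorithm with rejection sampling compared in the paper works quite differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_event_time(covariate_path, beta, baseline_hazard, max_time):
    """Generate one event time with a discrete-time (binomial-model) scheme.

    covariate_path(t) -> value of the time-dependent covariate at time t.
    At each time step the subject experiences the event with probability
    p_t = 1 - exp(-h0 * exp(beta * x_t)), i.e. one Bernoulli draw per interval.
    Returns (time, event_indicator); censored at max_time.
    """
    for t in range(1, max_time + 1):
        x_t = covariate_path(t)
        hazard = baseline_hazard * np.exp(beta * x_t)
        if rng.random() < 1.0 - np.exp(-hazard):
            return t, 1
    return max_time, 0

# Example: covariate switches on (e.g., treatment starts) at t = 20
times = [simulate_event_time(lambda t: 1.0 if t >= 20 else 0.0,
                             beta=0.7, baseline_hazard=0.01, max_time=100)
         for _ in range(5)]
print(times)
```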
Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation
NASA Astrophysics Data System (ADS)
Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.
2017-12-01
One type of artificial neural network is backpropagation. A trained backpropagation network can provide correct outputs not only for the patterns used during training but also for similar inputs that were not part of the training set. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influence the training process, since it affects the speed of learning for the network architecture. If the learning rate is set too large, the algorithm becomes unstable; if it is set too small, the algorithm converges only after a very long time. This study was therefore carried out to determine the value of the learning rate for the backpropagation algorithm. The LVQ learning rate model is one of the models used to determine the learning rate of the LVQ algorithm; here this LVQ model is modified so that it can be applied to the backpropagation algorithm. The experimental results show that with the modified LVQ learning rate model applied to the backpropagation algorithm, the learning process becomes faster (fewer epochs).
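Since the abstract does not spell out the exact decay rule, the sketch below only illustrates the general idea of an LVQ-style decaying learning rate plugged into a generic gradient-descent loop; linear decay from an initial value is a common LVQ choice and is an assumption on our part, as are the function names.

```python
def lvq_style_learning_rate(alpha0, epoch, max_epochs):
    """Linearly decay the learning rate from alpha0 towards zero (assumed rule)."""
    return alpha0 * (1.0 - epoch / float(max_epochs))

def train(weights, gradient_fn, alpha0=0.5, max_epochs=100):
    """Generic gradient-descent loop using the decaying learning rate.

    gradient_fn(weights) -> list of gradients (e.g. from backpropagation).
    """
    for epoch in range(max_epochs):
        alpha = lvq_style_learning_rate(alpha0, epoch, max_epochs)
        grads = gradient_fn(weights)
        weights = [w - alpha * g for w, g in zip(weights, grads)]
    return weights
```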
Boosted ARTMAP: modifications to fuzzy ARTMAP motivated by boosting theory.
Verzi, Stephen J; Heileman, Gregory L; Georgiopoulos, Michael
2006-05-01
In this paper, several modifications to the Fuzzy ARTMAP neural network architecture are proposed for conducting classification in complex, possibly noisy, environments. The goal of these modifications is to improve upon the generalization performance of Fuzzy ART-based neural networks, such as Fuzzy ARTMAP, in these situations. One of the major difficulties of employing Fuzzy ARTMAP on such learning problems involves over-fitting of the training data. Structural risk minimization is a machine-learning framework that addresses the issue of over-fitting by providing a backbone for analysis as well as an impetus for the design of better learning algorithms. The theory of structural risk minimization reveals a trade-off between training error and classifier complexity in reducing generalization error, which will be exploited in the learning algorithms proposed in this paper. Boosted ART extends Fuzzy ART by allowing the spatial extent of each cluster formed to be adjusted independently. Boosted ARTMAP generalizes upon Fuzzy ARTMAP by allowing non-zero training error in an effort to reduce the hypothesis complexity and hence improve overall generalization performance. Although Boosted ARTMAP is strictly speaking not a boosting algorithm, the changes it encompasses were motivated by the goals that one strives to achieve when employing boosting. Boosted ARTMAP is an on-line learner, it does not require excessive parameter tuning to operate, and it reduces precisely to Fuzzy ARTMAP for particular parameter values. Another architecture described in this paper is Structural Boosted ARTMAP, which uses both Boosted ART and Boosted ARTMAP to perform structural risk minimization learning. Structural Boosted ARTMAP will allow comparison of the capabilities of off-line versus on-line learning as well as empirical risk minimization versus structural risk minimization using Fuzzy ARTMAP-based neural network architectures. Both empirical and theoretical results are presented to enhance the understanding of these architectures.
Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong
2017-01-01
Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues on the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation, and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation on proteins can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation on identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to distinguish myristoylated and non-myristoylated sites. Then, feature selection methods including maximum relevance and minimum redundancy (mRMR), incremental feature selection (IFS), and a machine learning algorithm (extreme learning machine method) were adopted to extract optimal features for the algorithm to identify myristoylation sites in protein sequences, thereby building an optimal prediction model. As a result, 41 key features were extracted and used to build an optimal prediction model. The effectiveness of the optimal prediction model was further validated by its performance on a test dataset. Furthermore, detailed analyses were also performed on the extracted 41 features to gain insight into the mechanism of myristoylation modification. This study provided a new computational method for identifying myristoylation sites in protein sequences. We believe that it can be a useful tool to predict myristoylation sites from protein sequences. Copyright © Bentham Science Publishers.
Orthogonal vector algorithm to obtain the solar vector using the single-scattering Rayleigh model.
Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Shi, Chao
2018-02-01
Information obtained from a polarization pattern in the sky provides many animals like insects and birds with vital long-distance navigation cues. The solar vector can be derived from the polarization pattern using the single-scattering Rayleigh model. In this paper, an orthogonal vector algorithm, which utilizes the redundancy of the single-scattering Rayleigh model, is proposed. We use the intersection angles between the polarization vectors as the main criteria in our algorithm. The assumption that all polarization vectors can be considered coplanar is used to simplify the three-dimensional (3D) problem with respect to the polarization vectors in our simulation. The surface-normal vector of the plane, which is determined by the polarization vectors after translation, represents the solar vector. Unfortunately, the two-directionality of the polarization vectors makes the resulting solar vector ambiguous. One important result of this study is, however, that this apparent disadvantage has no effect on the complexity of the algorithm. Furthermore, two other universal least-squares algorithms were investigated and compared. A device was then constructed, which consists of five polarized-light sensors as well as a 3D attitude sensor. Both the simulation and experimental data indicate that the orthogonal vector algorithms, if used with a suitable threshold, perform equally well or better than the other two algorithms. Our experimental data reveal that if the intersection angles between the polarization vectors are close to 90°, the solar-vector angle deviations are small. The data also support the assumption of coplanarity. During the 51 min experiment, the mean of the measured solar-vector angle deviations was about 0.242°, as predicted by our theoretical model.
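The coplanarity assumption makes the geometry easy to sketch: if every measured polarization vector is perpendicular to the solar vector, the solar direction is the normal of the plane that best fits them. The NumPy sketch below recovers that normal by a least-squares (SVD) fit, which is closer in spirit to the least-squares baselines compared in the paper than to the proposed intersection-angle method, and it inherits the same sign ambiguity noted in the abstract.

```python
import numpy as np

def solar_vector_from_polarization(e_vectors):
    """Estimate the solar direction as the normal of the plane spanned by
    the measured polarization (e-)vectors.

    e_vectors : (n, 3) array of unit polarization vectors, translated to a
    common origin. Under single-scattering Rayleigh conditions each e-vector
    is perpendicular to the sun direction, so the smallest right singular
    vector of the stacked matrix points along +/- the solar vector.
    """
    _, _, vt = np.linalg.svd(np.asarray(e_vectors, dtype=float))
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```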
Improving Search Algorithms by Using Intelligent Coordinates
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar
2004-01-01
We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent eta is self-interested; it sets its variable to maximize its own function g_eta. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2015-01-01
This paper presents an overview of the seventh revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This paper supersedes the previous documentation and presents a modification to the algorithm referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm adds the state-based capability in support of evolving industry standards relating to airborne self-spacing.
NASA Technical Reports Server (NTRS)
Abbott, Terence S.; Swieringa, Kurt S.
2017-01-01
This paper presents an overview of the eighth revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept. This paper supersedes the previous documentation and presents a modification to the algorithm referred to as the Airborne Spacing for Terminal Arrival Routes version 13 (ASTAR13). This airborne self-spacing concept contains both trajectory-based and state-based mechanisms for calculating the speeds required to achieve or maintain a precise spacing interval with another aircraft. The trajectory-based capability allows for spacing operations prior to the aircraft being on a common path. This algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. This current revision to the algorithm supports the evolving industry standards relating to airborne self-spacing.
The program complex for vocal recognition
NASA Astrophysics Data System (ADS)
Konev, Anton; Kostyuchenko, Evgeny; Yakimuk, Alexey
2017-01-01
This article discusses the possibility of applying an algorithm for determining the pitch frequency to note recognition problems. A preliminary study of analogous programs offering a “music recognition” function was carried out. The software package based on the algorithm for pitch frequency calculation was implemented and tested. It was shown that the algorithm allows recognition of the notes in the user's vocal performance. The sound source can be a single musical instrument, a set of musical instruments, or a human voice humming a tune. The input file is initially presented in the .wav format or is recorded in this format from a microphone. Processing is performed by sequentially determining the pitch frequency and converting its values to notes. Based on the test results, modifications to the algorithms used in the complex were planned.
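The processing chain described above (estimate the pitch frequency frame by frame, then convert it to a note) can be sketched as follows. This is an illustrative stand-in, not the complex's actual algorithm: pitch is estimated by simple autocorrelation and mapped to the nearest equal-tempered note with A4 = 440 Hz.

```python
# Minimal sketch of frame-by-frame pitch-to-note conversion: autocorrelation
# pitch estimate, then mapping to the nearest equal-tempered note.
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))        # best period in samples
    return sample_rate / lag

def frequency_to_note(freq):
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# Example: a synthetic 440 Hz tone maps to "A4".
sr = 16000
t = np.arange(0, 0.1, 1.0 / sr)
print(frequency_to_note(estimate_pitch(np.sin(2 * np.pi * 440 * t), sr)))
```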
NASA Technical Reports Server (NTRS)
Hanson, Curt
2014-01-01
An adaptive augmenting control algorithm for the Space Launch System has been developed at the Marshall Space Flight Center as part of the launch vehicle's baseline flight control system. A prototype version of the SLS flight control software was hosted on a piloted aircraft at the Armstrong Flight Research Center to demonstrate the adaptive controller on a full-scale realistic application in a relevant flight environment. Concerns regarding adverse interactions between the adaptive controller and a proposed manual steering mode were investigated by giving the pilot trajectory deviation cues and pitch rate command authority.
Learn Locally, Act Globally: Learning Language from Variation Set Cues
Onnis, Luca; Waterfall, Heidi R.; Edelman, Shimon
2011-01-01
Variation set structure — partial overlap of successive utterances in child-directed speech — has been shown to correlate with progress in children’s acquisition of syntax. We demonstrate the benefits of variation set structure directly: in miniature artificial languages, arranging a certain proportion of utterances in a training corpus in variation sets facilitated word and phrase constituent learning in adults. Our findings have implications for understanding the mechanisms of L1 acquisition by children, and for the development of more efficient algorithms for automatic language acquisition, as well as better methods for L2 instruction. PMID:19019350
Real-Time Symbol Extraction From Grey-Level Images
NASA Astrophysics Data System (ADS)
Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.
1988-04-01
A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real time is presented. This 3-giga-operations-per-second processor uses large-kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel-wide and noise-free contours without thresholding, even from grey-level images with quite varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.
Engineering peptide ligase specificity by proteomic identification of ligation sites.
Weeks, Amy M; Wells, James A
2018-01-01
Enzyme-catalyzed peptide ligation is a powerful tool for site-specific protein bioconjugation, but stringent enzyme-substrate specificity limits its utility. We developed an approach for comprehensively characterizing peptide ligase specificity for N termini using proteome-derived peptide libraries. We used this strategy to characterize the ligation efficiency for >25,000 enzyme-substrate pairs in the context of the engineered peptide ligase subtiligase and identified a family of 72 mutant subtiligases with activity toward N-terminal sequences that were previously recalcitrant to modification. We applied these mutants individually for site-specific bioconjugation of purified proteins, including antibodies, and in algorithmically selected combinations for sequencing of the cellular N terminome with reduced sequence bias. We also developed a web application to enable algorithmic selection of the most efficient subtiligase variant(s) for bioconjugation to user-defined sequences. Our methods provide a new toolbox of enzymes for site-specific protein modification and a general approach for rapidly defining and engineering peptide ligase specificity.
Cardiovascular Redox and Ox Stress Proteomics
Kumar, Vikas; Calamaras, Timothy Dean; Haeussler, Dagmar; Colucci, Wilson Steven; Cohen, Richard Alan; McComb, Mark Errol; Pimentel, David
2012-01-01
Significance: Oxidative post-translational modifications (OPTMs) have been demonstrated as contributing to cardiovascular physiology and pathophysiology. These modifications have been identified using antibodies as well as advanced proteomic methods, and the functional importance of each is beginning to be understood using transgenic and gene deletion animal models. Given that OPTMs are involved in cardiovascular pathology, the use of these modifications as biomarkers and predictors of disease has significant therapeutic potential. Adequate understanding of the chemistry of the OPTMs is necessary to determine what may occur in vivo and which modifications would best serve as biomarkers. Recent Advances: By using mass spectrometry, advanced labeling techniques, and antibody identification, OPTMs have become accessible to a larger proportion of the scientific community. Advancements in instrumentation, database search algorithms, and processing speed have allowed MS to fully expand on the proteome of OPTMs. In addition, the role of enzymatically reversible OPTMs has been further clarified in preclinical models. Critical Issues: The identification of OPTMs suffers from limitations in analytic detection based on the methodology, instrumentation, sample complexity, and bioinformatics. Currently, each type of OPTM requires a specific strategy for identification, and generalized approaches result in an incomplete assessment. Future Directions: Novel types of highly sensitive MS instrumentation that allow for improved separation and detection of modified proteins and peptides have been crucial in the discovery of OPTMs and biomarkers. To further advance the identification of relevant OPTMs in advanced search algorithms, standardized methods for sample processing and depository of MS data will be required. Antioxid. Redox Signal. 17, 1528–1559. PMID:22607061
Thiol-ene mediated neoglycosylation of collagen patches: a preliminary study.
Russo, Laura; Battocchio, Chiara; Secchi, Valeria; Magnano, Elena; Nappini, Silvia; Taraballi, Francesca; Gabrielli, Luca; Comelli, Francesca; Papagni, Antonio; Costa, Barbara; Polzonetti, Giovanni; Nicotra, Francesco; Natalello, Antonino; Doglia, Silvia M; Cipolla, Laura
2014-02-11
Despite the relevance of carbohydrates as cues in eliciting specific biological responses, the covalent surface modification of collagen-based matrices with small carbohydrate epitopes has been scarcely investigated. We report thereby the development of an efficient procedure for the chemoselective neoglycosylation of collagen matrices (patches) via a thiol-ene approach, between alkene-derived monosaccharides and the thiol-functionalized material surface. Synchrotron radiation-induced X-ray photoelectron spectroscopy (SR-XPS), Fourier transform-infrared (FT-IR), and enzyme-linked lectin assay (ELLA) confirmed the effectiveness of the collagen neoglycosylation. Preliminary biological evaluation in osteoarthritic models is reported. The proposed methodology can be extended to any thiolated surface for the development of smart biomaterials for innovative approaches in regenerative medicine.
Sequence tagging reveals unexpected modifications in toxicoproteomics
Dasari, Surendra; Chambers, Matthew C.; Codreanu, Simona G.; Liebler, Daniel C.; Collins, Ben C.; Pennington, Stephen R.; Gallagher, William M.; Tabb, David L.
2010-01-01
Toxicoproteomic samples are rich in posttranslational modifications (PTMs) of proteins. Identifying these modifications via standard database searching can incur significant performance penalties. Here we describe the latest developments in TagRecon, an algorithm that leverages inferred sequence tags to identify modified peptides in toxicoproteomic data sets. TagRecon identifies known modifications more effectively than the MyriMatch database search engine. TagRecon outperformed state-of-the-art software in recognizing unanticipated modifications from LTQ, Orbitrap, and QTOF data sets. We developed user-friendly software for detecting persistent mass shifts from samples. We follow a three-step strategy for detecting unanticipated PTMs in samples. First, we identify the proteins present in the sample with a standard database search. Next, identified proteins are interrogated for unexpected PTMs with a sequence tag-based search. Finally, additional evidence is gathered for the detected mass shifts with a refinement search. Application of this technology on toxicoproteomic data sets revealed unintended cross-reactions between proteins and sample processing reagents. Twenty-five proteins in rat liver showed signs of oxidative stress when exposed to potentially toxic drugs. These results demonstrate the value of mining toxicoproteomic data sets for modifications. PMID:21214251
NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.
Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C
2011-09-14
An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics
Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task
NASA Astrophysics Data System (ADS)
Laubach, Mark; Wessberg, Johan; Nicolelis, Miguel A. L.
2000-06-01
When an animal learns to make movements in response to different stimuli, changes in activity in the motor cortex seem to accompany and underlie this learning. The precise nature of modifications in cortical motor areas during the initial stages of motor learning, however, is largely unknown. Here we address this issue by chronically recording from neuronal ensembles located in the rat motor cortex, throughout the period required for rats to learn a reaction-time task. Motor learning was demonstrated by a decrease in the variance of the rats' reaction times and an increase in the time the animals were able to wait for a trigger stimulus. These behavioural changes were correlated with a significant increase in our ability to predict the correct or incorrect outcome of single trials based on three measures of neuronal ensemble activity: average firing rate, temporal patterns of firing, and correlated firing. This increase in prediction indicates that an association between sensory cues and movement emerged in the motor cortex as the task was learned. Such modifications in cortical ensemble activity may be critical for the initial learning of motor tasks.
Nanoscale Surface Modifications of Medical Implants for Cartilage Tissue Repair and Regeneration
Griffin, MF; Szarko, M; Seifailan, A; Butler, PE
2016-01-01
Background: Natural cartilage regeneration is limited after trauma or degenerative processes. Due to the clinical challenge of reconstructing articular cartilage, research into developing biomaterials to support cartilage regeneration has evolved. The structural architecture and composition of the cartilage extracellular matrix (ECM) are vital in guiding cell adhesion, migration and formation of cartilage. Current technologies have tried to mimic the cell’s nanoscale microenvironment to improve implants for cartilage tissue repair. Methods: This review evaluates nanoscale techniques used to modify the implant surface for cartilage regeneration. Results: The surface of a biomaterial is a vital parameter in guiding cell adhesion and consequently allowing for the formation of ECM and tissue repair. Providing nanosized cues on the surface, in the form of nanotopography or nanosized molecules, allows for better control of cell behaviour and regeneration of cartilage. Chemical, physical and lithography techniques have all been explored for modifying the nanoscale surface of implants to promote chondrocyte adhesion and ECM formation. Conclusion: Future studies are needed to further establish the optimal nanoscale modification of implants for cartilage tissue regeneration. PMID:28217208
Messenger RNA Delivery for Tissue Engineering and Regenerative Medicine Applications.
Patel, Siddharth; Athirasala, Avathamsa; Menezes, Paula P; Ashwanikumar, N; Zou, Ting; Sahay, Gaurav; Bertassoni, Luiz E
2018-06-07
The ability to control cellular processes and precisely direct cellular reprogramming has revolutionized regenerative medicine. Recent advances in in vitro transcribed (IVT) mRNA technology with chemical modifications have led to development of methods that control spatiotemporal gene expression. Additionally, there is a current thrust toward the development of safe, integration-free approaches to gene therapy for translational purposes. In this review, we describe strategies of synthetic IVT mRNA modifications and nonviral technologies for intracellular delivery. We provide insights into the current tissue engineering approaches that use a hydrogel scaffold with genetic material. Furthermore, we discuss the transformative potential of novel mRNA formulations that when embedded in hydrogels can trigger controlled genetic manipulation to regenerate tissues and organs in vitro and in vivo. The role of mRNA delivery in vascularization, cytoprotection, and Cas9-mediated xenotransplantation is additionally highlighted. Harmonizing mRNA delivery vehicle interactions with polymeric scaffolds can be used to present genetic cues that lead to precise command over cellular reprogramming, differentiation, and secretome activity of stem cells, an ultimate goal for tissue engineering.
Mitigating Motion Base Safety Issues: The NASA LaRC CMF Implementation
NASA Technical Reports Server (NTRS)
Bryant, Richard B., Jr.; Grupton, Lawrence E.; Martinez, Debbie; Carrelli, David J.
2005-01-01
The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base design has taken advantage of inherent hydraulic characteristics to implement safety features using hardware solutions only. Motion system safety has always been a concern, and its implementation is addressed differently by each organization. Some approaches rely heavily on software safety features. Software that performs safety functions is subject to more scrutiny, making its approval, modification, and development time-consuming and expensive. The NASA LaRC CMF motion system is used for research and, as such, requires that the software be updated or modified frequently. The CMF's customers need the ability to update the simulation software frequently without the associated cost incurred with safety-critical software. This paper describes the CMF engineering team's approach to achieving motion base safety by designing and implementing all safety features in hardware, resulting in applications software (including motion cueing and actuator dynamic control) being completely independent of the safety devices. This allows the CMF safety systems to remain intact and unaffected by frequent research system modifications.
Małecki, Jędrzej; Jakobsson, Magnus E; Ho, Angela Y Y; Moen, Anders; Rustan, Arild C; Falnes, Pål Ø
2017-10-27
Lysine methylation is an important and much-studied posttranslational modification of nuclear and cytosolic proteins but is present also in mitochondria. However, the responsible mitochondrial lysine-specific methyltransferases (KMTs) remain largely elusive. Here, we investigated METTL12, a mitochondrial human S-adenosylmethionine (AdoMet)-dependent methyltransferase and found it to methylate a single protein in mitochondrial extracts, identified as citrate synthase (CS). Using several in vitro and in vivo approaches, we demonstrated that METTL12 methylates CS on Lys-395, which is localized in the CS active site. Interestingly, the METTL12-mediated methylation inhibited CS activity and was blocked by the CS substrate oxaloacetate. Moreover, METTL12 was strongly inhibited by the reaction product S-adenosylhomocysteine (AdoHcy). In summary, we have uncovered a novel human mitochondrial KMT that introduces a methyl modification into a metabolic enzyme and whose activity can be modulated by metabolic cues. Based on the established naming nomenclature for similar enzymes, we suggest that METTL12 be renamed CS-KMT (gene name CSKMT). © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
Dubois, Matthieu; Poeppel, David; Pelli, Denis G.
2013-01-01
To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60∶1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word’s image and sound, and combine them efficiently. The brain’s machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation. PMID:23734220
Object/rule integration in CLIPS [C Language Integrated Production System]
NASA Technical Reports Server (NTRS)
Donnell, Brian L.
1993-01-01
This paper gives a brief overview of the C Language Integrated Production System (CLIPS) with a focus on the object-oriented features. The advantages of an object data representation over the traditional working memory element (WME), i.e., facts, are discussed, and the implementation of the Rete inference algorithm in CLIPS is presented in detail. A few methods for achieving pattern-matching on objects with the current inference engine are given, and finally, the paper examines the modifications necessary to the Rete algorithm to allow direct object pattern-matching.
Iterative algorithms for computing the feedback Nash equilibrium point for positive systems
NASA Astrophysics Data System (ADS)
Ivanov, I.; Imsland, Lars; Bogdanova, B.
2017-03-01
The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
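As an illustration of the Newton-type iteration mentioned above, the sketch below applies a Kleinman-style Newton step to a single continuous-time algebraic Riccati equation; the coupled game Riccati equations of the paper and its accelerated modification are not reproduced, and the example matrices are illustrative.

```python
# Minimal sketch of a Newton (Kleinman-type) iteration for a single
# continuous-time algebraic Riccati equation; the game setting of the paper
# couples several such equations, which is not shown here.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_riccati(A, B, Q, R, K0, iters=20):
    """Iterate towards the stabilizing solution of
       A'X + XA - XBR^{-1}B'X + Q = 0, starting from a stabilizing gain K0."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        # Lyapunov step: Acl' X + X Acl = -(Q + K' R K)
        X = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ X)        # Newton update of the gain
    return X, K

# Example with a two-state, single-input system (illustrative values).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
X, K = newton_riccati(A, B, Q, R, K0=np.zeros((1, 2)))
```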
Method of generating a surface mesh
Shepherd, Jason F [Albuquerque, NM]; Benzley, Steven [Provo, UT]; Grover, Benjamin T [Tracy, CA]
2008-03-04
A method and machine-readable medium provide a technique to generate and modify a quadrilateral finite element surface mesh using dual creation and modification. After generating a dual of a surface (mesh), a predetermined algorithm may be followed to generate and modify a surface mesh of quadrilateral elements. The predetermined algorithm may include the steps of generating two-dimensional cell regions in dual space, determining existing nodes in primal space, generating new nodes in the dual space, and connecting nodes to form the quadrilateral elements (faces) for the generated and modifiable surface mesh.
1980-10-01
faster than previous algorithms. Indeed, with only minor modifications, the standard multigrid programs solve the LCP with essentially the same efficiency... Lemma 2.2. Let U_k be the solution of the LCP (2.3), and let u_k > 0 be an approximate solution obtained after one or more G_k projected sweeps. Let... in Figure 3.2, ||∇u||_G decreased from .293 10 to .110 10 with the expenditure of (99.039 - 94.400) = 4.639 work units. While minor variations do arise, a
Adaptive antenna arrays for satellite communication
NASA Technical Reports Server (NTRS)
Gupta, Inder J.
1989-01-01
The feasibility of using adaptive antenna arrays to provide interference protection in satellite communications was studied. The feedback loops as well as the sample matrix inversion (SMI) algorithm for weight control were studied. Appropriate modifications to the two were made to achieve the required interference suppression. An experimental system was built to test the modified feedback loops and the modified SMI algorithm. The performance of the experimental system was evaluated using bench-generated signals and signals received from TVRO geosynchronous satellites. A summary of results is given. Some suggestions for future work are also presented.
Faster Heavy Ion Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.
2013-01-01
The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximate 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.
UAV Control on the Basis of 3D Landmark Bearing-Only Observations.
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-11-27
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks' position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.
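The RANSAC component mentioned above follows the usual hypothesize-and-verify loop; a generic sketch is shown below. The paper's specific 3D landmark model and the Kalman-based position prediction are not reproduced, and `fit_model`, `residuals`, and the thresholds are placeholders to be supplied by the caller.

```python
# Generic RANSAC skeleton of the kind used for robust landmark matching;
# `fit_model` and `residuals` are caller-supplied placeholders.
import numpy as np

def ransac(data, fit_model, residuals, min_samples, threshold,
           iterations=200, rng=np.random.default_rng(0)):
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(iterations):
        sample = rng.choice(len(data), size=min_samples, replace=False)
        model = fit_model(data[sample])               # hypothesis from a minimal set
        inliers = residuals(model, data) < threshold  # verify against all data
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    if best_inliers.any():
        best_model = fit_model(data[best_inliers])    # refit on all inliers
    return best_model, best_inliers
```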
Qutrit witness from the Grothendieck constant of order four
NASA Astrophysics Data System (ADS)
Diviánszky, Péter; Bene, Erika; Vértesi, Tamás
2017-07-01
In this paper, we prove that K_G(3) ...
Distributed pheromone-based swarming control of unmanned air and ground vehicles for RSTA
NASA Astrophysics Data System (ADS)
Sauter, John A.; Mathews, Robert S.; Yinger, Andrew; Robinson, Joshua S.; Moody, John; Riddle, Stephanie
2008-04-01
The use of unmanned vehicles in Reconnaissance, Surveillance, and Target Acquisition (RSTA) applications has received considerable attention recently. Cooperating land and air vehicles can support multiple sensor modalities providing pervasive and ubiquitous broad area sensor coverage. However coordination of multiple air and land vehicles serving different mission objectives in a dynamic and complex environment is a challenging problem. Swarm intelligence algorithms, inspired by the mechanisms used in natural systems to coordinate the activities of many entities provide a promising alternative to traditional command and control approaches. This paper describes recent advances in a fully distributed digital pheromone algorithm that has demonstrated its effectiveness in managing the complexity of swarming unmanned systems. The results of a recent demonstration at NASA's Wallops Island of multiple Aerosonde Unmanned Air Vehicles (UAVs) and Pioneer Unmanned Ground Vehicles (UGVs) cooperating in a coordinated RSTA application are discussed. The vehicles were autonomously controlled by the onboard digital pheromone responding to the needs of the automatic target recognition algorithms. UAVs and UGVs controlled by the same pheromone algorithm self-organized to perform total area surveillance, automatic target detection, sensor cueing, and automatic target recognition with no central processing or control and minimal operator input. Complete autonomy adds several safety and fault tolerance requirements which were integrated into the basic pheromone framework. The adaptive algorithms demonstrated the ability to handle some unplanned hardware failures during the demonstration without any human intervention. The paper describes lessons learned and the next steps for this promising technology.
Emotion to emotion speech conversion in phoneme level
NASA Astrophysics Data System (ADS)
Bulut, Murtaza; Yildirim, Serdar; Busso, Carlos; Lee, Chul Min; Kazemzadeh, Ebrahim; Lee, Sungbok; Narayanan, Shrikanth
2004-10-01
Having an ability to synthesize emotional speech can make human-machine interaction more natural in spoken dialogue management. This study investigates the effectiveness of prosodic and spectral modification at the phoneme level on emotion-to-emotion speech conversion. The prosody modification is performed with the TD-PSOLA algorithm (Moulines and Charpentier, 1990). We also transform the spectral envelopes of source phonemes to match those of target phonemes using an LPC-based spectral transformation approach (Kain, 2001). Prosodic speech parameters (F0, duration, and energy) for target phonemes are estimated from the statistics obtained from the analysis of an emotional speech database of happy, angry, sad, and neutral utterances collected from actors. Listening experiments conducted with native American English speakers indicate that the modification of prosody only or spectrum only is not sufficient to elicit targeted emotions. The simultaneous modification of both prosody and spectrum results in higher acceptance rates of target emotions, suggesting that modeling spectral patterns that reflect underlying speech articulation is as important as modeling speech prosody for synthesizing emotional speech with good quality. We are investigating suprasegmental-level modifications for further improvement in speech quality and expressiveness.
Lissek, Silke; Glaubitz, Benjamin; Güntürkün, Onur; Tegenthoff, Martin
2015-01-01
Renewal in extinction learning describes the recovery of an extinguished response if the extinction context differs from the context present during acquisition and recall. Attention may have a role in contextual modulation of behavior and contribute to the renewal effect, while noradrenaline (NA) is involved in attentional processing. In this functional magnetic resonance imaging (fMRI) study we investigated the role of the noradrenergic system for behavioral and brain activation correlates of contextual extinction and renewal, with a particular focus upon hippocampus and ventromedial prefrontal cortex (PFC), which have crucial roles in processing of renewal. Healthy human volunteers received a single dose of the NA reuptake inhibitor atomoxetine prior to extinction learning. During extinction of previously acquired cue-outcome associations, cues were presented in a novel context (ABA) or in the acquisition context (AAA). In recall, all cues were again presented in the acquisition context. Atomoxetine participants (ATO) showed significantly faster extinction compared to placebo (PLAC). However, atomoxetine did not affect renewal. Hippocampal activation was higher in ATO during extinction and recall, as was ventromedial PFC activation, except for ABA recall. Moreover, ATO showed stronger recruitment of insula, anterior cingulate, and dorsolateral/orbitofrontal PFC. Across groups, cingulate, hippocampus and vmPFC activity during ABA extinction correlated with recall performance, suggesting high relevance of these regions for processing the renewal effect. In summary, the noradrenergic system appears to be involved in the modification of established associations during extinction learning and thus has a role in behavioral flexibility. The assignment of an association to a context and the subsequent decision on an adequate response, however, presumably operate largely independently of noradrenergic mechanisms. PMID:25745389
Gaylo, Alison; Schrock, Dillon C.; Fernandes, Ninoshka R. J.; Fowell, Deborah J.
2016-01-01
Effector T cells exit the inflamed vasculature into an environment shaped by tissue-specific structural configurations and inflammation-imposed extrinsic modifications. Once within interstitial spaces of non-lymphoid tissues, T cells migrate in an apparent random, non-directional, fashion. Efficient T cell scanning of the tissue environment is essential for successful location of infected target cells or encounter with antigen-presenting cells that activate the T cell’s antimicrobial effector functions. The mechanisms of interstitial T cell motility and the environmental cues that may promote or hinder efficient tissue scanning are poorly understood. The extracellular matrix (ECM) appears to play an important scaffolding role in guidance of T cell migration and likely provides a platform for the display of chemotactic factors that may help to direct the positioning of T cells. Here, we discuss how intravital imaging has provided insight into the motility patterns and cellular machinery that facilitates T cell interstitial migration and the critical environmental factors that may optimize the efficiency of effector T cell scanning of the inflamed tissue. Specifically, we highlight the local micro-positioning cues T cells encounter as they migrate within inflamed tissues, from surrounding ECM and signaling molecules, as well as a requirement for appropriate long-range macro-positioning within distinct tissue compartments or at discrete foci of infection or tissue damage. The central nervous system (CNS) responds to injury and infection by extensively remodeling the ECM and with the de novo generation of a fibroblastic reticular network that likely influences T cell motility. We examine how inflammation-induced changes to the CNS landscape may regulate T cell tissue exploration and modulate function. PMID:27790220
Gaudio, Jennifer L; Snowdon, Charles T
2008-11-01
Animals living in stable home ranges have many potential cues to locate food. Spatial and color cues are important for wild Callitrichids (marmosets and tamarins). Field studies have assigned the highest priority to distal spatial cues for determining the location of food resources with color cues serving as a secondary cue to assess relative ripeness, once a food source is located. We tested two hypotheses with captive cotton-top tamarins: (a) Tamarins will demonstrate higher rates of initial learning when rewarded for attending to spatial cues versus color cues. (b) Tamarins will show higher rates of correct responses when transferred from color cues to spatial cues than from spatial cues to color cues. The results supported both hypotheses. Tamarins rewarded based on spatial location made significantly more correct choices and fewer errors than tamarins rewarded based on color cues during initial learning. Furthermore, tamarins trained on color cues showed significantly increased correct responses and decreased errors when cues were reversed to reward spatial cues. Subsequent reversal to color cues induced a regression in performance. For tamarins spatial cues appear more salient than color cues in a foraging task. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Direct model reference adaptive control with application to flexible robots
NASA Technical Reports Server (NTRS)
Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.
1992-01-01
A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
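The core rescaling loop of iterative proportional fitting can be sketched on a simple 2-D table, where an initial estimate is alternately rescaled to match imposed row and column marginals. The article's multivariate facies probability, sparse-matrix marginalization, and drill-hole-derived constraints are not reproduced; this is only the basic IPF iteration.

```python
# Minimal sketch of iterative proportional fitting on a 2-D probability
# table: rescale in turn to match the imposed row and column marginals.
import numpy as np

def ipf_2d(initial, row_marginal, col_marginal, iters=100, tol=1e-10):
    p = np.asarray(initial, dtype=float)
    for _ in range(iters):
        p *= (row_marginal / p.sum(axis=1))[:, None]   # match row sums
        p *= (col_marginal / p.sum(axis=0))[None, :]   # match column sums
        if (np.abs(p.sum(axis=1) - row_marginal).max() < tol and
                np.abs(p.sum(axis=0) - col_marginal).max() < tol):
            break
    return p

# Example: start from a uniform 3x2 table and impose marginals.
p = ipf_2d(np.ones((3, 2)) / 6, row_marginal=np.array([0.5, 0.3, 0.2]),
           col_marginal=np.array([0.6, 0.4]))
```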
Transport Protocols for Wireless Mesh Networks
NASA Astrophysics Data System (ADS)
Eddie Law, K. L.
Transmission control protocol (TCP) provides reliable connection-oriented services between any two end systems on the Internet. With the TCP congestion control algorithm, multiple TCP connections can share network and link resources simultaneously. These congestion control mechanisms have operated effectively in wired networks. However, the performance of TCP connections degrades rapidly in wireless and lossy networks. To sustain the throughput of TCP connections in wireless networks, design modifications may be required in the TCP flow control algorithm and, potentially, in other protocols at other layers for proper adaptation. In this chapter, we explain the limitations of the latest TCP congestion control algorithm and then review some popular designs that allow TCP connections to operate effectively in wireless mesh network infrastructures.
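The underlying difficulty is that standard TCP interprets every loss as congestion. A toy additive-increase/multiplicative-decrease (AIMD) trace, sketched below, shows why random wireless losses depress the congestion window and hence throughput; it is illustrative only and not a TCP implementation.

```python
# Minimal sketch of AIMD congestion-window evolution. A random
# (non-congestion) wireless loss still halves the window, which is why TCP
# throughput degrades on lossy links.
import random

def aimd_trace(rounds=100, loss_prob=0.05, seed=1):
    random.seed(seed)
    cwnd, trace = 1.0, []
    for _ in range(rounds):
        if random.random() < loss_prob:
            cwnd = max(1.0, cwnd / 2.0)   # multiplicative decrease on loss
        else:
            cwnd += 1.0                   # additive increase per RTT
        trace.append(cwnd)
    return trace

print(sum(aimd_trace()) / 100.0)          # mean window under 5% random loss
```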
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps can be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
Autonomous Instrument Placement for Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Leger, P. Chris; Maimone, Mark
2009-01-01
Autonomous Instrument Placement (AutoPlace) is onboard software that enables a Mars Exploration Rover to act autonomously in using its manipulator to place scientific instruments on or near designated rock and soil targets. Prior to the development of AutoPlace, it was necessary for human operators on Earth to plan every motion of the manipulator arm in a time-consuming process that included downlinking of images from the rover, analysis of images and creation of commands, and uplinking of commands to the rover. AutoPlace incorporates image analysis and planning algorithms into the onboard rover software, eliminating the need for the downlink/uplink command cycle. Many of these algorithms are derived from the existing ground-based image analysis and planning algorithms, with modifications and augmentations for onboard use.
Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.
Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K
1995-07-01
The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
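The difference between maximum-intensity projection and volume rendering can be sketched for an axis-aligned viewing direction: MIP keeps only the brightest voxel along each ray, whereas a simple emission-only compositing sums weighted contributions from all voxels. The sketch below is illustrative; surface rendering, shading, and perspective ray casting are not shown.

```python
# Minimal sketch of two of the projection ideas above for a CT volume viewed
# along one axis; illustrative, not a full renderer.
import numpy as np

def mip(volume, axis=0):
    """Maximum-intensity projection along the viewing axis."""
    return volume.max(axis=axis)

def volume_render(volume, axis=0, opacity=0.02):
    """Crude front-to-back compositing of all voxels along each ray."""
    vol = np.moveaxis(volume, axis, 0).astype(float)
    image = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for slab in vol:                       # march from the viewer into the data
        image += transmittance * opacity * slab
        transmittance *= (1.0 - opacity)
    return image
```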
Spectral-spatial classification of hyperspectral imagery with cooperative game
NASA Astrophysics Data System (ADS)
Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei
2018-01-01
Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which CRF is used to model the image considering the spatial contextual information, and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory. The pixels of the image are considered as the players, and the labels are considered as the strategies in a game. Similar to the idea of soft classification, the uncertainty is considered to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be performed to make a coalition decision to obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented, which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
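A minimal sketch of the spectral step described above: the Lomb-Scargle periodogram (here via scipy) yields a power spectrum directly from irregularly sampled data, where the DFT would not apply. The paper's subsequent analysis of the spectrum for chaos detection is not reproduced.

```python
# Power spectrum of an irregularly sampled series via the Lomb-Scargle
# periodogram, in place of the DFT used for regular sampling.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, size=500))     # irregular sample times
x = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.standard_normal(t.size)

freqs = np.linspace(0.01, 1.0, 2000)               # cycles per unit time
power = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)
print(freqs[np.argmax(power)])                     # ~0.2, the true frequency
```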
Arterial cannula shape optimization by means of the rotational firefly algorithm
NASA Astrophysics Data System (ADS)
Tesch, K.; Kaczorowska, K.
2016-03-01
This article presents global optimization results for arterial cannula shapes by means of the newly modified firefly algorithm. The search for the optimal arterial cannula shape is necessary in order to minimize losses and prepare the flow that leaves the circulatory support system of a ventricle (i.e., blood pump) before it reaches the heart. A modification of the standard firefly algorithm, the so-called rotational firefly algorithm, is introduced. It is shown that the rotational firefly algorithm allows for better exploration of search spaces, which results in faster convergence and better solutions in comparison with its standard version. This is particularly pronounced for smaller population sizes. Furthermore, it maintains greater diversity of populations for a longer time. A small population size and a low number of iterations are necessary to keep the computational cost of the objective function to a minimum, since each evaluation requires the numerical solution of nonlinear partial differential equations. Moreover, both versions of the firefly algorithm are compared to the state of the art, namely the differential evolution and covariance matrix adaptation evolution strategies.
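The rotational modification itself is specific to the paper, but the standard firefly algorithm it builds on can be sketched as below: each firefly moves toward brighter (lower-cost) fireflies with an exponentially decaying attractiveness plus a small random step. Parameters and the test objective are illustrative.

```python
# Minimal sketch of the standard firefly algorithm; the rotational operator
# of the paper is not reproduced. Minimizes f over a box-constrained space.
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=20, iters=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    cost = np.array([f(p) for p in x])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:               # move i towards brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = f(x[i])
        alpha *= 0.97                                # slowly reduce randomness
    best = int(np.argmin(cost))
    return x[best], cost[best]

# Example: 2-D sphere function.
xb, fb = firefly_minimize(lambda p: np.sum(p ** 2),
                          (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```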
Manning, Victoria; Staiger, Petra K; Hall, Kate; Garfield, Joshua B B; Flaks, Gabriella; Leung, Daniel; Hughes, Laura K; Lum, Jarrad A G; Lubman, Dan I; Verdejo-Garcia, Antonio
2016-09-01
Relapse is common in alcohol-dependent individuals and can be triggered by alcohol-related cues in the environment. It has been suggested that these individuals develop cognitive biases, in which cues automatically capture attention and elicit an approach action tendency that promotes alcohol seeking. The study aim was to examine whether cognitive bias modification (CBM) training targeting approach bias could be delivered during residential alcohol detoxification and improve treatment outcomes. Using a 2-group parallel-block (ratio 1:1) randomized controlled trial with allocation concealed to the outcome assessor, 83 alcohol-dependent inpatients received either 4 sessions of CBM training where participants were implicitly trained to make avoidance movements in response to pictures of alcoholic beverages and approach movements in response to pictures of nonalcoholic beverages, or 4 sessions of sham training (controls) delivered over 4 consecutive days during the 7-day detoxification program. The primary outcome measure was continuous abstinence at 2 weeks postdischarge. Secondary outcomes included time to relapse, frequency and quantity of alcohol consumption, and craving. Outcomes were assessed in a telephonic follow-up interview. Seventy-one (85%) participants were successfully followed up, of whom 61 completed all 4 training sessions. With an intention-to-treat approach, there was a trend for higher abstinence rates in the CBM group relative to controls (69 vs. 47%, p = 0.07); however, a per-protocol analysis revealed significantly higher abstinence rates among participants completing 4 sessions of CBM relative to controls (75 vs. 45%, p = 0.02). Craving score, time to relapse, mean drinking days, and mean standard drinks per drinking day did not differ significantly between the groups. This is the first trial demonstrating the feasibility of CBM delivered during alcohol detoxification and supports earlier research suggesting it may be a useful, low-cost adjunctive treatment to improve treatment outcomes for alcohol-dependent patients. Copyright © 2016 by the Research Society on Alcoholism.
Formation control of robotic swarm using bounded artificial forces.
Qin, Long; Zha, Yabing; Yin, Quanjun; Peng, Yong
2013-01-01
Formation control of multirobot systems has drawn significant attention in recent years. This paper presents a potential field control algorithm that navigates a swarm of robots into a predefined 2D shape while avoiding intermember collisions. The algorithm applies to both stationary and moving target formations. We define the bounded artificial forces in the form of exponential functions, so that the behavior of the swarm driven by the forces can be adjusted by selecting proper control parameters. A theoretical analysis of the swarm behavior proves the stability and convergence properties of the algorithm. We further make certain modifications to the forces to improve the robustness of the swarm behavior in the presence of realistic implementation considerations, including obstacle avoidance, local minima, and deformation of the shape. Finally, detailed simulation results validate the efficiency of the proposed algorithm, and directions for possible future work are discussed in the conclusions.
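The bounded, exponential-type artificial forces described above can be illustrated with a minimal sketch: attraction toward an assigned goal point saturates at a finite magnitude, and a short-range repulsion keeps swarm members apart. The exact expressions, gains, and shape-assignment logic of the paper are not reproduced.

```python
# Illustrative bounded artificial forces of the exponential type: attraction
# saturates at f_max, repulsion decays exponentially with distance.
import numpy as np

def attraction(pos, goal, f_max=1.0, scale=2.0):
    d = goal - pos
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return np.zeros_like(pos)
    return f_max * (1.0 - np.exp(-dist / scale)) * d / dist   # bounded by f_max

def repulsion(pos, neighbours, f_max=2.0, scale=0.5):
    force = np.zeros_like(pos)
    for q in neighbours:
        d = pos - q
        dist = np.linalg.norm(d)
        if dist > 0.0:
            force += f_max * np.exp(-dist / scale) * d / dist  # decays with range
    return force
```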
Relabeling exchange method (REM) for learning in neural networks
NASA Astrophysics Data System (ADS)
Wu, Wen; Mammone, Richard J.
1994-02-01
The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when 'optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
New Secure E-mail System Based on Bio-Chaos Key Generation and Modified AES Algorithm
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Radi, A. M.
2018-05-01
E-mail messages are exchanged between the sender's mailbox and the recipient's mailbox over open systems and insecure networks. These messages may be vulnerable to eavesdropping, which poses a real threat to privacy and data integrity from unauthorized persons. E-mail security includes the following properties: confidentiality, authentication, and message integrity. A secure encryption algorithm is needed to encrypt e-mail messages, such as the Advanced Encryption Standard (AES) or the Data Encryption Standard (DES), together with biometric recognition and a chaotic system. The proposed e-mail security system uses a modified AES algorithm and a secret bio-chaos key that combines a biometric (fingerprint) with chaotic systems (Lu and Lorenz). This modification makes the proposed system more sensitive and random. The execution time for both encryption and decryption of the proposed system is much lower than that of the original AES, in addition to being compatible with all mail servers.
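One ingredient of the bio-chaos key described above is a chaotic trajectory; the sketch below shows a hedged example of deriving key bytes from a Lorenz system with simple Euler integration and quantization. The fingerprint features, the Lu system, the key-mixing step, and the modification to AES itself are not reproduced, and all parameters are illustrative.

```python
# Illustrative derivation of key bytes from a Lorenz trajectory, as one
# ingredient of a bio-chaos key; not the paper's full key-generation scheme.
import numpy as np

def lorenz_key_bytes(n_bytes, x0=(0.1, 0.0, 0.0),
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.001, skip=1000):
    x, y, z = x0
    out = bytearray()
    step = 0
    while len(out) < n_bytes:
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz   # Euler integration
        step += 1
        if step > skip and step % 10 == 0:                # discard transient
            out.append(int(abs(x) * 1e6) % 256)           # quantize to a byte
    return bytes(out[:n_bytes])

key = lorenz_key_bytes(16)   # 128-bit key material (illustrative only)
```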
Modification of Prim’s algorithm on complete broadcasting graph
NASA Astrophysics Data System (ADS)
Dairina; Arif, Salmawaty; Munzir, Said; Halfiani, Vera; Ramli, Marwan
2017-09-01
Broadcasting is the dissemination of information from one object to another through communication between two objects in a network. Broadcasting among n objects can be accomplished with n - 1 communications and a minimum number of time units given by ⌈log₂ n⌉. In this paper, broadcasting on weighted graphs is considered, and the minimum weight of a complete broadcasting graph is determined. A broadcasting graph is said to be complete if every pair of vertices is connected; thus, determining the minimum weight of a complete broadcasting graph is equivalent to determining the minimum spanning tree of a complete graph. Kruskal's and Prim's algorithms are used to determine the minimum weight of a complete broadcasting graph without regard to the ⌈log₂ n⌉ time bound, and a modified Prim's algorithm is developed for problems in which the ⌈log₂ n⌉ time bound must be respected. As an example case, a training-of-trainers problem is solved using these algorithms.
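For reference, here is a minimal sketch of the baseline step the modification builds on: standard Prim's algorithm computing the minimum spanning tree of a complete weighted graph. The heap-based implementation and the example weight matrix are illustrative; the paper's ⌈log₂ n⌉ time-bound modification is not reproduced here.

```python
import heapq

def prim_mst(n, weights):
    """Standard Prim's algorithm on a complete graph with n vertices.
    weights[i][j] is the edge weight between vertices i and j."""
    visited = [False] * n
    visited[0] = True
    heap = [(weights[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    tree, total = [], 0
    while heap and len(tree) < n - 1:
        w, u, v = heapq.heappop(heap)
        if visited[v]:
            continue
        visited[v] = True
        tree.append((u, v, w))
        total += w
        for j in range(n):
            if not visited[j]:
                heapq.heappush(heap, (weights[v][j], v, j))
    return tree, total

# Example: 4 trainers with symmetric communication costs
w = [[0, 2, 3, 1],
     [2, 0, 4, 5],
     [3, 4, 0, 6],
     [1, 5, 6, 0]]
print(prim_mst(4, w))
```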
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, J; Ghavidel, B; Godette, K
Purpose: To validate a knowledge-based algorithm for prostate LDR brachytherapy treatment planning. Methods: A dataset of 100 cases was compiled from an active prostate seed implant service. Cases were randomized into 10 subsets. For each subset, the 90 remaining library cases were registered to a common reference frame and then characterized on a point-by-point basis using principal component analysis (PCA). Each test case was converted to PCA vectors using the same process and compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Any subsequent modifications were recorded that required input from a treatment planner to achieve V100>95%, V150<60%, V200<20%. To simulate operating-room planning constraints, seed activity was held constant, and the seed count could not increase. Results: The computational time required to register test-case contours and evaluate PCA similarity across the library was 10 s. Preliminary analysis of 2 subsets shows that 9 of 20 test cases did not require any seed modifications to obtain an acceptable plan. Five test cases required fewer than 10 seed modifications or a grid shift. Another 5 test cases required approximately 20 seed modifications. An acceptable plan was not achieved for 1 outlier, which was substantially larger than its best match. Modifications took between 5 s and 6 min. Conclusion: A knowledge-based treatment planning algorithm for prostate LDR brachytherapy is being cross-validated using 100 prior cases. Preliminary results suggest that for this size library, acceptable plans can be achieved without planner input in about half of the cases, while varying amounts of planner input are needed in the remaining cases. Computational time and planning time are compatible with clinical practice.
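A rough sketch of the case-matching step described above, assuming the registered contours are stored as rows of a point-coordinate matrix: project the library and the test case onto principal components and pick the library case with the smallest Mahalanobis distance in PCA space. The function and variable names are hypothetical, and the diagonal-covariance distance is a simplification rather than necessarily the one used in the study.

```python
import numpy as np

def best_matched_case(library, test_case, n_components=10):
    """library: (n_cases, n_points) registered contour point sets;
    test_case: (n_points,) contour for the new patient.
    Returns the index of the most similar library case."""
    mean = library.mean(axis=0)
    X = library - mean
    # PCA via SVD of the centered library
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:n_components]                      # principal axes
    lib_scores = X @ P.T                       # (n_cases, n_components)
    test_scores = (test_case - mean) @ P.T
    # Mahalanobis distance in PCA space with a diagonal covariance of scores
    var = lib_scores.var(axis=0) + 1e-12
    d2 = ((lib_scores - test_scores) ** 2 / var).sum(axis=1)
    return int(np.argmin(d2))
```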
Software Performs Complex Design Analysis
NASA Technical Reports Server (NTRS)
2008-01-01
Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability-a major advancement in solving these two aforementioned problems-to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing the CFD designers to freely create their own shape parameters, therefore eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every shape change desired. The program can perform a design change in a markedly reduced amount of time, a process that would traditionally involve the designer returning to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days-even weeks or months-depending upon the size of the model.
Algorithm for Determination of Orion Ascent Abort Mode Achievability
NASA Technical Reports Server (NTRS)
Tedesco, Mark B.
2011-01-01
For human spaceflight missions, a launch vehicle failure poses the challenge of returning the crew safely to earth through environments that are often much more stressful than the nominal mission. Manned spaceflight vehicles require continuous abort capability throughout the ascent trajectory to protect the crew in the event of a failure of the launch vehicle. To provide continuous abort coverage during the ascent trajectory, different types of Orion abort modes have been developed. If a launch vehicle failure occurs, the crew must be able to quickly and accurately determine the appropriate abort mode to execute. Early in the ascent, while the Launch Abort System (LAS) is attached, abort mode selection is trivial, and any failures will result in a LAS abort. For failures after LAS jettison, the Service Module (SM) effectors are employed to perform abort maneuvers. Several different SM abort mode options are available depending on the current vehicle location and energy state. During this region of flight the selection of the abort mode that maximizes the survivability of the crew becomes non-trivial. To provide the most accurate and timely information to the crew and the onboard abort decision logic, on-board algorithms have been developed to propagate the abort trajectories based on the current launch vehicle performance and to predict the current abort capability of the Orion vehicle. This paper will provide an overview of the algorithm architecture for determining abort achievability as well as the scalar integration scheme that makes the onboard computation possible. Extension of the algorithm to assessing abort coverage impacts from Orion design modifications and launch vehicle trajectory modifications is also presented.
Potter, Brian J; Matteau, Alexis; Mansour, Samer; Naim, Charbel; Riahi, Mounir; Essiambre, Richard; Montigny, Martine; Sareault, Isabelle; Gobeil, François
2017-01-01
Treatment times for primary percutaneous coronary intervention frequently exceed the recommended maximum delay. Automated "physicianless" systems of prehospital cardiac catheterization laboratory (CCL) activation show promise, but have been met with resistance over concerns regarding the potential for false-positive and inappropriate activations (IAs). From 2010 to 2015, first responders performed electrocardiograms (ECGs) in the field for all patients with a complaint of chest pain or dyspnea. An automated machine diagnosis of "acute myocardial infarction" resulted in immediate CCL activation and direct transfer without transmission or human reinterpretation of the ECG prior to patient arrival. Any activation resulting from a nondiagnostic ECG (no ST-elevation) was deemed an IA, whereas activations resulting from ECGs compatible with ST-elevation myocardial infarction but without angiographic evidence of a coronary event were deemed false positives. In 2012, the referral algorithm was modified to exclude supraventricular tachycardia and left bundle branch block. There were 155 activations in the early cohort (2010-2012; prior to algorithm modification) and 313 in the late cohort (2012-2015). Algorithm modification resulted in a 42% relative decrease in the rate of IAs (12% vs 7%; P < 0.01) without a significant effect on treatment delay. A combination of prehospital automated ST-elevation myocardial infarction diagnosis and "physicianless" CCL activation is safe and effective in improving treatment delay, and these results are sustainable over time. The performance of the referral algorithm in terms of IAs and false positives is at least on par with systems that ensure real-time human oversight. Copyright © 2016 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
A technique is presented for finding the least squares estimates for the ultimate biochemical oxygen demand (BOD) and rate coefficient for the BOD reaction without resorting to complicated computer algorithms or subjective graphical methods. This may be used in stream water quali...
Knowledge Based Engineering for Spatial Database Management and Use
NASA Technical Reports Server (NTRS)
Peuquet, D. (Principal Investigator)
1984-01-01
The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.
Query Modification through External Sources to Support Clinical Decisions
2014-11-01
takes no medications. Physical examination is normal. The EKG shows nonspecific changes. Summary 58-year-old woman with hypertension and obesity presents...algorithm for suffix stripping. Program, 14:130–137, 1980. Reprinted in Readings in Information Retrieval, pages 313–316, 1997. M. S. Simpson, E
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
The Design and Analysis of Efficient Learning Algorithms
1991-01-01
will be c-close to the target concept with high probability. (Technically, their approach needs some minor modifications to handle, for instance, a...test in the sense that if cii = cik = cik then nothing can be concluded about the relative depth of 17, and Fk . However, our next lemnas give
Navigational strategies underlying phototaxis in larval zebrafish.
Chen, Xiuye; Engert, Florian
2014-01-01
Understanding how the brain transforms sensory input into complex behavior is a fundamental question in systems neuroscience. Using larval zebrafish, we study the temporal component of phototaxis, which is defined as orientation decisions based on comparisons of light intensity at successive moments in time. We developed a novel "Virtual Circle" assay where whole-field illumination is abruptly turned off when the fish swims out of a virtually defined circular border, and turned on again when it returns into the circle. The animal receives no direct spatial cues and experiences only whole-field temporal light changes. Remarkably, the fish spends most of its time within the invisible virtual border. Behavioral analyses of swim bouts in relation to light transitions were used to develop four discrete temporal algorithms that transform the binary visual input (uniform light/uniform darkness) into the observed spatial behavior. In these algorithms, the turning angle is dependent on the behavioral history immediately preceding individual turning events. Computer simulations show that the algorithms recapture most of the swim statistics of real fish. We discovered that turning properties in larval zebrafish are distinctly modulated by temporal step functions in light intensity in combination with the specific motor history preceding these turns. Several aspects of the behavior suggest memory usage of up to 10 swim bouts (~10 sec). Thus, we show that a complex behavior like spatial navigation can emerge from a small number of relatively simple behavioral algorithms.
Kelly, Jonathan W; McNamara, Timothy P; Bodenheimer, Bobby; Carr, Thomas H; Rieser, John J
2009-02-01
Two experiments explored the role of environmental cues in maintaining spatial orientation (sense of self-location and direction) during locomotion. Of particular interest was the importance of geometric cues (provided by environmental surfaces) and featural cues (nongeometric properties provided by striped walls) in maintaining spatial orientation. Participants performed a spatial updating task within virtual environments containing geometric or featural cues that were ambiguous or unambiguous indicators of self-location and direction. Cue type (geometric or featural) did not affect performance, but the number and ambiguity of environmental cues did. Gender differences, interpreted as a proxy for individual differences in spatial ability and/or experience, highlight the interaction between cue quantity and ambiguity. When environmental cues were ambiguous, men stayed oriented with either one or two cues, whereas women stayed oriented only with two. When environmental cues were unambiguous, women stayed oriented with one cue.
Expeditious reconciliation for practical quantum key distribution
NASA Astrophysics Data System (ADS)
Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.
2004-08-01
The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
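For orientation, the toy sketch below shows one pass of a basic Cascade-style reconciliation: block parities are compared over the public channel, and a disagreeing block is binary-searched to locate and flip one erroneous bit. It illustrates only the generic Cascade idea, not the authors' symmetric formulation, segment elimination, or forward error correction.

```python
def parity(bits):
    return sum(bits) % 2

def binary_search_error(alice, bob, lo, hi):
    """Locate one differing bit position inside [lo, hi) by exchanging
    parities of half-blocks, as Cascade would over the public channel."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

def cascade_pass(alice, bob, block_size):
    """One pass: correct at most one error per block whose parities differ."""
    corrected = []
    for start in range(0, len(alice), block_size):
        end = min(start + block_size, len(alice))
        if parity(alice[start:end]) != parity(bob[start:end]):
            pos = binary_search_error(alice, bob, start, end)
            bob[pos] ^= 1
            corrected.append(pos)
    return corrected

# Toy example: Bob's sifted key has two flipped bits (positions 2 and 7)
alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob   = [1, 0, 0, 1, 0, 0, 1, 1]
print(cascade_pass(alice, bob, block_size=4), bob)
```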
Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.
Hutchinson, John M C; Gigerenzer, Gerd
2005-05-31
The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.
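A minimal sketch of the Take The Best heuristic described above: cues are examined in order of validity, the first cue that discriminates decides, and all remaining cues are ignored. The cue names and values in the example are invented for illustration.

```python
def take_the_best(cues_a, cues_b, cue_order):
    """cues_a/cues_b map cue names to binary values (1 = cue present).
    cue_order lists cues from highest to lowest validity.
    Returns 'A', 'B', or 'guess' if no cue discriminates."""
    for cue in cue_order:
        a, b = cues_a.get(cue, 0), cues_b.get(cue, 0)
        if a != b:                 # the first discriminating cue decides
            return 'A' if a > b else 'B'
    return 'guess'

# Which of two cities is larger? Cues ordered by validity (hypothetical).
order = ['capital', 'has_airport', 'university']
city_a = {'capital': 0, 'has_airport': 1, 'university': 1}
city_b = {'capital': 0, 'has_airport': 0, 'university': 1}
print(take_the_best(city_a, city_b, order))   # -> 'A'
```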
How a surgeon becomes superman by visualization of intelligently fused multi-modalities
NASA Astrophysics Data System (ADS)
Erat, Okan; Pauly, Olivier; Weidert, Simon; Thaller, Peter; Euler, Ekkehard; Mutschler, Wolf; Navab, Nassir; Fallavollita, Pascal
2013-03-01
Motivation: The existing visualization of the Camera augmented mobile C-arm (CamC) system does not have enough cues for depth information and presents the anatomical information in a confusing way to surgeons. Methods: We propose a method that segments anatomical information from X-ray and then augments it on the video images. To provide depth cues, pixels belonging to video images are classified as skin and object classes. The augmentation of anatomical information from X-ray is performed only when pixels have a larger probability of belonging to the skin class. Results: We tested our algorithm by displaying the new visualization to 2 expert surgeons and 1 medical student during three surgical workflow sequences of the interlocking of intramedullary nail procedure, namely: skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization when compared to the current alpha-blending overlay image displayed by CamC. The participants all agreed (100%) that occlusion and instrument tip position detection were immediately improved with our technique. When asked if our visualization has potential to replace the existing alpha-blending overlay during interlocking procedures, all participants did not hesitate to suggest an immediate integration of the visualization for the correct navigation and guidance of the procedure. Conclusion: Current alpha-blending visualizations lack proper depth cues and can be a source of confusion for the surgeons when performing surgery. Our visualization concept shows great potential in alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.
Narayanan, Shrikanth; Georgiou, Panayiotis G
2013-02-07
The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.
NASA Technical Reports Server (NTRS)
Zaychik, Kirill; Cardullo, Frank; George, Gary; Kelly, Lon C.
2009-01-01
In order to use the Hess Structural Model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, the motion cueing algorithm, and the vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by performing a comparison analysis of the model performance with and without the expanded motion feedback. The proposed methodology is composed of two stages. The first stage involves fine-tuning parameters of the original Hess structural model in order to match the actual control behavior recorded during the experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter tuning procedure utilizes a new automated parameter identification technique, which was developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage of the proposed methodology, the expanded motion feedback is added to the structural model. The resulting performance of the model is then compared to that of the original one. As proposed by Hess, metrics to evaluate the performance of the models include comparison against the crossover model standards imposed on the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of having the model of the motion system and motion cueing incorporated into the model of the human operator. It is also demonstrated that the crossover frequency and the phase margin of the expanded model are well within the limits imposed by the crossover model.
Social Crowding during Development Causes Changes in GnRH1 DNA Methylation.
Alvarado, Sebastian G; Lenkov, Kapa; Williams, Blake; Fernald, Russell D
2015-01-01
Gestational and developmental cues have important consequences for long-term health, behavior and adaptation to the environment. In addition, social stressors cause plastic molecular changes in the brain that underlie unique behavioral phenotypes that also modulate fitness. In the adult African cichlid, Astatotilapia burtoni, growth and social status of males are both directly regulated by social interactions in a dynamic social environment, which causes a suite of plastic changes in circuits, cells and gene transcription in the brain. We hypothesized that a possible mechanism underlying some molecular changes might be DNA methylation, a reversible modification made to cytosine nucleotides that is known to regulate gene function. Here we asked whether DNA methylation of the GnRH1 gene, the central regulator of the reproductive axis, was altered during development of A. burtoni. We measured changes in methylation state of the GnRH1 gene during normal development and following the gestational and developmental stress of social crowding. We found differential DNA methylation among developing juveniles at 14, 28, and 42 days old. Following gestational crowding of mouth brooding mothers, we saw differential methylation and transcription of GnRH1 in their offspring. Taken together, our data provide evidence for social control of GnRH1 developmental responses to gestational cues through DNA methylation.
The specificity of attentional biases by type of gambling: An eye-tracking study
Meitner, Amadeus; Sears, Christopher R.
2018-01-01
A growing body of research indicates that gamblers develop an attentional bias for gambling-related stimuli. Compared to research on substance use, however, few studies have examined attentional biases in gamblers using eye-gaze tracking, which has many advantages over other measures of attention. In addition, previous studies of attentional biases in gamblers have not directly matched type of gambler with personally-relevant gambling cues. The present study investigated the specificity of attentional biases for individual types of gambling using an eye-gaze tracking paradigm. Three groups of participants (poker players, video lottery terminal/slot machine players, and non-gambling controls) took part in one test session in which they viewed 25 sets of four images (poker, VLTs/slot machines, bingo, and board games). Participants' eye fixations were recorded throughout each 8-second presentation of the four images. The results indicated that, as predicted, the two gambling groups preferentially attended to their primary form of gambling, whereas control participants attended to board games more than gambling images. The findings have clinical implications for the treatment of individuals with gambling disorder. Understanding the importance of personally-salient gambling cues will inform the development of effective attentional bias modification treatments for problem gamblers. PMID:29385164
Julian, Kristin; Beard, Courtney; Schmidt, Norman B.; Powers, Mark B.; Smits, Jasper A. J.
2012-01-01
Cognitive theories suggest that social anxiety is maintained, in part, by an attentional bias toward threat. Recent research shows that a single session of attention modification training (AMP) reduces attention bias and vulnerability to a social stressor (Amir, Weber, Beard, Bomyea, & Taylor, 2008). In addition, exercise may augment the effects of attention training by its direct effects on attentional control and inhibition, thereby allowing participants receiving the AMP to more effectively disengage attention from the threatening cues and shift attention to the neutral cues. We attempted to replicate and extend previous findings by randomizing participants (N = 112) to a single session of: a) Exercise + attention training (EX + AMP); b) Rest + attention training (REST + AMP); c) Exercise + attention control condition (EX + ACC); or d) Rest + attention control condition (REST + ACC) prior to completing a public speaking challenge. We used identical assessment and training procedures to those employed by Amir et al. (2008). Results showed there was no effect of attention training on attention bias or anxiety reactivity to the speech challenge and no interactive effects of attention training and exercise on attention bias or anxiety reactivity to the speech challenge. The failure to replicate previous findings is discussed. PMID:22466022
Cis-regulatory Evolution of Chalcone-Synthase Expression in the Genus Arabidopsis
de Meaux, Juliette; Pop, A.; Mitchell-Olds, T.
2006-01-01
The contribution of cis-regulation to adaptive evolutionary change is believed to be essential, yet little is known about the evolutionary rules that govern regulatory sequences. Here, we characterize the short-term evolutionary dynamics of a cis-regulatory region within and among two closely related species, A. lyrata and A. halleri, and compare our findings to A. thaliana. We focused on the cis-regulatory region of chalcone synthase (CHS), a key enzyme involved in the synthesis of plant secondary metabolites. We observed patterns of nucleotide diversity that differ among species but do not depart from neutral expectations. Using intra- and interspecific F1 progeny, we have evaluated functional cis-regulatory variation in response to light and herbivory, environmental cues, which are known to induce CHS expression. We find that substantial cis-regulatory variation segregates within and among populations as well as between species, some of which results from interspecific genetic introgression. We further demonstrate that, in A. thaliana, CHS cis-regulation in response to herbivory is greater than in A. lyrata or A. halleri. Our work indicates that the evolutionary dynamics of a cis-regulatory region is characterized by pervasive functional variation, achieved mostly by modification of response modules to one but not all environmental cues. Our study did not detect the footprint of selection on this variation. PMID:17028316
Exploring the Role of Cell Wall-Related Genes and Polysaccharides during Plant Development.
Tucker, Matthew R; Lou, Haoyu; Aubert, Matthew K; Wilkinson, Laura G; Little, Alan; Houston, Kelly; Pinto, Sara C; Shirley, Neil J
2018-05-31
The majority of organs in plants are not established until after germination, when pluripotent stem cells in the growing apices give rise to daughter cells that proliferate and subsequently differentiate into new tissues and organ primordia. This remarkable capacity is not only restricted to the meristem, since maturing cells in many organs can also rapidly alter their identity depending on the cues they receive. One general feature of plant cell differentiation is a change in cell wall composition at the cell surface. Historically, this has been viewed as a downstream response to primary cues controlling differentiation, but a closer inspection of the wall suggests that it may play a much more active role. Specific polymers within the wall can act as substrates for modifications that impact receptor binding, signal mobility, and cell flexibility. Therefore, far from being a static barrier, the cell wall and its constituent polysaccharides can dictate signal transmission and perception, and directly contribute to a cell's capacity to differentiate. In this review, we re-visit the role of plant cell wall-related genes and polysaccharides during various stages of development, with a particular focus on how changes in cell wall machinery accompany the exit of cells from the stem cell niche.
Liu, Hsiang-Chin; Lämke, Jörn; Lin, Siou-Ying; Hung, Meng-Ju; Liu, Kuan-Ming; Charng, Yee-Yung; Bäurle, Isabel
2018-05-11
Plants can be primed by a stress cue to mount a faster or stronger activation of defense mechanisms upon a subsequent stress. A crucial component of such stress priming is the modified reactivation of genes upon recurring stress; however, the underlying mechanisms are poorly understood. Here, we report that dozens of Arabidopsis thaliana genes display transcriptional memory, i.e. stronger upregulation after a recurring heat stress, that lasts for at least three days. We define a set of transcription factors involved in this memory response and show that the transcriptional memory results in enhanced transcriptional activation within minutes after the onset of a heat stress cue. Further, we show that the transcriptional memory is active in all tissues. It may last for up to a week, and is associated with histone H3 lysine 4 hyper-methylation during this time. This transcriptional memory is cis-encoded, as we identify a promoter fragment that confers memory onto a heterologous gene. In summary, heat-induced transcriptional memory is a widespread and sustained response, and our study provides a framework for future mechanistic studies of somatic stress memory in higher plants. This article is protected by copyright. All rights reserved.
Rossi, Tullio; Nagelkerken, Ivan; Connell, Sean D.
2016-01-01
The dispersal of larvae and their settlement to suitable habitat is fundamental to the replenishment of marine populations and the communities in which they live. Sound plays an important role in this process because for larvae of various species, it acts as an orientational cue towards suitable settlement habitat. Because marine sounds are largely of biological origin, they not only carry information about the location of potential habitat, but also information about the quality of habitat. While ocean acidification is known to affect a wide range of marine organisms and processes, its effect on marine soundscapes and its reception by navigating oceanic larvae remains unknown. Here, we show that ocean acidification causes a switch in role of present-day soundscapes from attractor to repellent in the auditory preferences in a temperate larval fish. Using natural CO2 vents as analogues of future ocean conditions, we further reveal that ocean acidification can impact marine soundscapes by profoundly diminishing their biological sound production. An altered soundscape poorer in biological cues indirectly penalizes oceanic larvae at settlement stage because both control and CO2-treated fish larvae showed lack of any response to such future soundscapes. These indirect and direct effects of ocean acidification put at risk the complex processes of larval dispersal and settlement. PMID:26763221
Attentional retraining can reduce chocolate consumption.
Kemps, Eva; Tiggemann, Marika; Orr, Jenna; Grear, Justine
2014-03-01
There is emerging evidence that attentional biases are related to the consumption of substances such as alcohol and tobacco, and that attentional bias modification can reduce unwanted consumption of these substances. We present evidence for the first time to our knowledge that the same logical argument applies in the food and eating domain. We conducted two experiments that used a modified dot probe paradigm to train undergraduate women to direct their attention toward ("attend") or away from ("avoid") food cues (i.e., pictures of chocolate). In Experiment 1, attentional bias for chocolate cues increased in the "attend" group, and decreased in the "avoid" group. Experiment 2 showed that these training effects generalized to novel, previously unseen chocolate pictures. Importantly, attentional retraining affected chocolate consumption and craving. In both experiments, participants in the "avoid" group ate less chocolate in a so-called taste test than did those in the "attend" group. In addition, in Experiment 2, but not in Experiment 1, the "attend" group reported stronger chocolate cravings following training, whereas the "avoid" group reported less intense cravings. The results support predictions of cognitive-motivational models of craving and consumption that attentional biases play a causal role in consumption behavior. Furthermore, they present a promising avenue for tackling unwanted food cravings and (over)eating. © 2013 American Psychological Association
Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E
2017-04-15
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
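The encode-then-decode idea can be sketched as follows: learn sparse spatial components from a scan, treat the per-timepoint sparse codes as network time-series weights, and classify the task condition from those weights. This sketch uses scikit-learn's DictionaryLearning and LogisticRegression as generic stand-ins on synthetic data; it is not the paper's L1 Regularized Learning or K-SVD implementation, and the data and labels are invented.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a single scan: time points x voxels, plus a
# per-timepoint task label (0 = rest, 1 = video, 2 = audio).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))          # hypothetical fMRI data
y = rng.integers(0, 3, size=120)         # hypothetical task labels

# Encode: learn sparse spatial components; the sparse codes per time point
# play the role of the network time-series weights.
dl = DictionaryLearning(n_components=10, alpha=1.0,
                        transform_algorithm='lasso_lars', random_state=0)
weights = dl.fit_transform(X)            # (n_timepoints, n_components)

# Decode: predict the task condition from the component weights.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, weights, y, cv=5).mean())
```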
Rueckauer, Bodo; Delbruck, Tobi
2016-01-01
In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
Ping, Lichuan; Wang, Ningyuan; Tang, Guofang; Lu, Thomas; Yin, Li; Tu, Wenhe; Fu, Qian-Jie
2017-09-01
Because of limited spectral resolution, Mandarin-speaking cochlear implant (CI) users have difficulty perceiving fundamental frequency (F0) cues that are important to lexical tone recognition. To improve Mandarin tone recognition in CI users, we implemented and evaluated a novel real-time algorithm (C-tone) to enhance the amplitude contour, which is strongly correlated with the F0 contour. The C-tone algorithm was implemented in clinical processors and evaluated in eight users of the Nurotron NSP-60 CI system. Subjects were given 2 weeks of experience with C-tone. Recognition of Chinese tones, monosyllables, and disyllables in quiet was measured with and without the C-tone algorithm. Subjective quality ratings were also obtained for C-tone. After 2 weeks of experience with C-tone, there were small but significant improvements in recognition of lexical tones, monosyllables, and disyllables (P < 0.05 in all cases). Among lexical tones, the largest improvements were observed for Tone 3 (falling-rising) and the smallest for Tone 4 (falling). Improvements with C-tone were greater for disyllables than for monosyllables. Subjective quality ratings showed no strong preference for or against C-tone, except for perception of own voice, where C-tone was preferred. The real-time C-tone algorithm provided small but significant improvements for speech performance in quiet with no change in sound quality. Pre-processing algorithms to reduce noise and better real-time F0 extraction would improve the benefits of C-tone in complex listening environments. Chinese CI users' speech recognition in quiet can be significantly improved by modifying the amplitude contour to better resemble the F0 contour.
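As a hedged illustration of the general idea (not the C-tone algorithm itself), the sketch below extracts a frame-based RMS amplitude envelope and expands its contour around a slowly varying mean, so that amplitude modulation, which correlates with the F0 contour, is exaggerated. The frame length, smoothing window, and gain are arbitrary assumptions.

```python
import numpy as np

def enhance_amplitude_contour(signal, fs, frame_ms=8.0, gain=1.5):
    """Frame-based sketch: measure each frame's RMS envelope, expand the
    envelope contour around its running mean, and rescale the frame.
    'gain' controls how strongly the contour is exaggerated (assumed)."""
    frame = max(1, int(fs * frame_ms / 1000))
    out = signal.astype(float).copy()
    n_frames = len(signal) // frame
    rms = np.array([np.sqrt(np.mean(out[i*frame:(i+1)*frame] ** 2) + 1e-12)
                    for i in range(n_frames)])
    win = min(25, max(1, n_frames))                    # slow running average
    mean_rms = np.convolve(rms, np.ones(win) / win, mode='same')
    target = mean_rms * (rms / mean_rms) ** gain       # expanded contour
    for i in range(n_frames):
        out[i*frame:(i+1)*frame] *= target[i] / rms[i]
    return out
```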
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumarasiri, Akila, E-mail: akumara1@hfhs.org; Siddiqui, Farzan; Liu, Chang
2014-12-15
Purpose: To evaluate the clinical potential of deformable image registration (DIR)-based automatic propagation of physician-drawn contours from a planning CT to midtreatment CT images for head and neck (H and N) adaptive radiotherapy. Methods: Ten H and N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken approximately 3–4 weeks into treatment, were considered retrospectively. Clinically relevant organs and targets were manually delineated by a radiation oncologist on both sets of images. Four commercial DIR algorithms, two B-spline-based and two Demons-based, were used to deform CT1 and the relevant contour sets onto corresponding CT2 images. Agreement of the propagated contours with manually drawn contours on CT2 was visually rated by four radiation oncologists on a scale from 1 to 5, the volume overlap was quantified using Dice coefficients, and a distance analysis was done using center of mass (CoM) displacements and Hausdorff distances (HDs). Performance of these four commercial algorithms was validated using a parameter-optimized Elastix DIR algorithm. Results: All algorithms attained Dice coefficients of >0.85 for organs with clear boundaries and those with volumes >9 cm³. Organs with volumes <3 cm³ and/or those with poorly defined boundaries showed Dice coefficients of ∼0.5–0.6. For the propagation of small organs (<3 cm³), the B-spline-based algorithms showed higher mean Dice values (Dice = 0.60) than the Demons-based algorithms (Dice = 0.54). For the gross and planning target volumes, the respective mean Dice coefficients were 0.8 and 0.9. There was no statistically significant difference in the Dice coefficients, CoM, or HD among the investigated DIR algorithms. The mean radiation oncologist visual scores of the four algorithms ranged from 3.2 to 3.8, which indicated that the quality of transferred contours was "clinically acceptable with minor modification or major modification in a small number of contours." Conclusions: Use of DIR-based contour propagation in the routine clinical setting is expected to increase the efficiency of H and N replanning, reducing the amount of time needed for manual target and organ delineations.
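The three agreement metrics used above can be computed from binary masks roughly as follows; the brute-force Hausdorff computation is only practical for small structures, and the voxel-spacing argument is an assumption about how physical distances are obtained.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def com_displacement(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distance between the centers of mass of two binary masks
    (in mm if 'spacing' gives the voxel size)."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.array(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.array(spacing)
    return np.linalg.norm(ca - cb)

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between mask voxels (brute force;
    fine for small structures, memory-heavy for large ones)."""
    pa = np.array(np.nonzero(a)).T * np.array(spacing)
    pb = np.array(np.nonzero(b)).T * np.array(spacing)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```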
Experimental Phasing: Substructure Solution and Density Modification as Implemented in SHELX.
Thorn, Andrea
2017-01-01
This chapter describes experimental phasing methods as implemented in SHELX. After introducing fundamental concepts underlying all experimental phasing approaches, the methods used by SHELXC/D/E are described in greater detail, such as dual-space direct methods, Patterson seeding and density modification with the sphere of influence algorithm. Intensity differences from data for experimental phasing can also be used for the generation and usage of difference maps with ANODE for validation and phasing purposes. A short section describes how molecular replacement can be combined with experimental phasing methods. The second half covers practical challenges, such as prerequisites for successful experimental phasing, evaluation of potential solutions, and what to do if substructure search or density modification fails. It is also shown how auto-tracing in SHELXE can improve automation and how it ties in with automatic model building after phasing.
Improvements to the FASTEX flutter analysis computer code
NASA Technical Reports Server (NTRS)
Taylor, Ronald F.
1987-01-01
Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem size capacity of FASTEX, reduce run times by modification of the modal interpolation procedure, and to add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VAX operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, display and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.
Retrospective Revaluation of Associative Retroactive Cue Interference
Miguez, Gonzalo; Laborda, Mario A.; Miller, Ralph R.
2013-01-01
Two fear-conditioning experiments with rats assessed whether retrospective revaluation, which has been observed in cue competition (i.e., when compounded cues are followed with an outcome), can also be observed in retroactive cue interference (i.e., when different cues are reinforced in separate phases with the same outcome). Experiment 1 found that after inducing retroactive cue interference (i.e., X-outcome followed by A-outcome), nonreinforced presentations of the interfering cue (A) decreases interference with responding to the target cue (X), just as has been observed in retrospective revaluation experiments in cue competition. Using the opposite manipulation (i.e., adding reinforced presentations of A), Experiment 2 demonstrated that after inducing retroactive cue interference, additional reinforced presentations of the interfering cue (A) increases interference with responding to the target cue (X); alternatively stated, the amount of interference increases with the amount of training with the interfering cue. Thus, both types of retrospective revaluation occur in retroactive cue competition. The results are discussed in terms of the possibility that similar associative mechanisms underlie cue competition and cue interference. PMID:24142799
Neural substrates of resisting craving during cigarette cue exposure.
Brody, Arthur L; Mandelkern, Mark A; Olmstead, Richard E; Jou, Jennifer; Tiongson, Emmanuelle; Allen, Valerie; Scheibal, David; London, Edythe D; Monterosso, John R; Tiffany, Stephen T; Korb, Alex; Gan, Joanna J; Cohen, Mark S
2007-09-15
In cigarette smokers, the most commonly reported areas of brain activation during visual cigarette cue exposure are the prefrontal, anterior cingulate, and visual cortices. We sought to determine changes in brain activity in response to cigarette cues when smokers actively resist craving. Forty-two tobacco-dependent smokers underwent functional magnetic resonance imaging, during which they were presented with videotaped cues. Three cue presentation conditions were tested: cigarette cues with subjects allowing themselves to crave (cigarette cue crave), cigarette cues with the instruction to resist craving (cigarette cue resist), and matched neutral cues. Activation was found in the cigarette cue resist (compared with the cigarette cue crave) condition in the left dorsal anterior cingulate cortex (ACC), posterior cingulate cortex (PCC), and precuneus. Lower magnetic resonance signal for the cigarette cue resist condition was found in the cuneus bilaterally, left lateral occipital gyrus, and right postcentral gyrus. These relative activations and deactivations were more robust when the cigarette cue resist condition was compared with the neutral cue condition. Suppressing craving during cigarette cue exposure involves activation of limbic (and related) brain regions and deactivation of primary sensory and motor cortices.
Berryhill, Marian E; Richmond, Lauren L; Shay, Cara S; Olson, Ingrid R
2012-01-01
It is well known that visual working memory (VWM) performance is modulated by attentional cues presented during encoding. Interestingly, retrospective cues presented after encoding, but prior to the test phase also improve performance. This improvement in performance is termed the retro-cue benefit. We investigated whether the retro-cue benefit is sensitive to cue type, whether participants were aware of their improvement in performance due to the retro-cue, and whether the effect was under strategic control. Experiment 1 compared the potential cueing benefits of abrupt onset retro-cues relying on bottom-up attention, number retro-cues relying on top-down attention, and arrow retro-cues, relying on a mixture of both. We found a significant retro-cue effect only for arrow retro-cues. In Experiment 2, we tested participants' awareness of their use of the informative retro-cue and found that they were aware of their improved performance. In Experiment 3, we asked whether participants have strategic control over the retro-cue. The retro-cue was difficult to ignore, suggesting that strategic control is low. The retro-cue effect appears to be within conscious awareness but not under full strategic control.
Image contrast enhancement using adjacent-blocks-based modification for local histogram equalization
NASA Astrophysics Data System (ADS)
Wang, Yang; Pan, Zhibin
2017-11-01
Infrared images usually have some non-ideal characteristics such as weak target-to-background contrast and strong noise. Because of these characteristics, it is necessary to apply the contrast enhancement algorithm to improve the visual quality of infrared images. Histogram equalization (HE) algorithm is a widely used contrast enhancement algorithm due to its effectiveness and simple implementation. But a drawback of HE algorithm is that the local contrast of an image cannot be equally enhanced. Local histogram equalization algorithms are proved to be the effective techniques for local image contrast enhancement. However, over-enhancement of noise and artifacts can be easily found in the local histogram equalization enhanced images. In this paper, a new contrast enhancement technique based on local histogram equalization algorithm is proposed to overcome the drawbacks mentioned above. The input images are segmented into three kinds of overlapped sub-blocks using the gradients of them. To overcome the over-enhancement effect, the histograms of these sub-blocks are then modified by adjacent sub-blocks. We pay more attention to improve the contrast of detail information while the brightness of the flat region in these sub-blocks is well preserved. It will be shown that the proposed algorithm outperforms other related algorithms by enhancing the local contrast without introducing over-enhancement effects and additional noise.
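A simplified sketch of block-wise equalization with neighbor-blended histograms, the general mechanism the paper builds on: each block's mapping is derived from its own histogram mixed with those of adjacent blocks, which damps over-enhancement at block boundaries. The fixed block size and blending weight are assumptions; the paper's gradient-based segmentation into three kinds of overlapped sub-blocks is not reproduced.

```python
import numpy as np

def blended_block_equalization(img, block=32, neighbor_weight=0.5):
    """Equalize each block with a histogram mixed from the block itself and
    its adjacent blocks. img is a 2D uint8 array; margins not covered by a
    full block are left unchanged."""
    img = img.astype(np.uint8)
    out = img.copy()
    h, w = img.shape
    ny, nx = h // block, w // block
    hists = np.zeros((ny, nx, 256))
    for by in range(ny):
        for bx in range(nx):
            patch = img[by*block:(by+1)*block, bx*block:(bx+1)*block]
            hists[by, bx] = np.bincount(patch.ravel(), minlength=256)
    for by in range(ny):
        for bx in range(nx):
            mixed = hists[by, bx].copy()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                yy, xx = by + dy, bx + dx
                if 0 <= yy < ny and 0 <= xx < nx:
                    mixed += neighbor_weight * hists[yy, xx]
            cdf = np.cumsum(mixed) / mixed.sum()
            lut = np.round(255 * cdf).astype(np.uint8)
            patch = img[by*block:(by+1)*block, bx*block:(bx+1)*block]
            out[by*block:(by+1)*block, bx*block:(bx+1)*block] = lut[patch]
    return out
```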
Robust algorithm for aligning two-dimensional chromatograms.
Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel
2012-11-06
Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743 and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
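One way to realize alignment-point-based warping is sketched below: interpolate a smooth displacement field through the user-specified matching points and resample the target chromatogram onto the reference retention-time grid. SciPy's griddata and map_coordinates are used as generic stand-ins for the authors' method; at least a few non-collinear alignment points are assumed, and all names are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import map_coordinates

def align_chromatogram(target, ref_points, target_points):
    """target: 2D GC x GC intensity array (pixels). ref_points/target_points:
    (n, 2) arrays of matching alignment-point pixel coordinates (row, col)
    in the reference and target chromatograms. Returns the warped target."""
    ny, nx = target.shape
    gy, gx = np.mgrid[0:ny, 0:nx]
    shift = target_points - ref_points          # where each point moved to
    # Interpolate the displacement field over the whole reference grid;
    # nearest-neighbour fill outside the convex hull of alignment points.
    dy = griddata(ref_points, shift[:, 0], (gy, gx), method='linear')
    dx = griddata(ref_points, shift[:, 1], (gy, gx), method='linear')
    dy_n = griddata(ref_points, shift[:, 0], (gy, gx), method='nearest')
    dx_n = griddata(ref_points, shift[:, 1], (gy, gx), method='nearest')
    dy = np.where(np.isnan(dy), dy_n, dy)
    dx = np.where(np.isnan(dx), dx_n, dx)
    # Resample the target at the displaced coordinates (backward mapping).
    return map_coordinates(target, [gy + dy, gx + dx], order=1, mode='nearest')
```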
Kaliman, Ilya A; Krylov, Anna I
2017-04-30
A new hardware-agnostic contraction algorithm for tensors of arbitrary symmetry and sparsity is presented. The algorithm is implemented as a stand-alone open-source code libxm. This code is also integrated with general tensor library libtensor and with the Q-Chem quantum-chemistry package. An overview of the algorithm, its implementation, and benchmarks are presented. Similarly to other tensor software, the algorithm exploits efficient matrix multiplication libraries and assumes that tensors are stored in a block-tensor form. The distinguishing features of the algorithm are: (i) efficient repackaging of the individual blocks into large matrices and back, which affords efficient graphics processing unit (GPU)-enabled calculations without modifications of higher-level codes; (ii) fully asynchronous data transfer between disk storage and fast memory. The algorithm enables canonical all-electron coupled-cluster and equation-of-motion coupled-cluster calculations with single and double substitutions (CCSD and EOM-CCSD) with over 1000 basis functions on a single quad-GPU machine. We show that the algorithm exhibits predicted theoretical scaling for canonical CCSD calculations, O(N 6 ), irrespective of the data size on disk. © 2017 Wiley Periodicals, Inc.
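The repackaging idea can be illustrated with a toy block-sparse matrix product: for each output block, the contributing blocks are gathered into two large dense matrices so that a single matmul performs the whole contraction for that block. This omits libxm's actual features (tensor symmetry, disk streaming, GPU offload); it only shows the pack-multiply-scatter pattern on assumed data structures.

```python
import numpy as np

def block_contract(A, B, block=64, nblocks=4):
    """Toy block-sparse matrix product C = A @ B, with A and B stored as
    dicts {(bi, bk): dense block}. Missing blocks are implicit zeros."""
    C = {}
    for bi in range(nblocks):
        for bj in range(nblocks):
            ks = [bk for bk in range(nblocks)
                  if (bi, bk) in A and (bk, bj) in B]
            if not ks:
                continue
            # Repack: concatenate contributing blocks along the contracted
            # dimension, then do one large matrix multiplication.
            left = np.hstack([A[(bi, bk)] for bk in ks])    # (block, block*len)
            right = np.vstack([B[(bk, bj)] for bk in ks])   # (block*len, block)
            C[(bi, bj)] = left @ right
    return C

# Example with two nonzero blocks in each operand
rng = np.random.default_rng(1)
A = {(0, 0): rng.normal(size=(64, 64)), (0, 1): rng.normal(size=(64, 64))}
B = {(0, 0): rng.normal(size=(64, 64)), (1, 0): rng.normal(size=(64, 64))}
C = block_contract(A, B)
print(sorted(C.keys()))   # -> [(0, 0)]
```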
Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve
2017-12-01
In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M²IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag, (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification, and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of the M²IL. Experiments and analyses in many practical applications prove the effectiveness of the M²IL.
Meshing complex macro-scale objects into self-assembling bricks
Hacohen, Adar; Hanniel, Iddo; Nikulshin, Yasha; Wolfus, Shuki; Abu-Horowitz, Almogit; Bachelet, Ido
2015-01-01
Self-assembly provides an information-economical route to the fabrication of objects at virtually all scales. However, there is no known algorithm to program self-assembly in macro-scale, solid, complex 3D objects. Here such an algorithm is described, which is inspired by the molecular assembly of DNA, and based on bricks designed by tetrahedral meshing of arbitrary objects. Assembly rules are encoded by topographic cues imprinted on brick faces while attraction between bricks is provided by embedded magnets. The bricks can then be mixed in a container and agitated, leading to properly assembled objects at high yields and zero errors. The system and its assembly dynamics were characterized by video and audio analysis, enabling the precise time- and space-resolved characterization of its performance and accuracy. Improved designs inspired by our system could lead to successful implementation of self-assembly at the macro-scale, allowing rapid, on-demand fabrication of objects without the need for assembly lines. PMID:26226488
Flow Navigation by Smart Microswimmers via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Colabrese, Simona; Gustavsson, Kristian; Celani, Antonio; Biferale, Luca
2017-04-01
Smart active particles can acquire some limited knowledge of the fluid environment from simple mechanical cues and exert a control on their preferred steering direction. Their goal is to learn the best way to navigate by exploiting the underlying flow whenever possible. As an example, we focus our attention on smart gravitactic swimmers. These are active particles whose task is to reach the highest altitude within some time horizon, given the constraints enforced by fluid mechanics. By means of numerical experiments, we show that swimmers indeed learn nearly optimal strategies just by experience. A reinforcement learning algorithm allows particles to learn effective strategies even in difficult situations when, in the absence of control, they would end up being trapped by flow structures. These strategies are highly nontrivial and cannot be easily guessed in advance. This Letter illustrates the potential of reinforcement learning algorithms to model adaptive behavior in complex flows and paves the way towards the engineering of smart microswimmers that solve difficult navigation problems.
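A minimal tabular Q-learning sketch in the spirit of the approach: the swimmer observes a coarse flow cue (state), chooses a preferred steering direction (action), and is rewarded for altitude gained. The environment below is a toy stand-in, not the paper's fluid dynamics or gravitactic swimmer model, and all constants are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 3          # coarse flow cue x preferred direction
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Toy environment: the reward (altitude gained) depends on how well the
    chosen steering direction matches the local flow cue; stand-in dynamics."""
    reward = 1.0 if action == state else -0.2 + 0.1 * rng.normal()
    next_state = rng.integers(n_states)        # flow cue decorrelates quickly
    return next_state, reward

state = rng.integers(n_states)
for t in range(20000):
    # epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # standard tabular Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.argmax(axis=1))   # learned preferred direction for each flow cue
```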
NASA Astrophysics Data System (ADS)
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-01
We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
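As a hedged illustration of one step of such a scheme, an l1-norm objective over line-inductance modifications can be recast as a linear program by introducing slack variables bounding the absolute values. The constraint matrices below are random stand-ins for linearized thermal-limit constraints, not data from the Polish grid model, and the variable names are assumptions.

```python
# Sketch of one LP of the kind used in the succession described above:
# minimize ||dx||_1 subject to A dx <= b, via slack variables t >= |dx|.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_lines, n_constraints = 20, 8
A = rng.normal(size=(n_constraints, n_lines))     # stand-in linearized sensitivities
b = -np.abs(rng.normal(size=n_constraints))       # constraints violated at dx = 0

I = np.eye(n_lines)
Z = np.zeros((n_constraints, n_lines))
c = np.concatenate([np.zeros(n_lines), np.ones(n_lines)])   # minimize sum(t)
A_ub = np.block([[ I, -I],     #  dx - t <= 0
                 [-I, -I],     # -dx - t <= 0
                 [ A,  Z]])    #  A dx <= b  (relieve congestion)
b_ub = np.concatenate([np.zeros(2 * n_lines), b])
bounds = [(None, None)] * n_lines + [(0, None)] * n_lines

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
dx = res.x[:n_lines]
print("lines modified:", np.count_nonzero(np.abs(dx) > 1e-6))   # typically sparse
```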
Sabooh, M Fazli; Iqbal, Nadeem; Khan, Mukhtaj; Khan, Muslim; Maqbool, H F
2018-05-01
This study examines an accurate and efficient computational method for the identification of 5-methylcytosine (m5C) sites in RNA modification. The occurrence of m5C plays a vital role in a number of biological processes, and a better comprehension of its biological functions and mechanism requires that m5C sites in RNA be recognized precisely. Laboratory techniques for identifying m5C sites exist, but they demand considerable time and resources. This study develops a new computational method for extracting features of the RNA sequence. The RNA sequence is first encoded via a composite feature vector; the minimum-redundancy-maximum-relevance algorithm is then used to select discriminative features. Classification is performed with a support vector machine evaluated by a jackknife cross-validation test. The suggested method efficiently distinguishes m5C sites from non-m5C sites, achieving an accuracy of 93.33% with a sensitivity of 90.0% and a specificity of 96.66% on benchmark datasets. These results show that the proposed algorithm offers significantly better identification performance than existing computational techniques. This study extends knowledge about the occurrence sites of RNA modification, paving the way for a better comprehension of its biological functions and mechanism. Copyright © 2018 Elsevier Ltd. All rights reserved.
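A sketch of this kind of pipeline, under the assumption that the composite feature matrix has already been computed, is given below. The minimum-redundancy-maximum-relevance step is approximated here by a mutual-information ranking, and the jackknife test is realized as leave-one-out cross-validation; the data are random stand-ins.

```python
# Sketch of the described pipeline: (assumed) composite features -> feature
# selection -> SVM, scored by leave-one-out ("jackknife") cross-validation.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 60))            # stand-in composite feature vectors
y = rng.integers(0, 2, size=120)          # 1 = m5C site, 0 = non-m5C site

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=20),   # mRMR approximated by MI ranking
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()   # jackknife accuracy
print(f"leave-one-out accuracy: {acc:.3f}")
```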
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates all may be calculated with the simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
Enhancing scattering images for orientation recovery with diffusion map
Winter, Martin; Saalmann, Ulf; Rost, Jan M.
2016-02-12
We explore the possibility of orientation recovery in single-molecule coherent diffractive imaging with a diffusion map. This algorithm approximates the Laplace-Beltrami operator, which we diagonalize with a metric that corresponds to the mapping of Euler angles onto scattering images. While this approach is suitable for images of objects with specific properties, we show why it fails for realistic molecules. Here, we introduce a modification of the form factor in the scattering images which facilitates the orientation recovery and should be suitable for all recovery algorithms based on the distance between individual images. (C) 2016 Optical Society of America
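A standard diffusion-map construction of the kind referred to above can be sketched as follows: a Gaussian kernel on pairwise image distances, normalized so that the resulting operator approximates the Laplace-Beltrami operator, followed by an eigendecomposition. The kernel scale `eps` and the image data are assumptions for illustration.

```python
# Sketch of a standard diffusion-map embedding of scattering images, with the
# alpha = 1 normalization that approximates the Laplace-Beltrami operator.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map(images, eps, n_components=3):
    """images: (n_images, n_pixels) array of flattened scattering patterns."""
    d2 = cdist(images, images, metric="sqeuclidean")
    K = np.exp(-d2 / eps)                       # Gaussian affinity
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                      # alpha = 1: remove density effects
    P = K / K.sum(axis=1, keepdims=True)        # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector; the next coordinates parametrize
    # the manifold (here, molecular orientation) underlying the images.
    return vecs[:, order[1:n_components + 1]].real

rng = np.random.default_rng(0)
embedding = diffusion_map(rng.normal(size=(200, 64)), eps=50.0)
```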
A self-testing dynamic RAM chip
NASA Astrophysics Data System (ADS)
You, Y.; Hayes, J. P.
1985-02-01
A novel approach to making very large dynamic RAM chips self-testing is presented. It is based on two main concepts: on-chip generation of regular test sequences with very high fault coverage, and concurrent testing of storage-cell arrays to reduce overall testing time. The failure modes of a typical 64 K RAM employing one-transistor cells are analyzed to identify their test requirements. A comprehensive test generation algorithm that can be implemented with minimal modification to a standard cell layout is derived. The self-checking peripheral circuits necessary to implement this testing algorithm are described, and the self-testing RAM is briefly evaluated.
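For illustration only, a generic March C- sequence over a simulated cell array conveys the flavor of the regular, high-coverage test sequences that such self-test schemes generate on chip; it is not the specific test generation algorithm derived in the paper.

```python
# Generic March C- style memory test over a simulated RAM array (not the
# paper's on-chip algorithm). Each pass reads the expected value and writes
# its complement, in ascending (^) or descending (v) address order.
def march_c_minus(ram):
    """ram: mutable list of bits. Returns True if no fault is detected."""
    n = len(ram)
    ok = True
    for i in range(n):                                   # ^(w0)
        ram[i] = 0
    for i in range(n):                                   # ^(r0, w1)
        ok &= ram[i] == 0; ram[i] = 1
    for i in range(n):                                   # ^(r1, w0)
        ok &= ram[i] == 1; ram[i] = 0
    for i in reversed(range(n)):                         # v(r0, w1)
        ok &= ram[i] == 0; ram[i] = 1
    for i in reversed(range(n)):                         # v(r1, w0)
        ok &= ram[i] == 1; ram[i] = 0
    for i in range(n):                                   # ^(r0)
        ok &= ram[i] == 0
    return ok

print(march_c_minus([0] * 64))                           # fault-free memory passes
```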
UAV Control on the Basis of 3D Landmark Bearing-Only Observations
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-01-01
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks’ position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations. PMID:26633394
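A generic RANSAC loop of the kind alluded to, here estimating a 3D translation from landmark correspondences contaminated by outlier matches, might be sketched as follows. The paper's 3D RANSAC additionally exploits Kalman-filter predictions of the landmark positions, which this sketch omits; names and thresholds are assumptions.

```python
# Generic RANSAC sketch: robustly estimate a 3D translation mapping predicted
# landmark positions onto observed ones despite outlier matches.
import numpy as np

def ransac_translation(pred, obs, n_iter=200, thresh=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_t, best_inliers = None, np.zeros(len(pred), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(pred))                     # minimal sample: one match
        t = obs[i] - pred[i]
        inliers = np.linalg.norm(obs - (pred + t), axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            best_t = (obs[inliers] - pred[inliers]).mean(axis=0)   # refit on inliers
    return best_t, best_inliers

rng = np.random.default_rng(1)
pred = rng.normal(size=(30, 3))
obs = pred + np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=(30, 3))
obs[:5] += rng.normal(scale=10.0, size=(5, 3))          # simulated outlier matches
t_hat, inliers = ransac_translation(pred, obs)
```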
Adaptive Wing Camber Optimization: A Periodic Perturbation Approach
NASA Technical Reports Server (NTRS)
Espana, Martin; Gilyard, Glenn
1994-01-01
Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.
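The periodic-perturbation idea can be sketched as a minimal extremum-seeking loop: a sinusoidal dither on the camber command is synchronously demodulated against the measured performance index to estimate the local gradient, which then drives the command toward the optimum. The quadratic performance model and all gains below are illustrative assumptions, not values from the flight experiment.

```python
# Minimal extremum-seeking sketch of the periodic-perturbation approach.
import numpy as np

def J(camber):                        # stand-in performance index (optimum at 2.0)
    return -(camber - 2.0) ** 2

a, omega, dt = 0.05, 2.0, 0.01        # dither amplitude, frequency, time step
alpha_f, k = 0.2, 0.1                 # low-pass bandwidth, adaptation gain
camber, grad_est = 0.0, 0.0
for n in range(100_000):
    t = n * dt
    y = J(camber + a * np.sin(omega * t))              # measured performance index
    demod = (2.0 / a) * y * np.sin(omega * t)          # synchronous demodulation
    grad_est += dt * alpha_f * (demod - grad_est)      # low-pass -> gradient estimate
    camber += dt * k * grad_est                        # climb the estimated gradient
print(f"converged camber command: {camber:.2f}")       # approaches the optimum
```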
NASA Astrophysics Data System (ADS)
Darker, Iain T.; Kuo, Paul; Yang, Ming Yuan; Blechko, Anastassia; Grecos, Christos; Makris, Dimitrios; Nebel, Jean-Christophe; Gale, Alastair G.
2009-05-01
Findings from the current UK national research programme, MEDUSA (Multi Environment Deployable Universal Software Application), are presented. MEDUSA brings together two approaches to facilitate the design of an automatic, CCTV-based firearm detection system: a psychological approach, to elicit strategies used by CCTV operators, and a machine-vision approach, to identify key cues derived from camera imagery. Potentially effective human- and machine-based strategies have been identified; these will form elements of the final system. The efficacy of these algorithms in discriminating between firearms and matched distractor objects has been tested on staged CCTV footage. Early results indicate the potential of this combined approach.
NASA Runway Incursion Prevention System (RIPS) Dallas-Fort Worth Demonstration Performance Analysis
NASA Technical Reports Server (NTRS)
Cassell, Rick; Evers, Carl; Esche, Jeff; Sleep, Benjamin; Jones, Denise R. (Technical Monitor)
2002-01-01
NASA's Aviation Safety Program Synthetic Vision System project conducted a Runway Incursion Prevention System (RIPS) flight test at the Dallas-Fort Worth International Airport in October 2000. The RIPS research system includes advanced displays, airport surveillance system, data links, positioning system, and alerting algorithms to provide pilots with enhanced situational awareness, supplemental guidance cues, a real-time display of traffic information, and warnings of runway incursions. This report describes the aircraft and ground based runway incursion alerting systems and traffic positioning systems (Automatic Dependent Surveillance - Broadcast (ADS-B) and Traffic Information Service - Broadcast (TIS-B)). A performance analysis of these systems is also presented.
Applications of conducting polymers and their issues in biomedical engineering
Ravichandran, Rajeswari; Sundarrajan, Subramanian; Venugopal, Jayarama Reddy; Mukherjee, Shayanti; Ramakrishna, Seeram
2010-01-01
Conducting polymers (CPs) have attracted much interest as suitable matrices of biomolecules and have been used to enhance the stability, speed and sensitivity of various biomedical devices. Moreover, CPs are inexpensive, easy to synthesize and versatile because their properties can be readily modulated by (i) surface functionalization techniques and (ii) the use of a wide range of molecules that can be entrapped or used as dopants. This paper discusses the various surface modifications of the CP that can be employed in order to impart physico-chemical and biological guidance cues that promote cell adhesion/proliferation at the polymer–tissue interface. This ability of the CP to induce various cellular mechanisms widens its applications in medical fields and bioengineering. PMID:20610422
Sachse, Silke; Beshel, Jennifer
2016-10-01
All animals must eat in order to survive, but first they must successfully locate and appraise food resources in a manner consonant with their needs. To accomplish this, external sensory information, in particular olfactory food cues, needs to be detected and appropriately categorized. Recent advances in Drosophila point to the existence of parallel processing circuits within the central brain that encode odor valence, supporting approach and avoidance behaviors. Strikingly, many elements within these neural systems are subject to modification as a function of the fly's satiety state. In this review we describe those advances and their potential impact on the decision to feed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Visually induced self-motion sensation adapts rapidly to left-right reversal of vision
NASA Technical Reports Server (NTRS)
Oman, C. M.; Bock, O. L.
1981-01-01
Three experiments were conducted using 15 adult volunteers with no overt oculomotor or vestibular disorders. In all experiments, left-right vision reversal was achieved using prism goggles, which permitted a binocular field of vision subtending approximately 45 deg horizontally and 28 deg vertically. In all experiments, circularvection (CV) was tested before and immediately after a period of exposure to reversed vision. After one to three hours of active movement while wearing vision-reversing goggles, 10 of 15 (stationary) human subjects viewing a moving stripe display experienced a self-rotation illusion in the same direction as seen stripe motion, rather than in the opposite (normal) direction, demonstrating that the central neural pathways that process visual self-rotation cues can undergo rapid adaptive modification.
Individual differences in working memory capacity predict visual attention allocation.
Bleckley, M Kathryn; Durso, Francis T; Crutchfield, Jerry M; Engle, Randall W; Khanna, Maya M
2003-12-01
To the extent that individual differences in working memory capacity (WMC) reflect differences in attention (Baddeley, 1993; Engle, Kane, & Tuholski, 1999), differences in WMC should predict performance on visual attention tasks. Individuals who scored in the upper and lower quartiles on the OSPAN working memory test performed a modification of Egly and Homa's (1984) selective attention task. In this task, the participants identified a central letter and localized a displaced letter flashed somewhere on one of three concentric rings. When the displaced letter occurred closer to fixation than the cue implied, high-WMC, but not low-WMC, individuals showed a cost in the letter localization task. This suggests that low-WMC participants allocated attention as a spotlight, whereas those with high WMC showed flexible allocation.
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgments may occur when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed by discriminating each boundary via the Hausdorff distance. Second, the background-only weighted contrast is improved by a foreground-background weighted contrast, which is optimized through a weight-adjustable optimization framework. Then, to objectively estimate the quality of a saliency map, a simple but effective metric called the spatial distribution of the saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework that integrates three other cues (uniqueness, distribution, and coherence) from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most boundary-based methods, and the integrated result outperforms all state-of-the-art methods.
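One plausible reading of the boundary-discrimination step is sketched below: each image boundary is represented by feature vectors of its superpixels and scored by its mean Hausdorff distance to the other boundaries, so that an atypical boundary (e.g., one touched by the object) receives a smaller background weight. The feature data and the exponential weighting are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: discriminate image boundaries by mutual Hausdorff distance
# between their (stand-in) superpixel feature sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(A, B):
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

rng = np.random.default_rng(0)
boundaries = {s: rng.normal(size=(15, 3)) for s in ("top", "bottom", "left", "right")}
boundaries["right"] += 3.0        # e.g. the object touches the right boundary

scores = {s: np.mean([hausdorff(f, g) for t, g in boundaries.items() if t != s])
          for s, f in boundaries.items()}
weights = {s: np.exp(-v) for s, v in scores.items()}   # smaller weight for atypical boundaries
```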
Boullis, Antoine; Francis, Frederic; Verheggen, François J
2015-04-01
Insects are highly dependent on odor cues released into the environment to locate conspecifics or food sources. This mechanism is particularly important for insect predators that rely on kairomones released by their prey to detect them. In the context of climate change and, more specifically, modifications in the gas composition of the atmosphere, the chemical communication that mediates interactions between phytophagous insect pests, their host plants, and their natural enemies is likely to be affected. Several reports have indicated that modifications to plants caused by elevated carbon dioxide and ozone concentrations might indirectly affect insect herbivores, with community-level modifications to this group potentially having an indirect influence on higher trophic levels. The vulnerability of agricultural insect pests to their natural enemies under elevated greenhouse gas concentrations has been frequently reported, but conflicting results have been obtained. This literature review shows that, in most published studies, the higher levels of carbon dioxide predicted for the coming century do not enhance the abundance of natural enemies or their efficiency in locating hosts or prey. Increased ozone levels lead to modifications in the herbivore-induced volatile organic compounds (VOCs) released by damaged plants, which may affect the attractiveness of these herbivores to the third trophic level. Furthermore, other oxidative gases (such as SO2 and NO2) tend to reduce the abundance of natural enemies. The impact of changes in atmospheric gas emissions on plant-insect and insect-insect chemical communication has been under-documented, despite the significance of these mechanisms in tritrophic interactions. We conclude by suggesting further prospects on this topic that have yet to be investigated. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Schoeberl, Tobias; Ansorge, Ulrich
2018-05-15
Prior research suggested that attentional capture by subliminal abrupt onset cues is stimulus driven. In these studies, responses were faster when a searched-for target appeared at the location of a preceding abrupt onset cue than when the same target appeared at a location away from the cue (cueing effect), although the earlier onset of the cue was subliminal: the cue appeared as one of three horizontally aligned placeholders with a lead time too short to be noticed by the participants. Because the cueing effects seemed to be independent of top-down search settings for target features, the effect was attributed to stimulus-driven attentional capture. However, prior studies did not investigate whether participants experienced the cues as useful temporal warning signals and, therefore, attended to the cues in a top-down way. Here, we tested the extent to which search settings based on temporal contingencies between cue and target onset could be responsible for spatial cueing effects. Cueing effects were replicated, and we showed that removing temporal contingencies between cue and target onset did not diminish the cueing effects (Experiments 1 and 2). Neither presenting the cues in the majority of trials after target onset (Experiment 1) nor presenting cue and target unrelated to one another (Experiment 2) led to a significant reduction of the spatial cueing effects. The results thus support the hypothesis that the subliminal cues captured attention in a stimulus-driven way.
Drinkers’ memory bias for alcohol picture cues in explicit and implicit memory tasks
Nguyen-Louie, Tam T.; Buckman, Jennifer F.; Ray, Suchismita
2016-01-01
Background: Alcohol cues can bias attention and elicit emotional reactions, especially in drinkers. Yet, little is known about how alcohol cues affect explicit and implicit memory processes, and how memory for alcohol cues is affected by acute alcohol intoxication. Methods: Young adult participants (N=161) were randomly assigned to alcohol, placebo, or control beverage conditions. Following beverage consumption, they were shown neutral, emotional, and alcohol-related picture cues. Participants then completed free recall and repetition priming tasks to test explicit and implicit memory, respectively, for the picture cues. Average blood alcohol concentration for the alcohol group was 74 ± 13 mg/dl when memory testing began. Two mixed linear model analyses were conducted to examine the effects of beverage condition, picture cue type, and their interaction on explicit and implicit memory. Results: Picture cue type and beverage condition each significantly affected explicit recall of picture cues, whereas only picture cue type significantly influenced repetition priming. Individuals in the alcohol condition recalled significantly fewer pictures than those in the other conditions, regardless of cue type. Both free recall and repetition priming were greater for emotional and alcohol-related cues than for neutral picture cues. No interaction effects were detected. Conclusions: Young adult drinkers showed enhanced explicit and implicit memory processing of alcohol cues compared to emotionally neutral cues. This enhanced processing of alcohol cues was on par with that seen for positive emotional cues. Acute alcohol intoxication did not alter this preferential memory processing of alcohol cues over neutral cues. PMID:26811126
Interaction between scene-based and array-based contextual cueing.
Rosenbaum, Gail M; Jiang, Yuhong V
2013-07-01
Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene-target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene-target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.
Combined expectancies: electrophysiological evidence for the adjustment of expectancy effects
Mattler, Uwe; van der Lugt, Arie; Münte, Thomas F
2006-01-01
Background: When subjects use cues to prepare for a likely stimulus or a likely response, reaction times are facilitated by valid cues but prolonged by invalid cues. In studies on combined expectancy effects, two cues can independently give information regarding two dimensions of the forthcoming task. In certain situations, cueing effects on one dimension are reduced when the cue on the other dimension is invalid. According to the Adjusted Expectancy Model, cues affect different processing levels, and a mechanism is presumed which is sensitive to the validity of early-level cues and leads to online adjustment of expectancy effects at later levels. To examine the predictions of this model, cueing of stimulus modality was combined with response cueing. Results: Behavioral measures showed the interaction of cueing effects. Electrophysiological measures of the lateralized readiness potential (LRP) and the N200 amplitude confirmed the predictions of the model. The LRP showed larger effects of response cues on response activation when modality cues were valid rather than invalid. N200 amplitude was largest with valid modality cues and invalid response cues, medium with invalid modality cues, and smallest with two valid cues. Conclusion: Findings support the view that the validity of early-level expectancies modulates the effects of late-level expectancies, which included response activation and response conflict in the present study. PMID:16674805