Sample records for precisely improving inputs

  1. Precision irrigation for improving crop water management

    USDA-ARS's Scientific Manuscript database

    Precision irrigation is gaining attention from the agricultural industry as a means to optimize water inputs, reduce environmental degradation from runoff or deep percolation, and maintain crop yields. This presentation will discuss the mechanical and software framework of the irrigation scheduling sup...

  2. A decade of precision agriculture impacts on grain yield and yield variation

    USDA-ARS's Scientific Manuscript database

    Targeting management practices and inputs with precision agriculture has high potential to meet some of the grand challenges of sustainability in the coming century, including simultaneously improving crop yields and reducing environmental impacts. Although the potential is high, few studies have do...

  3. More noise does not mean more precision: A review of Aldenberg and Rorije (2013).

    PubMed

    Fox, David R

    2015-09-01

    This paper provides a critical review of recently published work that suggests that the precision of hazardous concentration estimates from Species Sensitivity Distributions (SSDs) is improved when the uncertainty in the input data is taken into account. Our review confirms that this counter-intuitive result is indeed incorrect. © 2015 FRAME.

  4. Spike timing precision of neuronal circuits.

    PubMed

    Kilinc, Deniz; Demir, Alper

    2018-06-01

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
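The synaptic-integration result quoted in this record (the output spike phase has the same variance as the sample average of the input phases) implies the familiar 1/N variance scaling. A minimal Monte Carlo sketch in Python; the Gaussian phase model, noise level, and trial counts are illustrative assumptions, not taken from the paper:

```python
import random
import statistics

def phase_variance(n_inputs, n_trials, sigma=0.1, seed=0):
    """Variance of the output spike phase when a neuron integrates
    n_inputs noisy input phases, modeled (per the quoted result)
    as the sample average of the input phases."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_trials):
        phases = [rng.gauss(0.0, sigma) for _ in range(n_inputs)]
        outputs.append(sum(phases) / n_inputs)  # integrator output phase
    return statistics.variance(outputs)

v1 = phase_variance(1, 20_000)    # single input: variance near sigma**2
v10 = phase_variance(10, 20_000)  # ten inputs: variance near sigma**2 / 10
```

Running this shows the timing-precision improvement through integration: the ten-input variance comes out roughly a tenth of the single-input variance.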

  5. 0.75 atoms improve the clock signal of 10,000 atoms

    NASA Astrophysics Data System (ADS)

    Kruse, I.; Lange, K.; Peise, J.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Lisdat, C.; Santos, L.; Smerzi, A.; Klempt, C.

    2017-02-01

    Since the pioneering work of Ramsey, atom interferometers are employed for precision metrology, in particular to measure time and to realize the second. In a classical interferometer, an ensemble of atoms is prepared in one of the two input states, whereas the second one is left empty. In this case, the vacuum noise restricts the precision of the interferometer to the standard quantum limit (SQL). Here, we propose and experimentally demonstrate a novel clock configuration that surpasses the SQL by squeezing the vacuum in the empty input state. We create a squeezed vacuum state containing an average of 0.75 atoms to improve the clock sensitivity of 10,000 atoms by 2.05 dB. The SQL poses a significant limitation for today's microwave fountain clocks, which serve as the main time reference. We evaluate the major technical limitations and challenges for devising a next generation of fountain clocks based on atomic squeezed vacuum.
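The quoted 2.05 dB gain can be translated into an equivalent atom number using the standard metrological conventions that decibels of gain correspond to a phase-variance ratio of 10**(dB/10) and that SQL phase variance scales as 1/N. A short sketch under those textbook assumptions (the conversion is not spelled out in the record itself):

```python
def db_to_variance_ratio(db):
    """Interpret a gain in decibels as a phase-variance
    reduction factor: 10**(db / 10)."""
    return 10 ** (db / 10)

# 2.05 dB below the standard quantum limit:
ratio = db_to_variance_ratio(2.05)

# At the SQL the interferometric phase variance scales as 1/N, so a
# squeezed clock running N = 10,000 atoms performs like an unsqueezed
# clock running roughly N * ratio atoms.
effective_atoms = 10_000 * ratio
```

Under these assumptions the 0.75-atom squeezed vacuum makes the 10,000-atom clock perform like an unsqueezed clock of about 16,000 atoms.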

  6. Improvement of an Atomic Clock using Squeezed Vacuum

    NASA Astrophysics Data System (ADS)

    Kruse, I.; Lange, K.; Peise, J.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Lisdat, C.; Santos, L.; Smerzi, A.; Klempt, C.

    2016-09-01

    Since the pioneering work of Ramsey, atom interferometers are employed for precision metrology, in particular to measure time and to realize the second. In a classical interferometer, an ensemble of atoms is prepared in one of the two input states, whereas the second one is left empty. In this case, the vacuum noise restricts the precision of the interferometer to the standard quantum limit (SQL). Here, we propose and experimentally demonstrate a novel clock configuration that surpasses the SQL by squeezing the vacuum in the empty input state. We create a squeezed vacuum state containing an average of 0.75 atoms to improve the clock sensitivity of 10,000 atoms by 2.05(+0.34/-0.37) dB. The SQL poses a significant limitation for today's microwave fountain clocks, which serve as the main time reference. We evaluate the major technical limitations and challenges for devising a next generation of fountain clocks based on atomic squeezed vacuum.

  7. Multiclassifier fusion in human brain MR segmentation: modelling convergence.

    PubMed

    Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander

    2006-01-01

    Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
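The label-fusion step this record models can be sketched as a per-voxel majority vote; this is a simplified stand-in for the fusion actually studied, with flat per-voxel label lists in place of full label volumes:

```python
from collections import Counter

def fuse_labels(segmentations):
    """Majority-vote fusion of several propagated label volumes,
    represented here as flat per-voxel label lists. Accuracy
    typically improves as more input segmentations are fused."""
    fused = []
    for voxel_labels in zip(*segmentations):
        fused.append(Counter(voxel_labels).most_common(1)[0][0])
    return fused

# Three input segmentations of a 4-voxel "image":
segs = [
    [1, 1, 0, 2],
    [1, 0, 0, 2],
    [1, 1, 1, 2],
]
fused = fuse_labels(segs)  # → [1, 1, 0, 2]
```

Disagreements between individual propagated atlases (voxels 1 and 2 here) are resolved by the vote, which is the mechanism whose convergence with the number of inputs the paper models.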

  8. ONE SHAKE GATE FORMER

    DOEpatents

    Kalibjian, R.; Perez-Mendez, V.

    1957-08-20

    An improved circuit for forming square pulses having substantially short and precise durations is described. The gate forming circuit incorporates a secondary emission R. F. pentode adapted to receive input trigger pulses and having a positive feedback loop connected from the dynode to the control grid to maintain conduction in response to trigger pulses. A short-circuited pulse delay line is employed to precisely control the conducting time of the tube, and a circuit for squelching spurious oscillations is provided in the feedback loop.

  9. Spatial attention improves the quality of population codes in human visual cortex.

    PubMed

    Saproo, Sameer; Serences, John T

    2010-08-01

    Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.

  10. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
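The antithetic-variates technique credited with the 53% reduction in replications can be sketched on a toy estimator; the integrand and tolerances below are illustrative, not the UKPDS diabetes model from the paper:

```python
import math
import random

def mc_mean(f, n, antithetic=False, seed=0):
    """Monte Carlo estimate of E[f(U)] for U ~ Uniform(0, 1).
    With antithetic variates each draw u is paired with 1 - u;
    for monotone f the pair is negatively correlated, so the
    paired average reaches a given precision with fewer
    replications than independent sampling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        if antithetic:
            total += 0.5 * (f(u) + f(1.0 - u))
        else:
            total += f(u)
    return total / n

# Toy integrand with a known mean: E[exp(U)] = e - 1
est_plain = mc_mean(math.exp, 10_000)
est_anti = mc_mean(math.exp, 10_000, antithetic=True)
```

For the same number of replications the antithetic estimate lands markedly closer to e - 1, which is the variance-reduction effect the study exploits to cut run time.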

  11. Fiber optic combiner and duplicator

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The investigation of the possible development of two optical devices, one to take two images as inputs and to present their arithmetic sum as a single output, the other to take one image as input and present two identical images as outputs is described. Significant engineering time was invested in establishing precision fiber optics drawing capabilities, real time monitoring of the fiber size and exact measuring of fiber optics ribbons. Various assembly procedures and tooling designs were investigated and prototype models were built and evaluated that established technical assurance that the device was feasible and could be fabricated. Although the interleaver specification in its entirety was not achieved, the techniques developed in the course of the program improved the quality of images transmitted by fiber optic arrays by at least an order of magnitude. These techniques are already being applied to the manufacture of precise fiber optic components.

  12. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, the deep convolutional neural network VGG19 was used for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes and the resulting extraction quality, the votes of several deep convolutional neural networks were used as the final road prediction. The study image came from GF-2 panchromatic and multi-spectral fusion imagery of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. At the same time, this paper gives some advice about the choice of input block size and output block size.

  13. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.

  14. Gated integrator with signal baseline subtraction

    DOEpatents

    Wang, X.

    1996-12-17

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window. 5 figs.

  15. Gated integrator with signal baseline subtraction

    DOEpatents

    Wang, Xucheng

    1996-01-01

    An ultrafast, high precision gated integrator includes an opamp having differential inputs. A signal to be integrated is applied to one of the differential inputs through a first input network, and a signal indicative of the DC offset component of the signal to be integrated is applied to the other of the differential inputs through a second input network. A pair of electronic switches in the first and second input networks define an integrating period when they are closed. The first and second input networks are substantially symmetrically constructed of matched components so that error components introduced by the electronic switches appear symmetrically in both input circuits and, hence, are nullified by the common mode rejection of the integrating opamp. The signal indicative of the DC offset component is provided by a sample and hold circuit actuated as the integrating period begins. The symmetrical configuration of the integrating circuit improves accuracy and speed by balancing out common mode errors, by permitting the use of high speed switching elements and high speed opamps and by permitting the use of a small integrating time constant. The sample and hold circuit substantially eliminates the error caused by the input signal baseline offset during a single integrating window.

  16. A classification model of Hyperion image base on SAM combined decision tree

    NASA Astrophysics Data System (ADS)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases the hypothesis space grows exponentially, which makes the classification performance highly unreliable; traditional classification algorithms therefore struggle, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically-based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating the spectra as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how well that threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It automatically chooses an appropriate SAM threshold and improves the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, improving the classification precision. Compared with the likelihood classification based on field survey data, the classification precision of this model is 9.9% higher.
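The SAM classifier described in this record can be sketched in a few lines. The three-band spectra, class names, and threshold below are illustrative, and the paper's decision-tree threshold selection is not reproduced:

```python
import math

def spectral_angle(pixel, ref):
    """SAM similarity: the angle (radians) between two spectra
    treated as vectors in band space; smaller means more similar."""
    dot = sum(p * r for p, r in zip(pixel, ref))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in ref))
    cos_angle = max(-1.0, min(1.0, dot / (norm_p * norm_r)))
    return math.acos(cos_angle)

def classify(pixel, references, threshold):
    """Assign the pixel to the reference class with the smallest
    angle, or to no class if every angle exceeds the threshold."""
    best = min(references, key=lambda c: spectral_angle(pixel, references[c]))
    return best if spectral_angle(pixel, references[best]) <= threshold else None

references = {"vegetation": [0.1, 0.3, 0.6], "soil": [0.4, 0.4, 0.2]}
# A brighter copy of the vegetation spectrum has angle 0 to it, since
# SAM is insensitive to overall illumination scaling:
label = classify([0.2, 0.6, 1.2], references, threshold=0.2)  # → "vegetation"
```

The hand-set `threshold` argument is exactly the quantity the paper's decision-tree model chooses automatically.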

  17. The role of precision agriculture for improved nutrient management on farms.

    PubMed

    Hedley, Carolyn

    2015-01-01

    Precision agriculture uses proximal and remote sensor surveys to delineate and monitor within-field variations in soil and crop attributes, guiding variable rate control of inputs so that in-season management can be responsive, e.g. matching strategic nitrogen fertiliser application to site-specific field conditions. It has the potential to improve production and nutrient use efficiency, ensuring that nutrients do not leach from, or accumulate in excessive concentrations in, parts of the field, which creates environmental problems. The discipline emerged in the 1980s with the advent of affordable global positioning systems (GPS), and has further developed with access to an array of affordable soil and crop sensors, improved computer power and software, and equipment with precision application control, e.g. variable rate fertiliser and irrigation systems. Precision agriculture focusses on improving nutrient use efficiency at the appropriate scale, requiring (1) appropriate decision support systems (e.g. digital prescription maps), and (2) equipment capable of varying application at these different scales, e.g. the footprint of a single irrigation sprinkler or of a fertiliser top-dressing aircraft. This article reviews the rapid development of this discipline and uses New Zealand as a case study, as it is a country where agriculture drives economic growth. Here, the high yield potentials on often young, variable soils provide opportunities for effective financial return on investment in these new technologies. © 2014 Society of Chemical Industry.

  18. Modification of infant hypothyroidism and phenylketonuria screening program using electronic tools.

    PubMed

    Taheri, Behjat; Haddadpoor, Asefeh; Mirkhalafzadeh, Mahmood; Mazroei, Fariba; Aghdak, Pezhman; Nasri, Mehran; Bahrami, Gholamreza

    2017-01-01

    Congenital hypothyroidism and phenylketonuria (PKU) are the most common causes of preventable mental retardation in infants worldwide. Timely diagnosis and treatment of these disorders can have lasting effects on the mental development of newborns. However, there are several problems at different stages of screening programs that, along with imposing heavy costs, can reduce the precision of the screening, increasing the chance of undiagnosed cases, which in turn can have damaging consequences for society. Therefore, given these problems and the importance of information systems in facilitating the management and improving the quality of health care, the aim of this study was to improve the screening process for hypothyroidism and PKU in infants with the help of electronic resources. The current study is qualitative action research designed to improve the quality of screening, services, performance, implementation effectiveness, and management of the hypothyroidism and PKU screening program in Isfahan province. To this end, web-based software was designed. Programming was carried out using Delphi.net, with SQL Server 2008 used for database management. Given the weaknesses, problems, and limitations of the hypothyroidism and PKU screening program, and the importance of these diseases on a national scale, this study resulted in the design of hypothyroidism and PKU screening software for infants in Isfahan province. The inputs and outputs of the software were designed at three levels: the Health Care Centers in charge of the screening program, the provincial reference lab, and the health and treatment network of Isfahan province.
    Immediate registration of sample data at the time and location of sampling; the ability for the provincial reference laboratory and the health centers of the different districts to instantly observe, monitor, and follow up on samples at any moment; online verification of samples by the reference lab; creation of a daily schedule for the reference lab; and receipt of results from the analysis equipment and their entry into the database without the need for user input are among the features of this software. The implementation of the hypothyroidism screening software increased the quality and efficiency of the screening program, minimized the risk of human error in the process, and solved many of the previous limitations of the screening program, which were the main goals of implementing this software. It also improved the precision and quality of the services provided for these two diseases and the accuracy and precision of data inputs, since sample data could be entered at the place and time of sampling; this in turn made management based on precise data possible, helped develop a comprehensive database, and improved the satisfaction of service recipients.

  19. Fiber Scrambling for High Precision Spectrographs

    NASA Astrophysics Data System (ADS)

    Kaplan, Zachary; Spronck, J. F. P.; Fischer, D.

    2011-05-01

    The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called "super-Earths" (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the Point Spread Function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of its input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs under the most severe guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the "personality" of individual fibers: differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.

  20. High precision in protein contact prediction using fully convolutional neural networks and minimal sequence features.

    PubMed

    Jones, David T; Kandathil, Shaun M

    2018-04-26

    In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov. d.t.jones@ucl.ac.uk.
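The covariance features this record says DeepCov computes from raw alignments are, per column pair (i, j), the pair frequency minus the product of the single-column frequencies. A toy computation; the four-letter alphabet and tiny alignment are illustrative (the real input uses the full amino-acid alphabet plus gaps):

```python
def pair_covariance(alignment, i, j, alphabet="ACDE"):
    """Raw covariance features for one column pair (i, j) of an
    alignment: cov(a, b) = f_ij(a, b) - f_i(a) * f_j(b)."""
    n = len(alignment)
    f_i = {a: sum(s[i] == a for s in alignment) / n for a in alphabet}
    f_j = {b: sum(s[j] == b for s in alignment) / n for b in alphabet}
    f_ij = {(a, b): sum(s[i] == a and s[j] == b for s in alignment) / n
            for a in alphabet for b in alphabet}
    return {(a, b): f_ij[a, b] - f_i[a] * f_j[b]
            for a in alphabet for b in alphabet}

# Tiny two-column "alignment" of four sequences:
cov = pair_covariance(["AC", "AC", "AD", "CD"], 0, 1)
```

Stacking these per-pair covariance maps over all column pairs yields the input tensor that the fully convolutional network operates on, with no global statistical model in between.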

  1. The Chronotron: A Neuron That Learns to Fire Temporally Precise Spike Patterns

    PubMed Central

    Florian, Răzvan V.

    2012-01-01

    In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm. PMID:22879876

  2. Towards a dispersive determination of the η and η' transition form factors

    NASA Astrophysics Data System (ADS)

    Kubis, Bastian

    2018-01-01

    We discuss status and prospects of a dispersive analysis of the η and η' transition form factors. Particular focus is put on the various pieces of experimental information that serve as input to such a calculation. These can help improve on the precision of an evaluation of the η and η' pole contributions to hadronic light-by-light scattering in the anomalous magnetic moment of the muon.

  3. Robust estimation of adaptive tensors of curvature by tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  4. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rates of the improved LMS variants on their robustness and misalignment.
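    For readers unfamiliar with the algorithms compared here, the core update rules of LMS and NLMS are short enough to sketch directly. The following is a minimal system-identification example with an assumed FIR "unknown system" and illustrative step sizes; misalignment is measured as the normalized norm of the weight error.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2, 0.1])    # assumed FIR "unknown system"
N, M = 5000, len(h_true)
x = rng.standard_normal(N)                   # white input signal
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)  # noisy measured output

def identify(x, d, M, mu, normalized=False, eps=1e-8):
    """LMS (or NLMS when normalized=True) adaptive system identification."""
    w = np.zeros(M)
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]         # most recent M input samples
        e = d[n] - w @ u                     # a-priori output error
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e * u
    return w

w_lms = identify(x, d, M, mu=0.01)
w_nlms = identify(x, d, M, mu=0.5, normalized=True)
misalignment = lambda w: np.linalg.norm(w - h_true) / np.linalg.norm(h_true)
```

    NLMS normalizes the step by the instantaneous input power, decoupling convergence speed from the input signal's scale; this is one reason the variants compared in the paper exhibit different convergence/misalignment trade-offs, especially for colored inputs.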

  5. Performance study of LMS based adaptive algorithms for unknown system identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javed, Shazia; Ahmad, Noor Atinah

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rates of the improved LMS variants on their robustness and misalignment.

  6. Compositional Solution Space Quantification for Probabilistic Software Analysis

    NASA Technical Reports Server (NTRS)

    Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem

    2014-01-01

    Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
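    The key idea, using interval reasoning to classify subregions of the input domain as definitely satisfying, definitely violating, or undecided, and spending samples only on the undecided regions, can be illustrated on a toy constraint whose solution fraction is known analytically. This is only a sketch of the general approach, not the authors' tool:

```python
import numpy as np

rng = np.random.default_rng(1)

def interval_sq_sum(lo, hi):
    """Interval bounds of x1^2 + x2^2 over an axis-aligned box [lo, hi]."""
    mins = np.where((lo <= 0) & (hi >= 0), 0.0, np.minimum(lo**2, hi**2))
    maxs = np.maximum(lo**2, hi**2)
    return mins.sum(), maxs.sum()

def fraction_inside(lo, hi, splits=8, samples=500):
    """Fraction of the box satisfying x1^2 + x2^2 <= 1, using interval
    pruning to decide whole subboxes and sampling only the undecided ones."""
    edges = [np.linspace(l, h, splits + 1) for l, h in zip(lo, hi)]
    total = np.prod(hi - lo)
    inside = 0.0
    for i in range(splits):
        for j in range(splits):
            blo = np.array([edges[0][i], edges[1][j]])
            bhi = np.array([edges[0][i + 1], edges[1][j + 1]])
            vol = np.prod(bhi - blo)
            fmin, fmax = interval_sq_sum(blo, bhi)
            if fmax <= 1.0:
                inside += vol                 # subbox entirely satisfies the constraint
            elif fmin > 1.0:
                continue                      # subbox entirely violates it
            else:
                pts = rng.uniform(blo, bhi, size=(samples, 2))
                inside += vol * np.mean((pts**2).sum(axis=1) <= 1.0)
    return inside / total

est = fraction_inside(np.array([-2.0, -2.0]), np.array([2.0, 2.0]))
```

    The unit disc occupies π/16 of the box [-2, 2]², so the accuracy of `est` reflects how the pruning concentrates sampling effort on the constraint boundary, mirroring the paper's use of interval constraint propagation to focus statistical estimation.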

  7. Spectral integration in primary auditory cortex attributable to temporally precise convergence of thalamocortical and intracortical input.

    PubMed

    Happel, Max F K; Jeschke, Marcus; Ohl, Frank W

    2010-08-18

    Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils, in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical and intracortical contributions to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in the close spectral neighborhood of the best frequency of the measurement site, where the overlap between both inputs is maximal. Local intracortical connections provide both direct feedforward excitatory input and modulatory input from adjacent cortical sites, which determine how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections terminating in cortical layers II/III, information about stimulus energy at greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs with different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.

  8. Participatory System Dynamics Modeling: Increasing Stakeholder Engagement and Precision to Improve Implementation Planning in Systems.

    PubMed

    Zimmerman, Lindsey; Lounsbury, David W; Rosen, Craig S; Kimerling, Rachel; Trafton, Jodie A; Lindley, Steven E

    2016-11-01

    Implementation planning typically incorporates stakeholder input. Quality improvement efforts provide data-based feedback regarding progress. Participatory system dynamics modeling (PSD) triangulates stakeholder expertise, data and simulation of implementation plans prior to attempting change. Frontline staff in one VA outpatient mental health system used PSD to examine policy and procedural "mechanisms" they believe underlie local capacity to implement evidence-based psychotherapies (EBPs) for PTSD and depression. We piloted the PSD process, simulating implementation plans to improve EBP reach. Findings indicate PSD is a feasible, useful strategy for building stakeholder consensus, and may save time and effort as compared to trial-and-error EBP implementation planning.

  9. Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain.

    PubMed

    Schneider, David M; Woolley, Sarah M N

    2010-06-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs, and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with accuracies ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than either the average of the two inputs or the more discriminating of the two inputs alone. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
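    A stripped-down version of the pooling comparison can be sketched as follows: two simulated neurons respond noisily to two stimuli, single trials are classified by a leave-one-out nearest-template rule, and pooled (summed) responses are compared against a single neuron. The templates, noise level, and trial counts are arbitrary illustrations, and the noise here is independent across neurons rather than correlated as in the paper's most interesting case:

```python
import numpy as np

rng = np.random.default_rng(3)

def trial_responses(template, n_trials, noise=3.0):
    """Noisy single-trial responses around a stimulus-specific template."""
    return template + noise * rng.standard_normal((n_trials, len(template)))

def discrimination_accuracy(resp_a, resp_b):
    """Leave-one-out nearest-template classification between two stimuli."""
    correct, total = 0, 0
    for resp, other in ((resp_a, resp_b), (resp_b, resp_a)):
        for i in range(len(resp)):
            rest = np.delete(resp, i, axis=0)
            d_same = np.linalg.norm(resp[i] - rest.mean(axis=0))
            d_other = np.linalg.norm(resp[i] - other.mean(axis=0))
            correct += d_same < d_other
            total += 1
    return correct / total

# Two stimuli ("songs") and two neurons with similar tuning, 200 trials each.
song1, song2 = rng.standard_normal(20), rng.standard_normal(20)
neuron1 = (trial_responses(song1, 200), trial_responses(song2, 200))
neuron2 = (trial_responses(song1, 200), trial_responses(song2, 200))

acc_single = discrimination_accuracy(*neuron1)
acc_pooled = discrimination_accuracy(neuron1[0] + neuron2[0],
                                     neuron1[1] + neuron2[1])
```

    Summing the two neurons' responses doubles the template separation while independent noise grows only by √2, so the pooled accuracy is typically higher than the single-neuron accuracy, a simplified analogue of the gains reported for pooled midbrain responses.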

  10. Tracing the source of numerical climate model uncertainties in precipitation simulations using a feature-oriented statistical model

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Jones, A. D.; Rhoades, A.

    2017-12-01

    Precipitation is a key component of the hydrologic cycle, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to identify where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. The other type of model uncertainty is therefore related to errors in the model physics, such as the parameterization of sub-grid scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent local controls. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true world and modeled world information, we assess the errors lying in the physics of the numerical models. This work provides unique insight for assessing the performance of numerical climate models and can be used to guide improvement of precipitation prediction.
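    The perturbation step described above, injecting noise into one predictor at a time and measuring the resulting spread in the model's output, can be sketched with a simple linear surrogate standing in for the neural network. The predictors and coefficients below are synthetic placeholders, not the paper's climate feature metrics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "climate feature" predictors and a synthetic response.
n = 2000
X = rng.standard_normal((n, 3))              # e.g. three hypothetical feature metrics
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

# Fit a linear surrogate model (stand-in for the paper's neural network).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def perturbation_importance(model, X, sigma=1.0, trials=50):
    """Output variance attributable to noise injected into each predictor."""
    base = X @ model
    scores = []
    for j in range(X.shape[1]):
        diffs = []
        for _ in range(trials):
            Xp = X.copy()
            Xp[:, j] += sigma * rng.standard_normal(len(X))  # perturb predictor j only
            diffs.append(np.var(Xp @ model - base))
        scores.append(np.mean(diffs))
    return np.array(scores)

scores = perturbation_importance(beta, X)
```

    Predictors that the model leans on heavily produce large output variance when perturbed; comparing such scores between the "true world" and "modeled world" fits is the mechanism by which the paper apportions uncertainty between input errors and model physics.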

  11. Precision oncology using a limited number of cells: optimization of whole genome amplification products for sequencing applications.

    PubMed

    Sho, Shonan; Court, Colin M; Winograd, Paul; Lee, Sangjun; Hou, Shuang; Graeber, Thomas G; Tseng, Hsian-Rong; Tomlinson, James S

    2017-07-01

    Sequencing analysis of circulating tumor cells (CTCs) enables "liquid biopsy" to guide precision oncology strategies. However, this requires low-template whole genome amplification (WGA), which is prone to errors and biases from uneven amplification. Currently, quality control (QC) methods for WGA products, as well as the number of CTCs needed for reliable downstream sequencing, remain poorly defined. We sought to define strategies for selecting and generating optimal WGA products from low-template input as they relate to potential applications in precision oncology. Single pancreatic cancer cells (HPAF-II) were isolated using laser microdissection. WGA was performed using multiple displacement amplification (MDA), multiple annealing and looping based amplification (MALBAC) and PicoPLEX. The quality of amplified DNA products was assessed using a multiplex/RT-qPCR based method that evaluates 8 cancer-related genes, and QC-scores were assigned. We utilized this scoring system to assess the impact of de novo modifications to the WGA protocol. WGA products were subjected to Sanger sequencing, array comparative genomic hybridization (aCGH) and next generation sequencing (NGS) to evaluate their performance in the respective downstream analyses, providing validation of the QC-score. Single-cell WGA products exhibited significant sample-to-sample variability in amplified DNA quality as assessed by our 8-gene QC assay. Single-cell WGA products that passed the pre-analysis QC had lower amplification bias and improved aCGH/NGS performance metrics compared to single-cell WGA products that failed the QC. Increasing the cellular input improved QC-scores overall, but a WGA product that consistently passed the QC step required a starting input of at least 20 cells. Our modified WGA protocol effectively reduced this number, achieving reproducible high-quality WGA products from ≥5 cells as a starting template. A starting input of 5-10 cells amplified using the modified WGA achieved aCGH and NGS results that closely matched those of unamplified, batch genomic DNA. The modified WGA protocol coupled with the 8-gene QC serves as an effective strategy to enhance the quality of low-template WGA reactions. Furthermore, a threshold of 5-10 cells is likely needed for a reliable WGA reaction and a product with high fidelity to the original starting template.

  12. Advanced Unmanned Search System (AUSS) Surface Navigation, Underwater Tracking, and Transponder Network Calibration

    DTIC Science & Technology

    1992-09-01

    (No abstract is available; the indexed excerpt consists of fragmented operator-menu text from the report, covering pulse-repetition-period entry, SBS receiver and hydrophone selection, and gyro heading-input settings.)

  13. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    PubMed

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing. Copyright © 2015 the American Physiological Society.

  14. DS02R1: Improvements to Atomic Bomb Survivors' Input Data and Implementation of Dosimetry System 2002 (DS02) and Resulting Changes in Estimated Doses.

    PubMed

    Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K

    2017-01-01

    Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. 
Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."

  15. Study on Dynamic Alignment Technology of COIL Resonator

    NASA Astrophysics Data System (ADS)

    Xiong, M. D.; Zou, X. J.; Guo, J. H.; Jia, S. N.; Zhang, Z. B.

    2006-10-01

    The performance of a high-power chemical oxygen-iodine laser (COIL) beam is determined mostly by resonator mirror misalignment and environmental vibration. To improve beam performance, an auto-alignment device is used in the COIL resonator; the device keeps the resonator aligned by adjusting its optical components. A coupling model of the COIL resonator is presented. A multivariable self-learning fuzzy decoupling algorithm and six-dimensional micro-drive technology are used to design a six-input, three-output decoupling controller, realizing high-precision dynamic alignment. Experiments indicate that the collimating range of the system is 8 mrad, its precision is 5 µrad, and its frequency response is 20 Hz, which meet the demands of the resonator alignment system.

  16. Analysis of the Precision of Variable Flip Angle T1 Mapping with Emphasis on the Noise Propagated from RF Transmit Field Maps.

    PubMed

    Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan

    2017-01-01

    In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1+) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated comparable noise levels into the T1 maps as either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We conclude that for T1 mapping experiments, the error propagated from the B1+ map must be considered: optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may grossly overestimate the precision of the estimated T1 values.
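    The noise propagation studied here can be explored numerically with the standard SPGR signal equation and a two-point variable-flip-angle fit, perturbing both the signals and the transmit-field scale. The repetition time, T1, flip angles, and noise levels below are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(4)
TR, T1_TRUE, M0 = 0.05, 0.3, 1.0                 # seconds; illustrative values only
ALPHAS = np.deg2rad([15.0, 50.0])                # two SPGR flip angles (hypothetical)

def spgr_signal(alpha, t1):
    """Spoiled gradient echo signal equation."""
    e1 = np.exp(-TR / t1)
    return M0 * np.sin(alpha) * (1.0 - e1) / (1.0 - e1 * np.cos(alpha))

def fit_t1(signals, alphas_eff):
    """Two-point linearized VFA fit: S/sin(a) = E1 * S/tan(a) + M0 (1 - E1)."""
    y = signals / np.sin(alphas_eff)
    x = signals / np.tan(alphas_eff)
    slope = (y[1] - y[0]) / (x[1] - x[0])
    slope = np.clip(slope, 1e-6, 1.0 - 1e-6)     # keep the log well defined under noise
    return -TR / np.log(slope)

def t1_std(snr, b1_std, n=3000):
    """Monte Carlo spread of T1 estimates given signal noise and B1-map noise."""
    clean = spgr_signal(ALPHAS, T1_TRUE)
    sigma = clean.max() / snr
    t1s = np.empty(n)
    for k in range(n):
        s = clean + sigma * rng.standard_normal(2)
        b1_measured = 1.0 + b1_std * rng.standard_normal()
        t1s[k] = fit_t1(s, b1_measured * ALPHAS)  # the fit uses the noisy B1 map
    return t1s.std()

# Noiseless sanity check: the linearized fit recovers T1 exactly.
t1_exact = fit_t1(spgr_signal(ALPHAS, T1_TRUE), ALPHAS)
```

    Comparing `t1_std(snr, 0.0)` against `t1_std(snr, b1_std)` for a realistic B1-map error shows the transmit-field noise inflating the T1 spread, the same qualitative conclusion the paper reaches analytically.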

  17. Precision absolute-value amplifier for a precision voltmeter

    DOEpatents

    Hearn, W.E.; Rondeau, D.J.

    1982-10-19

    Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.

  18. Precision absolute value amplifier for a precision voltmeter

    DOEpatents

    Hearn, William E.; Rondeau, Donald J.

    1985-01-01

    Bipolar inputs are afforded by the plus inputs of first and second differential input amplifiers. A first gain determining resistor is connected between the minus inputs of the differential amplifiers. First and second diodes are connected between the respective minus inputs and the respective outputs of the differential amplifiers. First and second FETs have their gates connected to the outputs of the amplifiers, while their respective source and drain circuits are connected between the respective minus inputs and an output lead extending to a load resistor. The output current through the load resistor is proportional to the absolute value of the input voltage difference between the bipolar input terminals. A third differential amplifier has its plus input terminal connected to the load resistor. A second gain determining resistor is connected between the minus input of the third differential amplifier and a voltage source. A third FET has its gate connected to the output of the third amplifier. The source and drain circuit of the third transistor is connected between the minus input of the third amplifier and a voltage-frequency converter, constituting an output device. A polarity detector is also provided, comprising a pair of transistors having their inputs connected to the outputs of the first and second differential amplifiers. The outputs of the polarity detector are connected to gates which switch the output of the voltage-frequency converter between up and down counting outputs.

  19. Long-term impact of precision agriculture on a farmer’s field

    USDA-ARS?s Scientific Manuscript database

    Targeting management practices and inputs with precision agriculture has high potential to meet some of the grand challenges of sustainability in the coming century. Although potential is high, few studies have documented long-term effects of precision agriculture on crop production and environmenta...

  20. Update of patient-specific maxillofacial implant.

    PubMed

    Owusu, James A; Boahene, Kofi

    2015-08-01

    Patient-specific implant (PSI) is a personalized approach to reconstructive and esthetic surgery. This is particularly useful in maxillofacial surgery in which restoring the complex three-dimensional (3D) contour can be quite challenging. In certain situations, the best results can only be achieved with implants custom-made to fit a particular need. Significant progress has been made over the past decade in the design and manufacture of maxillofacial PSIs. Computer-aided design (CAD)/computer-aided manufacturing (CAM) technology is rapidly advancing and has provided new options for fabrication of PSIs with better precision. Maxillofacial PSIs can now be designed using preoperative imaging data as input into CAD software. The designed implant is then fabricated using a CAM technique such as 3D printing. This approach increases precision and decreases or completely eliminates the need for intraoperative modification of implants. The use of CAD/CAM-produced PSIs for maxillofacial reconstruction and augmentation can significantly improve contour outcomes and decrease operating time. CAD/CAM technology allows timely and precise fabrication of maxillofacial PSIs. This approach is gaining increasing popularity in maxillofacial reconstructive surgery. Continued advances in CAD technology and 3D printing are bound to improve the cost-effectiveness and decrease the production time of maxillofacial PSIs.

  1. High precision triangular waveform generator

    DOEpatents

    Mueller, Theodore R.

    1983-01-01

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  2. High-precision triangular-waveform generator

    DOEpatents

    Mueller, T.R.

    1981-11-14

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  3. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
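    The statistical-comparison idea, judging a reduced-precision simulation by the Hellinger distance between output distributions rather than by pointwise trajectory error, can be sketched with the Lorenz-63 system, a standard stand-in for chaotic atmospheric dynamics (the paper's actual chaotic system and hardware designs are not reproduced here):

```python
import numpy as np

def lorenz_x(dtype, n_steps=200_000, dt=2e-3):
    """Forward-Euler Lorenz-63 integration carried out entirely at `dtype`,
    returning the x-coordinate time series."""
    sigma, rho, beta = dtype(10.0), dtype(28.0), dtype(8.0 / 3.0)
    x, y, z = dtype(1.0), dtype(1.0), dtype(1.0)
    h = dtype(dt)
    xs = np.empty(n_steps, dtype=dtype)
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + h * dx, y + h * dy, z + h * dz
        xs[i] = x
    return xs

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Compare the climatology (histogram of x) of double- and single-precision runs.
bins = np.linspace(-25.0, 25.0, 21)
hist64, _ = np.histogram(lorenz_x(np.float64), bins=bins)
hist32, _ = np.histogram(lorenz_x(np.float32), bins=bins)
d = hellinger(hist64 / hist64.sum(), hist32 / hist32.sum())
```

    The two trajectories diverge pointwise within a few Lyapunov times, so a sample-by-sample error metric would declare the float32 run worthless; the small Hellinger distance between the two histograms captures the fact that both runs sample the same attractor, which is the notion of accuracy the paper needs for its reduced-precision FPGA designs.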

  4. Towards a dispersive determination of the pion transition form factor

    NASA Astrophysics Data System (ADS)

    Leupold, Stefan; Hoferichter, Martin; Kubis, Bastian; Niecknig, Franz; Schneider, Sebastian P.

    2018-01-01

    We start with a brief motivation why the pion transition form factor is interesting and, in particular, how it is related to the high-precision standard-model calculation of the gyromagnetic ratio of the muon. Then we report on the current status of our ongoing project to calculate the pion transition form factor using dispersion theory. Finally we present and discuss a wish list of experimental data that would help to improve the input for our calculations and/or to cross-check our results.

  5. Precision of DVC approaches for strain analysis in bone imaged with μCT at different dimensional levels.

    NASA Astrophysics Data System (ADS)

    Dall'Ara, Enrico; Peña-Fernández, Marta; Palanca, Marco; Giorgi, Mario; Cristofolini, Luca; Tozzi, Gianluca

    2017-11-01

    Accurate measurement of local strain in heterogeneous and anisotropic bone tissue is fundamental to understand the pathophysiology of musculoskeletal diseases, to evaluate the effect of interventions from preclinical studies, and to optimize the design and delivery of biomaterials. Digital volume correlation (DVC) can be used to measure the three-dimensional displacement and strain fields from micro-Computed Tomography (µCT) images of loaded specimens. However, this approach is affected by the quality of the input images, by the morphology and density of the tissue under investigation, by the correlation scheme, and by the operational parameters used in the computation. Therefore, for each application the precision of the method should be evaluated. In this paper we present the results collected from datasets analyzed in previous studies as well as new data from a recent experimental campaign for characterizing the relationship between the precision of two different DVC approaches and the spatial resolution of the outputs. Different bone structures scanned with laboratory source µCT or Synchrotron light µCT (SRµCT) were processed in zero-strain tests to evaluate the precision of the DVC methods as a function of the subvolume size, which ranged from 8 to 2500 micrometers. The results confirmed that for every microstructure the precision of DVC improves for larger subvolume sizes, following power laws. However, for the first time large differences in the precision of both local and global DVC approaches have been highlighted when SRµCT or in vivo µCT images were used instead of conventional ex vivo µCT. These findings suggest that in situ mechanical testing protocols applied in SRµCT facilities should be optimized in order to allow DVC analyses of localized strain measurements. Moreover, for in vivo µCT applications DVC analyses should be performed only with relatively coarse spatial resolution for achieving a reasonable precision of the method. 
In conclusion, we have extensively shown that the precision of both tested DVC approaches is affected by different bone structures, different input image resolution and different subvolume sizes. Before each specific application DVC users should always apply a similar approach to find the best compromise between precision and spatial resolution of the measurements.
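    The precision-versus-subvolume-size power laws reported above can be recovered from zero-strain data with a log-log linear fit. A minimal sketch, using invented data that follows an exact power law:

```python
import math

def fit_power_law(sizes, errors):
    """Least-squares fit of errors ~ a * sizes**b in log-log space.

    Taking logs turns the power law into a straight line,
    log(e) = log(a) + b * log(s), fitted by ordinary least squares.
    """
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Invented zero-strain precision values obeying errors = 2000 * size**-1,
# over the subvolume-size range mentioned in the abstract (8-2500 um).
sizes = [8, 50, 250, 1000, 2500]
errors = [2000.0 / s for s in sizes]
a, b = fit_power_law(sizes, errors)
```

With real zero-strain data the fitted exponent `b` quantifies how quickly precision improves as the subvolume grows, which is the trade-off the authors recommend evaluating per application.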

  6. A pilot's assessment of helicopter handling-quality factors common to both agility and instrument flying tasks

    NASA Technical Reports Server (NTRS)

    Gerdes, R. M.

    1980-01-01

    A series of simulation and flight investigations were undertaken to evaluate helicopter flying qualities and the effects of control system augmentation for nap-of-the-Earth (NOE) agility and instrument flying tasks. Handling quality factors common to both tasks were identified. Precise attitude control was determined to be a key requirement for successful accomplishment of both tasks. Factors that degraded attitude controllability were improper levels of control sensitivity and damping, and rotor system cross coupling due to helicopter angular rate and collective pitch input. Application of rate command, attitude command, and control input decouple augmentation schemes enhanced attitude control and significantly improved handling qualities for both tasks. The NOE agility and instrument flying handling quality considerations, pilot rating philosophy, and supplemental flight evaluations are also discussed.

  7. Testing and evaluation of the LES-6 pulsed plasma thruster by means of a torsion pendulum system

    NASA Technical Reports Server (NTRS)

    Hamidian, J. P.; Dahlgren, J. B.

    1973-01-01

    Performance characteristics of the LES-6 pulsed plasma thruster over a range of input conditions were investigated by means of a torsion pendulum system. Parameters of particular interest included the impulse bit and time-average thrust (and their repeatability), specific impulse, mass ablated per discharge, specific thrust, energy per unit area, efficiency, and variation of performance with ignition command rate. Intermittency of the thruster as affected by input energy and igniter resistance was also investigated, and correlations of the comparative experimental data are presented. The results of these tests indicate that the LES-6 thruster, with some identifiable design improvements, represents an attractive reaction control thruster for attitude control applications on long-life spacecraft requiring small metered impulse bits for precise pointing control of science instruments.

  8. NASA Earth Science Research Results for Improved Regional Crop Yield Prediction

    NASA Astrophysics Data System (ADS)

    Mali, P.; O'Hara, C. G.; Shrestha, B.; Sinclair, T. R.; G de Goncalves, L. G.; Salado Navarro, L. R.

    2007-12-01

    National agencies such as the USDA Foreign Agricultural Service (FAS) Production Estimation and Crop Assessment Division (PECAD) work specifically to analyze and generate timely crop yield estimates that help define national as well as global food policies. USDA/FAS/PECAD utilizes a Decision Support System (DSS) called CADRE (Crop Condition and Data Retrieval Evaluation), mainly through an automated database management system that integrates various meteorological datasets, crop and soil models, and remote sensing data, providing significant contributions to national and international crop production estimates. The "Sinclair" soybean growth model, a semi-mechanistic crop growth model, is used inside the CADRE DSS as one of the crop models; this project uses it for its potential to be applied effectively in a geo-processing environment with remote-sensing-based inputs. The main objective of the proposed work is to verify, validate, and benchmark current and future NASA earth science research results for the benefit of the operational decision-making process of the PECAD/CADRE DSS. For this purpose, the NASA South American Land Data Assimilation System (SALDAS) meteorological dataset is tested for its applicability as a surrogate for the Sinclair model's meteorological input requirements. Similarly, products from the NASA MODIS sensor are tested for their applicability in improving crop yield prediction through more precise planting-date estimation and monitoring of plant vigor and growth. The project also analyzes simulated Visible/Infrared Imager/Radiometer Suite (VIIRS, a future NASA sensor) vegetation products for their applicability in crop growth prediction, to accelerate the transition of VIIRS research results to operational use in the USDA/FAS/PECAD DSS. 
The research results will help in providing improved decision making capacity to the USDA/FAS/PECAD DSS through improved vegetation growth monitoring from high spatial and temporal resolution remote sensing datasets; improved time-series meteorological inputs required for crop growth models; and regional prediction capability through geo-processing-based yield modeling.

  9. Probing New Long-Range Interactions by Isotope Shift Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berengut, Julian C.; Budker, Dmitry; Delaunay, Cédric

    We explore a method to probe new long- and intermediate-range interactions using precision atomic isotope shift spectroscopy. We develop a formalism to interpret linear King plots as bounds on new physics with minimal theory inputs. We focus only on bounding the new physics contributions that can be calculated independently of the standard model nuclear effects. We apply our method to existing Ca+ data and project its sensitivity to conjectured new bosons with spin-independent couplings to the electron and the neutron using narrow transitions in other atoms and ions, specifically, Sr and Yb. Future measurements are expected to improve the relative precision by 5 orders of magnitude, and they can potentially lead to an unprecedented sensitivity for bosons within the 0.3 to 10 MeV mass range.

  10. Probing New Long-Range Interactions by Isotope Shift Spectroscopy.

    PubMed

    Berengut, Julian C; Budker, Dmitry; Delaunay, Cédric; Flambaum, Victor V; Frugiuele, Claudia; Fuchs, Elina; Grojean, Christophe; Harnik, Roni; Ozeri, Roee; Perez, Gilad; Soreq, Yotam

    2018-03-02

    We explore a method to probe new long- and intermediate-range interactions using precision atomic isotope shift spectroscopy. We develop a formalism to interpret linear King plots as bounds on new physics with minimal theory inputs. We focus only on bounding the new physics contributions that can be calculated independently of the standard model nuclear effects. We apply our method to existing Ca^{+} data and project its sensitivity to conjectured new bosons with spin-independent couplings to the electron and the neutron using narrow transitions in other atoms and ions, specifically, Sr and Yb. Future measurements are expected to improve the relative precision by 5 orders of magnitude, and they can potentially lead to an unprecedented sensitivity for bosons within the 0.3 to 10 MeV mass range.
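    A linear King plot relates the mass-modified isotope shifts of two transitions; departures from linearity (the fit residuals) are what bound new-physics contributions. A minimal ordinary-least-squares sketch, with invented, perfectly linear shift values (units and magnitudes are not from the paper):

```python
def king_fit(nu1, nu2):
    """Fit the King-plot line nu2 = K + F * nu1 through the
    mass-modified isotope shifts of two transitions, and return
    the residuals, whose size bounds nonlinear (new-physics) terms."""
    n = len(nu1)
    mx = sum(nu1) / n
    my = sum(nu2) / n
    F = sum((x - mx) * (y - my) for x, y in zip(nu1, nu2)) / \
        sum((x - mx) ** 2 for x in nu1)
    K = my - F * mx
    residuals = [y - (K + F * x) for x, y in zip(nu1, nu2)]
    return K, F, residuals

# Invented, exactly linear data (standard-model-like): residuals vanish.
nu1 = [10.0, 20.0, 30.0]
nu2 = [5.0 + 2.0 * x for x in nu1]
K, F, res = king_fit(nu1, nu2)
```

In a real analysis the measured residuals, together with their uncertainties, translate into exclusion bounds on the coupling of a conjectured new boson.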

  11. Exploring Flavor Physics with Lattice QCD

    NASA Astrophysics Data System (ADS)

    Du, Daping; Fermilab/MILC Collaborations

    2016-03-01

    The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using some gold-plated processes (such as B rare decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision, using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.

  12. Probing New Long-Range Interactions by Isotope Shift Spectroscopy

    DOE PAGES

    Berengut, Julian C.; Budker, Dmitry; Delaunay, Cédric; ...

    2018-02-26

    We explore a method to probe new long- and intermediate-range interactions using precision atomic isotope shift spectroscopy. We develop a formalism to interpret linear King plots as bounds on new physics with minimal theory inputs. We focus only on bounding the new physics contributions that can be calculated independently of the standard model nuclear effects. We apply our method to existing Ca+ data and project its sensitivity to conjectured new bosons with spin-independent couplings to the electron and the neutron using narrow transitions in other atoms and ions, specifically, Sr and Yb. Future measurements are expected to improve the relative precision by 5 orders of magnitude, and they can potentially lead to an unprecedented sensitivity for bosons within the 0.3 to 10 MeV mass range.

  13. MeSH indexing based on automatically generated summaries.

    PubMed

    Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto

    2013-06-26

    MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. 
Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading.
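    The recall/precision comparison above reduces to set arithmetic over suggested and gold-standard MeSH headings. A sketch with invented headings, illustrating how a shorter input (a summary) can keep recall while raising precision:

```python
def precision_recall(suggested, gold):
    """Precision and recall of a suggested heading set against a
    gold-standard (manually indexed) heading set."""
    suggested, gold = set(suggested), set(gold)
    tp = len(suggested & gold)
    precision = tp / len(suggested) if suggested else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Invented example: full text yields more suggestions, including noise;
# the summary keeps the salient ones.
gold = ["Humans", "Neoplasms", "Liver", "Prognosis"]
full_text = ["Humans", "Neoplasms", "Liver", "Mice", "Rats"]
summary = ["Humans", "Neoplasms", "Liver"]
p_full, r_full = precision_recall(full_text, gold)
p_sum, r_sum = precision_recall(summary, gold)
```

Here both inputs recover the same three correct headings (equal recall), but the summary avoids the two spurious ones, so its precision is higher, mirroring the pattern the study reports.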

  14. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.

  15. Experimental Studies of Nuclear Physics Input for γ -Process Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Scholz, Philipp; Heim, Felix; Mayer, Jan; Netterdon, Lars; Zilges, Andreas

    The predictions of reaction rates for the γ process in the scope of the Hauser-Feshbach statistical model crucially depend on nuclear physics input parameters such as optical-model potentials (OMP) or γ-ray strength functions. Precise cross-section measurements at astrophysically relevant energies help to constrain the adopted models and, therefore, to reduce the uncertainties in the theoretically predicted reaction rates. During the last years, several cross sections of charged-particle induced reactions on heavy nuclei have been measured at the University of Cologne. Either by means of the in-beam method at the HORUS γ-ray spectrometer or by the activation technique using the Cologne Clover Counting Setup, total and partial cross sections could be used to further constrain different models for nuclear physics input parameters. It could be shown that modifications of the α-OMP in the case of the 112Sn(α, γ) reaction also improve the description of the recently measured cross sections of the 108Cd(α, γ) and 108Cd(α, n) reactions, among others. Partial cross sections of the 92Mo(p, γ) reaction were used to improve the γ-strength function model in 93Tc in the same way as was done for the 89Y(p, γ) reaction.

  16. A pilot's assessment of helicopter handling-quality factors common to both agility and instrument flying tasks

    NASA Technical Reports Server (NTRS)

    Gerdes, R. M.

    1980-01-01

    Results from a series of simulation and flight investigations undertaken to evaluate helicopter flying qualities and the effects of control system augmentation for nap-of-the-earth (NOE) agility and instrument flying tasks were analyzed to assess handling-quality factors common to both tasks. Precise attitude control was determined to be a key requirement for successful accomplishment of both tasks. Factors that degraded attitude controllability were improper levels of control sensitivity and damping and rotor-system cross-coupling due to helicopter angular rate and collective pitch input. Application of rate-command, attitude-command, and control-input decouple augmentation schemes enhanced attitude control and significantly improved handling qualities for both tasks. NOE agility and instrument flying handling-quality considerations, pilot rating philosophy, and supplemental flight evaluations are also discussed.

  17. Quantification of regional myocardial blood flow estimation with three-dimensional dynamic rubidium-82 PET and modified spillover correction model.

    PubMed

    Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara

    2012-08-01

    Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
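    The one-compartment model underlying this kind of MBF estimation convolves the blood input function Ca(t) with a monoexponential impulse response, Ct(t) = K1 * ∫ Ca(τ) exp(-k2 (t - τ)) dτ. A minimal discrete sketch with an invented input curve and hypothetical K1/k2 values (not the paper's algorithm or data):

```python
import math

def one_compartment(ca, k1, k2, dt):
    """Tissue activity curve from a one-compartment model, evaluated as
    a discrete convolution of the blood input function `ca` (sampled on
    a uniform grid of spacing `dt`) with K1 * exp(-k2 * t)."""
    ct = []
    for i in range(len(ca)):
        acc = 0.0
        for j in range(i + 1):
            acc += ca[j] * math.exp(-k2 * (i - j) * dt) * dt
        ct.append(k1 * acc)
    return ct

# Hypothetical decaying-bolus input function sampled every 5 s.
dt = 5.0
ca = [math.exp(-0.02 * i * dt) for i in range(24)]
ct = one_compartment(ca, k1=0.8, k2=0.1, dt=dt)
```

Fitting K1 (and a spillover term, as in the paper) to a measured tissue curve by least squares is what turns such a model into a flow estimate; the quality of the image-derived `ca` is exactly what the proposed correction improves.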

  18. A GPS coverage model

    NASA Technical Reports Server (NTRS)

    Skidmore, Trent A.

    1994-01-01

    The results of several case studies using the Global Positioning System coverage model developed at Ohio University are summarized. Presented are results pertaining to outage area, outage dynamics, and availability. Input parameters to the model include the satellite orbit data, service area of interest, geometry requirements, and horizon and antenna mask angles. It is shown for precision-landing Category 1 requirements that the planned GPS 21 Primary Satellite Constellation produces significant outage area and unavailability. It is also shown that a decrease in the user equivalent range error dramatically decreases outage area and improves the service availability.

  19. Study on initiative vibration absorbing technology of optics in strong disturbed environment

    NASA Astrophysics Data System (ADS)

    Jia, Si-nan; Xiong, Mu-di; Zou, Xiao-jie

    2007-12-01

    A strongly disturbed environment tends to cause irregular vibration, which seriously affects optical collimation. To improve laser-beam performance, a three-point dynamic vibration absorbing method is proposed and a laser-beam active vibration absorbing system is designed. The misalignment signal is detected by a position sensitive device (PSD), and three groups of PZT actuators adjust the optical element in real time, improving the performance of the output beam. The coupling model of the system is presented. A multivariable adaptive closed-loop decoupling algorithm is used to design a three-input, three-output decoupling controller, realizing high-precision dynamic adjustment. Experiments indicate that the system has good vibration absorbing efficiency.

  20. Polymorphic Contracts

    NASA Astrophysics Data System (ADS)

    Belo, João Filipe; Greenberg, Michael; Igarashi, Atsushi; Pierce, Benjamin C.

    Manifest contracts track precise properties by refining types with predicates - e.g., {x : Int | x > 0} denotes the positive integers. Contracts and polymorphism make a natural combination: programmers can give strong contracts to abstract types, precisely stating pre- and post-conditions while hiding implementation details - for example, an abstract type of stacks might specify that the pop operation has input type {x : α Stack | not (empty x)}. We formalize this combination by defining FH, a polymorphic calculus with manifest contracts, and establishing fundamental properties including type soundness and relational parametricity. Our development relies on a significant technical improvement over earlier presentations of contracts: instead of introducing a denotational model to break a problematic circularity between typing, subtyping, and evaluation, we develop the metatheory of contracts in a completely syntactic fashion, omitting subtyping from the core system and recovering it post facto as a derived property.

  1. Computing Generalized Matrix Inverse on Spiking Neural Substrate.

    PubMed

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
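    The range/precision constraints discussed above force real-valued synaptic weights into a small signed-integer range before deployment. A minimal uniform-quantization sketch (illustrative only; the paper's normalization framework and its correctness guarantees are more involved):

```python
def quantize_weights(weights, bits):
    """Uniformly quantize real-valued weights into signed integers of
    the given bit width, as a fixed-range substrate requires.
    Returns (integer codes, scale) such that w ~ code * scale,
    with per-weight error bounded by scale / 2."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit signed
    w_max = max(abs(w) for w in weights)
    scale = w_max / levels if w_max else 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

# Invented weight vector quantized to 8-bit signed codes.
weights = [0.37, -1.2, 0.05, 0.99]
codes, scale = quantize_weights(weights, bits=8)
recovered = [c * scale for c in codes]
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

Bounding `max_err` analytically, as the paper's framework does for full systems of linear equations, is what makes a reduced-precision solver provably correct rather than merely approximate.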

  2. Frequency downconversion and phase noise in MIT.

    PubMed

    Watson, S; Williams, R J; Griffiths, H; Gough, W; Morris, A

    2002-02-01

    High-frequency (3-30 MHz) operation of MIT systems offers advantages in terms of the larger induced signal amplitudes compared to systems operating in the low- or medium-frequency ranges. Signal distribution at HF, however, presents difficulties, in particular with isolation and phase stability. It is therefore valuable to translate received signals to a lower frequency range through heterodyne downconversion, a process in which relative signal amplitude and phase information is in theory retained. Measurement of signal amplitude and phase is also simplified at lower frequencies. The paper presents details of measurements on a direct phase measurement system utilizing heterodyne downconversion and compares the relative performance of three circuit configurations. The 100-sample average precision of a circuit suitable for use as a receiver within an MIT system was 0.008 degrees for input amplitude -21 dBV. As the input amplitude was reduced from -21 to -72 dBV variation in the measured phase offset was observed, with the offset varying by 1.8 degrees. The precision of the circuit deteriorated with decreasing input amplitude, but was found to provide a 100-sample average precision of <0.022 degrees down to an input amplitude of -60 dBV. The characteristics of phase noise within the system are discussed.
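    The phase-retention argument behind downconversion can be illustrated numerically: mixing a tone against quadrature local oscillators and averaging over an integer number of periods recovers the input phase. A sketch with the LO at the signal frequency, so the difference component sits at DC (frequencies and phase are invented, and this is a simulation, not the paper's measurement circuit):

```python
import math

def measure_phase(freq, phase, fs, n):
    """Mix a tone of unknown phase with quadrature local oscillators at
    the same frequency and average over full periods; the sums give
    I = cos(phase)/2 and Q = -sin(phase)/2, so mixing preserves the
    phase information and atan2 recovers it."""
    i_acc = q_acc = 0.0
    for k in range(n):
        t = k / fs
        s = math.cos(2 * math.pi * freq * t + phase)
        i_acc += s * math.cos(2 * math.pi * freq * t)
        q_acc += s * math.sin(2 * math.pi * freq * t)
    return math.atan2(-q_acc / n, i_acc / n)

# 100 Hz tone of phase 0.7 rad, sampled at 10 kHz over exactly 10 periods.
recovered = measure_phase(100.0, 0.7, 10000.0, 1000)
```

With an offset LO the same I/Q products appear at the difference frequency instead of DC, which is the heterodyne case the paper studies; adding noise to `s` would let one reproduce the precision-versus-amplitude behaviour reported above.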

  3. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. A Summary of Revisions Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.; Perry, Boyd, III; Silva, Walter A.; Newman, Brett

    2014-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a class of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and additional output data, so as to provide a more useful and precise tool for gust load analysis. To enhance the usefulness of the original program, a wing control surface and a horizontal-tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented, and an analysis of the results is used to validate the modifications.

  5. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    In order to improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-compensation and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video module gathers video signals and sends them to an upper computer, where remote monitoring software (Visual Basic 6.0) displays the servo motor's running state in real time. A detailed analysis of the main error sources is also given; quantitative estimates of the contributions from bandwidth and the gyro sensor make the proportion of each error in the total more intuitive and consequently help decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
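    The AR-model-plus-Kalman-filter treatment of gyro random drift can be sketched with a scalar AR(1) state model; the AR coefficient and noise levels below are invented for illustration and are not taken from the paper:

```python
import random

def kalman_ar1(measurements, phi, q, r):
    """Scalar Kalman filter for an AR(1) drift model
    x_k = phi * x_{k-1} + w_k (process noise variance q),
    observed as z_k = x_k + v_k (measurement noise variance r)."""
    x, p = 0.0, 1.0
    estimates = []
    for z in measurements:
        # Predict step.
        x = phi * x
        p = phi * phi * p + q
        # Update step.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Simulate an AR(1) drift and noisy gyro readings of it.
random.seed(0)
truth, x = [], 0.0
for _ in range(500):
    x = 0.95 * x + random.gauss(0.0, 0.05)
    truth.append(x)
meas = [t + random.gauss(0.0, 0.5) for t in truth]
est = kalman_ar1(meas, phi=0.95, q=0.05 ** 2, r=0.5 ** 2)

mse_meas = sum((m - t) ** 2 for m, t in zip(meas, truth)) / len(truth)
mse_est = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
```

Because the measurement noise dominates the process noise here, the filtered estimate tracks the drift far more closely than the raw readings, which is the error reduction the abstract attributes to the AR/Kalman stage.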

  6. Hadron production and neutrino beams

    NASA Astrophysics Data System (ADS)

    Guglielmi, A.

    2006-11-01

    The precise measurements of the neutrino mixing parameters in the oscillation experiments at accelerators require new high-intensity and high-purity neutrino beams. Ancillary hadron-production measurements are then needed as inputs to precise calculation of neutrino beams and of atmospheric neutrino fluxes.

  7. Teleportation of quantum resources and quantum Fisher information under Unruh effect

    NASA Astrophysics Data System (ADS)

    Jafarzadeh, M.; Rangani Jahromi, H.; Amniat-Talab, M.

    2018-07-01

    Considering a pair of Unruh-DeWitt detectors, one kept inertial and the other accelerated and coupled to a scalar field, we address the teleportation of a two-qubit entangled state ( |ψ_in> = cos(θ/2) |10> + e^{iφ} sin(θ/2) |01> ) through the quantum channel created by the above system and investigate how the thermal noise induced by the Unruh effect affects quantum resources and quantum Fisher information (QFI) teleportation. Our results show that, while the teleported quantum resources and the QFI with respect to the phase parameter φ ( F_out(φ) ) decrease with increasing acceleration and effective coupling, the QFI with respect to the weight parameter θ ( F_out(θ) ) interestingly increases beyond a certain value of acceleration and effective coupling. We also find that the teleported quantum resources and the precision of estimating the phase parameter φ can be improved by a more entangled input state and a more entangled channel. Moreover, the precision of estimating the weight parameter θ increases for a maximally entangled input state only in the large-acceleration regime, while it does not change considerably for either maximally or partially entangled states of the channel. The influence of the Unruh effect on the fidelity of teleportation is also investigated. We show that for small effective coupling the average fidelity is always larger than 2/3.

  8. Irrigation timing and volume affects growth of container grown maples

    USDA-ARS?s Scientific Manuscript database

    Container nursery production requires large inputs of water and nutrients but frequently irrigation inputs exceed plant demand and lack application precision or are not applied at optimal times for plant production. The results from this research can assist producers in developing irrigation manage...

  9. Forecasting tidal marsh elevation and habitat change through fusion of Earth observations and a process model

    USGS Publications Warehouse

    Byrd, Kristin B.; Windham-Myers, Lisamarie; Leeuw, Thomas; Downing, Bryan D.; Morris, James T.; Ferner, Matthew C.

    2016-01-01

    Reducing uncertainty in data inputs at relevant spatial scales can improve tidal marsh forecasting models, and their usefulness in coastal climate change adaptation decisions. The Marsh Equilibrium Model (MEM), a one-dimensional mechanistic elevation model, incorporates feedbacks of organic and inorganic inputs to project elevations under sea-level rise scenarios. We tested the feasibility of deriving two key MEM inputs—average annual suspended sediment concentration (SSC) and aboveground peak biomass—from remote sensing data in order to apply MEM across a broader geographic region. We analyzed the precision and representativeness (spatial distribution) of these remote sensing inputs to improve understanding of our study region, a brackish tidal marsh in San Francisco Bay, and to test the applicable spatial extent for coastal modeling. We compared biomass and SSC models derived from Landsat 8, DigitalGlobe WorldView-2, and hyperspectral airborne imagery. Landsat 8-derived inputs were evaluated in a MEM sensitivity analysis. Biomass models were comparable although peak biomass from Landsat 8 best matched field-measured values. The Portable Remote Imaging Spectrometer SSC model was most accurate, although a Landsat 8 time series provided annual average SSC estimates. Landsat 8-measured peak biomass values were randomly distributed, and annual average SSC (30 mg/L) was well represented in the main channels (IQR: 29–32 mg/L), illustrating the suitability of these inputs across the model domain. Trend response surface analysis identified significant diversion between field and remote sensing-based model runs at 60 yr due to model sensitivity at the marsh edge (80–140 cm NAVD88), although at 100 yr, elevation forecasts differed less than 10 cm across 97% of the marsh surface (150–200 cm NAVD88). 
Results demonstrate the utility of Landsat 8 for landscape-scale tidal marsh elevation projections due to its comparable performance with the other sensors, temporal frequency, and cost. Integration of remote sensing data with MEM should advance regional projections of marsh vegetation change by better parameterizing MEM inputs spatially. Improving information for coastal modeling will support planning for ecosystem services, including habitat, carbon storage, and flood protection.

  10. Digital Ratiometer

    NASA Technical Reports Server (NTRS)

    Beer, R.

    1985-01-01

    Small, low-cost comparator with 24-bit precision yields ratio signal from pair of analog or digital input signals. Arithmetic logic (bit-slice) chips sample two 24-bit analog-to-digital converters approximately once every millisecond and accumulate the results in two 24-bit registers. Approach readily modified to arbitrary precision.
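    The accumulate-and-divide scheme the brief describes can be sketched in software. This is an illustrative model only (all names hypothetical), not the instrument's bit-slice logic:

```python
# Illustrative model of the ratiometer's accumulate-and-divide scheme.
# All names are hypothetical; the real device uses bit-slice logic chips.
MASK24 = (1 << 24) - 1  # width of the hardware accumulation registers

def accumulate(samples):
    """Sum ADC samples into a 24-bit register (wraps on overflow)."""
    acc = 0
    for s in samples:
        acc = (acc + s) & MASK24
    return acc

def ratio(samples_a, samples_b):
    """Ratio of the two accumulated input channels."""
    return accumulate(samples_a) / accumulate(samples_b)

# Channel A at twice channel B's level yields a ratio of 2.0
print(ratio([2000] * 8, [1000] * 8))  # -> 2.0
```

    Like fixed-width hardware accumulators, the 24-bit registers simply wrap on overflow.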

  11. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an otherwise aberration-free system suffer defocus blur due to motion in depth and/or zooming. A precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but the complexity of the degradation process makes an analytic PSF model difficult to identify. Inspired by the similarity, in probabilistic and statistical terms, between the quantum process and the imaging process, a modified multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), it uses an improved quantum neuron model in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts 2 texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that this method achieves high precision and strong generalization ability.

  12. Equipment and technology in surgical robotics.

    PubMed

    Sim, Hong Gee; Yip, Sidney Kam Hung; Cheng, Christopher Wai Sam

    2006-06-01

    Contemporary medical robotic systems used in urologic surgery usually consist of a computer and a mechanical device to carry out the designated task with an image acquisition module. These systems are typically from one of the two categories: offline or online robots. Offline robots, also known as fixed path robots, are completely automated with pre-programmed motion planning based on pre-operative imaging studies where precise movements within set confines are carried out. Online robotic systems rely on continuous input from the surgeons and change their movements and actions according to the input in real time. This class of robots is further divided into endoscopic manipulators and master-slave robotic systems. Current robotic surgical systems have resulted in a paradigm shift in the minimally invasive approach to complex laparoscopic urological procedures. Future developments will focus on refining haptic feedback, system miniaturization and improved augmented reality and telesurgical capabilities.

  13. Multilayered analog optical differentiating device: performance analysis on structural parameters.

    PubMed

    Wu, Wenhui; Jiang, Wei; Yang, Jiang; Gong, Shaoxiang; Ma, Yungui

    2017-12-15

    Analog optical devices (AODs) able to perform mathematical computations have recently gained strong research interest for their potential application as accelerating hardware in traditional electronic computers. The performance of these wavefront-processing devices is primarily determined by the accuracy of the angular spectral engineering. In this Letter, we show that the multilayer technique can be a promising method to flexibly design AODs according to the input wavefront conditions. As examples, various Si-SiO2-based multilayer films are designed that can precisely perform second-order differentiation for input wavefronts of different Fourier spectrum widths. The minimum number of sublayers and the tolerable thickness uncertainty for device performance are discussed. A technique of rescaling the Fourier spectrum intensity is proposed in order to further improve practical feasibility. These results are thought to be instrumental for the development of AODs.

  14. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. 
Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading. PMID:23802936

  15. Study on SOC wavelet analysis for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Improving the prediction accuracy of the state of charge (SOC) can reduce the conservatism and complexity of control strategies for LiFePO4 battery systems, such as scheduling, optimization and planning. Based on analysis of the relationship between SOC historical data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model implements the forecast step; measured external-stress data are used to update the parameter estimates in the model, implementing the correction step, so that the forecast model can adapt to the operating point of the LiFePO4 battery as it varies over the rated charge/discharge operating region. Test results show that the method yields a high-precision prediction model even when the input and output of the LiFePO4 battery change frequently.

  16. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    NASA Astrophysics Data System (ADS)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. Numerous kinds of such phenomena are currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precisely timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that long learning periods are important for improving the network's learning capacity, and we discuss this ability in the presence of distinct inhibitory currents.

  17. A modular multiple use system for precise time and frequency measurement and distribution

    NASA Technical Reports Server (NTRS)

    Reinhardt, V. S.; Adams, W. S.; Lee, G. M.; Bush, R. L.

    1978-01-01

    A modular CAMAC-based system is described that was developed to meet a variety of precise time and frequency measurement and distribution needs. The system is based on a generalization of the dual mixer concept. Using a 16-channel 100 ns event clock, the system can intercompare the phase of 16 frequency standards with subpicosecond resolution; it has a noise floor of 26 fs and a long-term stability on the order of 1 ps or better. The system also uses a digitally controlled crystal oscillator in a control loop to provide an offsettable 5 MHz output with subpicosecond phase-tracking capability. A detailed description of the system is given, including theory of operation and performance. A method is discussed for improving the performance of the dual mixer technique when phase balancing of the two input ports cannot be accomplished.

  18. Radio-frequency power-assisted performance improvement of a magnetohydrodynamic power generator

    NASA Astrophysics Data System (ADS)

    Murakami, Tomoyuki; Okuno, Yoshihiro; Yamasaki, Hiroyuki

    2005-12-01

    We describe a radio-frequency (rf) electromagnetic-field-assisted magnetohydrodynamic power generation experiment, in which an inductively coupled rf field (13.56 MHz, 5.2 kW) is continuously supplied to the disk generator. The rf power assists precise plasma ignition, by which otherwise irregular plasma behavior is stabilized. The rf heating suppresses the ionization instability and homogenizes the nonuniformity of the plasma structures. The power-generating performance is significantly improved with the aid of the rf power under a wide range of seeding conditions: insufficient, optimum, and excessive seed fractions. The increment of the enthalpy extraction ratio, around 2%, is significantly greater than the net rf power's fraction of the thermal input, 0.16%.

  19. Swing arm profilometer: high accuracy testing for large reaction-bonded silicon carbide optics with a capacitive probe

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun

    2017-08-01

    A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of SAP by using a capacitive probe and implementing calibrations. The experimental result, compared with an interferometer test, shows an accuracy of 0.068 μm root-mean-square (RMS), and maps reconstructed from 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a powerful capability to provide a major input to high-precision grinding.

  20. Diagnosis of the Computer-Controlled Milling Machine, Definition of the Working Errors and Input Corrections on the Basis of Mathematical Model

    NASA Astrophysics Data System (ADS)

    Starikov, A. I.; Nekrasov, R. Yu; Teploukhov, O. J.; Soloviev, I. V.; Narikov, K. A.

    2016-10-01

    Machinery and equipment improve constructively as science and technology advance, and requirements for quality and longevity rise accordingly. That is, the requirements for surface quality and precision in manufacturing oil and gas equipment parts are constantly increasing. Production of oil and gas engineering products on modern machine tools with computer numerical control is a complex synthesis of the mechanical and electrical parts of the machine as well as the processing procedure. The mechanical part of the machine wears during operation, and mathematical errors accumulate in the electrical part. Thus, the above-mentioned shortcomings of any of these parts of metalworking equipment affect the manufacturing process as a whole and, as a result, lead to flaws.

  1. Environmental Loss Characterization of an Advanced Stirling Convertor (ASC-E2) Insulation Package Using a Mock Heater Head

    NASA Technical Reports Server (NTRS)

    Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a specified electrical power output for a given net heat input. While electrical power output can be precisely quantified, thermal power input to the Stirling cycle cannot be directly measured. In an effort to improve net heat input predictions, the Mock Heater Head was developed with the same relative thermal paths as a convertor, using a conducting rod to represent the Stirling cycle, and tested to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. The Mock Heater Head also served as the pathfinder for a higher fidelity version of validation test hardware, known as the Thermal Standard. This paper describes how the Mock Heater Head was tested and utilized to validate a process for the Thermal Standard.

  2. ASDTIC: A feedback control innovation

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Schoenfeld, A. D.

    1972-01-01

    The ASDTIC (Analog Signal to Discrete Time Interval Converter) control subsystem provides precise output control of high performance aerospace power supplies. The key to ASDTIC operation is that it stably controls output by sensing output energy change as well as output magnitude. The ASDTIC control subsystem and control module were developed to improve power supply performance during static and dynamic input voltage and output load variations, to reduce output voltage or current regulation due to component variations or aging, to maintain a stable feedback control with variations in the loop gain or loop time constants, and to standardize the feedback control subsystem for power conditioning equipment.

  3. Input reconstruction of chaos sensors.

    PubMed

    Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin

    2008-06-01

    Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics, owing to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested method.

  4. ASDTIC - A feedback control innovation.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Schoenfeld, A. D.

    1972-01-01

    The ASDTIC (analog signal to discrete time interval converter) control subsystem provides precise output control of high performance aerospace power supplies. The key to ASDTIC operation is that it stably controls output by sensing output energy change as well as output magnitude. The ASDTIC control subsystem and control module were developed to improve power supply performance during static and dynamic input voltage and output load variations, to reduce output voltage or current regulation due to component variations or aging, to maintain a stable feedback control with variations in the loop gain or loop time constants, and to standardize the feedback control subsystem for power conditioning equipment.

  5. High-performance, multi-faceted research sonar electronics

    NASA Astrophysics Data System (ADS)

    Moseley, Julian W.

    This thesis describes the design, implementation and testing of a research sonar system capable of performing complex applications such as coherent Doppler measurement and synthetic aperture imaging. Specifically, this thesis presents an approach to improve the precision of the timing control and increase the signal-to-noise ratio of an existing research sonar. A dedicated timing control subsystem and hardware drivers are designed to improve the efficiency of the old sonar's timing operations. A low-noise preamplifier is designed to reduce the noise component in the received signal arriving at the input of the system's data acquisition board. Noise analysis, frequency response, and timing simulation data are generated in order to predict the functionality and performance improvements expected when the subsystems are implemented. Experimental data gathered using these subsystems are presented and shown to closely match the simulation results, thus verifying performance.

  6. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    NASA Astrophysics Data System (ADS)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

    Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data alone are of little use for runoff prediction and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually poorly reliable because of the fast motion and limited spatial extent of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network supplying the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model's reliability for long-term predictions, as well as different time lags to see how past information can improve results. Results show that the proposed approach allows a significant improvement in the prediction accuracy of the WRF model over the Singapore urban area.

  7. A Field-Programmable Gate Array (FPGA) TDC for the Fermilab SeaQuest (E906) Experiment and Its Test with a Novel External Wave Union Launcher

    NASA Astrophysics Data System (ADS)

    Wang, Su-Yin; Wu, Jinyuan; Yao, Shi-Hong; Chang, Wen-Chen

    2014-12-01

    We developed a field-programmable gate array (FPGA) TDC module for the tracking detectors of the Fermilab SeaQuest (E906) experiment, including drift chambers, proportional tubes, and hodoscopes. This 64-channel TDC module had a 6U VMEbus form factor and was equipped with a low-power, radiation-hardened Microsemi ProASIC3 Flash-based FPGA. The design of the new FPGA firmware (Run2-TDC) aimed to reduce the data volume and data acquisition (DAQ) deadtime. The firmware digitized multiple input hits of both polarities while allowing users to turn on a multiple-hit elimination logic to remove after-pulses in the wire chambers and proportional tubes. A scaler was implemented in the firmware to allow for recording the number of hits in each channel. The TDC resolution was determined by an internal cell delay of 450 ps. A measurement precision of 200 ps was achieved. We used five kinds of tests to ensure the qualification of 93 TDCs in mass production. We utilized the external wave union launcher in our test to improve the TDC's measurement precision and also to illustrate how to construct the Wave Union TDC using an existing multi-hit TDC without modifying its firmware. Measurement precision was improved by a factor of about two (108 ps) based on the four-edge wave union. Better measurement precision (69 ps) was achieved by combining the approaches of Wave Union TDC and multiple-channel ganging.
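    The roughly twofold precision gain from the four-edge wave union is what simple averaging of independent timestamps predicts: jitter shrinks as 1/√N with N averaged edges (200 ps/√4 = 100 ps, close to the reported 108 ps). A Monte Carlo sketch of this scaling, under the simplifying assumption of independent Gaussian jitter per edge:

```python
import random
import statistics

def measure_edge(true_t_ps, sigma_ps):
    """One TDC edge timestamp with Gaussian jitter (simplified model)."""
    return random.gauss(true_t_ps, sigma_ps)

def wave_union_measure(true_t_ps, sigma_ps, n_edges=4):
    """Average n independent edge timestamps: jitter ~ sigma/sqrt(n)."""
    return statistics.mean(measure_edge(true_t_ps, sigma_ps)
                           for _ in range(n_edges))

random.seed(0)
single = statistics.stdev(measure_edge(0.0, 200.0) for _ in range(20000))
union = statistics.stdev(wave_union_measure(0.0, 200.0) for _ in range(20000))
print(round(single), round(union))  # single ~200 ps, four-edge union ~100 ps
```

    This idealized model ignores correlated jitter between edges, which is one reason the measured 108 ps falls slightly short of the √4 prediction.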

  8. Precision matters for position decoding in the early fly embryo

    NASA Astrophysics Data System (ADS)

    Petkova, Mariela D.; Tkacik, Gasper; Wieschaus, Eric F.; Bialek, William; Gregor, Thomas

    Genetic networks can determine cell fates in multicellular organisms with precision that often reaches the physical limits of the system. However, it is unclear how the organism uses this precision and whether it has biological content. Here we address this question in the developing fly embryo, in which a genetic network of patterning genes reaches 1% precision in positioning cells along the embryo axis. The network consists of three interconnected layers: an input layer of maternal gradients, a processing layer of gap genes, and an output layer of pair-rule genes with seven-striped patterns. From measurements of gap gene protein expression in hundreds of wild-type embryos we construct a ``decoder'', which is a look-up table that determines cellular positions from the concentration means, variances and co-variances. When we apply the decoder to measurements in mutant embryos lacking various combinations of the maternal inputs, we predict quantitative changes in the output layer such as missing, altered or displaced stripes. We confirm these predictions by measuring pair-rule expression in the mutant embryos. Our results thereby show that the precision of the patterning network is biologically meaningful and a necessary feature for decoding cell positions in the early fly embryo.
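    The "decoder" described above is, in spirit, a maximum-likelihood look-up: a measured concentration is assigned to the position whose recorded mean (and variance) makes that reading most probable. A toy one-gene version with a synthetic sigmoid gradient (not the paper's gap-gene data; all numbers illustrative):

```python
import math

# Synthetic expression profile: mean concentration at 100 positions along
# the axis (a sigmoid gradient), with constant measurement noise sd.
positions = [i / 99 for i in range(100)]
mean = [1 / (1 + math.exp((x - 0.5) / 0.05)) for x in positions]
sd = 0.05

def decode(c):
    """Position with the highest Gaussian likelihood of producing reading c.

    With constant sd this reduces to nearest-mean decoding; with
    position-dependent variances the full log-likelihood would be used.
    """
    best = max(range(len(positions)),
               key=lambda i: -((c - mean[i]) ** 2) / (2 * sd ** 2))
    return positions[best]

print(decode(0.5))  # a mid-range reading decodes to near mid-embryo
```

    The paper's decoder generalizes this to multiple gap genes with full covariances, which is what yields the reported 1% positional precision.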

  9. Audio-visual speech cue combination.

    PubMed

    Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick

    2010-04-16

    Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
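    The Bayesian maximum-likelihood benchmark mentioned above has a simple closed form for two independent Gaussian cues: each estimate is weighted inversely to its variance, and the fused variance is smaller than either input's. A minimal sketch with hypothetical numbers:

```python
def ml_combine(est1, var1, est2, var2):
    """Variance-weighted (maximum-likelihood) fusion of two independent cues."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    fused = w1 * est1 + (1 - w1) * est2
    fused_var = 1 / (1 / var1 + 1 / var2)  # always below min(var1, var2)
    return fused, fused_var

# Two equally reliable audio and visual estimates: variance halves
est, var = ml_combine(10.0, 4.0, 12.0, 4.0)
print(est, var)  # -> 11.0 2.0
```

    Probability summation predicts a smaller gain than this bound, which is why improvements exceeding even the maximum-likelihood prediction point to a common encoding process.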

  10. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    PubMed Central

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483

  11. Hox2 Genes Are Required for Tonotopic Map Precision and Sound Discrimination in the Mouse Auditory Brainstem.

    PubMed

    Karmakar, Kajari; Narita, Yuichi; Fadok, Jonathan; Ducret, Sebastien; Loche, Alberto; Kitazawa, Taro; Genoud, Christel; Di Meglio, Thomas; Thierry, Raphael; Bacelo, Joao; Lüthi, Andreas; Rijli, Filippo M

    2017-01-03

    Tonotopy is a hallmark of auditory pathways and provides the basis for sound discrimination. Little is known about the involvement of transcription factors in brainstem cochlear neurons orchestrating the tonotopic precision of pre-synaptic input. We found that in the absence of Hoxa2 and Hoxb2 function in Atoh1-derived glutamatergic bushy cells of the anterior ventral cochlear nucleus, broad input topography and sound transmission were largely preserved. However, fine-scale synaptic refinement and sharpening of isofrequency bands of cochlear neuron activation upon pure tone stimulation were impaired in Hox2 mutants, resulting in defective sound-frequency discrimination in behavioral tests. These results establish a role for Hox factors in tonotopic refinement of connectivity and in ensuring the precision of sound transmission in the mammalian auditory circuit. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Optical timing receiver for the NASA laser ranging system. Part 2: High precision time interval digitizer

    NASA Technical Reports Server (NTRS)

    Leskovar, B.; Turko, B.

    1977-01-01

    The development of a high-precision time interval digitizer is described. The time digitizer is a 10 ps resolution stopwatch covering a range of up to 340 ms. The measured time interval is determined as the separation between the leading edges of a pair of pulses applied externally to the start and stop inputs of the digitizer. Employing an interpolation technique and a 50 MHz high-precision master oscillator, the equivalent of a 100 GHz clock frequency standard is achieved. The absolute accuracy and stability of the digitizer are determined by the external 50 MHz master oscillator, which serves as the standard time marker. The start and stop pulses are fast signals with 1 ns rise times, conforming to the Nuclear Instrument Module (NIM) standard, and are detected by means of tunnel diode discriminators; the firing levels of the discriminators define the start and stop points between which the time interval is digitized.
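    The quoted figures are mutually consistent: a 50 MHz master clock has a 20 ns period, so reaching 10 ps resolution requires interpolating each period by a factor of 2000, which is equivalent to counting a 100 GHz clock. A quick arithmetic check:

```python
master_hz = 50e6        # master oscillator frequency
resolution_s = 10e-12   # quoted digitizer resolution (10 ps)

period_s = 1 / master_hz                 # 20 ns master clock period
interp_factor = period_s / resolution_s  # subdivisions per clock period
equiv_clock_hz = 1 / resolution_s        # equivalent clock frequency

print(round(interp_factor), round(equiv_clock_hz / 1e9))  # -> 2000 100 (GHz)
```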

  13. Spike Timing and Reliability in Cortical Pyramidal Neurons: Effects of EPSC Kinetics, Input Synchronization and Background Noise on Spike Timing

    PubMed Central

    Rodriguez-Molina, Victor M.; Aertsen, Ad; Heck, Detlef H.

    2007-01-01

    In vivo studies have shown that neurons in the neocortex can generate action potentials at high temporal precision. The mechanisms controlling timing and reliability of action potential generation in neocortical neurons, however, are still poorly understood. Here we investigated the temporal precision and reliability of spike firing in cortical layer V pyramidal cells at near-threshold membrane potentials. Timing and reliability of spike responses were a function of EPSC kinetics, temporal jitter of population excitatory inputs, and of background synaptic noise. We used somatic current injection to mimic population synaptic input events and measured spike probability and spike time precision (STP), the latter defined as the time window (Δt) holding 80% of response spikes. EPSC rise and decay times were varied over the known physiological spectrum. At spike threshold level, EPSC decay time had a stronger influence on STP than rise time. Generally, STP was highest (≤2.45 ms) in response to synchronous compounds of EPSCs with fast rise and decay kinetics. Compounds with slow EPSC kinetics (decay time constants > 6 ms) triggered spikes at lower temporal precision (≥6.58 ms). We found an overall linear relationship between STP and spike delay. The difference in STP between fast and slow compound EPSCs could be reduced by incrementing the amplitude of slow compound EPSCs. The introduction of a temporal jitter to compound EPSCs had a comparatively small effect on STP, with a tenfold increase in jitter resulting in only a fivefold decrease in STP. In the presence of simulated synaptic background activity, precisely timed spikes could still be induced by fast EPSCs, but not by slow EPSCs. PMID:17389910
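The STP measure defined above (the smallest window Δt holding 80% of response spikes) is easy to compute from a set of spike times. This is an illustrative re-implementation, not the authors' analysis code:

```python
import math

def spike_time_precision(spike_times, frac=0.8):
    """Smallest time window containing `frac` of the spikes.

    Matches the STP definition in the text (window holding 80% of
    response spikes) via a sliding window over the sorted spike times.
    """
    ts = sorted(spike_times)
    n = len(ts)
    k = math.ceil(frac * n)            # number of spikes the window must hold
    # slide a k-spike window and take the tightest one
    return min(ts[i + k - 1] - ts[i] for i in range(n - k + 1))
```

For example, spikes at 10.0, 10.2, 10.3, 10.4 and 12.0 ms give an STP of 0.4 ms: the outlier at 12.0 ms falls outside the 80% window.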

  14. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

    PubMed Central

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), while the other relies on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule’s error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism. PMID:27532262
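The contrast between an instantaneous (INST-like) and a filtered (FILT-like) error signal can be caricatured in a few lines. The binary spike-train representation, exponential kernel, and time constant below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def error_signals(target, actual, tau_ms=10.0, dt_ms=1.0):
    """Return (instantaneous, filtered) error traces for two spike trains.

    INST-like: the raw difference target[t] - actual[t] at each time step.
    FILT-like: the same difference passed through an exponential low-pass
    filter, yielding the smoother signal that drives gentler weight updates.
    """
    decay = math.exp(-dt_ms / tau_ms)
    inst, filt, f = [], [], 0.0
    for tgt, act in zip(target, actual):
        e = tgt - act          # +1 missing spike, -1 spurious spike, 0 match
        f = f * decay + e      # exponentially filtered error
        inst.append(e)
        filt.append(f)
    return inst, filt
```

A weight update proportional to `filt` persists for several milliseconds after each error event, which is the smoothing effect the abstract credits for the FILT rule's convergence.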

  15. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    PubMed

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), while the other relies on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.

  16. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we use a simulation sensitivity study to examine the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while the assumed changes in the other variables are speculative, highlighting the need for improved metrology and awareness.
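The way independent input errors accumulate into a total CD error budget can be illustrated with a first-order root-sum-square combination. The sensitivities and error magnitudes below are invented placeholders, not 14-nm mask data:

```python
import math

def rss_cd_impact(sensitivities, errors):
    """Root-sum-square wafer-CD error from independent input errors.

    First-order propagation: each parameter error is scaled by an assumed
    wafer-CD sensitivity, then the contributions combine in quadrature.
    """
    return math.sqrt(sum((s * e) ** 2 for s, e in zip(sensitivities, errors)))

# Hypothetical parameter order: mask CD bias (nm CD per nm), sidewall angle
# (nm CD per deg), absorber thickness (nm CD per nm), refractive index
# (nm CD per 0.01 index unit). All numbers are illustrative only.
sens = [0.25, 0.8, 0.05, 0.3]
errs = [1.2, 0.5, 2.0, 1.0]
```

A budget like this makes the abstract's point concrete: a single poorly known parameter with a large sensitivity-error product can dominate the total.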

  17. Ozone Profiles and Tropospheric Ozone from Global Ozone Monitoring Experiment

    NASA Technical Reports Server (NTRS)

    Liu, X.; Chance, K.; Sioris, C. E.; Spurr, R. J. D.; Kurosu, T. P.; Martin, R. V.; Newchurch, M. J.; Bhartia, P. K.

    2003-01-01

    Ozone profiles are derived from backscattered radiances in the ultraviolet spectra (290-340 nm) measured by the nadir-viewing Global Ozone Monitoring Experiment using optimal estimation. Tropospheric O3 is directly retrieved with the tropopause as one of the retrieval levels. To optimize the retrieval and improve the fitting precision needed for tropospheric O3, we perform extensive wavelength and radiometric calibrations and improve forward model inputs. Retrieved O3 profiles and tropospheric O3 agree well with coincident ozonesonde measurements, and the integrated total O3 agrees very well with Earth Probe TOMS and Dobson/Brewer total O3. The global distribution of tropospheric O3 clearly shows the influences of biomass burning, convection, and air pollution, and is generally consistent with our current understanding.
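In its linear form, the optimal-estimation retrieval used above is the standard maximum a posteriori update combining a prior profile with the measurement through the forward-model Jacobian. The following generic sketch (with an invented two-level state and three-wavelength measurement) shows the structure only; it is not the authors' retrieval:

```python
import numpy as np

def optimal_estimation(y, K, x_a, S_a, S_e):
    """Linear MAP retrieval step.

    y   : measurement vector (e.g. radiances), with noise covariance S_e
    K   : Jacobian of the forward model, y ~ K @ x
    x_a : a priori state (e.g. climatological ozone profile), covariance S_a
    Returns the retrieved state and its posterior covariance.
    """
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat
```

With small measurement noise the retrieval reproduces the true state; with large noise it relaxes toward the prior, which is why the fitting precision mentioned in the abstract matters for tropospheric ozone.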

  18. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    PubMed

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
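Reduced weight bit precision of the kind studied here can be emulated in software by uniform quantization. The rounding scheme below is a generic illustration, not the paper's exact procedure:

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniformly quantize a weight array to 2**bits levels over its range."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1          # number of quantization steps
    step = (hi - lo) / levels
    if step == 0:                   # constant weights: nothing to quantize
        return w.copy()
    return lo + np.round((w - lo) / step) * step
```

Running a trained network with `quantize_weights(w, 2)` in place of `w` is the software analogue of the "almost two bits" hardware limit the abstract reports, and training against the quantized weights is the adapted training mechanism it alludes to.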

  19. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

    PubMed Central

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B.; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time. PMID:26217169

  20. Detecting the spatial chirp signals by fractional Fourier lens with transformation materials

    NASA Astrophysics Data System (ADS)

    Chen, J.; Hu, J.

    2018-02-01

    The fractional Fourier transform (FrFT) is the general form of the Fourier transform and an important tool in signal processing. As one typical application of the FrFT, detecting the chirp rate (CR, also known as the rate of frequency change) of a chirp signal is important in many optical measurements. An optical FrFT based on a graded-index lens fails to detect high-CR chirps because the short propagation distance of the impulse in the lens weakens the paraxial approximation condition. With the help of transformation optics (TO), an improved FrFT lens is proposed to adjust the high CR as well as the impulse location of the given input chirp signal. The designed transformation materials implement an effect of space compression, making the input chirp signal equivalent to one with a lower CR, so that the system better satisfies the paraxial approximation. As a result, this lens improves the detection precision for high CRs. Numerical simulations verified the design. The proposed device may have both theoretical and practical value, and the design demonstrates the ability and flexibility of TO in spatial signal processing.
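The underlying signal model can be checked numerically: for a complex linear chirp exp(jπ·CR·t²), the CR is twice the quadratic coefficient of the phase divided by 2π, recoverable by a polynomial fit to the unwrapped phase. The paper's device performs the detection optically; this sketch only illustrates the quantity being measured, with an arbitrary CR value:

```python
import numpy as np

def estimate_chirp_rate(signal, t):
    """Fit phase(t) ~ pi*cr*t**2 + 2*pi*f0*t + c and return cr."""
    phase = np.unwrap(np.angle(signal))
    a, _, _ = np.polyfit(t, phase, 2)   # quadratic coefficient equals pi*cr
    return a / np.pi

# Synthetic linear chirp with an illustrative chirp rate of 50 Hz/s.
t = np.linspace(0.0, 1.0, 4000)
cr_true = 50.0
sig = np.exp(1j * np.pi * cr_true * t ** 2)
```

The sampling here is dense enough that successive phase increments stay below π, so `np.unwrap` recovers the quadratic phase exactly; a higher CR with the same sampling would break this, which is the digital analogue of the paraxial-condition limit discussed above.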

  1. The Copernicus POD Service and beyond: Scientific exploitation of the orbit-related data and products

    NASA Astrophysics Data System (ADS)

    Peter, Heike; Fernández, Jaime; Fernández, Carlos; Féménias, Pierre

    2017-04-01

    The Copernicus POD (Precise Orbit Determination) Service is part of the Copernicus Processing Data Ground Segment (PDGS) of the Sentinel-1, -2 and -3 missions. A GMV-led consortium is operating the Copernicus POD Service being in charge of generating precise orbital products and auxiliary data files for their use as part of the processing chains of the respective Sentinel PDGS. The orbital products are available through the dedicated Copernicus data hub. The Copernicus POD Service is supported by the Copernicus POD Quality Working Group (QWG) for the validation of the orbit product accuracy. The QWG is delivering independent orbit solutions for the satellites. The cross-comparison of all these orbit solutions is essential to monitor and to improve the orbit accuracy because for Sentinel-1 and -2 this is the only possibility to externally assess the quality of the orbits. Each of the Sentinel-1, -2, and -3 satellites carries dual-frequency GPS receivers delivering the necessary measurements for the precise orbit determination of the satellites. The Sentinel-3 satellites are additionally equipped with a DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) receiver and a Laser Retro Reflector for Satellite Laser Ranging. These two additional observation techniques allow for independent validation of the GPS-derived orbit determination results and for studying biases between the different techniques. The scientific exploitation of the orbit determination and the corresponding input data is manifold. Sophisticated satellite macro models improve the modelling of the non-gravitational forces acting on the satellite. On the other hand, comparisons to orbits based on pure empirical modelling of the non-gravitational forces help to sort out deficiencies in the satellite geometry information. The dual-frequency GPS data delivered by the satellites can give valuable input for ionospheric studies important for Space Weather research. 
So-called kinematic orbits, being a time series of discrete satellite positions derived from GPS, may be used for the modelling of the time-variable low degree harmonics of the Earth's gravity field. This is very important to support filling the possible gap between the dedicated gravity field missions GRACE and GRACE Follow-on. Many other important research topics could be mentioned here as well. Therefore, a broad scientific community could benefit from open access not only to the operational orbits (which are partially available today), but also to the GPS observations, satellite attitude and other ancillary information needed to perform POD. This poster first presents the status of the Copernicus POD Service in terms of products generated, accuracy and timeliness of the operational orbital products and all potential inputs available. Then the main focus of the poster is to outline the possibilities for scientific exploitation of the orbit determination and the corresponding input data. The great scientific potential of these data is explained to confirm the need to make them publicly available to scientists.

  2. GABAergic circuits control input-spike coupling in the piriform cortex.

    PubMed

    Luna, Victor M; Schoppa, Nathan E

    2008-08-27

    Odor coding in mammals is widely believed to involve synchronized gamma frequency (30-70 Hz) oscillations in the first processing structure, the olfactory bulb. How such inputs are read in downstream cortical structures, however, is not known. Here we used patch-clamp recordings in rat piriform cortex slices to examine cellular mechanisms that shape how the cortex integrates inputs from bulb mitral cells. Electrical stimulation of mitral cell axons in the lateral olfactory tract (LOT) resulted in excitation of pyramidal cells (PCs), which was followed approximately 10 ms later by inhibition that was highly reproducible between trials in its onset time. This inhibition was somatic in origin and appeared to be driven through a feedforward mechanism, wherein GABAergic interneurons were directly excited by mitral cell axons. The precise inhibition affected action potential firing in PCs in two distinct ways. First, by abruptly terminating PC excitation, it limited the PC response to each EPSP to exactly one, precisely timed action potential. In addition, inhibition limited the summation of EPSPs across time, such that PCs fired action potentials in strong preference for synchronized inputs arriving in a time window of <5 ms. Both mechanisms would help ensure that PCs respond faithfully and selectively to mitral cell inputs arriving as a synchronized gamma frequency pattern.
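The effect described above, delayed feedforward inhibition restricting summation to a narrow coincidence window, can be caricatured with a toy leaky integrator: each input evokes an EPSP, and a strong inhibitory event arrives a fixed ~10 ms later. All parameter values below are illustrative, not measured:

```python
import math

def peak_depolarization(input_times_ms, tau_m=20.0, epsp=1.0,
                        inh_delay=10.0, inh=3.0, dt=0.1):
    """Peak membrane deviation for a set of input times (ms); toy model."""
    v, peak = 0.0, 0.0
    decay = math.exp(-dt / tau_m)
    for step in range(600):                      # simulate 60 ms
        t = step * dt
        v *= decay                               # passive membrane leak
        for ti in input_times_ms:
            if ti <= t < ti + dt:
                v += epsp                        # EPSP from this input
            if ti + inh_delay <= t < ti + inh_delay + dt:
                v -= inh                         # delayed feedforward IPSP
        v = max(v, -1.0)                         # crude reversal-potential floor
        peak = max(peak, v)
    return peak

# Inputs 2 ms apart summate; inputs 15 ms apart are cut off by the
# inhibition triggered by the first input, so they cannot summate.
sync_peak = peak_depolarization([10.0, 12.0])
async_peak = peak_depolarization([10.0, 25.0])
```

Only the synchronized pair exceeds the single-EPSP amplitude, reproducing the preference for inputs arriving within a few milliseconds of each other.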

  3. Improvement of the grid-connect current quality using novel proportional-integral controller for photovoltaic inverters.

    PubMed

    Cheng, Yuhua; Chen, Kai; Bai, Libing; Yang, Jing

    2014-02-01

    Precise control of the grid-connected current is a challenge in photovoltaic inverter research. Traditional Proportional-Integral (PI) control technology cannot eliminate steady-state error when tracking the sinusoidal signal from the grid, which results in a very high total harmonic distortion in the grid-connected current. A novel PI controller has been developed in this paper, in which the sinusoidal wave is discretized into an N-step input signal, with N determined by the control frequency, to eliminate the steady-state error of the system. The effect of periodic errors caused by the dead zone of the power switch and the conduction voltage drop can be avoided; the current tracking accuracy and current harmonic content can also be improved. Based on the proposed PI controller, a 700 W photovoltaic grid-connected inverter is developed and validated. The improvement has been demonstrated through experimental results.
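The idea of discretizing the sinusoidal reference into an N-step signal tracked by a PI loop can be sketched on a toy plant. The gains and first-order plant below are invented for illustration and are not the paper's controller or inverter model:

```python
import math

def track_stepped_sine(N=100, periods=3, kp=0.8, ki=2.0, a=0.5):
    """Worst absolute tracking error over the final reference period."""
    # N-step discretization of one sinusoid period (N set by control frequency)
    ref = [math.sin(2 * math.pi * k / N) for k in range(N)]
    y, integ, errs = 0.0, 0.0, []
    for k in range(N * periods):
        r = ref[k % N]                   # stepped sinusoidal setpoint
        e = r - y
        integ += e                       # integral action removes offset
        u = kp * e + ki * integ          # PI control law
        y = a * y + (1 - a) * u          # toy first-order plant response
        errs.append(abs(e))
    return max(errs[-N:])
```

Because each of the N steps is a constant setpoint for the PI loop, the integrator can drive the per-step error toward zero, which is the mechanism the abstract credits for eliminating the steady-state error of plain sinusoid tracking.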

  4. Input-dependent frequency modulation of cortical gamma oscillations shapes spatial synchronization and enables phase coding.

    PubMed

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-02-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25-80Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. 
This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity.
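The reduction to weakly coupled oscillators can be reproduced in a few lines: for two symmetrically coupled phase oscillators, the locked phase difference Δφ satisfies sin(Δφ) = detuning / (2ε), so the phase relation encodes the input (frequency) difference rather than the absolute input level. Frequencies, coupling strength, and integration settings below are illustrative:

```python
import math

def locked_phase_difference(f1, f2, eps=2.0, dt=1e-3, steps=20000):
    """Steady-state phase difference of two coupled phase oscillators.

    d(theta_i)/dt = 2*pi*f_i + eps*sin(theta_j - theta_i), integrated by
    forward Euler; returns theta1 - theta2 wrapped to (-pi, pi].
    """
    th1, th2 = 0.0, 0.0
    for _ in range(steps):
        d1 = 2 * math.pi * f1 + eps * math.sin(th2 - th1)
        d2 = 2 * math.pi * f2 + eps * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return (th1 - th2 + math.pi) % (2 * math.pi) - math.pi
```

With intrinsic frequencies of 40.5 and 40 Hz in the gamma range and ε = 2, the detuning 2π·0.5 rad/s lies within the locking range 2ε, and the oscillator with the stronger drive (higher intrinsic frequency) leads by the predicted phase.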

  5. Input-Dependent Frequency Modulation of Cortical Gamma Oscillations Shapes Spatial Synchronization and Enables Phase Coding

    PubMed Central

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-01-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25–80Hz) may play a significant role in information processing, for example by neural grouping (‘binding’) and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. 
This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity. PMID:25679780

  6. Decision & Management Tools for DNAPL Sites: Optimization of Chlorinated Solvent Source and Plume Remediation Considering Uncertainty

    DTIC Science & Technology

    2010-09-01

    differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor...To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually...Windows XP operating system Model Input Parameters. The input parameters were identical to those utilized and reported by CDM (See Table 1 from

  7. BATMAN: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2017-04-01

    This paper describes the Bayesian Technique for Multi-image Analysis (BATMAN), a novel image-segmentation technique based on Bayesian statistics that characterizes any astronomical data set containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). We illustrate its operation and performance with a set of test cases including both synthetic and real integral-field spectroscopic data. The output segmentations adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. The quality of the recovered signal represents an improvement with respect to the input, especially in regions with low signal-to-noise ratio. However, the algorithm may be sensitive to small-scale random fluctuations, and its performance in the presence of spatial gradients is limited. Due to these effects, errors may be underestimated by as much as a factor of 2. Our analysis reveals that the algorithm prioritizes conservation of all the statistically significant information over noise reduction, and that the precise choice of the input data has a crucial impact on the results. Hence, the philosophy of BaTMAn is not to be used as a 'black box' to improve the signal-to-noise ratio, but as a new approach to characterize spatially resolved data prior to its analysis. The source code is publicly available at http://astro.ft.uam.es/SELGIFS/BaTMAn.
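The merging criterion, combining spatial elements while they remain statistically consistent within the errors, can be caricatured in one dimension. The greedy left-to-right pass, 3σ threshold, and inverse-variance averaging below are illustrative choices, not BaTMAn's actual algorithm:

```python
import math

def merge_consistent(values, errors, nsigma=3.0):
    """Greedy 1-D merge of adjacent elements carrying the same signal.

    Returns a list of segments [weighted mean, error of mean, n_elements].
    """
    segs = []
    for v, e in zip(values, errors):
        if segs:
            m, me, n = segs[-1]
            # consistent with the running segment within the combined errors?
            if abs(v - m) < nsigma * math.hypot(e, me):
                w1, w2 = 1 / me ** 2, 1 / e ** 2       # inverse-variance weights
                merged = (m * w1 + v * w2) / (w1 + w2)
                segs[-1] = [merged, 1 / math.sqrt(w1 + w2), n + 1]
                continue
        segs.append([v, e, 1])
    return segs
```

Noisy measurements of a flat region collapse into one segment with a reduced error, while a genuine jump in signal starts a new segment, mirroring the adaptive tessellation described above.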

  8. Manipulating Crop Density to Optimize Nitrogen and Water Use: An Application of Precision Agroecology

    NASA Astrophysics Data System (ADS)

    Brown, T. T.; Huggins, D. R.; Smith, J. L.; Keller, C. K.; Kruger, C.

    2011-12-01

    Rising levels of reactive nitrogen (Nr) in the environment, coupled with an increasing population, position agriculture as a major contributor to supplying food and ecosystem services to the world. The concept of Precision Agroecology (PA) explicitly recognizes the importance of time and place by combining the principles of precision farming with ecology, creating a framework that can lead to improvements in Nr use efficiency. In the Palouse region of the Pacific Northwest, USA, relationships between productivity, N dynamics and cycling, water availability, and environmental impacts result from intricate spatial and temporal variations in soil, ecosystem processes, and socioeconomic factors. Our research goal is to investigate N use efficiency (NUE) in the context of factors that regulate site-specific environmental and economic conditions and to develop the concept of PA for use in sustainable agroecosystems and science-based Nr policy. Nitrogen and plant density field trials with winter wheat (Triticum aestivum L.) were conducted at the Washington State University Cook Agronomy Farm near Pullman, WA under long-term no-tillage management in 2010 and 2011. Treatments were imposed across environmentally heterogeneous field conditions to assess soil, crop and environmental interactions. Microplots with a split N application using 15N-labeled fertilizer were established in 2011 to examine the impact of N timing on uptake of fertilizer and soil N throughout the growing season for two plant density treatments. Preliminary data show that plant density manipulation combined with precision N applications regulated water and N use and resulted in greater wheat yield with less seed and N inputs. 
These findings indicate that improvements to NUE and agroecosystem sustainability should consider landscape-scale patterns driving productivity (e.g., spatial and temporal dynamics of water availability and N transformations) and would benefit from policy incentives that promote a PA approach.

  9. Impact of orbit, clock and EOP errors in GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Hackman, C.

    2012-12-01

    Precise point positioning (PPP; [1]) has gained ever-increasing usage in GNSS carrier-phase positioning, navigation and timing (PNT) since its inception in the late 1990s. In this technique, high-precision satellite clocks, satellite ephemerides and earth-orientation parameters (EOPs) are applied as fixed input by the user in order to estimate receiver/location-specific quantities such as antenna coordinates, troposphere delay and receiver-clock corrections. This is in contrast to "network" solutions, in which (typically) less-precise satellite clocks, satellite ephemerides and EOPs are used as input, and in which these parameters are estimated simultaneously with the receiver/location-specific parameters. The primary reason for increased PPP application is that it offers most of the benefits of a network solution with a smaller computing cost. In addition, the software required to do PPP positioning can be simpler than that required for network solutions. Finally, PPP permits high-precision positioning of single or sparsely spaced receivers that may have few or no GNSS satellites in common view. A drawback of PPP is that the accuracy of the results depends directly on the accuracy of the supplied orbits, clocks and EOPs, since these parameters are not adjusted during the processing. In this study, we will examine the impact of errors in orbit, EOP and satellite clock estimates on PPP solutions. Our primary focus will be the impact of these errors on station coordinates; however, the study may be extended to error propagation into receiver-clock corrections and/or troposphere estimates if time permits. Study motivation: the United States Naval Observatory (USNO) began testing PPP processing using its own predicted orbits, clocks and EOPs in Summer 2012 [2]. The results of such processing could be useful for real- or near-real-time applications should they meet accuracy/precision requirements. 
Understanding how errors in satellite clocks, satellite orbits and EOPs propagate into PPP positioning and timing results allows researchers to focus their improvement efforts in areas most in need of attention. The initial study will be conducted using the simulation capabilities of Bernese GPS Software and extended to using real data if time permits. [1] J.F. Zumberge, M.B. Heflin, D.C. Jefferson, M.M. Watkins and F.H. Webb, Precise point positioning for the efficient and robust analysis of GPS data from large networks, J. Geophys. Res., 102(B3), 5005-5017, doi:10.1029/96JB03860, 1997. [2] C. Hackman, S.M. Byram, V.J. Slabinski and J.C. Tracey, Near-real-time and other high-precision GNSS-based orbit/clock/earth-orientation/troposphere parameters available from USNO, Proc. 2012 ION Joint Navigation Conference, 15 pp., in press, 2012.

  10. Comparative study of classification algorithms for damage classification in smart composite laminates

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Ryoo, Chang-Kyung; Kim, Heung Soo

    2017-04-01

This paper presents a comparative study of different classification algorithms for the classification of various types of inter-ply delaminations in smart composite laminates. Improved layerwise theory is used to model delamination at different interfaces along the thickness and longitudinal directions of the smart composite laminate. The input-output data obtained through a surface-bonded piezoelectric sensor and actuator are analyzed by a system identification algorithm to obtain the system parameters. The identified parameters for the healthy and delaminated structures are supplied as input data to the classification algorithms. The classification algorithms considered in this study are ZeroR, Classification via regression, Naïve Bayes, Multilayer Perceptron, Sequential Minimal Optimization, Multiclass-Classifier, and Decision tree (J48). The open-source Waikato Environment for Knowledge Analysis (WEKA) software is used to evaluate the performance of the classifiers mentioned above via 75-25 holdout and leave-one-sample-out cross-validation, in terms of classification accuracy, precision, recall, kappa statistic and ROC area.
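For context, the holdout metrics listed above can all be computed directly from a confusion matrix. A minimal pure-Python sketch with hypothetical labels for two delamination classes (not the paper's data):

```python
from collections import Counter

def confusion_counts(y_true, y_pred, positive):
    # Tally TP/FP/FN/TN for one class treated as "positive".
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def kappa(y_true, y_pred):
    # Cohen's kappa: observed agreement corrected for chance agreement.
    n = len(y_true)
    po = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    ct, cp = Counter(y_true), Counter(y_pred)
    pe = sum(ct[c] * cp[c] for c in ct) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical 75-25 holdout predictions for two delamination classes.
y_true = ["D1", "D1", "D2", "D2", "D1", "D2", "D1", "D2"]
y_pred = ["D1", "D2", "D2", "D2", "D1", "D2", "D1", "D1"]

tp, fp, fn, tn = confusion_counts(y_true, y_pred, "D1")
precision = tp / (tp + fp)
recall = tp / (tp + fn)
accuracy = (tp + tn) / len(y_true)
```

In WEKA these numbers are reported per class by the evaluation output; the sketch shows only the underlying arithmetic.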

  11. Multi-Target Angle Tracking Algorithm for Bistatic Multiple-Input Multiple-Output (MIMO) Radar Based on the Elements of the Covariance Matrix.

    PubMed

    Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo

    2018-03-07

In this paper, we consider the problem of tracking the direction of arrival (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar, and a high-precision target-angle tracking algorithm is proposed. First, the linear relationship between the covariance-matrix difference and the angle difference between adjacent moments was obtained through three approximate relations. Then, the proposed algorithm obtained the relationships between the elements of the covariance-matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance-matrix elements. Finally, the least-squares method was used to estimate the DOD and DOA. The algorithm realized automatic association of the angles and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm, which provides technical support for the practical application of MIMO radar.
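The closing least-squares step can be illustrated in isolation. Below is a generic two-parameter normal-equations solve in pure Python, with a toy linear system standing in for the paper's covariance-difference model (the coefficients and "true" angle increments here are hypothetical):

```python
def lstsq_2param(A, b):
    # Solve min ||A x - b|| for x = (x1, x2) via the 2x2 normal equations.
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * y for a, y in zip(A, b))
    t2 = sum(a[1] * y for a, y in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Toy overdetermined system: each row relates a measured covariance-element
# difference (b) linearly to the DOD/DOA increments; true increments (rad):
true = (0.02, -0.01)
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -1.0), (0.5, 2.0)]
b = [a[0] * true[0] + a[1] * true[1] for a in A]
est = lstsq_2param(A, b)
```

With noise-free observations the least-squares estimate recovers the increments exactly; with noisy rows it returns the minimum-variance linear fit.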

  12. Multicore fibre photonic lanterns for precision radial velocity Science

    NASA Astrophysics Data System (ADS)

    Gris-Sánchez, Itandehui; Haynes, Dionne M.; Ehrlich, Katjana; Haynes, Roger; Birks, Tim A.

    2018-04-01

    Incomplete fibre scrambling and fibre modal noise can degrade high-precision spectroscopic applications (typically high spectral resolution and high signal to noise). For example, it can be the dominating error source for exoplanet finding spectrographs, limiting the maximum measurement precision possible with such facilities. This limitation is exacerbated in the next generation of infra-red based systems, as the number of modes supported by the fibre scales inversely with the wavelength squared and more modes typically equates to better scrambling. Substantial effort has been made by major research groups in this area to improve the fibre link performance by employing non-circular fibres, double scramblers, fibre shakers, and fibre stretchers. We present an original design of a multicore fibre (MCF) terminated with multimode photonic lantern ports. It is designed to act as a relay fibre with the coupling efficiency of a multimode fibre (MMF), modal stability similar to a single-mode fibre and low loss in a wide range of wavelengths (380 nm to 860 nm). It provides phase and amplitude scrambling to achieve a stable near field and far-field output illumination pattern despite input coupling variations, and low modal noise for increased stability for high signal-to-noise applications such as precision radial velocity (PRV) science. Preliminary results are presented for a 511-core MCF and compared with current state of the art octagonal fibre.

  13. The ability to tap to a beat relates to cognitive, linguistic, and perceptual skills

    PubMed Central

    Tierney, Adam T.; Kraus, Nina

    2013-01-01

    Reading-impaired children have difficulty tapping to a beat. Here we tested whether this relationship between reading ability and synchronized tapping holds in typically-developing adolescents. We also hypothesized that tapping relates to two other abilities. First, since auditory-motor synchronization requires monitoring of the relationship between motor output and auditory input, we predicted that subjects better able to tap to the beat would perform better on attention tests. Second, since auditory-motor synchronization requires fine temporal precision within the auditory system for the extraction of a sound’s onset time, we predicted that subjects better able to tap to the beat would be less affected by backward masking, a measure of temporal precision within the auditory system. As predicted, tapping performance related to reading, attention, and backward masking. These results motivate future research investigating whether beat synchronization training can improve not only reading ability, but potentially executive function and basic auditory processing as well. PMID:23400117

  14. Neural timing signal for precise tactile timing judgments

    PubMed Central

    Watanabe, Junji; Nishida, Shin'ya

    2016-01-01

The brain can precisely encode the temporal relationship between tactile inputs. While behavioural studies have demonstrated precise interfinger temporal judgments, the underlying neural mechanism remains unknown. Computationally, two kinds of neural responses can act as the information source. One is the phase-locked response to the phase of relatively slow inputs, and the other is the response to the amplitude change of relatively fast inputs. To isolate the contributions of these components, we measured performance of a synchrony judgment task for sine-wave and amplitude-modulation (AM) wave stimuli. The sine-wave stimulus was a low-frequency sinusoid, with the phase shifted in the asynchronous stimulus. The AM-wave stimulus was a low-frequency sinusoidal AM of a 250-Hz carrier, with only the envelope shifted in the asynchronous stimulus. In the experiment, three stimulus pairs, two synchronous and one asynchronous, were sequentially presented to neighboring fingers, and participants were asked to report which one was the asynchronous pair. We found that the asynchrony of AM waves could be detected as precisely as that of a single impulse pair, with the threshold asynchrony being ∼20 ms. On the other hand, the asynchrony of sine waves could not be detected at all in the range from 5 to 30 Hz. Our results suggest that the timing signal for tactile judgments is provided not by the stimulus phase information but by the envelope of the response of the high-frequency-sensitive Pacinian channel (PC), although they do not exclude a possible contribution of the envelope of non-PCs. PMID:26843600

  15. Optimal nonlinear codes for the perception of natural colours.

    PubMed

    von der Twer, T; MacLeod, D I

    2001-08-01

We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function whose gradient is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly, as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for precise representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
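The matching rule can be checked numerically: if the optimal gradient is proportional to p(x)^(1/3), then the response function itself is the normalized cumulative integral of the cube-rooted PDF, so a uniform input PDF yields a linear code. A pure-Python sketch under that reading (the discretization is an assumption for illustration):

```python
def optimal_response(pdf, xs):
    # Response gradient ∝ pdf(x)**(1/3); integrate and normalize to [0, 1].
    grads = [pdf(x) ** (1.0 / 3.0) for x in xs]
    step = xs[1] - xs[0]
    cum, total = [], 0.0
    for g in grads:
        total += g * step
        cum.append(total)
    return [c / total for c in cum]

def uniform(x):
    return 1.0  # flat input PDF on [0, 1]

xs = [i / 1000.0 for i in range(1001)]
resp = optimal_response(uniform, xs)
# For a uniform PDF the cube-root rule reduces to a linear (identity-like) code.
```

Swapping in a peaked `pdf` steepens the response where inputs are common and flattens it in the tails, which is the compressive nonlinearity the abstract describes.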

  16. Addressing the targeting range of the ABILHAND-56 in relapsing-remitting multiple sclerosis: A mixed methods psychometric study.

    PubMed

    Cleanthous, Sophie; Strzok, Sara; Pompilus, Farrah; Cano, Stefan; Marquis, Patrick; Cohan, Stanley; Goldman, Myla D; Kresa-Reahl, Kiren; Petrillo, Jennifer; Castrillo-Viguera, Carmen; Cadavid, Diego; Chen, Shih-Yin

    2018-01-01

ABILHAND, a manual ability patient-reported outcome instrument originally developed for stroke patients, has been used in multiple sclerosis clinical trials; however, psychometric analyses indicated the measure's limited measurement range and precision in higher-functioning multiple sclerosis patients. The purpose of this study was to identify candidate items to expand the measurement range of the ABILHAND-56, thus improving its ability to detect differences in manual ability in higher-functioning multiple sclerosis patients. A step-wise mixed methods design strategy was used, comprising two waves of patient interviews, a combination of qualitative (concept elicitation and cognitive debriefing) and quantitative (Rasch measurement theory) analytic techniques, and consultation interviews with three clinical neurologists specializing in multiple sclerosis. The original ABILHAND was well understood in this context of use. Eighty-two new manual ability concepts were identified. Draft supplementary items were generated and refined with patient and neurologist input. Rasch measurement theory psychometric analysis indicated that the supplementary items improved targeting to higher-functioning multiple sclerosis patients and measurement precision. The final pool of Early Multiple Sclerosis Manual Ability items comprises 20 items. The synthesis of qualitative and quantitative methods used in this study improves the ABILHAND's content validity, enabling it to more effectively identify manual ability changes in early multiple sclerosis and potentially help determine treatment effect in higher-functioning patients in clinical trials.

  17. Salient man-made structure detection in infrared images

    NASA Astrophysics Data System (ADS)

    Li, Dong-jie; Zhou, Fu-gen; Jin, Ting

    2013-09-01

Target detection, segmentation, and recognition are active research topics in image processing and pattern recognition, and salient area or object detection is one of the core technologies of precision-guided weapons. In this paper, we detect salient objects in a series of input infrared images using the classical feature-integration theory and Itti's visual attention system. In order to find the salient object in an image accurately, we present a new method that solves the edge-blur problem by calculating and using an edge mask. We also greatly improve computing speed by reworking the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences through rows and columns separately. Experimental results show that our method detects salient objects accurately and rapidly.
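The row/column trick amounts to separable filtering: a 2-D box surround is built from two 1-D passes, one over rows and one over columns, which with running sums costs O(1) per pixel instead of O(r²). A minimal pure-Python sketch of a separable center-surround operator in this spirit (plain windowed means; not the authors' exact implementation):

```python
def box_blur_1d(v, r):
    # Mean over a window of radius r, clamped at the edges.
    out = []
    for i in range(len(v)):
        lo, hi = max(0, i - r), min(len(v), i + r + 1)
        out.append(sum(v[lo:hi]) / (hi - lo))
    return out

def center_surround(img, r):
    # Separable surround: blur each row, then each column of the result;
    # the saliency response is center minus surround.
    rows = [box_blur_1d(row, r) for row in img]
    cols = list(map(list, zip(*rows)))                         # transpose
    blurred = list(map(list, zip(*(box_blur_1d(c, r) for c in cols))))
    return [[c - s for c, s in zip(ir, sr)] for ir, sr in zip(img, blurred)]

# A bright spot on a dark background yields a strong positive response.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
sal = center_surround(img, 1)
```

The two 1-D passes give exactly the 2-D box surround, which is why the row/column decomposition loses no accuracy while gaining speed.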

  18. Using computer simulations to determine the limitations of dynamic clamp stimuli applied at the soma in mimicking distributed conductance sources.

    PubMed

    Lin, Risa J; Jaeger, Dieter

    2011-05-01

    In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs control the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white current noise to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in the spike pattern generation between focal or distributed input in this cell type even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types.

  19. AN EFFICIENT, COMPACT, AND VERSATILE FIBER DOUBLE SCRAMBLER FOR HIGH PRECISION RADIAL VELOCITY INSTRUMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halverson, Samuel; Roy, Arpita; Mahadevan, Suvrath

    2015-06-10

We present the design and test results of a compact optical fiber double-scrambler for high-resolution Doppler radial velocity instruments. This device consists of a single optic: a high-index n ∼ 2 ball lens that exchanges the near and far fields between two fibers. When used in conjunction with octagonal fibers, this device yields very high scrambling gains (SGs) and greatly desensitizes the fiber output from any input illumination variations, thereby stabilizing the instrument profile of the spectrograph and improving the Doppler measurement precision. The system is also highly insensitive to input pupil variations, isolating the spectrograph from telescope illumination variations and seeing changes. By selecting the appropriate glass and lens diameter the highest efficiency is achieved when the fibers are practically in contact with the lens surface, greatly simplifying the alignment process when compared to classical double-scrambler systems. This prototype double-scrambler has demonstrated significant performance gains over previous systems, achieving SGs in excess of 10,000 with a throughput of ∼87% using uncoated Polymicro octagonal fibers. Adding a circular fiber to the fiber train further increases the SG to >20,000, limited by laboratory measurement error. While this fiber system is designed for the Habitable-zone Planet Finder spectrograph, it is more generally applicable to other instruments in the visible and near-infrared. Given the simplicity and low cost, this fiber scrambler could also easily be multiplexed for large multi-object instruments.

  20. A new experimental procedure of outgassing rate measurement to obtain more precise deposition properties of materials

    NASA Astrophysics Data System (ADS)

    Miyazaki, Eiji; Shimazaki, Kazunori; Numata, Osamu; Waki, Miyuki; Yamanaka, Riyo; Kimoto, Yugo

    2016-09-01

Outgassing rate measurement, or dynamic outgassing testing, is used to obtain the outgassing properties of materials, i.e., Total Mass Loss (TML) and Collected Volatile Condensable Mass (CVCM). These properties are used as input parameters for contamination analysis, e.g., predicting the mass deposited on a spacecraft surface by substances outgassed from onboard contaminant sources. The results of such calculations are directly affected by the input parameters, so it is important to obtain a sufficient experimental data set from outgassing rate measurements in order to extract good outgassing parameters for the calculation. As specified in the standard ASTM E 1559, TML is measured by a QCM sensor kept at cryogenic temperature, while CVCMs are measured at certain fixed temperatures. In the present work, the authors propose a new experimental procedure to obtain more precise deposition data from a single run, within the current test time and with the present equipment: two of the four CQCMs in the equipment are cooled step-by-step during the test run. In this way the deposition rate, that is, the sticking coefficient, can be determined as a function of temperature. As a result, the sticking coefficient can be obtained directly between -50 and 50 degrees C in 5 degree C steps. The method thus appears usable as an improved procedure for outgassing rate measurement. The present experiments also identified some issues with the new procedure, which will be addressed in future work.

  1. Measuring Efficiency of Tunisian Schools in the Presence of Quasi-Fixed Inputs: A Bootstrap Data Envelopment Analysis Approach

    ERIC Educational Resources Information Center

    Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane

    2010-01-01

    The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…

  2. Locating waterfowl observations on aerial surveys

    USGS Publications Warehouse

    Butler, W.I.; Hodges, J.I.; Stehn, R.A.

    1995-01-01

We modified standard aerial survey data collection to obtain the geographic location of each waterfowl observation on surveys in Alaska during 1987-1993. Using transect navigation with GPS (global positioning system), data recording on continuously running tapes, and a computer data input program, we located observations with an average deviation along transects of 214 m. The method provided flexibility in survey design and data analysis. Although developed for geese nesting near the coast of the Yukon-Kuskokwim Delta, the methods are widely applicable and were used on other waterfowl surveys in Alaska to map the distribution and relative abundance of waterfowl. Accurate location data with GIS analysis and display may improve the precision and usefulness of data from any aerial transect survey.

  3. Automated generation of individually customized visualizations of diagnosis-specific medical information using novel techniques of information extraction

    NASA Astrophysics Data System (ADS)

    Chen, Andrew A.; Meng, Frank; Morioka, Craig A.; Churchill, Bernard M.; Kangarloo, Hooshang

    2005-04-01

Managing pediatric patients with neurogenic bladder (NGB) involves regular laboratory, imaging, and physiologic testing. Using input from domain experts and current literature, we identified specific data points from these tests to develop the concept of an electronic disease vector for NGB. An information extraction engine was used to extract the desired data elements from free-text and semi-structured documents retrieved from the patient's medical record. Finally, a Java-based presentation engine created graphical visualizations of the extracted data. After precision, recall, and timing evaluation, we conclude that these tools may enable clinically useful, automatically generated, and diagnosis-specific visualizations of patient data, potentially improving compliance and ultimately, outcomes.

  4. Hadron production experiments

    NASA Astrophysics Data System (ADS)

    Popov, Boris A.

    2013-02-01

The HARP and NA61/SHINE hadroproduction experiments, as well as their implications for neutrino physics, are discussed. HARP measurements have already been used for predictions of neutrino beams in the K2K and MiniBooNE/SciBooNE experiments and are also being used to improve atmospheric neutrino flux predictions and to help optimize neutrino factory and super-beam designs. First measurements released recently by the NA61/SHINE experiment are of significant importance for a precise prediction of the J-PARC neutrino beam used for the T2K experiment. Both the HARP and NA61/SHINE experiments also provide a large amount of input for the validation and tuning of hadron production models in Monte Carlo generators.

  5. Pilot Human Factors in Stall/Spin Accidents of Supersonic Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Anderson, S. B.; Enevoldson, E. K.; Nguyen, L. T.

    1983-01-01

A study has been made of pilot human factors related to stall/spin accidents of supersonic fighter aircraft. The military specifications for flight at high angles of attack are examined. Several pilot human factors problems related to stall/spin are discussed. These problems include (1) unsatisfactory nonvisual warning cues; (2) the inability of the pilot to quickly determine if the aircraft is spinning out of control, or to recognize the type of spin; (3) the inability of the pilot to decide on and implement the correct spin recovery technique; (4) the inability of the pilot to move, caused by high angular rotation; and (5) the tendency of pilots to wait too long in deciding to abandon the irrecoverable aircraft. Psycho-physiological phenomena influencing pilots' behavior in stall/spin situations include (1) channelization of sensory inputs, (2) limitations in precisely controlling several muscular inputs, (3) inaccurate judgment of elapsed time, and (4) disorientation of vestibulo-ocular inputs. Results are given of pilot responses to all these problems in the F14A, F16/AB, and F/A-18A aircraft. The use of departure spin resistance and automatic spin prevention systems incorporated on recent supersonic fighters is discussed. These systems should help to improve the stall/spin accident record with some compromise in maneuverability.

  6. Octopus Cells in the Posteroventral Cochlear Nucleus Provide the Main Excitatory Input to the Superior Paraolivary Nucleus

    PubMed Central

    Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.

    2017-01-01

    Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283

  7. Precise Haptic Device Co-Location for Visuo-Haptic Augmented Reality.

    PubMed

    Eck, Ulrich; Pankratz, Frieder; Sandor, Christian; Klinker, Gudrun; Laga, Hamid

    2015-12-01

    Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.

  8. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526

  9. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding.

    PubMed

    Huang, Chao; Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-06-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.

  10. Precision linear ramp function generator

    DOEpatents

    Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.

    1984-08-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  11. Precision linear ramp function generator

    DOEpatents

    Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.

    1986-01-01

A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
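The behaviour both patents describe, a ramp whose slope is set by a constant voltage applied to an integrator, can be sketched as a discrete-time simulation. The derivative feedback loop is idealized away here and the gain and timing values are hypothetical:

```python
def ramp(v_in, gain, baseline, dt, steps):
    # Ideal integrator: dV/dt = gain * v_in, starting from the baseline level,
    # so the output climbs linearly at a rate of gain * v_in volts per second.
    v, out = baseline, []
    for _ in range(steps):
        v += gain * v_in * dt
        out.append(v)
    return out

# 1 V input, integrator gain 10 /s, 0.5 V baseline: ramp rate is 10 V/s,
# so after 1 s the output reaches 0.5 + 10 = 10.5 V.
trace = ramp(v_in=1.0, gain=10.0, baseline=0.5, dt=0.001, steps=1000)
```

Changing `v_in` rescales the slope without touching the rest of the circuit, which is the rate-selection mechanism the patents describe; the derivative feedback loop exists to keep the physical integrator on this ideal line.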

  12. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture

    NASA Astrophysics Data System (ADS)

    Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac

    2015-12-01

    Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained through the use of a small, unmanned aerial system called AggieAir™. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine coupled with cross validation and backward elimination to a dataset composed of reflectance from high-resolution multi-spectral imagery (VIS-NIR), thermal infrared imagery, and vegetative indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel type and kernel width of 5.4, having LAI, NDVI, thermal and red bands as the selected set of inputs, can be used to spatially estimate chlorophyll concentration with a root-mean-square error of 5.31 μg cm^−2, efficiency of 0.76, and 9 relevance vectors.
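    The reported error statistics can be computed for any set of predictions. Assuming the paper's "efficiency" refers to the Nash-Sutcliffe efficiency commonly reported for such regression models, a minimal sketch is:

```python
import numpy as np

# Root-mean-square error and Nash-Sutcliffe efficiency for observed vs.
# predicted values. That "efficiency" means the Nash-Sutcliffe coefficient is
# our assumption; the data below are made up for illustration.

def rmse(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def nse(obs, pred):
    """1.0 = perfect prediction; 0.0 = no better than predicting the mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = [10.0, 12.0, 14.0, 16.0]    # hypothetical chlorophyll values, ug/cm^2
pred = [11.0, 12.0, 13.0, 17.0]
```
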

  13. Generating synchrony from the asynchronous: compensation for cochlear traveling wave delays by the dendrites of individual brainstem neurons

    PubMed Central

    McGinley, Matthew J.; Liberman, M. Charles; Bal, Ramazan; Oertel, Donata

    2012-01-01

    Broadband transient sounds, such as clicks and consonants, activate a traveling wave in the cochlea. This wave evokes firing in auditory nerve fibers that are tuned to high frequencies several milliseconds earlier than in fibers tuned to low frequencies. Despite this substantial traveling wave delay, octopus cells in the brainstem receive broadband input and respond to clicks with submillisecond temporal precision. The dendrites of octopus cells lie perpendicular to the tonotopically organized array of auditory nerve fibers, placing the earliest arriving inputs most distally and the latest arriving closest to the soma. Here, we test the hypothesis that the topographic arrangement of synaptic inputs on dendrites of octopus cells allows them to compensate for the traveling wave delay. We show that in mice the full cochlear traveling wave delay is 1.6 ms. Because the dendrites of each octopus cell spread across about one third of the tonotopic axis, a click evokes a soma-directed sweep of synaptic input lasting 0.5 ms in individual octopus cells. Morphologically and biophysically realistic computational models of octopus cells show that soma-directed sweeps with durations matching in vivo measurements result in the largest and sharpest somatic excitatory postsynaptic potentials (EPSPs). A low input resistance and activation of a low-voltage-activated potassium conductance that are characteristic of octopus cells are important determinants of sweep sensitivity. We conclude that octopus cells have dendritic morphologies and biophysics tailored to accomplish the precise encoding of broadband transient sounds. PMID:22764237
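    The dispersion effect underlying the sweep hypothesis can be illustrated with a toy passive leaky membrane. This is not the authors' morphologically realistic model, which additionally includes the low-voltage-activated potassium conductance that makes a matched, nonzero sweep (rather than perfect coincidence) optimal; the passive sketch only shows that temporally dispersed inputs sum to a smaller peak EPSP:

```python
import numpy as np

# Toy passive leaky membrane receiving five exponential synaptic currents,
# either coincident or spread out as a 2 ms soma-directed sweep. Parameters
# (ms units) are illustrative, chosen to mimic a fast octopus-cell membrane.

def peak_epsp(arrival_times_ms, tau_m=0.2, tau_syn=0.3, dt=0.005, t_end=5.0):
    t = np.arange(0.0, t_end, dt)
    i_syn = np.zeros_like(t)
    for t0 in arrival_times_ms:            # one exponential EPSC per input
        late = t >= t0
        i_syn[late] += np.exp(-(t[late] - t0) / tau_syn)
    v = np.zeros_like(t)
    for k in range(1, len(t)):             # tau_m dV/dt = -V + I(t)
        v[k] = v[k - 1] + dt * (-v[k - 1] + i_syn[k - 1]) / tau_m
    return float(v.max())

coincident = peak_epsp([1.0] * 5)
dispersed = peak_epsp([1.0, 1.5, 2.0, 2.5, 3.0])   # inputs spread over 2 ms
```
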

  14. Phase estimation of coherent states with a noiseless linear amplifier

    NASA Astrophysics Data System (ADS)

    Assad, Syed M.; Bradshaw, Mark; Lam, Ping Koy

    Amplification of quantum states is inevitably accompanied by the introduction of noise at the output. For protocols that are probabilistic with heralded success, noiseless linear amplification may still be possible in theory. When the protocol is successful, it can lead to an output that is a noiselessly amplified copy of the input. When the protocol is unsuccessful, the output state is degraded and is usually discarded. Probabilistic protocols may improve the performance of some quantum information protocols, but not for metrology when the full statistics are taken into consideration. We calculate the precision limits on estimating the phase of coherent states using a noiseless linear amplifier by computing its quantum Fisher information, and we show that on average the noiseless linear amplifier does not improve the phase estimate. We also discuss the case where abstention from measurement can reduce the cost of estimation.
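    For context, the benchmark the amplifier must beat is the quantum Cramér-Rao bound for the unamplified coherent state. Since the phase shift is generated by the photon-number operator, the quantum Fisher information is four times the photon-number variance (a standard result, restated here, not a quotation from the paper):

```latex
% Phase estimation with a coherent state |\alpha e^{i\varphi}\rangle:
% the generator is the number operator \hat{n}, and for a coherent state
% \mathrm{Var}(\hat{n}) = \bar{n} = |\alpha|^2, so
F_Q = 4\,\mathrm{Var}(\hat{n}) = 4|\alpha|^2,
\qquad
\Delta\varphi \;\ge\; \frac{1}{\sqrt{F_Q}} \;=\; \frac{1}{2|\alpha|}.
```

    The paper's conclusion is that, once success and failure events are both counted, the heralded amplifier cannot improve on this bound on average.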

  15. AMCP Partnership Forum: Managing Care in the Wave of Precision Medicine.

    PubMed

    2018-05-23

    Precision medicine, the customization of health care to an individual's genetic profile while accounting for biomarkers and lifestyle, has increasingly been adopted by health care stakeholders to guide the development of treatment options, improve treatment decision making, provide more patient-centered care, and better inform coverage and reimbursement decisions. Despite these benefits, key challenges prevent its broader use and adoption. On December 7-8, 2017, the Academy of Managed Care Pharmacy convened a group of stakeholders to discuss these challenges and provide recommendations to facilitate broader adoption and use of precision medicine across health care settings. These stakeholders represented the pharmaceutical industry, clinicians, patient advocacy, private payers, device manufacturers, health analytics, information technology, academia, and government agencies. Throughout the 2-day forum, participants discussed evidence requirements for precision medicine, including consistent ways to measure the utility and validity of precision medicine tests and therapies, limitations of traditional clinical trial designs, and limitations of value assessment framework methods. They also highlighted the challenges with evidence collection and data silos in precision medicine. Interoperability within and across health systems is hindering clinical advancements. Current medical coding systems also cannot account for the heterogeneity of many diseases, preventing health systems from having a complete understanding of their patient population to inform resource allocation. Challenges faced by payers, such as evidence limitations, to inform coverage and reimbursement decisions in precision medicine, as well as legal and regulatory barriers that inhibit more widespread data sharing, were also identified. While a broad range of perspectives was shared throughout the forum, participants reached consensus across 2 overarching areas. 
First, there is a greater need for common definitions, thresholds, and standards to guide evidence generation in precision medicine. Second, current information silos are preventing the sharing of valuable data. Collaboration among stakeholders is needed to support better information sharing, awareness, and education of precision medicine for patients. The recommendations brought forward by this diverse group of experts provide a set of solutions to spur widespread use and application of precision medicine. Taken together, successful adoption and use of precision medicine will require input and collaboration from all sectors of health care, especially patients. DISCLOSURES This AMCP Partnership Forum and the development of the proceedings document were supported by Amgen, Foundation Medicine, Genentech, Gilead, MedImpact, National Pharmaceutical Council, Precision for Value, Sanofi, Takeda, and Xcenda.

  16. Single Mode, Extreme Precision Doppler Spectrographs

    NASA Astrophysics Data System (ADS)

    Schwab, Christian; Leon-Saval, Sergio G.; Betters, Christopher H.; Bland-Hawthorn, Joss; Mahadevan, Suvrath

    2014-04-01

    The 'holy grail' of exoplanet research today is the detection of an earth-like planet: a rocky planet in the habitable zone around a main-sequence star. Extremely precise Doppler spectroscopy is an indispensable tool to find and characterize earth-like planets; however, to find these planets around solar-type stars, we need nearly one order of magnitude better radial velocity (RV) precision than the best current spectrographs provide. Recent developments in astrophotonics (Bland-Hawthorn & Horton 2006, Bland-Hawthorn et al. 2010) and adaptive optics (AO) enable single mode fiber (SMF) fed, high resolution spectrographs, which can realize the next step in precision. SMF feeds have intrinsic advantages over multimode fiber or slit coupled spectrographs: The intensity distribution at the fiber exit is extremely stable, and as a result the line spread function of a well-designed spectrograph is fully decoupled from input coupling conditions, like guiding or seeing variations (Ihle et al. 2010). Modal noise, a limiting factor in current multimode fiber fed instruments (Baudrand & Walker 2001), can be eliminated by proper design, and the diffraction limited input to the spectrograph allows for very compact instrument designs, which provide excellent optomechanical stability. A SMF is the ideal interface for new, very precise wavelength calibrators, like laser frequency combs (Steinmetz et al. 2008, Osterman et al. 2012), or SMF based Fabry-Perot Etalons (Halverson et al. 2013). At near infrared wavelengths, these technologies are ready to be implemented in on-sky instruments, or already in use. We discuss a novel concept for such a spectrograph.

  17. Cerebellum - function (image)

    MedlinePlus

    The cerebellum processes input from other areas of the brain, spinal cord and sensory receptors to provide precise timing ... the skeletal muscular system. A stroke affecting the cerebellum may cause dizziness, nausea, balance and coordination problems.

  18. Actuator Characterization of Man Portable Precision Maneuver Concepts

    DTIC Science & Technology

    2014-03-01

    brushless DC motors, along with a model of the rotating wing concept and a prototype 40-mm projectile, which was fired through the spark range (14), is... Brushless Electronic Speed Controller) was used to drive the three motor commutator input lines. This controller inputs a pulse-width modulated (PWM...Part II: The Brushless D.C. Motor Drive. IEEE Transactions on Industry Applications 1989, 25 (2), 274–279. 16. Hemati, N.; Leu, M. A Complete

  19. CLARO: an ASIC for high rate single photon counting with multi-anode photomultipliers

    NASA Astrophysics Data System (ADS)

    Baszczyk, M.; Carniti, P.; Cassina, L.; Cotta Ramusino, A.; Dorosz, P.; Fiorini, M.; Gotti, C.; Kucewicz, W.; Malaguti, R.; Pessina, G.

    2017-08-01

    The CLARO is a radiation-hard 8-channel ASIC designed for single photon counting with multi-anode photomultiplier tubes. Each channel outputs a digital pulse when the input signal from the photomultiplier crosses a configurable threshold. The fast return to baseline, typically within 25 ns and below 50 ns in all conditions, makes it possible to count up to 10^7 hits/s on each channel, with a power consumption of about 1 mW per channel. The ASIC presented here is a much improved version of the first 4-channel prototype. The threshold can be precisely set in a wide range, between 30 ke- (5 fC) and 16 Me- (2.6 pC). The noise of the amplifier with a 10 pF input capacitance is 3.5 ke- (0.6 fC) RMS. All settings are stored in a 128-bit configuration and status register, protected against soft errors with triple modular redundancy. The paper describes the design of the ASIC at transistor level and demonstrates its performance on the test bench.
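    The quoted maximum rate can be related to the return-to-baseline time with a standard non-paralyzable dead-time model. Applying this model to the CLARO is our assumption for illustration, not a statement about the chip's internals:

```python
# Non-paralyzable dead-time model: if each hit blanks the channel for tau
# seconds (here the ~25 ns return to baseline), measured rate m and true
# rate n are related by m = n / (1 + n*tau). Illustrative use only.

TAU = 25e-9  # seconds

def measured_rate(true_rate_hz):
    return true_rate_hz / (1.0 + true_rate_hz * TAU)

def true_rate(measured_rate_hz):
    return measured_rate_hz / (1.0 - measured_rate_hz * TAU)

m = measured_rate(1e7)   # registered rate at a true 10^7 hits/s
```
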

  20. Faraday rotation data analysis with least-squares elliptical fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Adam D.; McHale, G. Brent; Goerz, David A.

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
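    The core step, fitting a conic to measured (x, y) data by linear least squares, can be sketched as follows. This generic SVD formulation is a simplification: it omits the ellipse-specific constraint of the direct ellipse-fitting algorithm the authors use:

```python
import numpy as np

# Algebraic least-squares conic fit: find coefficients (a, b, c, d, e, f) of
# a x^2 + b xy + c y^2 + d x + e y + f = 0 minimizing ||D w|| with ||w|| = 1,
# i.e. the right-singular vector of the design matrix D with the smallest
# singular value.

def fit_conic(x, y):
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

# Synthetic rotated, translated ellipse standing in for measured Lissajous data
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = 3.0 * np.cos(t) * np.cos(0.4) - 1.5 * np.sin(t) * np.sin(0.4) + 0.7
y = 3.0 * np.cos(t) * np.sin(0.4) + 1.5 * np.sin(t) * np.cos(0.4) - 0.2
w = fit_conic(x, y)
D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
residual = float(np.abs(D @ w).max())   # ~0 when points lie on one conic
```

    The recovered conic parameters are what the paper then uses to rotate, translate, and rescale the data before interpreting the underlying rotation signal.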

  1. Non-Destructive Detection of Wire Rope Discontinuities from Residual Magnetic Field Images Using the Hilbert-Huang Transform and Compressed Sensing

    PubMed Central

    Zhang, Juwei; Tan, Xiaojiang; Zheng, Pengbo

    2017-01-01

    Electromagnetic methods are commonly employed to detect wire rope discontinuities. However, determining the residual strength of wire rope based on the quantitative recognition of discontinuities remains problematic. We have designed a prototype device based on the residual magnetic field (RMF) of ferromagnetic materials, which overcomes the disadvantages associated with in-service inspections, such as large volume, inconvenient operation, low precision, and poor portability by providing a relatively small and lightweight device with improved detection precision. A novel filtering system consisting of the Hilbert-Huang transform and compressed sensing wavelet filtering is presented. Digital image processing was applied to achieve the localization and segmentation of defect RMF images. The statistical texture and invariant moment characteristics of the defect images were extracted as the input of a radial basis function neural network. Experimental results show that the RMF device can detect defects in various types of wire rope and prolong the service life of test equipment by reducing the friction between the detection device and the wire rope by accommodating a high lift-off distance. PMID:28300790

  2. Compensation for time delay in flight simulator visual-display systems

    NASA Technical Reports Server (NTRS)

    Crane, D. F.

    1983-01-01

    A piloted aircraft can be viewed as a closed-loop, man-machine control system. When a simulator pilot is performing a precision maneuver, a delay in the visual display of aircraft response to pilot-control input decreases the stability of the pilot-aircraft system. The less stable system is more difficult to control precisely. Pilot dynamic response and performance change as the pilot attempts to compensate for the decrease in system stability, and these changes bias the simulation results by influencing the pilot's rating of the handling qualities of the simulated aircraft. Delay compensation, designed to restore pilot-aircraft system stability, was evaluated in several studies which are reported here. The studies range from single-axis, tracking-task experiments (with sufficient subjects and trials to establish statistical significance of the results) to a brief evaluation of compensation of a computer-generated-imagery (CGI) visual display system in a full six-degree-of-freedom simulation. The compensation was effective - improvements in pilot performance and workload or aircraft handling-qualities rating (HQR) were observed. Results from recent aircraft handling-qualities research literature which support the compensation design approach are also reviewed.

  3. Ultra-precision fabrication of 500 mm long and laterally graded Ru/C multilayer mirrors for X-ray light sources.

    PubMed

    Störmer, M; Gabrisch, H; Horstmann, C; Heidorn, U; Hertlein, F; Wiesmann, J; Siewert, F; Rack, A

    2016-05-01

    X-ray mirrors are needed for beam shaping and monochromatization at advanced research light sources, for instance, free-electron lasers and synchrotron sources. Such mirrors consist of a substrate and a coating. The shape accuracy of the substrate and the layer precision of the coating are the crucial parameters that determine the beam properties required for various applications. In principle, the selection of the layer materials determines the mirror reflectivity. A single layer mirror offers high reflectivity in the range of total external reflection, whereas the reflectivity is reduced considerably above the critical angle. A periodic multilayer can enhance the reflectivity at higher angles due to Bragg reflection. Here, the selection of a suitable combination of layer materials is essential to achieve a high flux at distinct photon energies, which is often required for applications such as microtomography, diffraction, or protein crystallography. This contribution presents the current development of a Ru/C multilayer mirror prepared by magnetron sputtering with a sputtering facility that was designed in-house at the Helmholtz-Zentrum Geesthacht. The deposition conditions were optimized in order to achieve ultra-high precision and high flux in future mirrors. Input for the improved deposition parameters came from investigations by transmission electron microscopy. The X-ray optical properties were investigated by means of X-ray reflectometry using Cu- and Mo-radiation. The change of the multilayer d-spacing over the mirror dimensions and the variation of the Bragg angles were determined. The results demonstrate the ability to precisely control the variation in thickness over the whole mirror length of 500 mm thus achieving picometer-precision in the meter-range.
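    The Bragg condition that links the multilayer d-spacing to the reflection angle can be evaluated directly. This first-order sketch neglects the refraction correction, and the d-spacing below is a hypothetical period, not the actual Ru/C value from the paper:

```python
import math

# First-order Bragg condition for a periodic multilayer: lambda = 2 d sin(theta).
# Cu K-alpha wavelength is standard; the d-spacing is hypothetical, and the
# refraction (Snell) correction used in real reflectometry is omitted.

WAVELENGTH_NM = 0.154   # Cu K-alpha
d_nm = 4.0              # hypothetical multilayer period

theta_deg = math.degrees(math.asin(WAVELENGTH_NM / (2.0 * d_nm)))

# A 1% increase in d-spacing shifts the Bragg peak to a smaller angle,
# which is how reflectometry tracks d-spacing variation along the mirror.
theta_shifted = math.degrees(math.asin(WAVELENGTH_NM / (2.0 * d_nm * 1.01)))
```
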

  4. Ultra-precision fabrication of 500 mm long and laterally graded Ru/C multilayer mirrors for X-ray light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Störmer, M., E-mail: michael.stoermer@hzg.de; Gabrisch, H.; Horstmann, C.

    2016-05-15

    X-ray mirrors are needed for beam shaping and monochromatization at advanced research light sources, for instance, free-electron lasers and synchrotron sources. Such mirrors consist of a substrate and a coating. The shape accuracy of the substrate and the layer precision of the coating are the crucial parameters that determine the beam properties required for various applications. In principle, the selection of the layer materials determines the mirror reflectivity. A single layer mirror offers high reflectivity in the range of total external reflection, whereas the reflectivity is reduced considerably above the critical angle. A periodic multilayer can enhance the reflectivity at higher angles due to Bragg reflection. Here, the selection of a suitable combination of layer materials is essential to achieve a high flux at distinct photon energies, which is often required for applications such as microtomography, diffraction, or protein crystallography. This contribution presents the current development of a Ru/C multilayer mirror prepared by magnetron sputtering with a sputtering facility that was designed in-house at the Helmholtz-Zentrum Geesthacht. The deposition conditions were optimized in order to achieve ultra-high precision and high flux in future mirrors. Input for the improved deposition parameters came from investigations by transmission electron microscopy. The X-ray optical properties were investigated by means of X-ray reflectometry using Cu- and Mo-radiation. The change of the multilayer d-spacing over the mirror dimensions and the variation of the Bragg angles were determined. The results demonstrate the ability to precisely control the variation in thickness over the whole mirror length of 500 mm thus achieving picometer-precision in the meter-range.

  5. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). 
Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
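    The Monte Carlo idea can be sketched with a deliberately simplified measurement model. The rates, distributions, and the single-equation correction below are illustrative, not the paper's full Law-of-Total-Probability system:

```python
import random

# Simplified Monte Carlo correction: the measured marker concentration is
# modeled as true*sensitivity + background*false_positive_rate, and the
# uncertain assay parameters are sampled to propagate their spread into the
# estimate of the true concentration. All numbers are illustrative.

random.seed(1)
MEASURED = 100.0    # marker copies per reaction (hypothetical)
BACKGROUND = 20.0   # cross-reacting, non-target material (hypothetical)

def sample_true_concentration(n=10000):
    draws = []
    for _ in range(n):
        sens = random.gauss(0.90, 0.03)   # assay sensitivity
        fp = random.gauss(0.05, 0.01)     # false-positive rate
        draws.append((MEASURED - fp * BACKGROUND) / sens)
    return sorted(draws)

draws = sample_true_concentration()
mean = sum(draws) / len(draws)
ci95 = (draws[250], draws[-251])          # central ~95% interval
```

    The sorted draws give the full distribution of corrected concentrations, which is the kind of output the paper feeds into downstream risk and load models.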

  6. A parallel input composite transimpedance amplifier.

    PubMed

    Kim, D J; Kim, C

    2018-01-01

    A new approach to high performance current to voltage preamplifier design is presented. The design using multiple operational amplifiers (op-amps) has a parasitic capacitance compensation network and a composite amplifier topology for fast, precision, and low noise performance. The input stage, consisting of parallel-linked JFET op-amps and a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology, cooperating with the capacitance compensation feedback network, ensures wide bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high impedance transport and scanning tunneling microscopy measurements.

  7. A parallel input composite transimpedance amplifier

    NASA Astrophysics Data System (ADS)

    Kim, D. J.; Kim, C.

    2018-01-01

    A new approach to high performance current to voltage preamplifier design is presented. The design using multiple operational amplifiers (op-amps) has a parasitic capacitance compensation network and a composite amplifier topology for fast, precision, and low noise performance. The input stage, consisting of parallel-linked JFET op-amps and a high-speed bipolar junction transistor (BJT) gain stage driving the output in the composite amplifier topology, cooperating with the capacitance compensation feedback network, ensures wide bandwidth stability in the presence of input capacitance above 40 nF. The design is ideal for any two-probe measurement, including high impedance transport and scanning tunneling microscopy measurements.

  8. Solar models with helium and heavy-element diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahcall, J.N.; Pinsonneault, M.H.; Wasserburg, G.J.

    1995-10-01

    Helium and heavy-element diffusion are both included in precise calculations of solar models. In addition, improvements in the input data for solar interior models are described for nuclear reaction rates, the solar luminosity, the solar age, heavy-element abundances, radiative opacities, helium and metal diffusion rates, and neutrino interaction cross sections. The effects on the neutrino fluxes of each change in the input physics are evaluated separately by constructing a series of solar models with one additional improvement added at each stage. The effective 1σ uncertainties in the individual input quantities are estimated and used to evaluate the uncertainties in the calculated neutrino fluxes and the calculated event rates for solar neutrino experiments. The calculated neutrino event rates, including all of the improvements, are 9.3 (+1.2/−1.4) SNU for the ^37Cl experiment and 137 (+8/−7) SNU for the ^71Ga experiments. The calculated flux of ^7Be neutrinos is 5.1 (1.00 +0.06/−0.07) × 10^9 cm^−2 s^−1 and the flux of ^8B neutrinos is 6.6 (1.00 +0.14/−0.17) × 10^6 cm^−2 s^−1. The primordial helium abundance found for this model is Y = 0.278. The present-day surface abundance of the model is Y_s = 0.247, in agreement with the helioseismological measurement of Y_s = 0.242 ± 0.003 determined by Hernandez and Christensen-Dalsgaard (1994). The computed depth of the convective zone is R = 0.712 R_⊙, in agreement with the observed value determined from p-mode oscillation data of R = 0.713 ± 0.003 R_⊙ found by Christensen-Dalsgaard et al. (1991). (Abstract Truncated)

  9. Sequential dynamics in visual short-term memory.

    PubMed

    Kool, Wouter; Conway, Andrew R A; Turk-Browne, Nicholas B

    2014-10-01

    Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects.
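    A mixture likelihood of the kind used to separate target responses, swaps to non-targets, and guesses can be sketched as follows. The parameter names and values are ours, patterned after standard swap models; the authors' model-fitting details differ:

```python
import numpy as np

# Mixture likelihood for continuous-report data: each response error comes
# from the target (von Mises around zero error), a swapped non-target, or
# uniform guessing. Illustrative sketch, not the paper's exact model.

def vonmises_pdf(x, kappa):
    return np.exp(kappa * np.cos(x)) / (2.0 * np.pi * np.i0(kappa))

def neg_log_lik(resp, targ, nontarg, p_target, p_swap, kappa):
    p_guess = 1.0 - p_target - p_swap
    like = (p_target * vonmises_pdf(resp - targ, kappa)
            + p_swap * vonmises_pdf(resp[:, None] - nontarg, kappa).mean(axis=1)
            + p_guess / (2.0 * np.pi))
    return float(-np.log(like).sum())

rng = np.random.default_rng(0)
targ = rng.uniform(-np.pi, np.pi, 200)
nontarg = rng.uniform(-np.pi, np.pi, (200, 2))
resp = targ + rng.vonmises(0.0, 8.0, 200)   # synthetic, mostly target-driven data
good = neg_log_lik(resp, targ, nontarg, p_target=0.9, p_swap=0.05, kappa=8.0)
bad = neg_log_lik(resp, targ, nontarg, p_target=0.2, p_swap=0.05, kappa=8.0)
```

    Maximizing this likelihood per serial position is how storage, precision, guessing, and swap rates are estimated from continuous-report data.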

  10. Sequential dynamics in visual short-term memory

    PubMed Central

    Conway, Andrew R. A.; Turk-Browne, Nicholas B.

    2014-01-01

    Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects. PMID:25228092

  11. The impact of 14nm photomask variability and uncertainty on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-09-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations are done by ignoring the wafer photoresist model, and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.

  12. Dopaminergic Contributions to Vocal Learning

    PubMed Central

    Hoffmann, Lukas A.; Saravanan, Varun; Wood, Alynda N.; He, Li

    2016-01-01

    Although the brain relies on auditory information to calibrate vocal behavior, the neural substrates of vocal learning remain unclear. Here we demonstrate that lesions of the dopaminergic inputs to a basal ganglia nucleus in a songbird species (Bengalese finches, Lonchura striata var. domestica) greatly reduced the magnitude of vocal learning driven by disruptive auditory feedback in a negative reinforcement task. These lesions produced no measurable effects on the quality of vocal performance or the amount of song produced. Our results suggest that dopaminergic inputs to the basal ganglia selectively mediate reinforcement-driven vocal plasticity. In contrast, dopaminergic lesions produced no measurable effects on the birds' ability to restore song acoustics to baseline following the cessation of reinforcement training, suggesting that different forms of vocal plasticity may use different neural mechanisms. SIGNIFICANCE STATEMENT During skill learning, the brain relies on sensory feedback to improve motor performance. However, the neural basis of sensorimotor learning is poorly understood. Here, we investigate the role of the neurotransmitter dopamine in regulating vocal learning in the Bengalese finch, a songbird with an extremely precise singing behavior that can nevertheless be reshaped dramatically by auditory feedback. Our findings show that reduction of dopamine inputs to a region of the songbird basal ganglia greatly impairs vocal learning but has no detectable effect on vocal performance. These results suggest a specific role for dopamine in regulating vocal plasticity. PMID:26888928

  13. Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; L'Esperance, A.

    2017-01-01

    A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
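The propagation step described above can be sketched in a few lines. This is a deliberately simplified illustration, not the paper's full procedure: it assumes a single balance output variation applied to combined axial/normal-force sensitivities, and all function and parameter names are hypothetical.

```python
import math

def drag_precision_upper_bound(S_ref, q, alpha_deg, dAF_dV, dNF_dV, sigma_V=1.0e-6):
    """Conservative estimate of the drag-coefficient precision error (sketch).

    S_ref     : model reference area [m^2]
    q         : dynamic pressure [Pa]
    alpha_deg : assumed angle of attack [deg]
    dAF_dV    : axial-force sensitivity to a balance output [N per V/V] (illustrative)
    dNF_dV    : normal-force sensitivity [N per V/V] (illustrative)
    sigma_V   : assumed output variation (1.0 microV/V, as in the paper)
    """
    a = math.radians(alpha_deg)
    # Drag in body axes: D = AF*cos(alpha) + NF*sin(alpha).
    # Propagate the assumed output variation through both load terms.
    dD = math.hypot(dAF_dV * math.cos(a), dNF_dV * math.sin(a)) * sigma_V
    # Nondimensionalize by dynamic pressure and reference area.
    return dD / (q * S_ref)
```

At alpha = 0 the estimate reduces to the axial-force term alone, consistent with the paper's observation that the load-measurement errors, not the dynamic pressure, usually dominate.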

  14. A disturbance observer-based adaptive control approach for flexure beam nano manipulators.

    PubMed

    Zhang, Yangming; Yan, Peng; Zhang, Zhen

    2016-01-01

    This paper presents a systematic modeling and control methodology for a two-dimensional flexure beam-based servo stage supporting micro/nano manipulations. Compared with conventional mechatronic systems, such systems have major control challenges including cross-axis coupling, dynamical uncertainties, as well as input saturations, which may have adverse effects on system performance unless effectively eliminated. A novel disturbance observer-based adaptive backstepping-like control approach is developed for high precision servo manipulation purposes, which effectively accommodates model uncertainties and coupling dynamics. An auxiliary system is also introduced, on top of the proposed control scheme, to compensate for the input saturations. The proposed control architecture is deployed on a custom-designed nano manipulating system featured with a flexure beam structure and voice coil actuators (VCA). Real-time experiments on various manipulating tasks, such as trajectory/contour tracking, demonstrate precision errors of less than 1%. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Spatiotemporal dynamics of random stimuli account for trial-to-trial variability in perceptual decision making

    PubMed Central

    Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.

    2016-01-01

    Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272

  16. Testing Models of Stellar Structure and Evolution I. Comparison with Detached Eclipsing Binaries

    NASA Astrophysics Data System (ADS)

    del Burgo, C.; Allende Prieto, C.

    2018-05-01

    We present the results of an analysis aimed at testing the accuracy and precision of the PARSEC v1.2S library of stellar evolution models, combined with a Bayesian approach, to infer stellar parameters. We mainly employ the online DEBCat catalogue by Southworth, a compilation of detached eclipsing binary systems with published measurements of masses and radii to ˜ 2 per cent precision. We select a sample of 318 binary components, with masses between 0.10 and 14.5 solar units, and distances between 1.3 pc and ˜ 8 kpc for Galactic objects and ˜ 44-68 kpc for the extragalactic ones. The Bayesian analysis takes as input the effective temperature, radius, and [Fe/H], and their uncertainties, returning theoretical predictions for other stellar parameters. From the comparison with dynamical masses, we conclude that inferred masses are precisely derived for stars on the main sequence and in the core-helium-burning phase, with respective uncertainties of 4 per cent and 7 per cent, on average. Subgiant and red giant masses are predicted within 14 per cent, and early asymptotic giant branch stars within 24 per cent. These results are helpful to further improve the models, in particular for advanced evolutionary stages for which our understanding is limited. We obtain distances and ages for the binary systems and compare them, whenever possible, with precise literature estimates, finding excellent agreement. We discuss evolutionary effects and the challenges associated with the inference of stellar ages from evolutionary models. We also provide useful polynomial fits to theoretical zero-age main-sequence relations.
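A minimal sketch of the kind of grid-based Bayesian inference described above, assuming a Gaussian likelihood on the observed quantities and a flat prior over grid points. The grid layout and key names are hypothetical; interpolation within the PARSEC tracks and the full posterior treatment of the paper are omitted.

```python
import numpy as np

def infer_mass(grid, obs, sigma):
    """Posterior-mean mass from a stellar-model grid (illustrative sketch).

    grid  : dict of equal-length arrays 'teff', 'radius', 'feh', 'mass'
            (one entry per model point; hypothetical layout)
    obs   : dict with observed 'teff', 'radius', 'feh'
    sigma : dict with 1-sigma uncertainties for the same keys
    """
    # Chi-square distance of every model point from the observations.
    chi2 = sum(((grid[k] - obs[k]) / sigma[k]) ** 2 for k in ('teff', 'radius', 'feh'))
    w = np.exp(-0.5 * chi2)          # Gaussian likelihood, flat prior
    w /= w.sum()                     # normalize to posterior weights
    mean = float(np.sum(w * grid['mass']))
    std = float(np.sqrt(np.sum(w * (grid['mass'] - mean) ** 2)))
    return mean, std
```

The same weighted-average machinery returns any other grid quantity (age, luminosity, distance modulus) by swapping the 'mass' column.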

  17. Assessing and modelling ecohydrologic processes at the agricultural field scale

    NASA Astrophysics Data System (ADS)

    Basso, Bruno

    2015-04-01

    One of the primary goals of agricultural management is to increase the amount of crop produced per unit of fertilizer and water used. World record corn yields demonstrated that water use efficiency can increase fourfold with improved agronomic management and cultivars able to tolerate high densities. Planting crops with higher plant density can lead to significant yield increases, and increase plant transpiration vs. soil water evaporation. Precision agriculture technologies have been adopted for the last twenty years, but the collected data have seldom been converted into information that leads farmers to change their agronomic management. These methods are intuitively appealing, but yield maps and other spatial layers of data need to be properly analyzed and interpreted to truly become valuable. Current agro-mechanic and geospatial technologies allow us to implement a spatially variable plan for agronomic inputs including seeding rate, cultivars, pesticides, herbicides, fertilizers, and water. Crop models are valuable tools to evaluate the impact of management strategies (e.g., cover crops, tile drains, and genetically-improved cultivars) on yield, soil carbon sequestration, leaching and greenhouse gas emissions. They can help farmers identify adaptation strategies to current and future climate conditions. In this paper I illustrate the key role that precision agriculture technologies (yield mapping technologies, within-season soil and crop sensing), crop modeling and weather can play in dealing with the impact of climate variability on soil ecohydrologic processes. Case studies are presented to illustrate this concept.

  18. Can Bayesian Theories of Autism Spectrum Disorder Help Improve Clinical Practice?

    PubMed

    Haker, Helene; Schneebeli, Maya; Stephan, Klaas Enno

    2016-01-01

    Diagnosis and individualized treatment of autism spectrum disorder (ASD) represent major problems for contemporary psychiatry. Tackling these problems requires guidance by a pathophysiological theory. In this paper, we consider recent theories that re-conceptualize ASD from a "Bayesian brain" perspective, which posit that the core abnormality of ASD resides in perceptual aberrations due to an imbalance in the precision of prediction errors (sensory noise) relative to the precision of predictions (prior beliefs). This results in percepts that are dominated by sensory inputs and less guided by top-down regularization and shifts the perceptual focus to detailed aspects of the environment with difficulties in extracting meaning. While these Bayesian theories have inspired ongoing empirical studies, their clinical implications have not yet been carved out. Here, we consider how this Bayesian perspective on disease mechanisms in ASD might contribute to improving clinical care for affected individuals. Specifically, we describe a computational strategy, based on generative (e.g., hierarchical Bayesian) models of behavioral and functional neuroimaging data, for establishing diagnostic tests. These tests could provide estimates of specific cognitive processes underlying ASD and delineate pathophysiological mechanisms with concrete treatment targets. Written with a clinical audience in mind, this article outlines how the development of computational diagnostics applicable to behavioral and functional neuroimaging data in routine clinical practice could not only fundamentally alter our concept of ASD but eventually also transform the clinical management of this disorder.

  20. Validating precision estimates in horizontal wind measurements from a Doppler lidar

    DOE PAGES

    Newsom, Rob K.; Brewer, W. Alan; Wilczak, James M.; ...

    2017-03-30

    Results from a recent field campaign are used to assess the accuracy of wind speed and direction precision estimates produced by a Doppler lidar wind retrieval algorithm. The algorithm, which is based on the traditional velocity-azimuth-display (VAD) technique, estimates the wind speed and direction measurement precision using standard error propagation techniques, assuming the input data (i.e., radial velocities) to be contaminated by random, zero-mean, errors. For this study, the lidar was configured to execute an 8-beam plan-position-indicator (PPI) scan once every 12 min during the 6-week deployment period. Several wind retrieval trials were conducted using different schemes for estimating the precision in the radial velocity measurements. Here, the resulting wind speed and direction precision estimates were compared to differences in wind speed and direction between the VAD algorithm and sonic anemometer measurements taken on a nearby 300 m tower.
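The VAD fit and the standard error propagation it relies on can be illustrated as follows. This is a generic least-squares sketch under the same assumptions the abstract states (random, zero-mean radial-velocity errors); the beam geometry handling and `sigma_vr` scheme are simplifications, not the deployed retrieval.

```python
import numpy as np

def vad_fit(az_deg, elev_deg, vr, sigma_vr):
    """Least-squares VAD wind retrieval with propagated precision (sketch).

    az_deg   : beam azimuths [deg] (e.g., 8 beams of a PPI scan)
    elev_deg : fixed elevation angle [deg]
    vr       : measured radial velocities [m/s]
    sigma_vr : assumed radial-velocity precision [m/s]
    """
    az = np.radians(np.asarray(az_deg, dtype=float))
    el = np.radians(elev_deg)
    # Radial-velocity model: vr = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el)
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.full_like(az, np.sin(el))])
    uvw, *_ = np.linalg.lstsq(A, vr, rcond=None)
    # Standard error propagation for random, zero-mean input errors.
    cov = sigma_vr**2 * np.linalg.inv(A.T @ A)
    speed = np.hypot(uvw[0], uvw[1])
    return uvw, speed, np.sqrt(np.diag(cov))
```

The diagonal of the covariance gives the (u, v, w) precisions; precisions for speed and direction follow from a further linearization around the fitted wind vector.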

  1. Design and analysis of a 3D Elliptical Micro-Displacement Motion Stage

    NASA Astrophysics Data System (ADS)

    Lin, Jieqiong; Zhao, Dongpo; Lu, Mingming; Zhou, Jiakang

    2017-12-01

    Micro-displacement motion stages driven by piezoelectric actuators have seen significant demand in the field of ultra-precision machining in recent years, and the design of the micro-displacement motion stage plays an important role in realizing a large displacement output and high-precision control. Thus, a 3D elliptical micro-displacement motion stage driven by three PZT actuators has been developed. Firstly, the 3D elliptical trajectory of this motion stage can be adjusted through the form of the PZT actuators' input signals. Then, the desired trajectory is obtained by adjusting the micro displacement of the motion stage in 3D elliptical space. Finally, trajectory simulation and finite element simulation were applied to this motion stage. The experimental results showed that the output displacements in the three directions under an input force of 1600 N were 14 μm, 16 μm and 74 μm, respectively, and the first three modes were 1471.6 Hz, 2698.4 Hz and 2803.4 Hz, respectively. Analysis and experiments were carried out to verify the performance, and the results proved that a large output displacement and high-precision control could be obtained.

  2. Increased reliability of nuclear magnetic resonance protein structures by consensus structure bundles.

    PubMed

    Buchner, Lena; Güntert, Peter

    2015-02-03

    Nuclear magnetic resonance (NMR) structures are represented by bundles of conformers calculated from different randomized initial structures using identical experimental input data. The spread among these conformers indicates the precision of the atomic coordinates. However, there is as yet no reliable measure of structural accuracy, i.e., how close NMR conformers are to the "true" structure. Instead, the precision of structure bundles is widely (mis)interpreted as a measure of structural quality. Attempts to increase precision often overestimate accuracy by tight bundles of high precision but much lower accuracy. To overcome this problem, we introduce a protocol for NMR structure determination with the software package CYANA, which produces, like the traditional method, bundles of conformers in agreement with a common set of conformational restraints but with a realistic precision that is, throughout a variety of proteins and NMR data sets, a much better estimate of structural accuracy than the precision of conventional structure bundles. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. An Electromagnetically-Controlled Precision Orbital Tracking Vehicle (POTV)

    DTIC Science & Technology

    1992-12-01

    assume that C > B > A. Then θ1(t) is purely sinusoidal. θ2(t) is also sinusoidal because the forcing function z(t) is sinusoidal. θ3(t) is more...an unpredictable manner. The problem arises from the rank deficiency of the G input matrix as shown below. Remember we have shown already that its...rank can never exceed five because rows two, four, and six are linearly dependent. The rank deficiency arises from the "translational part" of the input

  4. Nonlinear computations shaping temporal processing of precortical vision.

    PubMed

    Butts, Daniel A; Cui, Yuwei; Casti, Alexander R R

    2016-09-01

    Computations performed by the visual pathway are constructed by neural circuits distributed over multiple stages of processing, and thus it is challenging to determine how different stages contribute on the basis of recordings from single areas. In the current article, we address this problem in the lateral geniculate nucleus (LGN), using experiments combined with nonlinear modeling capable of isolating various circuit contributions. We recorded cat LGN neurons presented with temporally modulated spots of various sizes, which drove temporally precise LGN responses. We utilized simultaneously recorded S-potentials, corresponding to the primary retinal ganglion cell (RGC) input to each LGN cell, to distinguish the computations underlying temporal precision in the retina from those in the LGN. Nonlinear models with excitatory and delayed suppressive terms were sufficient to explain temporal precision in the LGN, and we found that models of the S-potentials were nearly identical, although with a lower threshold. To determine whether additional influences shaped the response at the level of the LGN, we extended this model to use the S-potential input in combination with stimulus-driven terms to predict the LGN response. We found that the S-potential input "explained away" the major excitatory and delayed suppressive terms responsible for temporal patterning of LGN spike trains but revealed additional contributions, largely PULL suppression, to the LGN response. Using this novel combination of recordings and modeling, we were thus able to dissect multiple circuit contributions to LGN temporal responses across retina and LGN, and set the foundation for targeted study of each stage. Copyright © 2016 the American Physiological Society.

  5. Design, Modeling and Performance Optimization of a Novel Rotary Piezoelectric Motor

    NASA Technical Reports Server (NTRS)

    Duong, Khanh A.; Garcia, Ephrahim

    1997-01-01

    This work has demonstrated a proof of concept for a torsional inchworm-type motor. The prototype motor has shown that piezoelectric stack actuators can be used for a rotary inchworm motor. The discrete linear motion of piezoelectric stacks can be converted into rotary stepping motion. The stacks, with their high force and displacement output, are suitable actuators for use in a piezoelectric motor. The designed motor is capable of delivering high torque and speed. Critical issues involving the design and operation of piezoelectric motors were studied. The tolerance between the contact shoes and the rotor proved to be very critical to the performance of the motor. Based on the prototype motor, a waveform optimization scheme was proposed and implemented to improve the performance of the motor. The motor was successfully modeled in MATLAB. The model closely represents the behavior of the prototype motor. Using the motor model, the input waveforms were successfully optimized to improve the performance of the motor in terms of speed, torque, power and precision. These optimized waveforms drastically improved the speed of the motor at different frequencies and loading conditions experimentally. The optimized waveforms also increased the level of precision of the motor. The use of the optimized waveforms is a break from the traditional use of sinusoidal and square waves as the driving signals. This waveform optimization scheme can be applied to any inchworm motor to improve its performance. The prototype motor in this dissertation, as a proof of concept, was designed to be robust and large. Future motors can be designed much smaller and more efficient with lessons learned from the prototype motor.

  6. Optimally Repeatable Kinetic Model Variant for Myocardial Blood Flow Measurements with 82Rb PET.

    PubMed

    Ocneanu, Adrian F; deKemp, Robert A; Renaud, Jennifer M; Adler, Andy; Beanlands, Rob S B; Klein, Ran

    2017-01-01

    Purpose. Myocardial blood flow (MBF) quantification with 82Rb positron emission tomography (PET) is gaining clinical adoption, but improvements in precision are desired. This study aims to identify analysis variants producing the most repeatable MBF measures. Methods. 12 volunteers underwent same-day test-retest rest and dipyridamole stress imaging with dynamic 82Rb PET, from which MBF was quantified using 1-tissue-compartment kinetic model variants: (1) blood-pool versus uptake region sampled input function (Blood/Uptake-ROI), (2) dual spillover correction (SOC-On/Off), (3) right blood correction (RBC-On/Off), (4) arterial blood transit delay (Delay-On/Off), and (5) distribution volume (DV) constraint (Global/Regional-DV). Repeatability of MBF, stress/rest myocardial flow reserve (MFR), and stress/rest MBF difference (ΔMBF) was assessed using nonparametric reproducibility coefficients (RPCnp = 1.45 × interquartile range). Results. MBF using SOC-On, RBC-Off, Blood-ROI, Global-DV, and Delay-Off was most repeatable for combined rest and stress: RPCnp = 0.21 mL/min/g (15.8%). Corresponding MFR and ΔMBF RPCnp were 0.42 (20.2%) and 0.24 mL/min/g (23.5%). MBF repeatability improved with SOC-On at stress (p < 0.001) and tended to improve with RBC-Off at both rest and stress (p < 0.08). DV and ROI did not significantly influence repeatability. The Delay-On model was overdetermined and did not reliably converge. Conclusion. MBF and MFR test-retest repeatability were the best with dual spillover correction, left atrium blood input function, and global DV.
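The nonparametric reproducibility coefficient defined above is straightforward to compute from paired test-retest measurements; a minimal sketch (function name is ours):

```python
import numpy as np

def rpc_np(test, retest):
    """Nonparametric reproducibility coefficient: 1.45 x IQR of the
    test-retest differences, as defined in the abstract."""
    d = np.asarray(test, dtype=float) - np.asarray(retest, dtype=float)
    q75, q25 = np.percentile(d, [75, 25])
    return 1.45 * (q75 - q25)
```

For a Gaussian difference distribution, 1.45 × IQR approximates 1.96 × SD, so RPCnp plays the role of a 95% repeatability limit while being robust to outliers.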

  7. Electrotactile EMG feedback improves the control of prosthesis grasping force

    NASA Astrophysics Data System (ADS)

    Schweisfurth, Meike A.; Markovic, Marko; Dosen, Strahinja; Teich, Florian; Graimann, Bernhard; Farina, Dario

    2016-10-01

    Objective. A drawback of active prostheses is that they detach the subject from the produced forces, thereby preventing direct mechanical feedback. This can be compensated by providing somatosensory feedback to the user through mechanical or electrical stimulation, which in turn may improve the utility, sense of embodiment, and thereby increase the acceptance rate. Approach. In this study, we compared a novel approach to closing the loop, namely EMG feedback (emgFB), to classic force feedback (forceFB), using electrotactile interface in a realistic task setup. Eleven intact-bodied subjects and one transradial amputee performed a routine grasping task while receiving emgFB or forceFB. The two feedback types were delivered through the same electrotactile interface, using a mixed spatial/frequency coding to transmit 8 discrete levels of the feedback variable. In emgFB, the stimulation transmitted the amplitude of the processed myoelectric signal generated by the subject (prosthesis input), and in forceFB the generated grasping force (prosthesis output). The task comprised 150 trials of routine grasping at six forces, randomly presented in blocks of five trials (same force). Interquartile range and changes in the absolute error (AE) distribution (magnitude and dispersion) with respect to the target level were used to assess precision and overall performance, respectively. Main results. Relative to forceFB, emgFB significantly improved the precision of myoelectric commands (min/max of the significant levels) by 23%/36% as well as the precision of force control by 12%/32%, in intact-bodied subjects. Also, the magnitude and dispersion of the AE distribution were reduced. The results were similar in the amputee, showing considerable improvements. Significance. Using emgFB, the subjects therefore decreased the uncertainty of the forward pathway.
Since there is a correspondence between the EMG and force, where the former anticipates the latter, the emgFB allowed for predictive control, as the subjects used the feedback to adjust the desired force even before the prosthesis contacted the object. In conclusion, the online emgFB was superior to the classic forceFB in realistic conditions that included electrotactile stimulation, limited feedback resolution (8 levels), cognitive processing delay, and time constraints (fast grasping).

  8. SAR operational aspects

    NASA Astrophysics Data System (ADS)

    Holmdahl, P. E.; Ellis, A. B. E.; Moeller-Olsen, P.; Ringgaard, J. P.

    1981-12-01

    The basic requirements of the SAR ground segment of ERS-1 are discussed. A system configuration for the real time data acquisition station and the processing and archive facility is depicted. The functions of a typical SAR processing unit (SPU) are specified, and inputs required for near real time and full precision, deferred time processing are described. Inputs and the processing required for provision of these inputs to the SPU are dealt with. Data flow through the systems, and normal and nonnormal operational sequence, are outlined. Prerequisites for maintaining overall performance are identified, emphasizing quality control. The most demanding tasks to be performed by the front end are defined in order to determine types of processors and peripherals which comply with throughput requirements.

  9. The Optical Field Angle Distortion Calibration of HST Fine Guidance Sensors 1R and 3

    NASA Technical Reports Server (NTRS)

    McArthur, B.; Benedict, G. F.; Jefferys, W. H.; Nelan, E.

    2006-01-01

    To date five OFAD (Optical Field Angle Distortion) calibrations have been performed with a star field in M35, four on FGS3 and one on FGS1, all analyzed by the Astrometry Science Team. We have recently completed an improved FGS1R OFAD calibration. The ongoing Long Term Stability Tests have also been analyzed and incorporated into these calibrations, which are time-dependent due to on-orbit changes in the FGS. Descriptions of these tests and the results of our OFAD modeling are given. Because all OFAD calibrations use the same star field, we calibrate FGS 1 and FGS 3 simultaneously. This increases the precision of our input catalog, resulting in an improvement in both the FGS 1 and FGS 3 calibrations. A redetermination of the proper motions, using 12 years of HST data, has significantly improved our calibration. Residuals to our OFAD modeling indicate that FGS 1 will provide astrometry superior to FGS 3 by approx. 20%. Past and future FGS astrometric science supported by these calibrations is briefly reviewed.

  10. Robust integral variable structure controller and pulse-width pulse-frequency modulated input shaper design for flexible spacecraft with mismatched uncertainty/disturbance.

    PubMed

    Hu, Qinglei

    2007-10-01

    This paper presents a dual-stage control system design method for the flexible spacecraft attitude maneuvering control by use of on-off thrusters and active vibration control by input shaper. In this design approach, the attitude control system and vibration suppression were designed separately using lower-order models. As a stepping stone, an integral variable structure controller with the assumption of knowing the upper bounds of the mismatched lumped perturbation has been designed which ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency so that the output profile is similar to the continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration will be caused by the command itself, which only requires information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by the rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
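The input-shaping step, which as noted needs only the vibration frequency and damping of the closed-loop system, can be sketched as the classic two-impulse Zero-Vibration (ZV) shaper. This is the textbook form, not necessarily the exact shaper used in the paper:

```python
import math

def zv_shaper(freq_hz, zeta):
    """Two-impulse Zero-Vibration (ZV) input shaper (textbook sketch).

    freq_hz : modeled vibration frequency [Hz]
    zeta    : modal damping ratio (0 <= zeta < 1)

    Returns (amplitudes, times); convolving the command signal with these
    impulses cancels residual vibration at the modeled mode.
    """
    wd = 2.0 * math.pi * freq_hz * math.sqrt(1.0 - zeta**2)  # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]   # normalized so the sum is 1
    times = [0.0, math.pi / wd]               # second impulse at half the damped period
    return amps, times
```

Shaping the pulse-width pulse-frequency modulated command with these two impulses delays the maneuver by half a damped period in exchange for suppressing the flexible-mode response.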

  11. High-speed digital signal normalization for feature identification

    NASA Technical Reports Server (NTRS)

    Ortiz, J. A.; Meredith, B. D.

    1983-01-01

    A design approach for high-speed normalization of digital signals was developed. A reciprocal look-up table technique is employed, where a digital value is mapped to its reciprocal via a high-speed memory. This reciprocal is then multiplied with an input signal to obtain the normalized result. Normalization considerably improves the accuracy of certain feature identification algorithms. By using the concept of pipelining, the multispectral sensor data processing rate is limited only by the speed of the multiplier. The breadboard system was found to operate at an execution rate of five million normalizations per second. This design features high precision, reduced hardware complexity, high flexibility, and expandability, which are very important considerations for spaceborne applications. It also accomplishes a high-speed normalization rate essential for real-time data processing.
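The reciprocal look-up technique can be sketched in software. The hardware replaces the division with one memory read and one pipelined multiply; the table width and fixed-point scale below are illustrative assumptions, not the breadboard's actual parameters:

```python
def build_reciprocal_lut(bits=8, scale=2**15):
    """Fixed-point reciprocal look-up table (illustrative parameters).

    Maps each bits-wide value x to round(scale / x); entry 0 is saturated
    since the reciprocal of zero is undefined.
    """
    return [scale] + [round(scale / x) for x in range(1, 2**bits)]

def normalize(signal, reference, lut, scale=2**15):
    """Normalize signal by reference using one table read and one multiply,
    mirroring the memory-plus-multiplier datapath described above."""
    return signal * lut[reference] / scale
```

Because the division is precomputed into the table, the per-sample datapath contains no divider, which is what lets the pipeline run at the multiplier's rate.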

  12. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    The intensity correlation imaging (ICI) method can obtain high-resolution images with ground-based low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. But the algorithms now used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build the mathematical model of phase retrieval and simplify it into a constrained optimization problem of a multi-dimensional function. A new error function was designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and obtain better images, especially in low-SNR conditions.
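For context, one iteration of the baseline hybrid input-output (HIO) algorithm that the regularization method improves upon can be sketched as follows; the paper's noise-aware regularized error function itself is not reproduced here:

```python
import numpy as np

def hio_step(g, magnitude, support, beta=0.9):
    """One hybrid input-output (HIO) iteration (baseline sketch).

    g         : current real-space estimate of the object
    magnitude : measured Fourier magnitudes (from intensity correlations)
    support   : boolean mask marking where the object is allowed to be nonzero
    beta      : HIO feedback parameter
    """
    G = np.fft.fft2(g)
    # Impose the measured Fourier magnitudes, keeping the current phases.
    Gp = magnitude * np.exp(1j * np.angle(G))
    gp = np.fft.ifft2(Gp).real
    # Inside the support take the updated estimate; outside, apply the
    # HIO feedback that drives stray energy toward zero.
    return np.where(support, gp, g - beta * gp)
```

Iterating this step alternates between the Fourier-magnitude constraint and the support constraint; the regularized error function of the paper would modify the magnitude-projection step according to the noise statistics.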

  13. Supervised learning with decision margins in pools of spiking neurons.

    PubMed

    Le Mouel, Charlotte; Harris, Kenneth D; Yger, Pierre

    2014-10-01

    Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such "supervised learning", using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.
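The hinge-loss error function with a decision margin, borrowed here from the SVM analogy the abstract draws, can be sketched as follows (vectorized over examples; the spiking-neuron weight-update machinery is omitted, and the linear readout is an illustrative stand-in for the neuron's summed synaptic drive):

```python
import numpy as np

def hinge_loss(w, x, y, margin=1.0):
    """Hinge loss with a decision margin, as in the SVM.

    w      : weight vector (synaptic weights, illustratively)
    x      : input patterns, one per row
    y      : labels in {-1, +1}
    margin : requested decision margin; loss is zero only when the
             output clears the margin on the correct side.
    """
    return np.maximum(0.0, margin - y * (x @ w))
```

Requesting margin > 0, rather than merely a correct sign, is what the abstract credits with improved performance on linearly non-separable problems.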

  14. Comment on ``Symmetry and structure of quantized vortices in superfluid 3He-B''

    NASA Astrophysics Data System (ADS)

    Sauls, J. A.; Serene, J. W.

    1985-10-01

    Recent theoretical attempts to explain the observed vortex-core phase transition in superfluid 3He-B yield conflicting results. Variational calculations by Fetter and Theodorakis, based on realistic strong-coupling parameters, yield a phase transition in the Ginzburg-Landau region that is in qualitative agreement with the phase diagram. Numerically precise calculations by Salomaa and Volovik (SV), based on the Brinkman-Serene-Anderson (BSA) parameters, do not yield a phase transition between axially symmetric vortices. The ambiguity of these results is in part due to the large differences between the β parameters, which are inputs to the vortex free-energy functional. We comment on the relative merits of the β parameters based on recent improvements in the quasiparticle scattering amplitude and the BSA parameters used by SV.

  15. Safety and Certification Considerations for Expanding the Use of UAS in Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Maddalon, Jeffrey M.; Neogi, Natasha A.; Vertstynen, Harry A.

    2016-01-01

    The agricultural community is actively engaged in adopting new technologies such as unmanned aircraft systems (UAS) to help assess the condition of crops and develop appropriate treatment plans. In the United States, agricultural use of UAS has largely been limited to small UAS, generally weighing less than 55 lb and operating within the line of sight of a remote pilot. A variety of small UAS are being used to monitor and map crops, while only a few are being used to apply agricultural inputs based on the results of remote sensing. Larger UAS with substantial payload capacity could provide an option for site-specific application of agricultural inputs in a timely fashion, without substantive damage to the crops or soil. A recent study by the National Aeronautics and Space Administration (NASA) investigated certification requirements needed to enable the use of larger UAS to support the precision agriculture industry. This paper provides a brief introduction to aircraft certification relevant to agricultural UAS, an overview of and results from the NASA study, and a discussion of how those results might affect the precision agriculture community. Specific topics of interest include business model considerations for unmanned aerial applicators and a comparison with current means of variable rate application. The intent of the paper is to inform the precision agriculture community of evolving technologies that will enable broader use of unmanned vehicles to reduce costs, reduce environmental impacts, and enhance yield, especially for specialty crops that are grown on small to medium size farms.

  16. An Improved Method of Pose Estimation for Lighthouse Base Station Extension.

    PubMed

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-10-22

    In 2015, HTC and Valve launched a virtual reality headset empowered by Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are vulnerable to occlusion of moving targets; that is, they cannot calculate a target's pose from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from the sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning even when only a few sensors detect the signal.
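
    The core idea of pooling sensor detections from several base stations into one unified pose solve can be sketched with a rigid-body fit (a hypothetical numpy illustration using the Kabsch algorithm on 3D correspondences, not the paper's Lighthouse-specific sweep-angle math; sensor layout and station assignments are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with Q ≈ P @ R.T + t (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Sensors on the tracked object (object frame); any station may see any subset.
sensors = rng.random((8, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([0.5, -0.2, 1.0])
world = sensors @ R_true.T + t_true             # noiseless world-frame positions

# Each station detects only two sensors; pooling them gives four in total,
# enough for one unified fit even though no single station saw enough alone.
seen_by_a, seen_by_b = [0, 3], [5, 7]
idx = seen_by_a + seen_by_b
R_est, t_est = rigid_fit(sensors[idx], world[idx])
print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(t_est, t_true, atol=1e-6))
```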

  17. An Improved Method of Pose Estimation for Lighthouse Base Station Extension

    PubMed Central

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-01-01

    In 2015, HTC and Valve launched a virtual reality headset empowered by Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are vulnerable to occlusion of moving targets; that is, they cannot calculate a target's pose from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from the sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning even when only a few sensors detect the signal. PMID:29065509

  18. Impact of a Ground Network of Miniaturized Laser Heterodyne Radiometers (mini-LHRs) on Global Carbon Flux Estimates

    NASA Astrophysics Data System (ADS)

    DiGregorio, A.; Wilson, E. L.; Palmer, P. I.; Mao, J.; Feng, L.

    2017-12-01

    We present the simulated impact of a small (50 instrument) ground network of NASA Goddard Space Flight Center's miniaturized laser heterodyne radiometer (mini-LHR), a small, low-cost (~$50k), portable, high-precision CH4 and CO2 measuring instrument. Partnered with AERONET as a non-intrusive accessory, the mini-LHR can leverage the 500+ instrument AERONET network for rapid network deployment and testing, while simultaneously retrieving co-located aerosol data, an important input for satellite measurements. This observing system simulation experiment (OSSE) uses the 3-D GEOS-Chem chemistry transport model and 50 strategically selected sites to model the flux estimate uncertainty reduction of both TCCON and mini-LHR instruments. We found that 50 mini-LHR sites are capable of reducing global uncertainty by up to 70%, with local improvements in the Southern Hemisphere reaching 90%. Our studies show that the addition of the mini-LHR to current ground networks would play a major role in reducing global carbon flux uncertainty.

  19. Computer modelling of cyclic deformation of high-temperature materials. Technical progress report, 1 September-30 November 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duesbery, M.S.

    1993-11-30

    This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism. Restriction of empiricism to the choice of parameters to be input to known functional forms permits some confidence in extrapolation procedures and has the advantage that the models can readily be updated as better estimates of the parameters become available.

  20. Design, experiments and simulation of voltage transformers on the basis of a differential input D-dot sensor.

    PubMed

    Wang, Jingang; Gao, Can; Yang, Jie

    2014-07-17

    Currently available traditional electromagnetic voltage sensors fail to meet the measurement requirements of the smart grid because of low accuracy over the static and dynamic ranges and the occurrence of ferromagnetic resonance attributed to overvoltage and output short circuits. This work develops a new non-contact, high-bandwidth voltage measurement system for power equipment, aimed at the miniaturization and non-contact measurement needs of the smart grid. After an analysis of the traditional D-dot voltage probe, an improved method is proposed: for the sensor to work in a self-integrating pattern, a differential input pattern is adopted for the circuit design, and grounding is removed. To validate the structure design, circuit component parameters, and insulation characteristics, Ansoft Maxwell software is used for simulation. Moreover, the new probe was tested on a 10 kV high-voltage test platform for steady-state error and transient behavior. Experimental results confirm that the root mean square values of the measured voltage are precise and that the phase error is small. The D-dot voltage sensor not only meets the requirement of high accuracy but also exhibits satisfactory transient response, meeting the intelligence, miniaturization, and convenience requirements of the smart grid.

  1. Information Management Platform for Data Analytics and Aggregation (IMPALA) System Design Document

    NASA Technical Reports Server (NTRS)

    Carnell, Andrew; Akinyelu, Akinyele

    2016-01-01

    The System Design Document (SDD) tracks the design activities performed to guide the integration, installation, verification, and acceptance testing of the IMPALA Platform. The inputs to the design document are derived from the activities recorded in Tasks 1 through 6 of the Statement of Work (SOW), with the proposed technical solution being the completion of Phase 1-A. With the architecture of the IMPALA Platform and the installation steps documented, the SDD will be a living document, capturing the details of capability enhancements and system improvements to the IMPALA Platform that support users in developing accurate and precise analytical models. The IMPALA Platform infrastructure team, data architecture team, system integration team, security management team, project manager, and NASA data scientists and users are the intended audience of this document. The IMPALA Platform is an assembly of commercial-off-the-shelf (COTS) products installed on an Apache Hadoop platform. User interface details for the COTS products will be sourced from the COTS tool vendors' documentation. The SDD is a focused explanation of the inputs, design steps, and projected outcomes of every design activity for the IMPALA Platform through installation and validation.

  2. Identification of individualised empirical models of carbohydrate and insulin effects on T1DM blood glucose dynamics

    NASA Astrophysics Data System (ADS)

    Cescon, Marzia; Johansson, Rolf; Renard, Eric; Maran, Alberto

    2014-07-01

    One of the main limiting factors in improving glucose control for type 1 diabetes mellitus (T1DM) subjects is the lack of a precise description of the effects of meal and insulin intake on blood glucose. Knowing the magnitude and duration of such effects would be useful not only for patients and physicians, but also for the development of a controller targeting glycaemia regulation. Therefore, in this paper we focus on estimating low-complexity yet physiologically sound and individualised multi-input single-output (MISO) models of the glucose metabolism in T1DM able to reflect the basic dynamical features of the glucose-insulin metabolic system in response to a meal intake or an insulin injection. The models are continuous-time second-order transfer functions relating the amount of carbohydrate in a meal and the insulin units of the correspondingly administered dose (inputs) to the plasma glucose evolution (output), and consist of a few clinically relevant parameters to be estimated. The estimation strategy is continuous-time data-driven system identification and exploits a database in which meals and insulin boluses are separated in time, allowing unique identification of the model parameters.
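
    A minimal sketch of this model class, assuming illustrative (not patient-identified) gains and time constants: two critically damped second-order responses map a meal and its insulin bolus to a plasma glucose excursion around a basal level:

```python
import numpy as np

dt, T = 1.0, 600.0                        # minutes
t = np.arange(0.0, T, dt)

def excursion(elapsed, peak, tau):
    """Critically damped second-order response peaking at `peak` at time `tau`."""
    x = np.maximum(elapsed, 0.0)
    return peak * (x / tau) * np.exp(1.0 - x / tau)

# Illustrative meal at t = 0 and insulin bolus 30 min later (hypothetical values:
# a +60 mg/dL carbohydrate effect peaking at 45 min, a -50 mg/dL insulin effect
# peaking 90 min after the bolus).
rise = excursion(t, 60.0, 45.0)           # carbohydrate input channel
drop = -excursion(t - 30.0, 50.0, 90.0)   # insulin input channel
glucose = 120.0 + rise + drop             # plasma glucose around basal, mg/dL

print(round(float(glucose.max()), 1), round(float(glucose[-1]), 1))
```

In the paper the peak heights and time constants are the clinically relevant parameters estimated per patient from the separated meal and bolus records.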

  3. Neural computing thermal comfort index PMV for the indoor environment intelligent control system

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Chen, Yifei

    2013-03-01

    Providing indoor thermal comfort while saving energy are the two main goals of an indoor environmental control system. This paper presents an intelligent comfort control system that combines intelligent control and minimum-power control strategies for the indoor environment. To realize comfort control, the predicted mean vote (PMV) is used as the control goal, and corrected PMV formulas are optimized to improve the indoor comfort level by considering six comfort-related variables. In addition, an RBF neural network based on a genetic algorithm is designed to compute PMV, giving better performance and handling the nonlinearity of the PMV calculation. Formulas are given for calculating the expected output values from the input samples, and the RBF network model is trained on the input samples and expected outputs. Simulation results show that the intelligent calculation method is valid: it achieves high precision, fast dynamic response, and good system performance, and can be used in practice within the required calculation error.
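
    The RBF-network idea can be sketched in a few lines (a toy numpy fit, not the paper's genetic-algorithm-tuned network; the 1-D target function below stands in for the six-variable PMV relation, and the centers come from a uniform grid rather than a GA):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy training data: a smooth nonlinear target plus noise, standing in for
# samples of the PMV index as a function of one comfort variable.
X = rng.uniform(-3, 3, 200)
y = np.tanh(X) + 0.05 * rng.normal(size=200)

# Gaussian RBF hidden layer with fixed centers and width (assumed, not tuned).
centers = np.linspace(-3, 3, 15)
width = 0.6
Phi = np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# The output layer is linear, so its weights solve a least-squares problem.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"train RMSE: {rmse:.3f}")
```

The paper's genetic algorithm would search over the centers and widths instead of fixing them; the linear least-squares output layer is unchanged.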

  4. Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model

    PubMed Central

    Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon

    2015-01-01

    In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH), action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose a method called DMH. It provides a standard structure for segmenting images and extracting features using depth information, MHI, and HOG. Second, action modeling is performed to model various actions using the extracted features. Actions are modeled by creating sequences of actions through k-means clustering; these sequences constitute the HMM input. Third, a method of action spotting is proposed to filter meaningless actions from continuous actions and to identify precise start and end points of actions. By employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on start and end points. We evaluate recognition performance by employing the proposed method to obtain and compare the probabilities produced by applying input sequences to the action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172

  5. Different phase delays of peripheral input to primate motor cortex and spinal cord promote cancellation at physiological tremor frequencies

    PubMed Central

    Koželj, Saša

    2014-01-01

    Neurons in the spinal cord and motor cortex (M1) are partially phase-locked to cycles of physiological tremor, but with opposite phases. Convergence of spinal and cortical activity onto motoneurons may thus produce phase cancellation and a reduction in tremor amplitude. The mechanisms underlying this phase difference are unknown. We investigated coherence between spinal and M1 activity with sensory input. In two anesthetized monkeys, we electrically stimulated the medial, ulnar, deep radial, and superficial radial nerves; stimuli were timed as independent Poisson processes (rate 10 Hz). Single units were recorded from M1 (147 cells) or cervical spinal cord (61 cells). Ninety M1 cells were antidromically identified as pyramidal tract neurons (PTNs); M1 neurons were additionally classified according to M1 subdivision (rostral/caudal, M1r/c). Spike-stimulus coherence analysis revealed significant coupling over a broad range of frequencies, with the strongest coherence at <50 Hz. Delays implied by the slope of the coherence phase-frequency relationship were greater than the response onset latency, reflecting the importance of late response components for the transmission of oscillatory inputs. The spike-stimulus coherence phase over the 6–13 Hz physiological tremor band differed significantly between M1 and spinal cells (phase differences relative to the cord of 2.72 ± 0.29 and 1.72 ± 0.37 radians for PTNs from M1c and M1r, respectively). We conclude that different phases of the response to peripheral input could partially underlie antiphase M1 and spinal cord activity during motor behavior. The coordinated action of spinal and cortical feedback will act to reduce tremulous oscillations, possibly improving the overall stability and precision of motor control. PMID:24572094

  6. Different phase delays of peripheral input to primate motor cortex and spinal cord promote cancellation at physiological tremor frequencies.

    PubMed

    Koželj, Saša; Baker, Stuart N

    2014-05-01

    Neurons in the spinal cord and motor cortex (M1) are partially phase-locked to cycles of physiological tremor, but with opposite phases. Convergence of spinal and cortical activity onto motoneurons may thus produce phase cancellation and a reduction in tremor amplitude. The mechanisms underlying this phase difference are unknown. We investigated coherence between spinal and M1 activity with sensory input. In two anesthetized monkeys, we electrically stimulated the medial, ulnar, deep radial, and superficial radial nerves; stimuli were timed as independent Poisson processes (rate 10 Hz). Single units were recorded from M1 (147 cells) or cervical spinal cord (61 cells). Ninety M1 cells were antidromically identified as pyramidal tract neurons (PTNs); M1 neurons were additionally classified according to M1 subdivision (rostral/caudal, M1r/c). Spike-stimulus coherence analysis revealed significant coupling over a broad range of frequencies, with the strongest coherence at <50 Hz. Delays implied by the slope of the coherence phase-frequency relationship were greater than the response onset latency, reflecting the importance of late response components for the transmission of oscillatory inputs. The spike-stimulus coherence phase over the 6-13 Hz physiological tremor band differed significantly between M1 and spinal cells (phase differences relative to the cord of 2.72 ± 0.29 and 1.72 ± 0.37 radians for PTNs from M1c and M1r, respectively). We conclude that different phases of the response to peripheral input could partially underlie antiphase M1 and spinal cord activity during motor behavior. The coordinated action of spinal and cortical feedback will act to reduce tremulous oscillations, possibly improving the overall stability and precision of motor control. Copyright © 2014 the American Physiological Society.

  7. 3D-templated, fully automated microfluidic input/output multiplexer for endocrine tissue culture and secretion sampling.

    PubMed

    Li, Xiangpeng; Brooks, Jessica C; Hu, Juan; Ford, Katarena I; Easley, Christopher J

    2017-01-17

    A fully automated, 16-channel microfluidic input/output multiplexer (μMUX) has been developed for interfacing to primary cells and to improve understanding of the dynamics of endocrine tissue function. The device utilizes pressure driven push-up valves for precise manipulation of nutrient input and hormone output dynamics, allowing time resolved interrogation of the cells. The ability to alternate any of the 16 channels from input to output, and vice versa, provides for high experimental flexibility without the need to alter microchannel designs. 3D-printed interface templates were custom designed to sculpt the above-channel polydimethylsiloxane (PDMS) in microdevices, creating millimeter scale reservoirs and confinement chambers to interface primary murine islets and adipose tissue explants to the μMUX sampling channels. This μMUX device and control system was first programmed for dynamic studies of pancreatic islet function to collect ∼90 minute insulin secretion profiles from groups of ∼10 islets. The automated system was also operated in temporal stimulation and cell imaging mode. Adipose tissue explants were exposed to a temporal mimic of post-prandial insulin and glucose levels, while simultaneous switching between labeled and unlabeled free fatty acid permitted fluorescent imaging of fatty acid uptake dynamics in real time over a ∼2.5 hour period. Application with varying stimulation and sampling modes on multiple murine tissue types highlights the inherent flexibility of this novel, 3D-templated μMUX device. The tissue culture reservoirs and μMUX control components presented herein should be adaptable as individual modules in other microfluidic systems, such as organ-on-a-chip devices, and should be translatable to different tissues such as liver, heart, skeletal muscle, and others.

  8. Atlas-based automatic measurements of the morphology of the tibiofemoral joint

    NASA Astrophysics Data System (ADS)

    Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-03-01

    Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  9. Detecting bladder fullness through the ensemble activity patterns of the spinal cord unit population in a somatovisceral convergence environment.

    PubMed

    Park, Jae Hong; Kim, Chang-Eop; Shin, Jaewoo; Im, Changkyun; Koh, Chin Su; Seo, In Seok; Kim, Sang Jeong; Shin, Hyung-Cheul

    2013-10-01

    Chronic monitoring of the state of the bladder can be used to notify patients with urinary dysfunction when the bladder should be voided. Given that many spinal neurons respond both to somatic and visceral inputs, it is necessary to extract bladder information selectively from the spinal cord. Here, we hypothesize that sensory information with distinct modalities should be represented by the distinct ensemble activity patterns within the neuronal population and, therefore, analyzing the activity patterns of the neuronal population could distinguish bladder fullness from somatic stimuli. We simultaneously recorded 26-27 single unit activities in response to bladder distension or tactile stimuli in the dorsal spinal cord of each Sprague-Dawley rat. In order to discriminate between bladder fullness and tactile stimulus inputs, we analyzed the ensemble activity patterns of the entire neuronal population. A support vector machine (SVM) was employed as a classifier, and discrimination performance was measured by k-fold cross-validation tests. Most of the units responding to bladder fullness also responded to the tactile stimuli (88.9-100%). The SVM classifier precisely distinguished the bladder fullness from the somatic input (100%), indicating that the ensemble activity patterns of the unit population in the spinal cord are distinct enough to identify the current input modality. Moreover, our ensemble activity pattern-based classifier showed high robustness against random losses of signals. This study is the first to demonstrate that the two main issues of electroneurographic monitoring of bladder fullness, low signals and selectiveness, can be solved by an ensemble activity pattern-based approach, improving the feasibility of chronic monitoring of bladder fullness by neural recording.

  10. Atlas-based automatic measurements of the morphology of the tibiofemoral joint.

    PubMed

    Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W

    2017-02-11

    Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  11. Software designs of image processing tasks with incremental refinement of computation.

    PubMed

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
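
    For a linear operation, the bitplane-based incremental idea can be sketched as follows (an illustrative numpy example, not the authors' released designs): feeding bitplanes of the input from most to least significant refines the output monotonically, so the computation can be stopped at any point and still return the result up to the computed precision:

```python
import numpy as np

rng = np.random.default_rng(3)

# An 8-bit input image and a stand-in linear "transform" (elementwise scaling;
# any linear operator such as a block transform or convolution works the same).
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
exact = img.astype(np.int64) * 3

partial = np.zeros(img.shape, dtype=np.int64)
errors = []
for bit in range(7, -1, -1):                 # MSB first: coarse result arrives early
    plane = ((img >> bit) & 1).astype(np.int64) << bit
    partial += plane * 3                     # linearity lets bitplane results be summed
    errors.append(int(np.abs(exact - partial).max()))

print(errors)                                # non-increasing, ends at 0
```

Each loop iteration is an "input-source increment"; terminating the loop early yields the output quantized to the bitplanes processed so far.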

  12. Supervised interpretation of echocardiograms with a psychological model of expert supervision

    NASA Astrophysics Data System (ADS)

    Revankar, Shriram V.; Sher, David B.; Shalin, Valerie L.; Ramamurthy, Maya

    1993-07-01

    We have developed a collaborative scheme that facilitates active human supervision of the binary segmentation of an echocardiogram. The scheme complements the reliability of a human expert with the precision of segmentation algorithms. In the developed system, an expert user compares the computer-generated segmentation with the original image in a user-friendly graphics environment, and interactively indicates the incorrectly classified regions either by pointing or by circling. The precise boundaries of the indicated regions are computed by studying the original image properties at each region, together with a human visual attention distribution map obtained from the published psychological and psychophysical research. We use the developed system to extract contours of heart chambers from a sequence of two-dimensional echocardiograms. We are currently extending this method to incorporate a richer set of inputs from the human supervisor, to facilitate multi-class labeling of image regions according to their functionality. We are also integrating the knowledge-related constraints that cardiologists use, to improve the capabilities of the existing system. This extension involves developing a psychological model of expert reasoning, functional and relational models of typical views in echocardiograms, and corresponding interface modifications to map the suggested actions to image processing algorithms.

  13. Virial Coefficients and Equations of State for Hard Polyhedron Fluids.

    PubMed

    Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C

    2017-10-24

    Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B4 and numerically evaluating them up to B8. Using virial coefficients as inputs, we show the convergence properties for four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
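
    Using virial coefficients as inputs to a truncated equation of state can be sketched as below. The reduced coefficients are approximate hard-sphere values (B_n / B2^(n-1)), standing in for the polyhedron data in the paper; at a moderate packing fraction the series with seven coefficients tracks the Carnahan-Starling result closely:

```python
# Approximate reduced hard-sphere virial coefficients B2..B8 (B_n / B2^(n-1)).
b = [1.0, 0.625, 0.2869, 0.1103, 0.0386, 0.0127, 0.0040]

def Z_virial(eta, terms):
    """Compressibility factor Z = 1 + sum_{n>=2} B_n rho^(n-1), truncated after
    `terms` coefficients; for hard spheres B2 * rho = 4 * eta (packing fraction)."""
    x = 4.0 * eta
    return 1.0 + sum(c * x**k for k, c in enumerate(b[:terms], start=1))

def Z_cs(eta):
    """Carnahan-Starling equation of state for hard spheres, for comparison."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

eta = 0.3
for terms in (2, 4, 7):
    print(terms, round(Z_virial(eta, terms), 3))
print("CS:", round(Z_cs(eta), 3))
```

The paper's question is how fast such truncations (and resummations like the exponential approximant) converge for each polyhedron, which is why high-order coefficients are worth the computational expense.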

  14. Airborne Precision Spacing for Dependent Parallel Operations Interface Study

    NASA Technical Reports Server (NTRS)

    Volk, Paul M.; Takallu, M. A.; Hoffler, Keith D.; Weiser, Jarold; Turner, Dexter

    2012-01-01

    This paper describes a usability study of proposed cockpit interfaces to support Airborne Precision Spacing (APS) operations for aircraft performing dependent parallel approaches (DPA). NASA has proposed an airborne system called Pair Dependent Speed (PDS) which uses their Airborne Spacing for Terminal Arrival Routes (ASTAR) algorithm to manage spacing intervals. Interface elements were designed to facilitate the input of APS-DPA spacing parameters to ASTAR, and to convey PDS system information to the crew deemed necessary and/or helpful to conduct the operation, including: target speed, guidance mode, target aircraft depiction, and spacing trend indication. In the study, subject pilots observed recorded simulations using the proposed interface elements in which the ownship managed assigned spacing intervals from two other arriving aircraft. Simulations were recorded using the Aircraft Simulation for Traffic Operations Research (ASTOR) platform, a medium-fidelity simulator based on a modern Boeing commercial glass cockpit. Various combinations of the interface elements were presented to subject pilots, and feedback was collected via structured questionnaires. The results of subject pilot evaluations show that the proposed design elements were acceptable, and that preferable combinations exist within this set of elements. The results also point to potential improvements to be considered for implementation in future experiments.

  15. Quantum Discord Determines the Interferometric Power of Quantum States

    NASA Astrophysics Data System (ADS)

    Girolami, Davide; Souza, Alexandre M.; Giovannetti, Vittorio; Tufarelli, Tommaso; Filgueiras, Jefferson G.; Sarthour, Roberto S.; Soares-Pinto, Diogo O.; Oliveira, Ivan S.; Adesso, Gerardo

    2014-05-01

    Quantum metrology exploits quantum mechanical laws to improve the precision in estimating technologically relevant parameters such as phase, frequency, or magnetic fields. Probe states are usually tailored to the particular dynamics whose parameters are being estimated. Here we consider a novel framework where quantum estimation is performed in an interferometric configuration, using bipartite probe states prepared when only the spectrum of the generating Hamiltonian is known. We introduce a figure of merit for the scheme, given by the worst-case precision over all suitable Hamiltonians, and prove that it amounts exactly to a computable measure of discord-type quantum correlations for the input probe. We complement our theoretical results with a metrology experiment, realized in a highly controllable room-temperature nuclear magnetic resonance setup, which provides a proof-of-concept demonstration for the usefulness of discord in sensing applications. Discordant probes are shown to guarantee a nonzero phase sensitivity for all the chosen generating Hamiltonians, while classically correlated probes are unable to accomplish the estimation in a worst-case setting. This work establishes a rigorous and direct operational interpretation for general quantum correlations, shedding light on their potential for quantum technology.

  16. Optical harmonic generator

    DOEpatents

    Summers, M.A.; Eimerl, D.; Boyd, R.D.

    1982-06-10

    A pair of uniaxial birefringent crystal elements are fixed together to form a serially arranged, integral assembly which, alternatively, provides either a linearly or elliptically polarized second-harmonic output wave or a linearly polarized third-harmonic output wave. The extraordinary or e directions of the crystal elements are oriented in the integral assembly to be in quadrature (90°). For second-harmonic generation in the Type-II-Type-II angle-tuned case, the input fundamental wave has equal-amplitude o and e components. For third-harmonic generation, the input fundamental wave has o and e components whose amplitudes are in a ratio of 2:1 (o:e, referenced to the first crystal). In the typical case of a linearly polarized input fundamental wave this can be accomplished by simply rotating the crystal assembly about the input beam direction by 10°. For both second- and third-harmonic generation, precise input phase-matching is achieved by tilting the crystal assembly about its two sensitive axes (o).

  17. Homeostatic plasticity shapes cell-type-specific wiring in the retina

    PubMed Central

    Tien, Nai-Wen; Soto, Florentina; Kerschensteiner, Daniel

    2017-01-01

    SUMMARY Convergent input from different presynaptic partners shapes the responses of postsynaptic neurons. Whether developing postsynaptic neurons establish connections with each presynaptic partner independently, or balance inputs to attain specific responses is unclear. Retinal ganglion cells (RGCs) receive convergent input from bipolar cell types with different contrast responses and temporal tuning. Here, using optogenetic activation and pharmacogenetic silencing, we found that type 6 bipolar cells (B6) dominate excitatory input to ONα-RGCs. We generated mice in which B6 cells were selectively removed from developing circuits (B6-DTA). In B6-DTA mice, ONα-RGCs adjusted connectivity with other bipolar cells in a cell-type-specific manner. They recruited new partners, increased synapses with some existing partners, and maintained constant input from others. Patch clamp recordings revealed that anatomical rewiring precisely preserved contrast- and temporal frequency response functions of ONα-RGCs, indicating that homeostatic plasticity shapes cell-type-specific wiring in the developing retina to stabilize visual information sent to the brain. PMID:28457596

  18. Optical harmonic generator

    DOEpatents

    Summers, Mark A.; Eimerl, David; Boyd, Robert D.

    1985-01-01

    A pair of uniaxial birefringent crystal elements are fixed together to form a serially arranged, integral assembly which, alternatively, provides either a linearly or elliptically polarized second-harmonic output wave or a linearly polarized third-harmonic output wave. The "extraordinary" or "e" directions of the crystal elements are oriented in the integral assembly to be in quadrature (90°). For a second-harmonic generation in the Type-II-Type-II angle tuned case, the input fundamental wave has equal amplitude "o" and "e" components. For a third-harmonic generation, the input fundamental wave has "o" and "e" components whose amplitudes are in a ratio of 2:1 ("o":"e" reference first crystal). In the typical case of a linearly polarized input fundamental wave this can be accomplished by simply rotating the crystal assembly about the input beam direction by 10°. For both second and third harmonic generation input precise phase-matching is achieved by tilting the crystal assembly about its two sensitive axes ("o").

  19. Bilinearity in Spatiotemporal Integration of Synaptic Inputs

    PubMed Central

    Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David

    2014-01-01

    Neurons process information via integration of synaptic inputs from dendrites. Many experimental results demonstrate dendritic integration could be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third additional bilinear term proportional to their product with a proportionality coefficient . The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integration, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
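    The bilinear rule described above can be sketched in a few lines; the waveforms and the coefficient value k below are hypothetical placeholders, since in the paper the coefficient depends on the input times and locations.

```python
# Sketch of the bilinear spatiotemporal integration rule: the summed somatic
# potential is approximated by the linear sum of the two individually evoked
# postsynaptic potentials plus a bilinear correction proportional to their
# product. The value k = -0.1 is illustrative only.

def bilinear_sum(v1, v2, k):
    """Approximate summed somatic response for a pair of synaptic inputs.

    v1, v2 : somatic potentials evoked by each input alone (same time grid)
    k      : proportionality coefficient (depends on input times/locations)
    """
    return [a + b + k * a * b for a, b in zip(v1, v2)]

# Toy EPSP-like waveforms (arbitrary units).
v1 = [0.0, 1.0, 2.0, 1.0, 0.5]
v2 = [0.0, 0.5, 1.5, 1.0, 0.2]
print(bilinear_sum(v1, v2, k=-0.1))
```

The pairwise decomposition for multiple inputs mentioned in the abstract amounts to summing such bilinear corrections over every input pair.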

  20. Logarithmic amplifiers.

    PubMed

    Gandler, W; Shapiro, H

    1990-01-01

    Logarithmic amplifiers (log amps), which produce an output signal proportional to the logarithm of the input signal, are widely used in cytometry for measurements of parameters that vary over a wide dynamic range, e.g., cell surface immunofluorescence. Existing log amp circuits all deviate to some extent from ideal performance with respect to dynamic range and fidelity to the logarithmic curve; accuracy in quantitative analysis using log amps therefore requires that log amps be individually calibrated. However, accuracy and precision may be limited by photon statistics and system noise when very low level input signals are encountered.

  1. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Peter

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  2. Comparative study of pulsed Nd:YAG laser welding of AISI 304 and AISI 316 stainless steels

    NASA Astrophysics Data System (ADS)

    Kumar, Nikhil; Mukherjee, Manidipto; Bandyopadhyay, Asish

    2017-02-01

    Laser welding is a potentially useful technique for joining two pieces of similar or dissimilar materials with high precision. In the present work, comparative studies of laser welding of the similar metals AISI 304SS and AISI 316SS have been conducted to form butt joints. A robot-controlled 600 W pulsed Nd:YAG laser source has been used for welding. The effects of laser power, scanning speed, and pulse width on the ultimate tensile strength and weld width have been investigated using empirical models developed by RSM. The results of ANOVA indicate that the developed models predict the responses adequately within the limits of the input parameters. 3-D response surface and contour plots have been developed to examine the combined effects of the input parameters on the responses. Furthermore, microstructural analysis as well as hardness and tensile testing of selected welds of 304SS and 316SS have been carried out to understand the metallurgical and mechanical behavior of the welds. The selection criteria are based on the maximum and minimum strength achieved by the respective welds. It has been observed that current pulsation, base metal composition, and variation in heat input have significant influence on the microstructural constituents (i.e., phase fraction, grain size, etc.). The results suggest that low-energy-input pulsation generally produces a finer grain structure and better mechanical properties than high-energy-input pulsation, irrespective of base material composition. Among the base materials, however, 304SS exhibits better microstructural and mechanical properties than 316SS for a given parametric condition. Finally, desirability function analysis has been applied for multi-objective optimization: simultaneous maximization of ultimate tensile strength and minimization of weld width. Confirmatory tests have been conducted at the optimum parametric conditions to validate the optimization technique.
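    The desirability-function analysis used above can be sketched as follows. All numeric ranges and weld data in this sketch are hypothetical placeholders, not values from the paper; only the structure (one larger-is-better response, one smaller-is-better response, combined by a geometric mean) reflects the standard method.

```python
# Desirability-function multi-objective optimization sketch: maximize
# ultimate tensile strength while minimizing weld width. Ranges and the
# candidate welds are made-up illustrative values.

def d_larger_is_better(y, low, target):
    """Desirability for a response to be maximized: 0 below low, 1 at target."""
    return min(max((y - low) / (target - low), 0.0), 1.0)

def d_smaller_is_better(y, target, high):
    """Desirability for a response to be minimized: 1 at target, 0 above high."""
    return min(max((high - y) / (high - target), 0.0), 1.0)

def overall_desirability(strength, width):
    d1 = d_larger_is_better(strength, low=300.0, target=600.0)  # MPa, assumed
    d2 = d_smaller_is_better(width, target=1.0, high=3.0)       # mm, assumed
    return (d1 * d2) ** 0.5  # geometric mean of individual desirabilities

# Pick the best of a few hypothetical (strength, width) welds:
welds = [(450.0, 2.5), (560.0, 1.8), (600.0, 2.9)]
best = max(welds, key=lambda w: overall_desirability(*w))
print(best)
```

The geometric mean ensures that a weld scoring zero on either objective has zero overall desirability, which is why the method balances the two responses rather than trading one off entirely.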

  3. Fully Self-Contained Vision-Aided Navigation and Landing of a Micro Air Vehicle Independent from External Sensor Inputs

    NASA Technical Reports Server (NTRS)

    Brockers, Roland; Susca, Sara; Zhu, David; Matthies, Larry

    2012-01-01

    Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a roof top. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman filter based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data to correct for high latency SLAM inputs and to increase the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.

  4. Minimum data requirement for neural networks based on power spectral density analysis.

    PubMed

    Deng, Jiamei; Maass, Bastian; Stobart, Richard

    2012-04-01

    One of the most critical challenges ahead for diesel engines is to identify new techniques for fuel economy improvement without compromising emissions regulations. One such technique is the precise control of the air/fuel ratio, which requires the measurement of instantaneous fuel consumption. Measurement accuracy and repeatability for fuel rate are the key to successfully controlling the air/fuel ratio and measuring fuel consumption in real time. The volumetric and gravimetric measurement principles are well-known methods for measuring fuel consumption in internal combustion engines. However, the fuel flow rate measured by these methods is not suitable for either real-time control or real-time measurement purposes because of the intermittent nature of the measurements. This paper describes a technique that can be used to find the minimum data [consisting of data from just 2.5% of the non-road transient cycle (NRTC)] needed to solve the problem of discontinuous fuel-flow-rate data measured using an AVL 733S fuel meter for a medium- or heavy-duty diesel engine using neural networks. Only torque and speed are used as the input parameters for the fuel flow rate prediction. Power density analysis is used to find the minimum amount of data. The results show that the nonlinear autoregressive model with exogenous inputs could predict the fuel flow rate successfully with R(2) above 0.96 using 2.5% of the NRTC data with only torque and speed as inputs.
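    As a minimal, purely illustrative stand-in for the approach above, the sketch below fits a lagged autoregressive-with-exogenous-inputs (ARX) model, linearized and solved by least squares, on synthetic torque/speed/fuel data. The paper trains a nonlinear (NARX) neural network on real engine data; everything numeric here is invented.

```python
import numpy as np

# ARX-style predictor mapping torque and speed histories (plus past fuel
# readings) to fuel flow rate. All data are synthetic toy values.

rng = np.random.default_rng(0)
n = 500
torque = rng.uniform(50, 400, n)     # N*m, synthetic
speed = rng.uniform(800, 2200, n)    # rpm, synthetic
fuel = 0.01 * torque + 0.002 * speed + rng.normal(0, 0.05, n)  # toy target

def lagged_design(torque, speed, fuel, lags=2):
    """Rows of [fuel(t-lags..t-1), torque(t-lags..t), speed(t-lags..t)]."""
    rows, y = [], []
    for t in range(lags, len(fuel)):
        row = (list(fuel[t - lags:t]) + list(torque[t - lags:t + 1])
               + list(speed[t - lags:t + 1]))
        rows.append(row)
        y.append(fuel[t])
    return np.array(rows), np.array(y)

X, y = lagged_design(torque, speed, fuel)
A = np.c_[X, np.ones(len(X))]                 # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(round(r2, 3))
```

Swapping the least-squares solve for a neural regressor on the same lagged design matrix gives the NARX structure the abstract refers to.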

  5. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics.

    PubMed

    Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian

    2004-12-10

    Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as the healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. While achieving an identical recall, the meta-search engine showed a precision of 77.26% (+/-14.45) compared to the individual search engines' 52.65% (+/-12.0) (p < 0.001). The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians.

  6. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    PubMed Central

    Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian

    2004-01-01

    Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as the healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0) (p < 0.001). Conclusion The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians. PMID:15588311
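    The recall and precision figures reported in both records above reduce to simple set arithmetic over relevance judgments. The result sets below are hypothetical, constructed so that the meta-search engine attains the same recall as an individual engine but higher precision:

```python
# Recall and precision over judged search results. Result IDs are invented
# for illustration; the study averaged these metrics over ten keyword queries.

def precision_recall(retrieved, relevant):
    """retrieved: list of result IDs; relevant: set of IDs judged relevant."""
    retrieved_set = set(retrieved)
    hits = len(retrieved_set & relevant)
    precision = hits / len(retrieved_set) if retrieved_set else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {"cat1", "cat2", "cat3", "cat4"}
meta_results = ["cat1", "cat2", "cat3", "cat4", "x1"]                  # filtered
single_results = ["cat1", "cat2", "cat3", "cat4", "x1", "x2", "x3"]    # noisier

print(precision_recall(meta_results, relevant))    # higher precision
print(precision_recall(single_results, relevant))  # same recall, lower precision
```

This mirrors the reported effect: the meta-search engine's inherent filters discard irrelevant hits without losing any relevant ones, so recall is unchanged while precision rises.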

  7. High-precision relative position and attitude measurement for on-orbit maintenance of spacecraft

    NASA Astrophysics Data System (ADS)

    Zhu, Bing; Chen, Feng; Li, Dongdong; Wang, Ying

    2018-02-01

    In order to realize long-term on-orbit operation of spacecraft such as satellites and space stations, the life of a spacecraft can be extended not only through long-life device design but also through on-orbit servicing and maintenance. Precise and detailed maintenance of key components is therefore necessary. In this paper, a high-precision relative position and attitude measurement method for use in the maintenance of key components is given. This method mainly considers the design of the passive cooperative marker, the light-emitting device, and a high-resolution camera in the presence of spatial stray light and noise. Using a series of algorithms, such as background elimination, feature extraction, and position and attitude calculation, the high-precision relative pose parameters between the key operation parts and the maintenance equipment are obtained and serve as the input to the control system. The simulation results show that the algorithm is accurate and effective, satisfying the requirements of the precision operation technique.

  8. Off-set stabilizer for comparator output

    DOEpatents

    Lunsford, James S.

    1991-01-01

    A stabilized off-set voltage is input as the reference voltage to a comparator. In application to a time-interval meter, the comparator output generates a timing interval which is independent of drift in the initial voltage across the timing capacitor. A precision resistor and operational amplifier charge a capacitor to a voltage which is precisely offset from the initial voltage. The capacitance of the reference capacitor is selected so that substantially no voltage drop is obtained in the reference voltage applied to the comparator during the interval to be measured.

  9. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology for the analysis and design of a minimum mean-square-error-criterion linear system incorporating a tapped delay line (TDL) in which all the full-precision multiplications in the TDL are constrained to be powers of two. A linear equalizer based on a dispersive, additive-noise channel is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
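    The power-of-two constraint can be sketched as follows (the tap values are illustrative, not those of the paper, and the paper optimizes its constrained taps against the MMSE criterion rather than rounding): each full-precision coefficient is replaced by the nearest signed power of two in the log domain, so that every TDL multiply reduces to a bit shift on a microprocessor.

```python
import math

# Quantize tapped-delay-line coefficients to signed powers of two so each
# multiply becomes a bit shift. Tap values below are hypothetical.

def nearest_power_of_two(c):
    """Quantize a coefficient to +/- 2**k (integer k), nearest in log scale."""
    if c == 0.0:
        return 0.0
    k = round(math.log2(abs(c)))
    return math.copysign(2.0 ** k, c)

taps = [0.9, -0.27, 0.12, -0.06]  # hypothetical full-precision taps
shift_taps = [nearest_power_of_two(c) for c in taps]
print(shift_taps)
```

In fixed-point arithmetic, multiplying by 0.125 then becomes a right shift by three bits, which is the implementation saving the paper exploits.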

  10. Precision digital pulse phase generator

    DOEpatents

    McEwan, T.E.

    1996-10-08

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.

  11. Precision digital pulse phase generator

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.
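    The servo equilibrium described in both patent records above can be mimicked with a toy discrete-time loop. All gains and scalings below are made-up illustrative values, not taken from the patent: the point is only that an integrator adjusting the control voltage until the phase monitor balances yields a settled delay that is linear in the digital control code.

```python
# Toy model of the patent's servo loop: the op-amp integrator drives the
# RC control voltage until the measured delay matches the delay commanded
# by the DAC code. Numeric scales are hypothetical.

def settle_delay(code, steps=2000, gain=0.01):
    """Return the settled (normalized) delay for a digital code in 0..255."""
    target = code / 255.0      # DAC current -> commanded duty-cycle balance
    control = 0.0              # integrator state (control voltage)
    delay = 0.0
    for _ in range(steps):
        delay = control        # RC delay assumed proportional to control voltage
        error = target - delay # phase-monitor duty-cycle difference
        control += gain * error  # integrator accumulates the error current
    return delay

print(round(settle_delay(128), 3))  # settles near 128/255
```

Because the loop drives the error to zero, the settled delay tracks `code/255` for any code, reproducing the linear code-to-phase-delay dependence the patent claims.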

  12. Cerebellar input configuration toward object model abstraction in manipulation tasks.

    PubMed

    Luque, Niceto R; Garrido, Jesus A; Carrillo, Richard R; Coenen, Olivier J-M D; Ros, Eduardo

    2011-08-01

    It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.

  13. A high sensitive 66 dB linear dynamic range receiver for 3-D laser radar

    NASA Astrophysics Data System (ADS)

    Ma, Rui; Zheng, Hao; Zhu, Zhangming

    2017-08-01

    This study presents a CMOS receiver chip realized in 0.18 μm standard CMOS technology and intended for high-precision 3-D laser radar. The chip includes an adjustable-gain transimpedance pre-amplifier, a post-amplifier and two timing comparators. An additional feedback is employed in the regulated cascode transimpedance amplifier to decrease the input impedance, and a variable-gain transimpedance amplifier controlled by digital switches and an analog multiplexer is utilized to realize four gain modes, extending the input dynamic range. Measurements show that the highest transimpedance of the channel is 50 kΩ, the uncompensated walk error is 1.44 ns over a wide linear dynamic range of 66 dB (1:2000), and the input-referred noise current is 2.3 pA/√Hz (rms), resulting in a very low detectable input current of 1 μA with SNR = 5.

  14. 60 V tolerance full symmetrical switch for battery monitor IC

    NASA Astrophysics Data System (ADS)

    Zhang, Qidong; Yang, Yintang; Chai, Changchun

    2017-06-01

    To meet the high-speed, high-precision voltage-acquisition requirements of stacked battery-monitoring ICs, this paper introduces a symmetrical high-voltage switch circuit. The switch uses a voltage-following structure, which eliminates the leakage path of input signals. The circuit also adopts a high-speed charge pump so that, whenever the input signal voltage is higher than the supply voltage, it can quickly and accurately turn on the high-voltage MOS devices and pass the battery voltage to an analog-to-digital converter. The proposed fully symmetrical high-voltage switch has been implemented in a 0.18 μm BCD process; simulated and measured results show that the switch always works properly regardless of the polarity of the voltage difference between the input signal ports, including for input signals higher than the power supply. Project supported by the National Natural Science Foundation of China (No. 61334003).

  15. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-Scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function that utilizes constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues related to locating earthquake clusters/aftershocks. The first is the massive amount of time and labour required to locate a large number of seismic events manually. The second is to efficiently and correctly identify the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that the individual P and S phases arrive at different stations in different orders, making correct phase picking nearly impossible. Using these very complicated waveforms as the input, ISSA scans the entire model space for possible combinations of time and location for the existence of seismic sources. The scanning results successfully associate the various phases from each event at all stations, and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to the waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan.
The overall signal-to-noise ratio is inadequate for locating small events; and the precise arrival times of P and S phases are difficult to determine. We use one of the largest aftershocks that can be located by conventional methods as our reference event to calibrate the controlling parameters of ISSA. These parameters include the overall Vp/Vs ratio (because a precise S velocity model was unavailable), the length of scanning time window, and the weighting factor for each station. Our results show that ISSA is not only more efficient in locating earthquake clusters/aftershocks, but also capable of identifying many events missed by conventional phase-picking methods.
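    The brightness function at the heart of SSA can be illustrated with a toy single-phase example. The geometry, velocity model, and Gaussian "waveforms" below are all synthetic assumptions (the actual ISSA uses separated P and S phases and real travel-time computations): a trial source/origin-time pair is "bright" when its predicted arrivals line up with energy on every trace.

```python
import numpy as np

# Minimal brightness-function sketch in the spirit of the Source-Scanning
# Algorithm: for a trial origin time and location, stack normalized waveform
# amplitudes sampled at the predicted arrival times across stations.

V = 5.0                                                       # km/s, assumed
stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0]])   # km, made up
dt = 0.01
t = np.arange(0, 30, dt)

def synth_record(dist, t0):
    """Unit-amplitude Gaussian pulse at the predicted arrival time."""
    return np.exp(-((t - (t0 + dist / V)) ** 2) / (2 * 0.1 ** 2))

true_src, true_t0 = np.array([10.0, 20.0]), 2.0
records = [synth_record(np.linalg.norm(s - true_src), true_t0) for s in stations]

def brightness(src, t0):
    """Average normalized amplitude sampled at the predicted arrivals."""
    total = 0.0
    for s, u in zip(stations, records):
        arrival = t0 + np.linalg.norm(s - np.asarray(src)) / V
        total += u[int(round(arrival / dt))]
    return total / len(stations)

print(brightness(true_src, true_t0))   # near 1 at the true source
print(brightness([30.0, 5.0], true_t0))  # near 0 at a wrong trial location
```

Scanning `brightness` over a grid of trial locations and origin times, and keeping the maxima, is the event-location step; no per-trace phase picking is needed, which is why the approach tolerates the low signal-to-noise data described above.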

  16. Supporting inter-topic entity search for biomedical Linked Data based on heterogeneous relationships.

    PubMed

    Zong, Nansu; Lee, Sungin; Ahn, Jinhyun; Kim, Hong-Gee

    2017-08-01

    The keyword-based entity search restricts the search space based on the preference of the search. When the given keywords and preferences are not related to the same biomedical topic, existing biomedical Linked Data search engines fail to deliver satisfactory results. This research aims to tackle this issue by supporting an inter-topic search: improving search when the inputs, keywords and preferences, fall under different topics. This study developed an effective algorithm in which the relations between biomedical entities are used in tandem with a keyword-based entity search, Siren. The algorithm, PERank, an adaptation of Personalized PageRank (PPR), uses a pair of inputs, (1) search preferences and (2) entities from a keyword-based entity search with a keyword query, to formalize the search results on the fly based on an index of precomputed Individual Personalized PageRank Vectors (IPPVs). Our experiments were performed over ten linked life datasets for two query sets, one with keyword-preference topic correspondence (intra-topic search), and the other without (inter-topic search). The experiments showed that the proposed method achieved better search results, for example a 14% increase in precision for the inter-topic search over the baseline keyword-based search engine. The proposed method improved keyword-based biomedical entity search by supporting the inter-topic search, without affecting the intra-topic search, based on the relations between different entities. Copyright © 2017 Elsevier Ltd. All rights reserved.
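    As a hedged sketch of the Personalized PageRank machinery behind PERank (the toy graph, entity names, and parameter values below are invented; PERank additionally precomputes such vectors offline as IPPVs and blends them with keyword-search results):

```python
# Personalized PageRank by power iteration over a tiny toy entity graph.
# The restart (seed) distribution biases the ranking toward a preference node.

def personalized_pagerank(adj, seeds, alpha=0.85, iters=100):
    """adj: {node: [out-neighbors]}; seeds: restart distribution {node: prob}."""
    nodes = list(adj)
    rank = {v: seeds.get(v, 0.0) for v in nodes}
    for _ in range(iters):
        new = {v: (1 - alpha) * seeds.get(v, 0.0) for v in nodes}
        for v in nodes:
            out = adj[v]
            if not out:
                continue
            share = alpha * rank[v] / len(out)  # spread mass along out-edges
            for w in out:
                new[w] += share
        rank = new
    return rank

# Made-up biomedical entity graph: a drug linked to diseases, a gene aside.
adj = {"drugA": ["disease1", "disease2"], "disease1": ["drugA"],
       "disease2": ["drugA"], "gene1": ["disease1"]}
ppr = personalized_pagerank(adj, seeds={"drugA": 1.0})
print(max(ppr, key=ppr.get))  # the seed's neighborhood ranks highest
```

Re-ranking keyword-search hits by such preference-seeded scores is what lets entities from a different topic than the keywords still surface, which is the inter-topic behavior described above.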

  17. Genome editing for crop improvement: Challenges and opportunities

    PubMed Central

    Abdallah, Naglaa A; Prakash, Channapatna S; McHughen, Alan G

    2015-01-01

    Genome or gene editing includes several new techniques that help scientists precisely modify genome sequences. These techniques also enable us to alter the regulation of gene expression patterns in a pre-determined region and facilitate novel insights into the functional genomics of an organism. The emergence of genome editing has brought considerable excitement, especially among agricultural scientists, because of its simplicity, precision and power, as it offers new opportunities to develop improved crop varieties with clear-cut addition of valuable traits or removal of undesirable traits. Research is underway to improve crop varieties with higher yields, stronger stress tolerance, disease and pest resistance, lower input costs, and increased nutritional value. Genome editing encompasses a wide variety of tools using either a site-specific recombinase (SSR) or a site-specific nuclease (SSN) system. Both systems require recognition of a known sequence. The SSN system generates single- or double-strand DNA breaks and activates endogenous DNA repair pathways. SSR technologies, such as the Cre/loxP and Flp/FRT mediated systems, are able to knock down or knock in genes in the genome of eukaryotes, depending on the orientation of the specific sites (loxP, FRT, etc.) flanking the target site. There are 4 main classes of SSN developed to cleave genomic sequences: mega-nucleases (homing endonucleases), zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the CRISPR/Cas nuclease system (clustered regularly interspaced short palindromic repeat/CRISPR-associated protein). Recombinase-mediated genome engineering depends on the recombinase (sub-)family and target site and induces high frequencies of homologous recombination.
Improving crops with gene editing provides a range of options: altering only a few nucleotides among the billions found in the genomes of living cells, replacing a full allele, or inserting a new gene in a targeted region of the genome. Gene editing is thus more precise than either conventional crop breeding methods or standard genetic engineering methods, and this precision makes it a very powerful tool that can be used toward securing the world's food supply. In addition to improving the nutritional value of crops, it is the most effective way to produce crops that can resist pests and thrive in tough climates. There are 3 types of modifications produced by genome editing: Type I includes altering a few nucleotides, Type II involves replacing an allele with a pre-existing one, and Type III allows for the insertion of new gene(s) in predetermined regions of the genome. Because most genome-editing techniques leave behind alterations evident in only a small number of nucleotides, crops created through gene editing could avoid the stringent regulation procedures commonly associated with GM crop development. For this reason many scientists believe plants improved with the more precise gene-editing techniques will be more acceptable to the public than transgenic plants. With genome editing comes the promise of new crops being developed more rapidly with a very low risk of off-target effects. It can be performed in any laboratory with any crop, even those that have complex genomes and are not easily bred using conventional methods. PMID:26930114

  18. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precise outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  19. Combination of High-density Microelectrode Array and Patch Clamp Recordings to Enable Studies of Multisynaptic Integration.

    PubMed

    Jäckel, David; Bakkum, Douglas J; Russell, Thomas L; Müller, Jan; Radivojevic, Milos; Frey, Urs; Franke, Felix; Hierlemann, Andreas

    2017-04-20

    We present a novel, all-electric approach to record and precisely control the activity of tens of individual presynaptic neurons. The method allows for parallel mapping of the efficacy of multiple synapses and of the resulting dynamics of postsynaptic neurons in a cortical culture. For the measurements, we combine an extracellular high-density microelectrode array, featuring 11'000 electrodes for extracellular recording and stimulation, with intracellular patch-clamp recording. We are able to identify the contributions of individual presynaptic neurons - including inhibitory and excitatory synaptic inputs - to postsynaptic potentials, which enables us to study dendritic integration. Since the electrical stimuli can be controlled at microsecond resolution, our method makes it possible to evoke action potentials at tens of presynaptic cells in precisely orchestrated sequences with high reliability and minimum jitter. We demonstrate the potential of this method by evoking short- and long-term synaptic plasticity through manipulation of multiple synaptic inputs to a specific neuron.

  20. Development of high precision digital driver of acoustic-optical frequency shifter for ROG

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Kong, Mei; Xu, Yameng

    2016-10-01

    We develop a high precision digital driver of the acoustic-optical frequency shifter (AOFS) based on the parallel direct digital synthesizer (DDS) technology. We use an atomic clock as the phase-locked loop (PLL) reference clock, and the PLL is realized by a dual digital phase-locked loop. A DDS sampling clock up to 320 MHz with a frequency stability as low as 10⁻¹² is obtained. By constructing an RF signal measurement system, we measured that the frequency output range of the AOFS driver is 52-58 MHz, the center frequency of the band-pass filter is 55 MHz, the in-band ripple is less than 1 dB @ 3 MHz, the single-channel output power is up to 0.3 W, the frequency stability is 1 ppb (over a 1 hour duration), and the frequency-shift precision is 0.1 Hz. The obtained frequency stability is two orders of magnitude better than that of analog AOFS drivers. For the designed binary frequency shift keying (2-FSK) and binary phase shift keying (2-PSK) modulation system, the demodulating frequency of the input TTL synchronous level signal is up to 10 kHz. The designed digital-bus coding/decoding system is compatible with many conventional digital bus protocols. It can interface with the ROG signal detecting software through the integrated drive electronics (IDE) interface and exchange data with the two DDS frequency-shift channels through the signal detecting software.
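
    The quoted frequency-shift precision follows from the standard DDS tuning relation f_out = FTW * f_clk / 2^N, where FTW is the N-bit tuning word. A sketch assuming a 32-bit phase accumulator (the accumulator width is our assumption, not stated in the abstract):

```python
# DDS frequency programming: the output frequency is set by an integer
# tuning word FTW via f_out = FTW * f_clk / 2**N.

F_CLK = 320e6          # DDS sampling clock from the dual PLL (Hz)
N_BITS = 32            # assumed phase-accumulator width

def tuning_word(f_out, f_clk=F_CLK, n_bits=N_BITS):
    """Nearest integer tuning word for a desired output frequency."""
    return round(f_out * 2**n_bits / f_clk)

def actual_frequency(ftw, f_clk=F_CLK, n_bits=N_BITS):
    """Exact frequency produced by a given tuning word."""
    return ftw * f_clk / 2**n_bits

ftw = tuning_word(55e6)            # 55 MHz band-pass center frequency
resolution = F_CLK / 2**N_BITS     # Hz per least-significant bit of FTW
print(ftw, round(resolution, 4))   # -> 738197504 0.0745
```

    With a 32-bit accumulator the step size is about 0.0745 Hz, which is consistent with the 0.1 Hz frequency-shift precision reported above.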

  1. An open-source framework for large-scale, flexible evaluation of biomedical text mining systems.

    PubMed

    Baumgartner, William A; Cohen, K Bretonnel; Hunter, Lawrence

    2008-01-29

    Improved evaluation methodologies have been identified as a necessary prerequisite to the improvement of text mining theory and practice. This paper presents a publicly available framework that facilitates thorough, structured, and large-scale evaluations of text mining technologies. The extensibility of this framework and its ability to uncover system-wide characteristics by analyzing component parts as well as its usefulness for facilitating third-party application integration are demonstrated through examples in the biomedical domain. Our evaluation framework was assembled using the Unstructured Information Management Architecture. It was used to analyze a set of gene mention identification systems involving 225 combinations of system, evaluation corpus, and correctness measure. Interactions between all three were found to affect the relative rankings of the systems. A second experiment evaluated gene normalization system performance using as input 4,097 combinations of gene mention systems and gene mention system-combining strategies. Gene mention system recall is shown to affect gene normalization system performance much more than does gene mention system precision, and high gene normalization performance is shown to be achievable with remarkably low levels of gene mention system precision. The software presented in this paper demonstrates the potential for novel discovery resulting from the structured evaluation of biomedical language processing systems, as well as the usefulness of such an evaluation framework for promoting collaboration between developers of biomedical language processing technologies. The code base is available as part of the BioNLP UIMA Component Repository on SourceForge.net.
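
    The analysis above turns on precision and recall of gene mention systems. A minimal sketch of how a correctness measure might score one system's output against a gold standard under exact-span matching; the gene names and character offsets are invented for illustration:

```python
# Score a set of predicted gene mentions against a gold standard.
# Each mention is (text, start_offset, end_offset); exact-span matching.

def precision_recall_f1(predicted, gold):
    tp = len(predicted & gold)                      # exact matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("BRCA1", 10, 15), ("TP53", 42, 46), ("EGFR", 88, 92)}
predicted = {("BRCA1", 10, 15), ("TP53", 42, 46), ("MYC", 5, 8)}
p, r, f = precision_recall_f1(predicted, gold)
print(round(p, 2), round(r, 2), round(f, 2))   # -> 0.67 0.67 0.67
```

    Swapping the matching rule (e.g. allowing boundary overlap) changes the measure, which is exactly the system/corpus/measure interaction the framework was built to expose.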

  2. An open-source framework for large-scale, flexible evaluation of biomedical text mining systems

    PubMed Central

    Baumgartner, William A; Cohen, K Bretonnel; Hunter, Lawrence

    2008-01-01

    Background Improved evaluation methodologies have been identified as a necessary prerequisite to the improvement of text mining theory and practice. This paper presents a publicly available framework that facilitates thorough, structured, and large-scale evaluations of text mining technologies. The extensibility of this framework and its ability to uncover system-wide characteristics by analyzing component parts as well as its usefulness for facilitating third-party application integration are demonstrated through examples in the biomedical domain. Results Our evaluation framework was assembled using the Unstructured Information Management Architecture. It was used to analyze a set of gene mention identification systems involving 225 combinations of system, evaluation corpus, and correctness measure. Interactions between all three were found to affect the relative rankings of the systems. A second experiment evaluated gene normalization system performance using as input 4,097 combinations of gene mention systems and gene mention system-combining strategies. Gene mention system recall is shown to affect gene normalization system performance much more than does gene mention system precision, and high gene normalization performance is shown to be achievable with remarkably low levels of gene mention system precision. Conclusion The software presented in this paper demonstrates the potential for novel discovery resulting from the structured evaluation of biomedical language processing systems, as well as the usefulness of such an evaluation framework for promoting collaboration between developers of biomedical language processing technologies. The code base is available as part of the BioNLP UIMA Component Repository on SourceForge.net. PMID:18230184

  3. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    PubMed

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  4. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  5. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2014-01-01

    When people look for things in the environment, they use target templates—mental representations of the objects they are attempting to locate—to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers’ templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search. PMID:25214306

  6. Step-control of electromechanical systems

    DOEpatents

    Lewis, Robert N.

    1979-01-01

    The response of an automatic control system to a general input signal is improved by applying a test input signal, observing the response to the test input signal and determining correctional constants necessary to provide a modified input signal to be added to the input to the system. A method is disclosed for determining correctional constants. The modified input signal, when applied in conjunction with an operating signal, provides a total system output exhibiting an improved response. This method is applicable to open-loop or closed-loop control systems. The method is also applicable to unstable systems, thus allowing controlled shut-down before dangerous or destructive response is achieved and to systems whose characteristics vary with time, thus resulting in improved adaptive systems.

  7. Proceedings of the Workshop on Improvements to Photometry

    NASA Technical Reports Server (NTRS)

    Borucki, W. J. (Editor); Young, A. T. (Editor)

    1984-01-01

    The purposes of the workshop were to determine what astronomical problems would benefit by increased photometric precision, determine the current level of precision, identify the processes limiting the precision, and recommend approaches to improving photometric precision. Twenty representatives of the university, industry, and government communities participated. Results and recommendations are discussed.

  8. Structural Basis of Cerebellar Microcircuits in the Rat

    PubMed Central

    Cerminara, Nadia L.; Aoki, Hanako; Loft, Michaela; Apps, Richard

    2013-01-01

    The topography of the cerebellar cortex is described by at least three different maps, with the basic units of each map termed “microzones,” “patches,” and “bands.” These are defined, respectively, by different patterns of climbing fiber input, mossy fiber input, and Purkinje cell (PC) phenotype. Based on embryological development, the “one-map” hypothesis proposes that the basic units of each map align in the adult animal and the aim of the present study was to test this possibility. In barbiturate anesthetized adult rats, nanoinjections of bidirectional tracer (Retrobeads and biotinylated dextran amine) were made into somatotopically identified regions within the hindlimb C1 zone in copula pyramidis. Injection sites were mapped relative to PC bands defined by the molecular marker zebrin II and were correlated with the pattern of retrograde cell labeling within the inferior olive and in the basilar pontine nuclei to determine connectivity of microzones and patches, respectively, and also with the distributions of biotinylated dextran amine-labeled PC terminals in the cerebellar nuclei. Zebrin bands were found to be related to both climbing fiber and mossy fiber inputs and also to cortical representation of different parts of the ipsilateral hindpaw, indicating a precise spatial organization within cerebellar microcircuitry. This precise connectivity extends to PC terminal fields in the cerebellar nuclei and olivonuclear projections. These findings strongly support the one-map hypothesis and suggest that, at the microcircuit level of resolution, the cerebellar cortex has a common plan of spatial organization for major inputs, outputs, and PC phenotype. PMID:24133249

  9. Assessing the radar rainfall estimates in watershed-scale water quality model

    USDA-ARS?s Scientific Manuscript database

    Watershed-scale water quality models are effective science-based tools for interpreting change in complex environmental systems that affect hydrology cycle, soil erosion and nutrient fate and transport in watershed. Precipitation is one of the primary input data to achieve a precise rainfall-runoff ...

  10. Development of a precise controller for an electrohydraulic total artificial heart. Improvement of the motor's dynamic response.

    PubMed

    Ahn, J M; Masuzawa, T; Taenaka, Y; Tatsumi, E; Ohno, T; Choi, W W; Toda, K; Miyazaki, K; Baba, Y; Nakatani, T; Takano, H; Min, B G

    1996-01-01

    In an electrohydraulic total artificial heart developed at the National Cardiovascular Center (Osaka, Japan), two blood pumps are pushed alternately by the bidirectional motion of a brushless DC motor for pump systole and diastole. Improving the dynamic response of the motor is very important for better pump performance; this was accomplished by using power electronic simulation. For the motor to have the desired dynamic response, it must be commutated properly and the damping ratio (zeta), which represents the transient characteristics of the motor, must lie between 0.4 and 0.8. Consequently, all satisfactory specifications with respect to power consumption must be obtained. Based on the simulated results, the design criteria were determined and the precise controller designed to reduce torque ripple and motor vibration, and to determine the motor stop time at every direction change. In in vitro tests, the controller and the dynamic response of the motor were evaluated in terms of zeta, power consumption, and motor stop time. The results indicated that the power consumption of the controller and the input power of the motor were decreased by 1.2 and 2.5 W at zeta = 0.6, respectively, compared to the previous system. An acceptable dynamic response of the motor, necessary for the reduction of torque ripple and motor vibration, was obtained between zeta = 0.5 and zeta = 0.7, with an increase in system efficiency from 10% to 12%. The motor stop time required for stable motor reoperation was determined to be over 10 msec, for a savings in power consumption of approximately 1.5 W. Therefore, the improved dynamic response of the motor can contribute to the stability and reliability of the pump.
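
    The abstract treats the motor as a second-order system characterized by its damping ratio zeta. A sketch of why the 0.4-0.8 band is a sensible target, using the textbook step-response overshoot formula (standard control theory, not taken from the paper itself):

```python
# Percent overshoot of the step response of an underdamped second-order
# system: M = 100 * exp(-pi * zeta / sqrt(1 - zeta**2)), 0 < zeta < 1.

import math

def percent_overshoot(zeta):
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

for zeta in (0.4, 0.6, 0.8):
    print(zeta, round(percent_overshoot(zeta), 1))
# -> 0.4 25.4
#    0.6 9.5
#    0.8 1.5
```

    Overshoot falls steeply across the band, so zeta near 0.6 trades a modest transient overshoot against response speed, consistent with the reported torque-ripple and vibration reductions.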

  11. Distributed MIMO chaotic radar based on wavelength-division multiplexing technology.

    PubMed

    Yao, Tingfeng; Zhu, Dan; Ben, De; Pan, Shilong

    2015-04-15

    A distributed multiple-input multiple-output (MIMO) chaotic radar based on wavelength-division multiplexing (WDM) technology is proposed and demonstrated. The wideband quasi-orthogonal chaotic signals generated by different optoelectronic oscillators (OEOs) are emitted by separated antennas to gain spatial diversity against the fluctuation of a target's radar cross section and enhance the detection capability. The received signals collected by the receive antennas and the reference signals from the OEOs are delivered to the central station for joint processing by exploiting WDM technology. The centralized signal processing avoids precise time synchronization of the distributed system and greatly simplifies the remote units, which improves the localization accuracy of the entire system. A proof-of-concept experiment for two-dimensional localization of a metal target is demonstrated. The maximum position error is less than 6.5 cm.
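
    Localization in such a system rests on correlating each received signal with its reference chaotic waveform to extract a propagation delay. A minimal discrete cross-correlation sketch with a synthetic noise-like code; the waveform and delay are invented for illustration:

```python
# Delay estimation by cross-correlation: the lag that maximizes the
# correlation between the reference code and the received echo is the
# propagation delay in samples.

import random

random.seed(1)
ref = [random.uniform(-1, 1) for _ in range(200)]   # noise-like reference code
TRUE_DELAY = 37
received = [0.0] * TRUE_DELAY + ref                 # echo delayed by 37 samples

def best_delay(rx, ref, max_lag):
    def corr(lag):
        return sum(r * x for r, x in zip(ref, rx[lag:]))
    return max(range(max_lag), key=corr)

print(best_delay(received, ref, 100))   # -> 37
```

    Converting the delay to range (delay times the propagation speed over the sampling rate) for several transmit-receive antenna pairs then yields the target position by multilateration.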

  12. Fixed-target hadron production experiments

    NASA Astrophysics Data System (ADS)

    Popov, Boris A.

    2015-08-01

    Results from fixed-target hadroproduction experiments (HARP, MIPP, NA49 and NA61/SHINE) as well as their implications for cosmic ray and neutrino physics are reviewed. HARP measurements have been used for predictions of neutrino beams in the K2K and MiniBooNE/SciBooNE experiments and are also being used to improve predictions of the muon yields in extensive air showers (EAS) and of the atmospheric neutrino fluxes, as well as to help optimize neutrino factory and super-beam designs. Recent measurements released by the NA61/SHINE experiment are of significant importance for a precise prediction of the J-PARC neutrino beam used for the T2K experiment and for the interpretation of EAS data. These hadroproduction experiments also provide a large amount of input for the validation and tuning of hadron production models in Monte Carlo generators.

  13. Search Filter Precision Can Be Improved By NOTing Out Irrelevant Content

    PubMed Central

    Wilczynski, Nancy L.; McKibbon, K. Ann; Haynes, R. Brian

    2011-01-01

    Background: Most methodologic search filters developed for use in large electronic databases such as MEDLINE have low precision. One method that has been proposed but not tested for improving precision is NOTing out irrelevant content. Objective: To determine if search filter precision can be improved by NOTing out the text words and index terms assigned to those articles that are retrieved but are off-target. Design: Analytic survey. Methods: NOTing out unique terms in off-target articles and testing search filter performance in the Clinical Hedges Database. Main Outcome Measures: Sensitivity, specificity, precision and number needed to read (NNR). Results: For all purpose categories (diagnosis, prognosis and etiology) except treatment and for all databases (MEDLINE, EMBASE, CINAHL and PsycINFO), constructing search filters that NOTed out irrelevant content resulted in substantive improvements in NNR (over four-fold for some purpose categories and databases). Conclusion: Search filter precision can be improved by NOTing out irrelevant content. PMID:22195215
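
    The NOTing-out idea above can be sketched directly: collect terms that occur in retrieved off-target articles but never in relevant ones, and exclude any article containing them. The articles and index terms below are invented for illustration:

```python
# Improve search-filter precision by NOTing out terms unique to
# retrieved-but-irrelevant articles. Each article is its set of terms.

relevant = [{"diagnosis", "sensitivity", "cohort"},
            {"diagnosis", "specificity", "blinded"}]
off_target = [{"diagnosis", "editorial", "opinion"},
              {"diagnosis", "letter"}]

# Terms appearing only in off-target articles become NOT terms.
not_terms = set().union(*off_target) - set().union(*relevant)

retrieved = relevant + off_target
filtered = [a for a in retrieved if not (a & not_terms)]

precision_before = len(relevant) / len(retrieved)
precision_after = sum(a in relevant for a in filtered) / len(filtered)
print(1 / precision_before, 1 / precision_after)   # NNR: 2.0 1.0
```

    Number needed to read (NNR) is just the reciprocal of precision, so halving the irrelevant retrievals here halves the reading burden; sensitivity is preserved because no relevant article contains a NOT term.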

  14. Explanation and elaboration of the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines, V.2.0: examples of SQUIRE elements in the healthcare improvement literature

    PubMed Central

    Goodman, Daisy; Ogrinc, Greg; Davies, Louise; Baker, G Ross; Barnsteiner, Jane; Foster, Tina C; Gali, Kari; Hilden, Joanne; Horwitz, Leora; Kaplan, Heather C; Leis, Jerome; Matulis, John C; Michie, Susan; Miltner, Rebecca; Neily, Julia; Nelson, William A; Niedner, Matthew; Oliver, Brant; Rutman, Lori; Thomson, Richard

    2016-01-01

    Since its publication in 2008, SQUIRE (Standards for Quality Improvement Reporting Excellence) has contributed to the completeness and transparency of reporting of quality improvement work, providing guidance to authors and reviewers of reports on healthcare improvement work. In the interim, enormous growth has occurred in understanding factors that influence the success, and failure, of healthcare improvement efforts. Progress has been particularly strong in three areas: the understanding of the theoretical basis for improvement work; the impact of contextual factors on outcomes; and the development of methodologies for studying improvement work. Consequently, there is now a need to revise the original publication guidelines. To reflect the breadth of knowledge and experience in the field, we solicited input from a wide variety of authors, editors and improvement professionals during the guideline revision process. This Explanation and Elaboration document (E&E) is a companion to the revised SQUIRE guidelines, SQUIRE 2.0. The product of collaboration by an international and interprofessional group of authors, this document provides examples from the published literature, and an explanation of how each reflects the intent of a specific item in SQUIRE. The purpose of the guidelines is to assist authors in writing clearly, precisely and completely about systematic efforts to improve the quality, safety and value of healthcare services. Authors can explore the SQUIRE statement, this E&E and related documents in detail at http://www.squire-statement.org. PMID:27076505

  15. [Research of input water ratio's impact on the quality of effluent water from hydrolysis reactor].

    PubMed

    Liang, Kang-Qiang; Xiong, Ya; Qi, Mao-Rong; Lin, Xiu-Jun; Zhu, Min; Song, Ying-Hao

    2012-11-01

    To address the high SS/BOD and low C/N ratios of the wastewater entering a municipal wastewater treatment plant, the structure of an existing hydrolysis reactor was modified to improve influent quality. To strengthen sludge hydrolysis and improve effluent quality, two layers of water distributors were installed so that a sludge hydrolysis zone formed between them. So that the hydrolysis reactor would not only play the role of the primary sedimentation tank but also improve the biodegradability of the effluent, the input water ratio between the upper and lower distributors was varied in the experiment to find the best ratio and thereby guide large-scale application of this type of hydrolysis reactor. The results show that all four tested input water ratios achieved some degree of COD and SS removal; however, a 1:1 ratio substantially increased the SCOD/COD ratio and the VFA concentration of the effluent compared with the other three ratios. To improve effluent biodegradability, the 1:1 ratio, i.e., 50% of the flow through the upper distributor and 50% through the lower one, was therefore chosen as the best input water ratio. At this setting the reactor reduces the COD and SS load on follow-up treatment while also improving the biodegradability of the effluent.

  16. Experiments on Linguistically-Based Term Associations.

    ERIC Educational Resources Information Center

    Ruge, Gerda

    1992-01-01

    Describes the hyperterm system REALIST (Retrieval Aids by Linguistics and Statistics) with emphasis on its semantic component, which generates term relations from free-text input. Experiments with various similarity measures are discussed, and the quality of the associated terms is evaluated using term recall and term precision measures. (22…

  17. FORMED: Bringing Formal Methods to the Engineering Desktop

    DTIC Science & Technology

    2016-02-01

    FORMED integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling Language (UML). Verified properties include input-output contract satisfaction and absence of null pointer dereferences. Domain-specific languages (DSLs) drive both implementation and formal verification.

  18. A Precision Nitrogen Management Approach to Minimize Impacts

    USDA-ARS?s Scientific Manuscript database

    Nitrogen fertilizer is a crucial input for crop production but contributes to agriculture’s environmental footprint via CO2 emissions, N2O emissions, and eutrophication of coastal waters. The low-cost way to minimize this impact is to eliminate over-application of N. This is more difficult than it s...

  19. Modeling erosion in a southern New Mexico watershed using agwa: Sensitivity to variations of input precision and scale

    USDA-ARS?s Scientific Manuscript database

    Rangeland environments are particularly susceptible to erosion due to extreme rainfall events and low vegetation cover. Landowners and managers need access to reliable erosion evaluation methods in order to protect productivity and hydrologic integrity of their rangelands and make resource allocati...

  20. User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.

    1982-01-01

    PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.

  1. Precision limits of lock-in amplifiers below unity signal-to-noise ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillies, G.T.; Allison, S.W.

    1986-02-01

    An investigation of noise-related performance limits of commercial-grade lock-in amplifiers has been carried out. The dependence of the output measurement error on the input signal-to-noise ratio was established in each case and measurements of noise-related gain variations were made.
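
    The lock-in principle behind these measurements is multiplication of the noisy input by a reference at the signal frequency followed by averaging, so that noise at other frequencies integrates toward zero. The signal frequency, amplitude, and noise level below are arbitrary choices for the sketch, not values from the study:

```python
# Dual-phase lock-in demodulation: recover the amplitude of a sinusoid
# buried in noise (SNR well below unity) by correlating with in-phase
# and quadrature references and averaging.

import math
import random

random.seed(0)
FS, F_SIG, AMP, N = 10000.0, 137.0, 0.1, 100000
samples = [AMP * math.cos(2 * math.pi * F_SIG * i / FS) + random.gauss(0, 1.0)
           for i in range(N)]            # noise std 1.0 vs signal amp 0.1

x = sum(s * math.cos(2 * math.pi * F_SIG * i / FS)
        for i, s in enumerate(samples)) / N      # in-phase average
y = sum(s * math.sin(2 * math.pi * F_SIG * i / FS)
        for i, s in enumerate(samples)) / N      # quadrature average
amplitude = 2.0 * math.hypot(x, y)               # recovered signal amplitude
print(round(amplitude, 3))
```

    The residual error shrinks as 1/sqrt(N) with the averaging length, which is why output error depends so strongly on the input signal-to-noise ratio and integration time.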

  2. The microcomputer scientific software series 4: testing prediction accuracy.

    Treesearch

    H. Michael Rauscher

    1986-01-01

    A computer program, ATEST, is described in this combination user's guide / programmer's manual. ATEST provides users with an efficient and convenient tool to test the accuracy of predictors. As input ATEST requires observed-predicted data pairs. The output reports the two components of accuracy, bias and precision.
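
    ATEST's two accuracy components can be sketched directly from observed-predicted pairs: bias is the mean of the prediction errors, and precision is their spread about that mean. The formulas are the standard ones and the data pairs are invented; this is not ATEST's actual code:

```python
# Split prediction accuracy into bias (systematic error) and precision
# (random scatter) from observed-predicted data pairs.

import math

def bias_and_precision(observed, predicted):
    errors = [p - o for o, p in zip(observed, predicted)]
    n = len(errors)
    bias = sum(errors) / n
    precision = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))
    return bias, precision

observed = [10.0, 12.0, 15.0, 11.0]
predicted = [11.0, 13.0, 16.0, 12.0]   # uniformly 1.0 too high
b, s = bias_and_precision(observed, predicted)
print(b, s)   # -> 1.0 0.0
```

    A predictor can thus be precise but biased (as here: zero scatter, constant offset) or unbiased but imprecise, which is why the two components are reported separately.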

  3. Water deficit and nitrogen fertility effects on NDVI of 'Tifton 85' bermudagrass during regrowth

    USDA-ARS?s Scientific Manuscript database

    A better understanding of how bermudagrass (Cynodon spp.) regrowth is influenced by production inputs will aid in advancing precision management in the southeast US. The objective of this two-yr study was to evaluate how irrigation and nitrogen influence bermudagrass regrowth. Normalized difference ...

  4. Counting Jobs and Economic Impacts from Distributed Wind in the United States (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tegen, S.

This conference poster describes the distributed wind Jobs and Economic Development Impacts (JEDI) model. The goal of this work is to provide a model that estimates jobs and other economic effects associated with the domestic distributed wind industry. The distributed wind JEDI model is a free input-output model that estimates employment and other impacts resulting from an investment in distributed wind installations. Default inputs come from installers and industry experts and are based on existing projects. User input can be minimal (use defaults) or very detailed for more precise results. JEDI can help evaluate potential scenarios, current or future; inform stakeholders and decision-makers; assist businesses in evaluating economic development impacts and estimating jobs; and assist government organizations with planning, evaluation, and community development.

  5. Thermal effects in the Input Optics of the Enhanced Laser Interferometer Gravitational-Wave Observatory interferometers.

    PubMed

    Dooley, Katherine L; Arain, Muzammil A; Feldbaum, David; Frolov, Valery V; Heintze, Matthew; Hoak, Daniel; Khazanov, Efim A; Lucianetti, Antonio; Martin, Rodica M; Mueller, Guido; Palashov, Oleg; Quetschke, Volker; Reitze, David H; Savage, R L; Tanner, D B; Williams, Luke F; Wu, Wan

    2012-03-01

    We present the design and performance of the LIGO Input Optics subsystem as implemented for the sixth science run of the LIGO interferometers. The Initial LIGO Input Optics experienced thermal side effects when operating with 7 W input power. We designed, built, and implemented improved versions of the Input Optics for Enhanced LIGO, an incremental upgrade to the Initial LIGO interferometers, designed to run with 30 W input power. At four times the power of Initial LIGO, the Enhanced LIGO Input Optics demonstrated improved performance including better optical isolation, less thermal drift, minimal thermal lensing, and higher optical efficiency. The success of the Input Optics design fosters confidence for its ability to perform well in Advanced LIGO.

  6. VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)

    NASA Astrophysics Data System (ADS)

    Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.

    2016-02-01

    Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).

  7. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.
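    The joint-versus-sequential idea can be illustrated with a toy shared-parameter fit. The signal models, noise levels, and residual weighting below are made-up stand-ins (not the paper's SPGR or 2CXM equations): two datasets share the precontrast T1, and a single grid search minimises their combined residual so that the kinetic data also constrain T1.

    ```python
    import math, random

    # Toy stand-in models, for illustration only: both depend on the shared T1.
    def vfa_signal(t1_ms, angle_deg, tr_ms=5.0):
        e1 = math.exp(-tr_ms / t1_ms)
        a = math.radians(angle_deg)
        return math.sin(a) * (1 - e1) / (1 - e1 * math.cos(a))

    def dce_signal(t1_ms, k, t_min):
        return (1.0 / t1_ms) * (1 - math.exp(-k * t_min))  # toy enhancement curve

    # Simulate noisy data with true T1 = 1000 ms and rate k = 0.2
    random.seed(0)
    angles = [2, 5, 10, 15, 20]
    times = [0.5 * i for i in range(1, 11)]
    vfa_obs = [vfa_signal(1000.0, a) + random.gauss(0, 0.002) for a in angles]
    dce_obs = [dce_signal(1000.0, 0.2, t) + random.gauss(0, 2e-5) for t in times]

    def sse(model, data):
        return sum((m - d) ** 2 for m, d in zip(model, data))

    # Joint fit: one search over (T1, k) minimising the COMBINED residual.
    # The 1e4 scale balancing the two terms is an arbitrary choice for this toy.
    best = None
    for t1 in range(800, 1210, 10):
        for k in [0.05 * j for j in range(1, 11)]:
            cost = (sse([vfa_signal(t1, a) for a in angles], vfa_obs)
                    + 1e4 * sse([dce_signal(t1, k, t) for t in times], dce_obs))
            if best is None or cost < best[0]:
                best = (cost, t1, k)
    t1_hat, k_hat = best[1], best[2]
    ```

    In a sequential scheme, T1 would be fixed from the first dataset alone before fitting k; here the second dataset's dependence on T1 feeds back into the T1 estimate, which is the mechanism the paper exploits.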

  8. Performance enhancement of linear stirling cryocoolers

    NASA Astrophysics Data System (ADS)

    Korf, Herbert; Ruehlich, Ingo; Wiedmann, Th.

    2000-12-01

    Performance and reliability parameters of the AIM Stirling coolers have been presented in several previous publications. This paper focuses on recent developments at AIM for the COP improvement of cryocoolers in IR-detector and systems applications. Improved COP of cryocoolers is a key for optimized form factors, weight and reliability. In addition, some systems are critical for minimum input power and consequently minimum electromagnetic interference or magnetic stray fields, heat sinking or minimum stress under high g-level, etc. Although performance parameters and loss mechanisms are well understood and can be calculated precisely, several losses had still been excessive and needed to be minimized. The AIM program is based on the SADA I cryocooler, which is now optimized to carry 4.3 W net heat load at 77 K. As this program will lead into applications on a space platform, in a next step AIM is introducing flexure bearings and, in a final step, an advanced pulse tube cold head will be implemented. The performance of the SADA II cooler is also improved by using the same tools and methods as those used for the factor-of-two performance increase of the SADA I cooler. The main features are summarized together with measured or calculated performance data.

  9. Stereopsis cueing effects on hover-in-turbulence performance in a simulated rotorcraft

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.

    1990-01-01

    The efficacy of stereopsis cueing in pictorial displays was assessed in a real-time piloted simulation experiment of a rotorcraft precision hover-in-turbulence task. Seven pilots endeavored to maintain a hover by visually aligning a set of inner and outer wickets (major elements of a real-world pictorial display), thus attaining the desired hover position, in a full factorial experimental design. The display conditions examined included the presence or absence of a velocity display element (a velocity head-up display) as well as the stereopsis cueing conditions, which included non-stereo (binoptic or monoscopic - no depth cues other than those provided by a perspective, real-world display), stereo 3-D, and hyper stereo (telestereoscopic). Subjective and objective results indicated that the depth cues provided by the stereo displays enhanced the situational awareness of the pilot and enabled improved hover performance to be achieved. The velocity display element also improved the hover performance, with the best hover performance being achieved with the combined use of stereo and the velocity display element. Pilot control input data revealed that less control action was required to attain the improved hover performance with the stereo displays.

  10. A rack-mounted precision waveguide-below-cutoff attenuator with an absolute electronic readout

    NASA Technical Reports Server (NTRS)

    Cook, C. C.

    1974-01-01

    A coaxial precision waveguide-below-cutoff attenuator is described which uses an absolute (unambiguous) electronic digital readout of displacement in inches in addition to the usual gear driven mechanical counter-dial readout in decibels. The attenuator is rack-mountable and has the input and output RF connectors in a fixed position. The attenuation rate for 55, 50, and 30 MHz operation is given along with a discussion of sources of errors. In addition, information is included to aid the user in making adjustments on the attenuator should it be damaged or disassembled for any reason.

  11. Detecting Nano-Scale Vibrations in Rotating Devices by Using Advanced Computational Methods

    PubMed Central

    del Toro, Raúl M.; Haber, Rodolfo E.; Schmittdiel, Michael C.

    2010-01-01

    This paper presents a computational method for detecting vibrations related to eccentricity in ultra precision rotation devices used for nano-scale manufacturing. The vibration is indirectly measured via a frequency domain analysis of the signal from a piezoelectric sensor attached to the stationary component of the rotating device. The algorithm searches for particular harmonic sequences associated with the eccentricity of the device rotation axis. The detected sequence is quantified and serves as input to a regression model that estimates the eccentricity. A case study presents the application of the computational algorithm during precision manufacturing processes. PMID:22399918
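    The core of such a harmonic-sequence search can be sketched with a direct DFT probe at multiples of the rotation frequency. The sampling rate, rotation frequency, and amplitudes below are made-up stand-ins for a real piezo-sensor trace, and summing the harmonic amplitudes into one indicator is an illustrative simplification of the paper's quantification step.

    ```python
    import math

    # Made-up stand-in for a piezo-sensor trace: harmonics of the rotation
    # frequency f0 (the eccentricity signature) with assumed amplitudes.
    fs, f0, n = 1000.0, 35.0, 2000
    x = [0.8 * math.sin(2 * math.pi * f0 * t / fs)
         + 0.3 * math.sin(2 * math.pi * 2 * f0 * t / fs)
         + 0.1 * math.sin(2 * math.pi * 3 * f0 * t / fs)
         for t in range(n)]

    def dft_mag(x, f, fs):
        """Amplitude of the frequency-f component of x (direct DFT probe)."""
        re = sum(v * math.cos(2 * math.pi * f * k / fs) for k, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * f * k / fs) for k, v in enumerate(x))
        return 2 * math.hypot(re, im) / len(x)

    # Search the harmonic sequence f0, 2*f0, 3*f0 and quantify it as a single
    # eccentricity indicator (here simply the summed harmonic amplitude).
    mags = [dft_mag(x, h * f0, fs) for h in (1, 2, 3)]
    indicator = sum(mags)
    ```

    A real system would estimate f0 from spindle speed and feed the quantified sequence into the regression model the abstract mentions; the probe above just shows how the harmonic energies are isolated.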

  12. Adaptive neuro-heuristic hybrid model for fruit peel defects detection.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2018-02-01

    Fusion of machine learning methods benefits decision support systems. A composition of approaches makes it possible to combine the most efficient features into one solution. In this article we present an adaptive method based on the fusion of a proposed novel neural architecture and heuristic search into one co-working solution. We propose a neural network architecture that adapts to the processed input, co-working with a heuristic method used to precisely detect areas of interest. Input images are first decomposed into segments. This makes processing easier, since in smaller images (decomposed segments) the developed Adaptive Artificial Neural Network (AANN) processes less information, which makes the numerical calculations more precise. For each segment a descriptor vector is composed and presented to the proposed AANN architecture. Evaluation is run adaptively, with the developed AANN adapting its composed architecture to the inputs and their features. After evaluation, selected segments are forwarded to a heuristic search, which detects the areas of interest. As a result the system returns the image with pixels located over peel damages. Experimental results on the developed solution are discussed and compared with other commonly used methods to validate the efficacy of the proposed fusion, and its impact on the system structure, training process, and classification results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Spike Train Auto-Structure Impacts Post-Synaptic Firing and Timing-Based Plasticity

    PubMed Central

    Scheller, Bertram; Castellano, Marta; Vicente, Raul; Pipa, Gordon

    2011-01-01

    Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impact the post-synaptic firing of a conductance-based integrate and fire neuron. Both the excitatory and inhibitory input was modeled by renewal gamma processes with varying shape factors for modeling regular and temporally random Poisson activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing as well as the spike-timing-dependent plasticity on the auto-structure of the input of a neuron could be used to modulate the learning rate of synaptic modification. PMID:22203800
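    The renewal-gamma input model the authors use can be sketched directly: inter-spike intervals (ISIs) are drawn from a gamma distribution whose shape factor sets the regularity. Shape = 1 reproduces Poisson-like (temporally random) activity, while larger shape factors give more regular spiking at the same mean rate. The rates and durations below are illustrative assumptions.

    ```python
    import random

    def gamma_spike_train(rate_hz, shape, duration_s, rng):
        """Spike times of a gamma renewal process with given mean rate and shape."""
        scale = 1.0 / (rate_hz * shape)      # keeps the mean ISI at 1/rate
        t, spikes = 0.0, []
        while True:
            t += rng.gammavariate(shape, scale)
            if t > duration_s:
                return spikes
            spikes.append(t)

    def isi_cv(spikes):
        """Coefficient of variation of the ISIs (about 1 for a Poisson process)."""
        isis = [b - a for a, b in zip(spikes, spikes[1:])]
        mean = sum(isis) / len(isis)
        var = sum((x - mean) ** 2 for x in isis) / (len(isis) - 1)
        return var ** 0.5 / mean

    rng = random.Random(42)
    poisson_like = gamma_spike_train(20.0, 1.0, 100.0, rng)  # irregular input
    regular = gamma_spike_train(20.0, 8.0, 100.0, rng)       # same rate, regular
    ```

    Feeding trains like these into a conductance-based integrate-and-fire model, with the shape factor varied independently for excitation and inhibition, is how the temporal auto-structure of the input can be dissociated from its rate.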

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, I; Yan, H; Yin, F

    Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the succeeding parts of the subsequences and combining them together with assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data was collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
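    The three-step scheme can be sketched on a toy one-dimensional trace. The sinusoidal stand-in signal, window lengths, and inverse-distance weighting below are illustrative assumptions, not the authors' exact implementation of the relative-weighting strategy.

    ```python
    import math

    # Toy stand-in for a respiratory surrogate trace
    signal = [math.sin(0.2 * i) for i in range(300)]
    horizon, window, n_matches = 20, 30, 3
    train = signal[:-(window + horizon)]          # (a) training component
    query = signal[-(window + horizon):-horizon]  # (b) input subsequence
    actual = signal[-horizon:]                    # (c) known output, for checking

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Step 1: training subsequences that best match the input subsequence
    candidates = sorted((dist(train[s:s + window], query), s)
                        for s in range(len(train) - window - horizon))
    best = candidates[:n_matches]

    # Step 2: weighting factors from match quality (equal weighting would use 1/n)
    weights = [1.0 / (d + 1e-9) for d, _ in best]
    wsum = sum(weights)

    # Step 3: weighted combination of the matched subsequences' continuations
    forecast = [sum(w * train[s + window + k] for w, (_, s) in zip(weights, best)) / wsum
                for k in range(horizon)]
    ```

    On quasi-periodic signals like respiration, the best-matched windows recur roughly once per breathing cycle, so their continuations track the true future closely; `forecast` can then be compared against `actual` by correlation, as in the abstract's evaluation.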

  15. Electrical and Optical Activation of Mesoscale Neural Circuits with Implications for Coding.

    PubMed

    Millard, Daniel C; Whitmire, Clarissa J; Gollnick, Clare A; Rozell, Christopher J; Stanley, Garrett B

    2015-11-25

    Artificial activation of neural circuitry through electrical microstimulation and optogenetic techniques is important for both scientific discovery of circuit function and for engineered approaches to alleviate various disorders of the nervous system. However, evidence suggests that neural activity generated by artificial stimuli differs dramatically from normal circuit function, in terms of both the local neuronal population activity at the site of activation and the propagation to downstream brain structures. The precise nature of these differences and the implications for information processing remain unknown. Here, we used voltage-sensitive dye imaging of primary somatosensory cortex in the anesthetized rat in response to deflections of the facial vibrissae and electrical or optogenetic stimulation of thalamic neurons that project directly to the somatosensory cortex. Although the different inputs produced responses that were similar in terms of the average cortical activation, the variability of the cortical response was strikingly different for artificial versus sensory inputs. Furthermore, electrical microstimulation resulted in highly unnatural spatial activation of cortex, whereas optical input resulted in spatial cortical activation that was similar to that induced by sensory inputs. A thalamocortical network model suggested that the observed differences could be explained by differences in the way in which artificial and natural inputs modulate the magnitude and synchrony of population activity. Finally, the variability structure in the response for each case strongly influenced the optimal inputs for driving the pathway from the perspective of an ideal observer of cortical activation, when considered in the context of information transmission. The significance of this work is in quantifying these differences, elucidating the likely mechanisms underlying them, and determining the implications for information processing. Copyright © 2015 the authors 0270-6474/15/3515702-14$15.00/0.

  16. Multimodality Prediction of Chaotic Time Series with Sparse Hard-Cut EM Learning of the Gaussian Process Mixture Model

    NASA Astrophysics Data System (ADS)

    Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng

    2017-05-01

    The contribution of this work is twofold: (1) a multimodality prediction method of chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide and conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms the traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.

  17. IsoDesign: a software for optimizing the design of 13C-metabolic flux analysis experiments.

    PubMed

    Millard, Pierre; Sokol, Serguei; Letisse, Fabien; Portais, Jean-Charles

    2014-01-01

    The growing demand for ¹³C-metabolic flux analysis (¹³C-MFA) in the field of metabolic engineering and systems biology is driving the need to rationalize expensive and time-consuming ¹³C-labeling experiments. Experimental design is a key step in improving both the number of fluxes that can be calculated from a set of isotopic data and the precision of flux values. We present IsoDesign, a software that enables these parameters to be maximized by optimizing the isotopic composition of the label input. It can be applied to ¹³C-MFA investigations using a broad panel of analytical tools (MS, MS/MS, ¹H NMR, ¹³C NMR, etc.) individually or in combination. It includes a visualization module to intuitively select the optimal label input depending on the biological question to be addressed. Applications of IsoDesign are described, with an example of the entire ¹³C-MFA workflow from the experimental design to the flux map, including important practical considerations. IsoDesign makes the experimental design of ¹³C-MFA experiments more accessible to a wider biological community. IsoDesign is distributed under an open source license at http://metasys.insa-toulouse.fr/software/isodes/. © 2013 Wiley Periodicals, Inc.

  18. Design, Experiments and Simulation of Voltage Transformers on the Basis of a Differential Input D-dot Sensor

    PubMed Central

    Wang, Jingang; Gao, Can; Yang, Jie

    2014-01-01

    Currently available traditional electromagnetic voltage sensors fail to meet the measurement requirements of the smart grid, because of low accuracy in the static and dynamic ranges and the occurrence of ferromagnetic resonance attributed to overvoltage and output short circuit. This work develops a new non-contact high-bandwidth voltage measurement system for power equipment, aimed at miniaturized, non-contact measurement for the smart grid. After analysis of the traditional D-dot voltage probe, an improved method is proposed. For the sensor to work in a self-integrating pattern, a differential input pattern is adopted for the circuit design, and grounding is removed. To validate the structural design, circuit component parameters, and insulation characteristics, Ansoft Maxwell software is used for the simulation. Moreover, the new probe was tested on a 10 kV high-voltage test platform for steady-state error and transient behavior. Experimental results confirm that the root mean square values of measured voltage are precise and that the phase error is small. The D-dot voltage sensor not only meets the requirement of high accuracy but also exhibits satisfactory transient response. This sensor can meet the intelligence, miniaturization, and convenience requirements of the smart grid. PMID:25036333

  19. Processing Pipeline of Sugarcane Spectral Response to Characterize the Fallen Plants Phenomenon

    NASA Astrophysics Data System (ADS)

    Solano, Agustín; Kemerer, Alejandra; Hadad, Alejandro

    2016-04-01

    Nowadays, in agronomic systems it is possible to perform variable management of inputs to improve the efficiency of the agronomic industry and optimize the logistics of the harvesting process. Accordingly, the use of remote sensing tools and computational methods was proposed for sugarcane culture to identify useful areas in the cultivated lands. The objective was to use these areas for variable management of the crop. When there are fallen stalks at the moment of harvesting the sugarcane, some foreign material (vegetable or mineral) is collected together with them. This foreign material is not millable, and when it enters the sugar mill it causes significant losses of efficiency in the sugar extraction processes and affects sugar quality. Considering this issue, the spectral response of sugarcane plants in aerial multispectral images was studied. The spectral response was analyzed in different bands of the electromagnetic spectrum. Then, the aerial images were segmented to obtain homogeneous regions useful for producers to make decisions on the use of inputs and resources according to the variability of the system (existence of fallen cane and standing cane). The segmentation results obtained were satisfactory. It was possible to identify regions with fallen cane and regions with standing cane with high precision rates.

  20. Validation of a method to measure the vector fidelity of triaxial vector sensors

    NASA Astrophysics Data System (ADS)

    De Freitas, J. M.

    2018-06-01

    A method to measure the misalignment angles and vector fidelity of a mutually orthogonal arrangement of triaxial accelerometers has been validated by introducing known misalignments into the measurement procedure. The method is based on the excitation of all three accelerometers in equal measure and the determination of the second order responsivity tensor as a metric. The sensor axis misalignment angles measured using a sensor rotation technique as a reference were 1.49°  ±  0.05°, 0.63°  ±  0.02°, and 0.78°  ±  0.04°. The resolution of the new approach against the reference was 0.03° with an accuracy of 0.2° and maximum deviation of 0.4°. An ellipticity tensor β that characterises the extent to which a triaxial system preserves the input polarisation state purity was introduced. In a careful laboratory arrangement, up to 98% input polarisation state purity was shown to be maintained. It is recommended that documentation on commercial and research grade high-precision triaxial sensor systems should give the responsivity matrix. This technique will improve the range of vector fidelity measurement tools for triaxial accelerometers and other vector sensors such as magnetometers, gyroscopes and acoustic vector sensors.

  1. Changes in the neural control of a complex motor sequence during learning

    PubMed Central

    Otchy, Timothy M.; Goldberg, Jesse H.; Aronov, Dmitriy; Fee, Michale S.

    2011-01-01

    The acquisition of complex motor sequences often proceeds through trial-and-error learning, requiring the deliberate exploration of motor actions and the concomitant evaluation of the resulting performance. Songbirds learn their song in this manner, producing highly variable vocalizations as juveniles. As the song improves, vocal variability is gradually reduced until it is all but eliminated in adult birds. In the present study we examine how the motor program underlying such a complex motor behavior evolves during learning by recording from the robust nucleus of the arcopallium (RA), a motor cortex analog brain region. In young birds, neurons in RA exhibited highly variable firing patterns that throughout development became more precise, sparse, and bursty. We further explored how the developing motor program in RA is shaped by its two main inputs: LMAN, the output nucleus of a basal ganglia-forebrain circuit, and HVC, a premotor nucleus. Pharmacological inactivation of LMAN during singing made the song-aligned firing patterns of RA neurons adultlike in their stereotypy without dramatically affecting the spike statistics or the overall firing patterns. Removing the input from HVC, on the other hand, resulted in a complete loss of stereotypy of both the song and the underlying motor program. Thus our results show that a basal ganglia-forebrain circuit drives motor exploration required for trial-and-error learning by adding variability to the developing motor program. As learning proceeds and the motor circuits mature, the relative contribution of LMAN is reduced, allowing the premotor input from HVC to drive an increasingly stereotyped song. PMID:21543758

  2. Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.

    PubMed

    Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark

    2016-04-01

    Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-¹⁸F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-¹⁸F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models (single-tissue and 2-tissue (2TCM) compartment models and plasma-input Logan) and reference tissue models (simplified reference tissue model (SRTM), reference Logan, and SUV ratio (SUVr)). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to 90 min, initial 40- and 60-min data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R² = 0.99, slope = 0.96). SRTM binding potential (BP(ND)) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R² = 1.00, slope ≈ 1.00), whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between parametric methods and SRTM was best for reference Logan (R² = 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP(ND) values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods. SRTM BP(ND) and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP(ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values. VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-¹⁸F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than seen with SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  3. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved under the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  4. A self-adaption compensation control for hysteresis nonlinearity in piezo-actuated stages based on Pi-sigma fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Zhou, Miaolei

    2018-04-01

    Piezo-actuated stages are widely applied in the high-precision positioning field nowadays. However, the inherent hysteresis nonlinearity in piezo-actuated stages greatly deteriorates their positioning accuracy. This paper first utilizes a nonlinear autoregressive moving average with exogenous inputs (NARMAX) model based on the Pi-sigma fuzzy neural network (PSFNN) to construct an online rate-dependent hysteresis model for describing the hysteresis nonlinearity in piezo-actuated stages. To improve the convergence rate of the PSFNN and the modeling precision, we adopt a gradient descent algorithm featuring three different learning factors to update the model parameters. The convergence of the NARMAX model based on the PSFNN is analyzed. To ensure that the parameters converge to their true values, the persistent excitation condition is considered. Then, a self-adaption compensation controller is designed for eliminating the hysteresis nonlinearity in piezo-actuated stages. A merit of the proposed controller is that it can directly eliminate the complex hysteresis nonlinearity without any inverse dynamic models. To demonstrate the effectiveness of the proposed model and control methods, a set of comparative experiments is performed on piezo-actuated stages. Experimental results show that the proposed modeling and control methods have excellent performance.

  5. A novel methodology for non-linear system identification of battery cells used in non-road hybrid electric vehicles

    NASA Astrophysics Data System (ADS)

    Unger, Johannes; Hametner, Christoph; Jakubek, Stefan; Quasthoff, Marcus

    2014-12-01

    An accurate state of charge (SoC) estimation of a traction battery in hybrid electric non-road vehicles, which possess higher dynamics and power densities than on-road vehicles, requires a precise battery cell terminal voltage model. This paper presents a novel methodology for non-linear system identification of battery cells to obtain precise battery models. The methodology comprises the architecture of local model networks (LMN) and optimal model-based design of experiments (DoE). Three main novelties are proposed: 1) optimal model-based DoE, which aims at highly dynamic excitation of the battery cells over the load ranges frequently used in operation; 2) the integration of corresponding inputs in the LMN to capture the non-linearities of SoC, relaxation, and hysteresis, as well as temperature effects; 3) enhancements to the local linear model tree (LOLIMOT) construction algorithm to achieve a physically appropriate interpretation of the LMN. The framework is applicable to different battery cell chemistries and different temperatures, and is real-time capable, which is shown on an industrial PC. The accuracy of the obtained non-linear battery model is demonstrated on cells with different chemistries and at different temperatures. The results show significant improvement due to optimal experiment design and integration of the battery non-linearities within the LMN structure.

  6. Leveraging Pattern Semantics for Extracting Entities in Enterprises

    PubMed Central

    Tao, Fangbo; Zhao, Bo; Fuxman, Ariel; Li, Yang; Han, Jiawei

    2015-01-01

    Entity Extraction is a process of identifying meaningful entities from text documents. In enterprises, extracting entities improves enterprise efficiency by facilitating numerous applications, including search, recommendation, etc. However, the problem is particularly challenging in enterprise domains for several reasons. First, the lack of redundancy of enterprise entities renders previous web-based systems like NELL and OpenIE ineffective, since using only high-precision/low-recall patterns as those systems do would miss the majority of sparse enterprise entities, while using more low-precision patterns in a sparse setting drastically increases noise. Second, semantic drift is common in enterprises (“Blue” refers to “Windows Blue”), such that public signals from the web cannot be directly applied to entities. Moreover, many internal entities never appear on the web. Sparse internal signals are the only source for discovering them. To address these challenges, we propose an end-to-end framework for extracting entities in enterprises, taking an enterprise corpus and limited seeds as input to generate a high-quality entity collection as output. We introduce the novel concept of the Semantic Pattern Graph to leverage public signals to understand the underlying semantics of lexical patterns, reinforce pattern evaluation using mined semantics, and yield more accurate and complete entities. Experiments on Microsoft enterprise data show the effectiveness of our approach. PMID:26705540

  7. Leveraging Pattern Semantics for Extracting Entities in Enterprises.

    PubMed

    Tao, Fangbo; Zhao, Bo; Fuxman, Ariel; Li, Yang; Han, Jiawei

    2015-05-01

    Entity Extraction is a process of identifying meaningful entities from text documents. In enterprises, extracting entities improves enterprise efficiency by facilitating numerous applications, including search, recommendation, etc. However, the problem is particularly challenging in enterprise domains for several reasons. First, the lack of redundancy of enterprise entities renders previous web-based systems like NELL and OpenIE ineffective, since using only high-precision/low-recall patterns as those systems do would miss the majority of sparse enterprise entities, while using more low-precision patterns in a sparse setting drastically increases noise. Second, semantic drift is common in enterprises ("Blue" refers to "Windows Blue"), such that public signals from the web cannot be directly applied to entities. Moreover, many internal entities never appear on the web. Sparse internal signals are the only source for discovering them. To address these challenges, we propose an end-to-end framework for extracting entities in enterprises, taking an enterprise corpus and limited seeds as input to generate a high-quality entity collection as output. We introduce the novel concept of the Semantic Pattern Graph to leverage public signals to understand the underlying semantics of lexical patterns, reinforce pattern evaluation using mined semantics, and yield more accurate and complete entities. Experiments on Microsoft enterprise data show the effectiveness of our approach.

  8. Demonstration of a High-Order Mode Input Coupler for a 220-GHz Confocal Gyrotron Traveling Wave Tube

    NASA Astrophysics Data System (ADS)

    Guan, Xiaotong; Fu, Wenjie; Yan, Yang

    2018-02-01

    A high-order mode input coupler for a 220-GHz confocal gyrotron traveling wave tube is proposed, simulated, and demonstrated by experimental tests. The input coupler is designed to excite the confocal TE06 mode from the rectangular-waveguide TE10 mode over a broad frequency range. Simulation results predict that the optimized conversion loss is about 2.72 dB with a mode purity in excess of 99%. Considering gyrotron interaction theory, an effective bandwidth of 5 GHz is obtained, within which the beam-wave coupling efficiency is higher than half of its maximum. The field pattern measured at low power demonstrates that the TE06 mode is successfully excited in the confocal waveguide at 220 GHz. Cold-test results from a vector network analyzer show good agreement with the simulation results. Both simulation and experimental results show that the reflection at the input port (S11) is sensitive to the perpendicular separation of the two mirrors, which provides an engineering method for estimating the assembly precision.

  9. Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition

    PubMed Central

    Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A.

    2016-01-01

    The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding. PMID:26209846

  10. Computing and analyzing the sensitivity of MLP due to the errors of the i.i.d. inputs and weights based on CLT.

    PubMed

    Yang, Sheng-Sung; Ho, Chia-Lu; Siu, Sammy

    2010-12-01

    In this paper, we propose an algorithm based on the central limit theorem to compute the sensitivity of the multilayer perceptron (MLP) due to the errors of the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independently identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. To prove the reliability of the proposed algorithm, some experimental results of the sensitivity are also presented, and they match the theoretical ones. The good agreement between the theoretical results and the experimental results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights.
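The flavor of such a sensitivity computation can be sketched for a single linear neuron with i.i.d. perturbations on its inputs and weights, where the output-error variance has a closed form that a Monte Carlo run can check (an illustrative simplification, not the paper's MLP algorithm; all numbers are hypothetical):

```python
import numpy as np

# Sketch (not the paper's algorithm): for a linear neuron y = w.x with
# i.i.d. zero-mean perturbations dw, dx (std devs s_w, s_x) on weights
# and inputs, each term satisfies
#   Var[(w_i+dw)(x_i+dx) - w_i*x_i] = w_i^2*s_x^2 + x_i^2*s_w^2 + s_w^2*s_x^2,
# the variances add by independence, and by the CLT the total output
# error is approximately Gaussian when many synapses contribute.

rng = np.random.default_rng(1)
n = 64
w = rng.normal(size=n)
x = rng.normal(size=n)
s_w, s_x = 0.05, 0.1

analytic = np.sum(w**2 * s_x**2 + x**2 * s_w**2 + s_w**2 * s_x**2)

trials = 20000
dw = rng.normal(0, s_w, (trials, n))
dx = rng.normal(0, s_x, (trials, n))
err = ((w + dw) * (x + dx)).sum(axis=1) - w @ x
print(round(float(err.var() / analytic), 2))   # ratio near 1
```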

  11. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing the deep learning neural network into hardware implementation, which directly determines the cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, the supervised iterative quantization is conducted via two steps on the server: apply k-means based adaptive quantization to the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolutions of both weights and inputs are significantly reduced, e.g., from 64 bits to 4-5 bits.
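The k-means quantization step described above can be sketched as follows (the retraining step and the alternation that make the scheme "supervised iterative" are omitted; the bit width, weight distribution, and iteration count are hypothetical):

```python
import numpy as np

# Minimal sketch of k-means weight quantization: cluster the weight
# values into 2^bits levels and replace each weight by its cluster
# centroid, so only small integer indices plus a short codebook need to
# be stored on-chip. (The paper alternates this with retraining.)

def kmeans_quantize(w, bits, iters=50):
    k = 2 ** bits
    c = np.linspace(w.min(), w.max(), k)       # initial centroids
    for _ in range(iters):
        # assign each weight to its nearest centroid
        idx = np.abs(w[:, None] - c[None, :]).argmin(axis=1)
        # move each used centroid to the mean of its cluster
        for j in range(k):
            if np.any(idx == j):
                c[j] = w[idx == j].mean()
    return c[idx], idx, c

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, 1000)                   # hypothetical weights
wq, idx, codebook = kmeans_quantize(w, bits=4)
print(codebook.size, round(float(np.abs(w - wq).mean()), 4))
```

Storing 4-bit indices plus a 16-entry codebook in place of 64-bit floats is the kind of reduction the abstract reports.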

  12. Positional glow curve simulation for thermoluminescent detector (TLD) system design

    NASA Astrophysics Data System (ADS)

    Branch, C. J.; Kearfott, K. J.

    1999-02-01

    Multi-element and thin-element dosimeters, variable heating rate schemes, and glow-curve analysis have been employed to improve environmental and personnel dosimetry using thermoluminescent detectors (TLDs). Detailed analysis of the effects of errors and optimization of techniques would be highly desirable. However, an understanding of the relationship between TL light production, light attenuation, and precise heating schemes is made difficult because of experimental challenges involved in measuring positional TL light production and temperature variations as a function of time. This work reports the development of a general-purpose computer code, the thermoluminescent detector simulator TLD-SIM, to simulate the heating of any TLD type using a variety of conventional and experimental heating methods including pulsed focused or unfocused lasers with Gaussian or uniform cross sections, planchet, hot gas, hot finger, optical, infrared, or electrical heating. TLD-SIM has been used to study the impact on the TL light production of varying the input parameters, which include: detector composition, heat capacity, heat conductivity, physical size, and density; trapped electron density, the frequency factor of oscillation of electrons in the traps, and trap-conduction band potential energy difference; heating scheme source terms and heat transfer boundary conditions; and TL light scatter and attenuation coefficients. Temperature profiles and glow curves as a function of position and time, as well as the corresponding temporally and/or spatially integrated glow values, may be plotted while varying any of the input parameters. Examples illustrating TLD system functions, including glow curve variability, will be presented. The flexible capabilities of TLD-SIM promise to enable improved TLD system design.
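For readers unfamiliar with glow curves, the standard first-order (Randall-Wilkins) kinetics for a single trap under a linear heating ramp conveys the flavor of what such a simulator computes at each position (an illustrative single-trap model with hypothetical parameters, not the TLD-SIM code itself):

```python
import numpy as np

# Illustrative first-order (Randall-Wilkins) glow curve for one trap of
# depth E (eV) and frequency factor s (1/s) under a linear ramp of rate
# beta (K/s):
#   I(T) = n0 * s * exp(-E/kT) * exp(-(s/beta) * INT exp(-E/kT') dT')
# All parameter values below are hypothetical.

k_B = 8.617e-5                     # Boltzmann constant, eV/K

def glow_curve(T, E=1.0, s=1e12, beta=1.0, n0=1.0):
    rate = s * np.exp(-E / (k_B * T))
    # cumulative trap depletion along the ramp (trapezoidal integral)
    depletion = np.concatenate(([0.0], np.cumsum(
        0.5 * (rate[1:] + rate[:-1]) * np.diff(T)))) / beta
    return n0 * rate * np.exp(-depletion)

T = np.linspace(300.0, 600.0, 3001)
I = glow_curve(T)
print(round(float(T[I.argmax()])))  # peak temperature in K
```

Summing such curves over multiple traps and detector positions, with realistic heating schemes and light attenuation, is the kind of computation the simulator performs.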

  13. Feedback attitude sliding mode regulation control of spacecraft using arm motion

    NASA Astrophysics Data System (ADS)

    Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu

    2013-09-01

    The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions of the manipulator’s motion tracking problem achieve only asymptotic stabilization, so these controllers cannot realize precise attitude regulation in the presence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with a zero transient process. Due to the switching effects of the variable structure controller, once the tracking error reaches the designed hyper-plane, it is restricted to this plane permanently even in the presence of external disturbances, so precise attitude regulation can be achieved. Furthermore, taking non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used to replace sign functions to smooth the control torques. The relations between the upper bounds of the tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted. The results show that the spacecraft’s attitude can be regulated to the desired position by using the proposed algorithm, with a steady-state error of 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion, and improves the precision of spacecraft attitude regulation.
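The saturation-function remedy for chattering mentioned above can be sketched on a toy double-integrator tracking error (not the spacecraft or manipulator dynamics; the gains and boundary-layer width are hypothetical):

```python
# Sketch of the chattering remedy on a toy double integrator (not the
# spacecraft/manipulator dynamics; k, lam, and the boundary-layer width
# phi are hypothetical): the sliding-mode control u = -k*sat(s/phi)
# replaces the discontinuous sign(s) with a saturation function,
# trading a thin boundary layer for smooth control torques.

def sat(z):
    return max(-1.0, min(1.0, z))

dt, k, lam, phi = 0.001, 5.0, 2.0, 0.05
x, v = 1.0, 0.0                        # tracking error and its rate
for _ in range(20000):                 # 20 s of simulated motion
    s = v + lam * x                    # sliding variable
    u = -k * sat(s / phi)              # smoothed switching control
    v += u * dt                        # double-integrator dynamics
    x += v * dt

print(abs(x) < 0.01, abs(v) < 0.01)    # error settles near zero
```

With `sign(s)` in place of `sat(s/phi)`, the same loop would switch the control at every step near the surface; the saturation confines that switching to a continuous ramp inside the boundary layer.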

  14. RZWQM predicted effects of soil N testing with incorporated automatic parameter optimization software (PEST) and weather input quality control

    USDA-ARS?s Scientific Manuscript database

    Among the most promising tools available for determining precise N requirements are soil mineral N tests. Field tests that evaluated this practice, however, have been conducted under only limited weather and soil conditions. Previous research has shown that using agricultural systems models such as ...

  15. Assimilating Leaf Area Index Estimates from Remote Sensing into the Simulations of a Cropping Systems Model

    USDA-ARS?s Scientific Manuscript database

    Spatial extrapolation of cropping systems models for regional crop growth and water use assessment and farm-level precision management has been limited by the vast model input requirements and the model sensitivity to parameter uncertainty. Remote sensing has been proposed as a viable source of spat...

  16. Cognitive Control of Saccadic Eye Movements

    ERIC Educational Resources Information Center

    Hutton, S. B.

    2008-01-01

    The saccadic eye movement system provides researchers with a powerful tool with which to explore the cognitive control of behaviour. It is a behavioural system whose limited output can be measured with exceptional precision, and whose input can be controlled and manipulated in subtle ways. A range of cognitive processes (notably those involved in…

  17. Impact of DEM and soils on topographic index, as used in TopoSWAT

    USDA-ARS?s Scientific Manuscript database

    A topographic index (TI), comprised of slope and upstream contributing area, is used in TopoSWAT to help account for variable source runoff and soil moisture. The level of precision in the GIS input data layers can substantially impact the calculations of the topographic index layer and affect the a...

  18. Automatic sweep circuit

    DOEpatents

    Keefe, Donald J.

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.
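The sweep logic can be sketched in software (the timing, amplitudes, and threshold below are hypothetical; the patent describes a hardware circuit built from digital counters):

```python
import random

# Illustrative sketch of the sweep idea (hypothetical numbers, not the
# patented circuit): repeatedly sample a fixed time window after each
# trigger, accumulate readings for statistical accuracy, and move on to
# the next window if no response rises above the noise background.

random.seed(0)

def signal(t_ms):                  # noisy channel with an evoked response
    noise = random.gauss(0.0, 1.0)
    return noise + (5.0 if 40 <= t_ms < 50 else 0.0)

def sweep(window_ms=10, n_windows=10, repeats=200, threshold=2.0):
    for w in range(n_windows):
        t0 = w * window_ms
        mean = sum(signal(t0 + 5) for _ in range(repeats)) / repeats
        if mean > threshold:       # response found in this window
            return t0
    return None

print(sweep())                     # start of the window holding the response
```

Averaging over many trigger repetitions is what buys the statistical accuracy: a response buried in unit-variance noise stands out clearly after a few hundred repeats.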

  19. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.

  20. Computational simulation of weld microstructure and distortion by considering process mechanics

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Mikami, Y.; Okano, S.; Itoh, S.

    2009-05-01

    Highly precise fabrication of welded materials is in great demand, and so microstructure and distortion controls are essential. Furthermore, consideration of process mechanics is important for intelligent fabrication. In this study, the microstructure and hardness distribution in multi-pass weld metal are evaluated by computational simulations under the conditions of multiple heat cycles and phase transformation. Because conventional CCT diagrams of weld metal are not available even for single-pass weld metal, new diagrams for multi-pass weld metals are created. The weld microstructure and hardness distribution are precisely predicted when using the created CCT diagram for multi-pass weld metal and calculating the weld thermal cycle. Weld distortion is also investigated by using numerical simulation with a thermal elastic-plastic analysis. In conventional evaluations of weld distortion, the average heat input has been used as the dominant parameter; however, it is difficult to consider the effect of molten pool configurations on weld distortion based only on the heat input. Thus, the effect of welding process conditions on weld distortion is studied by considering molten pool configurations, determined by temperature distribution and history.

  1. Method and system for providing precise multi-function modulation

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz (Inventor); Sumida, Joe T. (Inventor)

    1989-01-01

    A method and system is disclosed which provides precise multi-function digitally implementable modulation for a communication system. The invention provides a modulation signal for a communication system in response to an input signal from a data source. A digitized time response is generated from samples of a time domain representation of a spectrum profile of a selected modulation scheme. The invention generates and stores coefficients for each input symbol in accordance with the selected modulation scheme. The output signal is provided by a plurality of samples, each sample being generated by summing the products of a predetermined number of the coefficients and a predetermined number of the samples of the digitized time response. In a specific illustrative implementation, the samples of the output signals are converted to analog signals, filtered and used to modulate a carrier in a conventional manner. The invention is versatile in that it allows for the storage of the digitized time responses and corresponding coefficient lookup table of a number of modulation schemes, any of which may then be selected for use in accordance with the teachings of the invention.
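The sample-generation rule described above, each output sample formed as a sum of products of symbol coefficients and stored time-response samples, is in software terms a lookup-table FIR filter. A minimal sketch with a hypothetical pulse shape and BPSK coefficients (not the patented scheme):

```python
import numpy as np

# Sketch of the table-driven idea (hypothetical pulse, not the patented
# scheme): store samples of a pulse's time response, map each input
# symbol to a coefficient, and form every output sample as a sum of
# coefficient-weighted response samples, i.e. a lookup-table FIR filter.

sps = 8                                        # samples per symbol
t = np.arange(-4 * sps, 4 * sps + 1) / sps
pulse = np.sinc(t) * np.hamming(t.size)        # stored digitized response

symbols = np.array([1, -1, -1, 1, 1, -1])      # BPSK coefficients
up = np.zeros(symbols.size * sps)
up[::sps] = symbols                            # one coefficient per symbol
baseband = np.convolve(up, pulse)              # sum of coefficient * response

print(baseband.size)
```

Swapping in a different stored response (and coefficient mapping) changes the modulation scheme without changing this sample-generation loop, which is the versatility the record describes.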

  2. Intelligence rules of hysteresis in the feedforward trajectory control of piezoelectrically-driven nanostagers

    NASA Astrophysics Data System (ADS)

    Bashash, Saeid; Jalili, Nader

    2007-02-01

    Piezoelectrically-driven nanostagers have limited performance in a variety of feedforward and feedback positioning applications because of their nonlinear hysteretic response to input voltage. The hysteresis phenomenon is well known for its complex and multi-path behavior. To understand the underlying physics of this phenomenon and to develop an efficient compensation strategy, the intelligence properties of hysteresis with the effects of non-local memories are discussed here. Through a set of experiments on a piezoelectrically-driven nanostager with a high-resolution capacitive position sensor, it is shown that for the precise prediction of the hysteresis path, certain memory units are required to store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for the precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the ever-present nonlinearity. Experimental results demonstrate that the controller largely eliminates the nonlinear effect, provided that sufficient memory units are chosen for the inverse model.
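The memory-dependent behavior described above can be illustrated with a single play (backlash) operator, one of the simplest memory-based hysteresis elements (the operator width `r` and the input sequence are hypothetical; the paper's trained model is far more elaborate):

```python
# Minimal memory-based hysteresis sketch (a single play/backlash
# operator with hypothetical width r, not the paper's trained model):
# the output follows the input only after the input has moved more than
# r away from the last turning point, so past extrema are implicitly
# remembered in the output state.

def play_operator(inputs, r=0.2, y0=0.0):
    y, out = y0, []
    for u in inputs:
        y = max(u - r, min(u + r, y))   # keep y within a band of width 2r
        out.append(y)
    return out

u = [0.0, 0.5, 1.0, 0.5, 0.0, 0.5]      # up, down, up again
print([round(v, 2) for v in play_operator(u)])
# → [0.0, 0.3, 0.8, 0.7, 0.2, 0.3]
```

The same input value 0.5 maps to 0.3 on the ascending branch and 0.7 on the descending one, which is exactly the multi-path behavior the abstract describes.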

  3. Cloud Absorption Radiometer Autonomous Navigation System - CANS

    NASA Technical Reports Server (NTRS)

    Kahle, Duncan; Gatebe, Charles; McCune, Bill; Hellwig, Dustan

    2013-01-01

    CAR (cloud absorption radiometer) acquires spatial reference data from host aircraft navigation systems. This poses various problems during CAR data reduction, including navigation data format, accuracy of position data, accuracy of airframe inertial data, and navigation data rate. Incorporating its own navigation system, which included GPS (Global Positioning System), roll axis inertia and rates, and three axis acceleration, CANS expedites data reduction and increases the accuracy of the CAR end data product. CANS provides a self-contained navigation system for the CAR, using inertial reference and GPS positional information. The intent of the software application was to correct the sensor with respect to aircraft roll in real time based upon inputs from a precision navigation sensor. In addition, the navigation information (including GPS position), attitude data, and sensor position details are all streamed to a remote system for recording and later analysis. CANS comprises a commercially available inertial navigation system with integral GPS capability (Attitude Heading Reference System AHRS) integrated into the CAR support structure and data system. The unit is attached to the bottom of the tripod support structure. The related GPS antenna is located on the P-3 radome immediately above the CAR. The AHRS unit provides a RS-232 data stream containing global position and inertial attitude and velocity data to the CAR, which is recorded concurrently with the CAR data. This independence from aircraft navigation input provides for position and inertial state data that accounts for very small changes in aircraft attitude and position, sensed at the CAR location as opposed to aircraft state sensors typically installed close to the aircraft center of gravity. More accurate positional data enables quicker CAR data reduction with better resolution. The CANS software operates in two modes: initialization/calibration and operational. 
In the initialization/calibration mode, the software aligns the precision navigation sensors and initializes the communications interfaces with the sensor and the remote computing system. It also monitors the navigation data state for quality and ensures that the system maintains the required fidelity for attitude and positional information. In the operational mode, the software runs at 12.5 Hz and gathers the required navigation/attitude data, computes the required sensor correction values, and then commands the sensor to the required roll correction. In this manner, the sensor stays very near vertical at all times, greatly improving the resulting collected data and imagery. CANS greatly improves the quality of the collected imagery and data. In addition, the software component of the system outputs a concisely formatted, high-speed data stream that can be used for further science data processing. This precise, time-stamped data can also benefit other instruments on the same aircraft platform by providing extra information from the mission flight.

  4. Ecological intensification of cereal production systems: yield potential, soil quality, and precision agriculture.

    PubMed

    Cassman, K G

    1999-05-25

    Wheat (Triticum aestivum L.), rice (Oryza sativa L.), and maize (Zea mays L.) provide about two-thirds of all energy in human diets, and four major cropping systems in which these cereals are grown represent the foundation of human food supply. Yield per unit time and land has increased markedly during the past 30 years in these systems, a result of intensified crop management involving improved germplasm, greater inputs of fertilizer, production of two or more crops per year on the same piece of land, and irrigation. Meeting future food demand while minimizing expansion of cultivated area primarily will depend on continued intensification of these same four systems. The manner in which further intensification is achieved, however, will differ markedly from the past because the exploitable gap between average farm yields and genetic yield potential is closing. At present, the rate of increase in yield potential is much less than the expected increase in demand. Hence, average farm yields must reach 70-80% of the yield potential ceiling within 30 years in each of these major cereal systems. Achieving consistent production at these high levels without causing environmental damage requires improvements in soil quality and precise management of all production factors in time and space. The scope of the scientific challenge related to these objectives is discussed. It is concluded that major scientific breakthroughs must occur in basic plant physiology, ecophysiology, agroecology, and soil science to achieve the ecological intensification that is needed to meet the expected increase in food demand.

  5. Unexpected arousal modulates the influence of sensory noise on confidence

    PubMed Central

    Allen, Micah; Frank, Darya; Schwarzkopf, D Samuel; Fardo, Francesca; Winston, Joel S; Hauser, Tobias U; Rees, Geraint

    2016-01-01

    Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states. DOI: http://dx.doi.org/10.7554/eLife.18103.001 PMID:27776633

  6. Control and acquisition systems for new scanning transmission x-ray microscopes at Advanced Light Source (abstract)

    NASA Astrophysics Data System (ADS)

    Tyliszczak, T.; Hitchcock, P.; Kilcoyne, A. L. D.; Ade, H.; Hitchcock, A. P.; Fakra, S.; Steele, W. F.; Warwick, T.

    2002-03-01

    Two new scanning x-ray transmission microscopes are being built at beamline 5.3.2 and beamline 7.0 of the Advanced Light Source that have novel aspects in their control and acquisition systems. Both microscopes use multiaxis laser interferometry to improve the precision of pixel location during imaging and energy scans as well as to remove image distortions. Beam line 5.3.2 is a new beam line where the new microscope will be dedicated to studies of polymers in the 250-600 eV energy range. Since this is a bending magnet beam line with lower x-ray brightness than undulator beam lines, special attention is given to the design not only to minimize distortions and vibrations but also to optimize the controls and acquisition to improve data collection efficiency. 5.3.2 microscope control and acquisition is based on a PC computer running WINDOWS 2000. All mechanical stages are moved by stepper motors with rack mounted controllers. A dedicated counter board is used for counting and timing and a multi-input/output board is used for analog acquisition and control of the focusing mirror. A three axis differential laser interferometer is being used to improve stability and precision by careful tracking of the relative positions of the sample and zone plate. Each axis measures the relative distance between a mirror placed on the sample stage and a mirror attached to the zone plate holder. Agilent Technologies HP 10889A servo-axis interferometer boards are used. While they were designed to control servo motors, our tests show that they can be used to directly control the piezo stage. The use of the interferometer servo-axis boards provides excellent point stability for spectral measurements. The interferometric feedback also provides active vibration isolation which reduces deleterious impact of mechanical vibrations up to 20-30 Hz. It also can improve the speed and precision of image scans. 
Custom C++ software has been written to provide user-friendly control of the microscope and integration with visual light microscopy indexing of the samples. The beamline 7.0 microscope upgrade is a new design which will replace the existing microscope. The design is similar to that of beamline 5.3.2, including interferometric position encoding. However, the acquisition and control are based on VXI systems, a Sun computer, and LABVIEW™ software. The main objective of the BL 7.0 microscope upgrade is to achieve precise image scans at very high speed (pixel dwells as short as 10 μs) to take full advantage of the high brightness of the 7.0 undulator beamline. Results of tests and a discussion of the benefits of our scanning microscope designs will be presented.

  7. Limits on the prediction of helicopter rotor noise using thickness and loading sources: Validation of helicopter noise prediction techniques

    NASA Technical Reports Server (NTRS)

    Succi, G. P.

    1983-01-01

    The techniques of helicopter rotor noise prediction attempt to describe precisely the details of the noise field and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The Farassat noise prediction technique was studied, and high speed helicopter noise prediction using more detailed representations of the thickness and loading noise sources was investigated. These predictions were based on the measured blade surface pressures on an AH-1G rotor and compared to the measured sound field. Although refinements in the representation of the thickness and loading noise sources improve the calculation, there are still discrepancies between the measured and predicted sound field. Analysis of the blade surface pressure data indicates shocks on the blades, which are probably responsible for these discrepancies.

  8. Change of reference frame for tactile localization during child development.

    PubMed

    Pagel, Birthe; Heed, Tobias; Röder, Brigitte

    2009-11-01

    Temporal order judgements (TOJ) for two tactile stimuli, one presented to the left and one to the right hand, are less precise when the hands are crossed over the midline than when the hands are uncrossed. This 'crossed hand' effect has been considered as evidence for a remapping of tactile input into an external reference frame. Since late, but not early, blind individuals show such remapping, it has been hypothesized that the use of an external reference frame develops during childhood. Five- to 10-year-old children were therefore tested with the tactile TOJ task, both with uncrossed and crossed hands. Overall performance in the TOJ task improved with age. While children older than 5 1/2 years displayed a crossed hand effect, younger children did not. Therefore the use of an external reference frame for tactile, and possibly multisensory, localization seems to be acquired at age 5.

  9. Cognitive diagnosis modelling incorporating item response times.

    PubMed

    Zhan, Peida; Jiao, Hong; Liao, Dandan

    2018-05-01

    To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
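The deterministic input, noisy "and" gate (DINA) model that the extended model above builds on has a compact closed form for the probability of a correct response; a minimal sketch (attribute patterns and parameter values are illustrative, and the response-time extension is not shown):

```python
def dina_prob(alpha, q, slip, guess):
    """P(correct response) for one item under the DINA model.

    alpha : 0/1 list, attributes mastered by the examinee
    q     : 0/1 list, the item's Q-matrix row (attributes the item requires)
    slip  : probability a capable examinee answers incorrectly
    guess : probability an incapable examinee answers correctly
    """
    # eta = 1 only if every attribute required by the item is mastered ("and" gate)
    eta = int(all(a >= qk for a, qk in zip(alpha, q)))
    return (1.0 - slip) * eta + guess * (1 - eta)
```

For example, an examinee mastering both required attributes answers correctly with probability 1 - slip, while one missing any required attribute succeeds only with probability guess; the extended model in the paper adds a lognormal response-time component on top of this kernel.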

  10. Developing an acute-physical-examination template for a regional EHR system aimed at improving inexperienced physicians' documentation.

    PubMed

    Lilholt, Lars; Haubro, Camilla Dremstrup; Møller, Jørn Munkhof; Aarøe, Jens; Højen, Anne Randorff; Gøeg, Kirstine Rosenbeck

    2013-01-01

    It is well-established that to increase acceptance of electronic clinical documentation tools, such as electronic health record (EHR) systems, it is important to have a strong relationship between those who document the clinical encounters and those who reap the benefit of digitalized and more structured documentation [1]. Therefore, templates for EHR systems benefit from being closely related to clinical practice, with a strong focus on primarily solving clinical problems. Clinical use as a driver for structured documentation has been the focus of the acute-physical-examination template (APET) development in the North Denmark Region. The template was developed through a participatory design in which precision and clarity of documentation were prioritized, along with fast registration. The resulting template has approximately 700 easily accessible input possibilities and will be evaluated in clinical practice in the first quarter of 2013.

  11. Enhanced viscous flow drag reduction using acoustic excitation

    NASA Technical Reports Server (NTRS)

    Nagel, Robert T.

    1987-01-01

    Proper acoustic excitation of a single large-eddy break-up device can increase the resulting drag reduction and, after approximately 40 to 50 delta downstream, provide net drag reduction. Precise optimization of the input time delay, amplitude and response threshold is difficult but possible to achieve. Drag reduction is improved with optimized conditions. The possibility of optimized processing strongly suggests a mechanism which involves interaction of the acoustic waves and large eddies at the trailing edge of the large-eddy break-up device. Although the mechanism for spreading of this phenomenon is unknown, it is apparent that the drag reduction effect does tend to spread spanwise as the flow convects downstream. The phenomenon is not unique to a particular blade configuration or flow velocity, although all data have been obtained at relatively low Reynolds numbers. The general repeatability of the results for small configuration changes serves as verification of the phenomenon.

  12. Health impact assessment of climate change in Bangladesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Deborah Imel

    2003-05-01

    Global climate change (GCC) may have serious and irreversible impacts. Improved methods are needed to predict and quantify health impacts, so that appropriate risk management strategies can be focused on vulnerable areas. The disability-adjusted life year (DALY) is proposed as an effective tool in environmental health impact assessment (HIA). The DALY accounts for years of life lost to premature death and/or morbidity. Both the DALY and the determinants-of-health approach are applied to HIA of GCC in Bangladesh. Based on historical data, a major storm event may result in approximately 290 DALY per 1000 population, including both deaths and injuries, compared to a current all-cause rate of about 280 per 1000 in the region. A more precise result would require a large input of data; however, this level of analysis may be sufficient to rank risks, and to motivate and target risk management efforts.

  13. Research on flight stability performance of rotor aircraft based on visual servo control method

    NASA Astrophysics Data System (ADS)

    Yu, Yanan; Chen, Jing

    2016-11-01

    A control method based on visual servo feedback is proposed to improve the attitude of a quad-rotor aircraft and to enhance its flight stability. Ground target images are obtained by a visual platform fixed on the aircraft. The scale-invariant feature transform (SIFT) algorithm is used to extract image feature information. Based on the image characteristic analysis, fast motion estimation is completed and used as an input signal of the PID flight control system to realize real-time attitude adjustment in flight. Imaging tests and simulation results show that the proposed method performs well in terms of flight stability compensation and attitude adjustment. The response speed and control precision meet the requirements of actual use, and the method is able to reduce or even eliminate the influence of environmental disturbance, so it offers research value for the problem of aircraft disturbance rejection.

  14. Evaluation of Laser Braze-welded Dissimilar Al-Cu Joints

    NASA Astrophysics Data System (ADS)

    Schmalen, Pascal; Plapper, Peter

    The thermal joining of aluminum and copper is a promising technology for automotive battery manufacturing. The dissimilar metals Al and Cu are difficult to weld due to their different physicochemical characteristics and the formation of intermetallic compounds (IMCs), which have reduced mechanical and electrical properties. There is a critical thickness of the IMCs below which the favored mechanical properties of the base material can be preserved. The laser braze-welding principle uses a position- and power-oscillated laser beam to reduce the energy input and the intermixing of both materials, and therefore achieves minimized IMC thickness. The evaluation of the weld seam is important to improve the joint performance and enhance the welding process. This paper is focused on the characterization and quantification of the IMCs. Mechanical, electrical and metallurgical methods are presented and performed on Al1050 and SF-Cu joints, and precise weld criteria are developed.

  15. Design of an ultra low power third order continuous time current mode ΣΔ modulator for WLAN applications.

    PubMed

    Behzadi, Kobra; Baghelani, Masoud

    2014-05-01

    This paper presents a third order continuous time current mode ΣΔ modulator for WLAN 802.11b standard applications. The proposed circuit utilizes a feedback architecture with scaled and optimized DAC coefficients. At the circuit level, we propose a modified cascade current mirror integrator with reduced input impedance, which yields more bandwidth and linearity and hence improves the dynamic range. Also, a novel, very fast and precise dynamic-latch-based current comparator with low power consumption is introduced. This ultra fast comparator facilitates increasing the sampling rate toward GHz frequencies. The modulator exhibits a dynamic range of more than 60 dB for a 20 MHz signal bandwidth and an OSR of 10 while consuming only 914 μW from a 1.8 V power supply. The FoM of the modulator is calculated by two different methods, and excellent performance is achieved for the proposed modulator.

  16. Design of an ultra low power third order continuous time current mode ΣΔ modulator for WLAN applications

    PubMed Central

    Behzadi, Kobra; Baghelani, Masoud

    2013-01-01

    This paper presents a third order continuous time current mode ΣΔ modulator for WLAN 802.11b standard applications. The proposed circuit utilizes a feedback architecture with scaled and optimized DAC coefficients. At the circuit level, we propose a modified cascade current mirror integrator with reduced input impedance, which yields more bandwidth and linearity and hence improves the dynamic range. Also, a novel, very fast and precise dynamic-latch-based current comparator with low power consumption is introduced. This ultra fast comparator facilitates increasing the sampling rate toward GHz frequencies. The modulator exhibits a dynamic range of more than 60 dB for a 20 MHz signal bandwidth and an OSR of 10 while consuming only 914 μW from a 1.8 V power supply. The FoM of the modulator is calculated by two different methods, and excellent performance is achieved for the proposed modulator. PMID:25685504

  17. Sub-Shot-Noise Transmission Measurement Enabled by Active Feed-Forward of Heralded Single Photons

    NASA Astrophysics Data System (ADS)

    Sabines-Chesterking, J.; Whittaker, R.; Joshi, S. K.; Birchall, P. M.; Moreau, P. A.; McMillan, A.; Cable, H. V.; O'Brien, J. L.; Rarity, J. G.; Matthews, J. C. F.

    2017-07-01

    Harnessing the unique properties of quantum mechanics offers the possibility of delivering alternative technologies that can fundamentally outperform their classical counterparts. These technologies deliver advantages only when components operate with performance beyond specific thresholds. For optical quantum metrology, the biggest challenge that impacts on performance thresholds is optical loss. Here, we demonstrate how including an optical delay and an optical switch in a feed-forward configuration with a stable and efficient correlated photon-pair source reduces the detector efficiency required to enable quantum-enhanced sensing down to the detection level of single photons and without postselection. When the switch is active, we observe a factor of 1.27 improvement in precision for transmission measurement on a per-input-photon basis, compared to the performance of a laser emitting an ideal coherent state and measured with the same detection efficiency as our setup. When the switch is inoperative, we observe no quantum advantage.
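The per-photon advantage reported above comes from the sub-Poissonian statistics of single-photon probes: transmitted counts are binomial rather than Poissonian. A toy Monte Carlo sketch (parameters are illustrative; heralding inefficiency, switch loss and detector imperfections are ignored) of why this beats an ideal coherent state of the same mean photon number:

```python
import numpy as np

rng = np.random.default_rng(0)
eta, n, trials = 0.5, 10_000, 2_000   # transmission, photons per probe, repetitions

# Single-photon probes: each of the n photons is transmitted independently,
# so detected counts are binomial and the estimator variance is eta*(1-eta)/n.
single = rng.binomial(n, eta, size=trials) / n

# Ideal coherent state with mean photon number n: detected counts are
# Poissonian, giving the shot-noise-limited variance eta/n.
coherent = rng.poisson(eta * n, size=trials) / n

# For lossless detection the precision advantage approaches 1/sqrt(1 - eta).
advantage = coherent.std() / single.std()
```

At eta = 0.5 the ideal advantage is sqrt(2); the measured factor of 1.27 in the paper reflects real-world losses that this sketch omits.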

  18. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4

    NASA Astrophysics Data System (ADS)

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-01

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores directly the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components: primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations, three square fields and one asymmetric rectangular field, were chosen for dose calculation validation to test field size and symmetry effects. Good agreement in terms of the γ-index formalism, for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high dose gradient regions, using the distance-to-agreement (DTA) concept, also showed satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in build-up and penumbra regions.
In terms of calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential Geant4 when running the same simulation code for both. The VSM developed for the widely used 6 MV/10 MV beams is a general concept that is easy to adapt to reconstruct comparable beam qualities for various linac configurations, facilitating its integration for MC treatment planning purposes.

  19. More Gamma More Predictions: Gamma-Synchronization as a Key Mechanism for Efficient Integration of Classical Receptive Field Inputs with Surround Predictions

    PubMed Central

    Vinck, Martin; Bosman, Conrado A.

    2016-01-01

    During visual stimulation, neurons in visual cortex often exhibit rhythmic and synchronous firing in the gamma-frequency (30–90 Hz) band. Whether this phenomenon plays a functional role during visual processing is not fully clear and remains heavily debated. In this article, we explore the function of gamma-synchronization in the context of predictive and efficient coding theories. These theories hold that sensory neurons utilize the statistical regularities in the natural world in order to improve the efficiency of the neural code, and to optimize the inference of the stimulus causes of the sensory data. In visual cortex, this relies on the integration of classical receptive field (CRF) data with predictions from the surround. Here we outline two main hypotheses about gamma-synchronization in visual cortex. First, we hypothesize that the precision of gamma-synchronization reflects the extent to which CRF data can be accurately predicted by the surround. Second, we hypothesize that different cortical columns synchronize to the extent that they accurately predict each other’s CRF visual input. We argue that these two hypotheses can account for a large number of empirical observations made on the stimulus dependencies of gamma-synchronization. Furthermore, we show that they are consistent with the known laminar dependencies of gamma-synchronization and the spatial profile of intercolumnar gamma-synchronization, as well as the dependence of gamma-synchronization on experience and development. Based on our two main hypotheses, we outline two additional hypotheses. First, we hypothesize that the precision of gamma-synchronization shows, in general, a negative dependence on RF size. In support, we review evidence showing that gamma-synchronization decreases in strength along the visual hierarchy, and tends to be more prominent in species with small V1 RFs. 
Second, we hypothesize that gamma-synchronized network dynamics facilitate the emergence of spiking output that is particularly information-rich and sparse. PMID:27199684

  20. Using input feature information to improve ultraviolet retrieval in neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Zhibin; Chang, Ni-Bin; Gao, Wei; Chen, Maosi; Zempila, Melina

    2017-09-01

    In neural networks, training/prediction accuracy and algorithmic efficiency can be improved significantly via accurate input feature extraction. In this study, spatial features of several important factors in retrieving surface ultraviolet (UV) radiation are extracted. An extreme learning machine (ELM) is then used to retrieve the 2014 surface UV over the continental United States from the extracted features. The results indicate that more input weights can improve the learning capacity of neural networks.
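An extreme learning machine of the kind used above trains only its output layer: the hidden weights are drawn at random and fixed, and the output weights are obtained by least squares. A minimal sketch on a toy regression task (the features and sizes are illustrative and unrelated to the actual UV-retrieval inputs):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Train an ELM: fixed random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only these weights are learned
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy check: learn y = sin(x) on [0, 3]
X = np.linspace(0.0, 3.0, 200)[:, None]
y = np.sin(X[:, 0])
W, b, beta = elm_fit(X, y)
err = np.abs(elm_predict(X, W, b, beta) - y).max()
```

Because the only trained parameters come from a single linear solve, fitting is orders of magnitude faster than backpropagation, which is the usual motivation for ELMs in large retrieval problems.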

  1. Using machine-learning methods to analyze economic loss function of quality management processes

    NASA Astrophysics Data System (ADS)

    Dzedik, V. A.; Lontsikh, P. A.

    2018-05-01

    During analysis of quality management systems, their economic component is often analyzed insufficiently. To overcome this issue, the concept of the economic loss function must be freed from tolerance-based thinking and addressed directly. Input data about economic losses in processes have a complex form, so solving this problem with standard tools is difficult. Machine learning techniques make it possible to obtain precise models of the economic loss function from even the most complex input data. The results of such an analysis describe the true efficiency of a process and can be used to make investment decisions.
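To illustrate what moving beyond tolerance (pass/fail) thinking means, a continuous loss curve can be fitted to observed loss data; the simulated data and simple quadratic (Taguchi-style) fit below are stand-ins for the paper's unspecified machine-learning models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process data: deviation of a quality characteristic from its
# target, and the observed economic loss (arbitrary currency units).
deviation = rng.uniform(-2.0, 2.0, 300)
loss = 5.0 * deviation**2 + rng.normal(0.0, 0.5, 300)   # true Taguchi constant k = 5

# A tolerance view would score every in-spec part as zero loss; fitting a
# quadratic loss curve instead recovers how cost grows with deviation.
k = np.polyfit(deviation, loss, 2)[0]
```

The recovered coefficient k quantifies the economic cost of variation even inside the tolerance band, which is the information a step-shaped tolerance loss discards.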

  2. Evaluation of pulsed streamer corona experiments to determine the O* radical yield

    NASA Astrophysics Data System (ADS)

    van Heesch, E. J. M.; Winands, G. J. J.; Pemen, A. J. M.

    2008-12-01

    The production of O* radicals in air by a pulsed streamer plasma is studied by integration of a large set of precise experimental data with the chemical kinetics of ozone production. The measured data comprise ozone production, plasma energy, streamer volume, streamer length, streamer velocity, humidity and gas-flow rate. Instead of entering input parameters into a kinetic model to calculate the end products, the opposite strategy is followed: since the amount of end product (ozone) is known from the measurements, the model is applied in the reverse direction to determine the input parameters, i.e. the O* radical concentration.

  3. Transform methods for precision continuum and control models of flexible space structures

    NASA Technical Reports Server (NTRS)

    Lupi, Victor D.; Turner, James D.; Chun, Hon M.

    1991-01-01

    An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.

  4. Precise GPS ephemerides from DMA and NGS tested by time transfer

    NASA Technical Reports Server (NTRS)

    Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine

    1992-01-01

    It was shown that the use of the Defense Mapping Agency's (DMA) precise ephemerides brings a significant improvement to the accuracy of GPS time transfer. At present a new set of precise ephemerides produced by the National Geodetic Survey (NGS) has been made available to the timing community. This study demonstrates that both types of precise ephemerides improve long-distance GPS time transfer and remove the effects of Selective Availability (SA) degradation of broadcast ephemerides. The issue of overcoming SA is also discussed in terms of the routine availability of precise ephemerides.

  5. Joint Logistics Commanders Precision Optics Study

    DTIC Science & Technology

    1987-06-19


  6. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input output (I/O) numerical processor capable of performing floating-point, multiple precision and other arithmetic functions at execution times which are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16 bit microprocessor, a numerical coprocessor with eight 80 bit registers running at a 5 MHz clock rate, 18K random access memory (RAM) and 16K electrically programmable read only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high order languages such as FORTRAN and PL/M-86.

  7. Pb and Sr isotope measurements by inductively coupled plasma mass spectrometer: efficient time management for precision improvement

    NASA Astrophysics Data System (ADS)

    Monna, F.; Loizeau, J.-L.; Thomas, B. A.; Guéguen, C.; Favarger, P.-Y.

    1998-08-01

    One of the factors limiting the precision of inductively coupled plasma mass spectrometry is the counting statistics, which depend upon acquisition time and ion fluxes. In the present study, the precision of the isotopic measurements of Pb and Sr is examined. The time of measurement is optimally shared for each isotope, using a mathematical simulation, to provide the lowest theoretical analytical error. Different algorithms of mass bias correction are also taken into account and evaluated in term of improvement of overall precision. Several experiments allow a comparison of real conditions with theory. The present method significantly improves the precision, regardless of the instrument used. However, this benefit is more important for equipment which originally yields a precision close to that predicted by counting statistics. Additionally, the procedure is flexible enough to be easily adapted to other problems, such as isotopic dilution.
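The optimal sharing of measurement time follows directly from Poisson counting statistics: for a ratio of two beams with count rates r1 and r2 measured for times t1 and t2, the relative variance is 1/(r1·t1) + 1/(r2·t2), which under a fixed total time is minimized when each dwell time scales as 1/sqrt(rate). A minimal sketch (count rates are illustrative; mass-bias correction and detector noise are not modeled):

```python
import numpy as np

def optimal_dwell(rates, total_time):
    """Split total acquisition time across isotopes to minimize the relative
    variance of their count-rate ratio under Poisson counting statistics.
    The optimum puts t_i proportional to 1/sqrt(rate_i): more time on weak beams."""
    w = 1.0 / np.sqrt(np.asarray(rates, dtype=float))
    return total_time * w / w.sum()

def ratio_rel_sd(rates, times):
    """Relative standard deviation of the ratio of the first two isotopes."""
    r = np.asarray(rates, dtype=float)
    t = np.asarray(times, dtype=float)
    return np.sqrt(1.0 / (r[0] * t[0]) + 1.0 / (r[1] * t[1]))

rates = [2.0e5, 5.0e3]            # counts/s, e.g. a major and a minor isotope
t_opt = optimal_dwell(rates, 60.0)
t_eq = [30.0, 30.0]               # naive equal split of the same 60 s
```

With these illustrative rates the optimal split spends most of the time on the weak beam and gives a visibly smaller predicted ratio uncertainty than the equal split, which is the kind of gain the simulation in the paper quantifies.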

  8. INDES User's guide multistep input design with nonlinear rotorcraft modeling

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.

  9. Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, J E; Fratoni, M; Kramer, K J

    A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis.
All cross-sectional uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties to produce larger, more physically accurate uncertainty estimates.

  10. Limitations of JEDI Models | Jobs and Economic Development Impact Models |

    Science.gov Websites

    The Jobs and Economic Development Impact (JEDI) models are input-output models for assessing economic impacts and jobs (see Chapter 5, pp. 136-142). They do not provide a precise forecast, and they do not reflect many other economic impacts that could affect real-world job impacts from a project.

  11. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.

    2013-08-01

    We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, characteristics that do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix on the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes.
We present and discuss their uncertainties and the precision of the ridge-to-mode correction schemes, through a detailed assessment of the sensitivity of the model to its input set. The precision of the ridge-to-mode correction is indicative of any possible residual systematic biases in the inferred mode characteristics. In our conclusions, we address how to further improve these estimates, and the implications for other data sets, like GONG+ and HMI.
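Averaging periodograms over several orthogonal tapers, as in the estimator above, reduces realization noise at the cost of spectral resolution. A minimal sketch using the simple sine-taper family (illustrative only; not necessarily the tapers used for the MDI analysis):

```python
import numpy as np

def sine_multitaper_psd(x, n_tapers=8):
    """Multitaper power-spectrum estimate with orthogonal sine tapers.

    Averaging n_tapers tapered periodograms cuts the realization noise by
    roughly 1/sqrt(n_tapers), at the cost of spectral resolution."""
    n = len(x)
    k = np.arange(1, n_tapers + 1)[:, None]   # taper index
    t = np.arange(1, n + 1)[None, :]          # sample index
    tapers = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * t / (n + 1))
    return (np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2).mean(axis=0)

# White-noise check: the multitaper estimate scatters far less around its
# mean level than a single raw periodogram does.
x = np.random.default_rng(0).normal(size=4096)
mt = sine_multitaper_psd(x)
raw = np.abs(np.fft.rfft(x)) ** 2
```

For blended ridges, where individual modes cannot be resolved anyway, trading resolution for a smoother spectrum in this way makes the subsequent ridge fitting far more stable.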

  12. Effects of azimuth-symmetric acceptance cutoffs on the measured asymmetry in unpolarized Drell-Yan fixed-target experiments

    NASA Astrophysics Data System (ADS)

    Bianconi, A.; Bussa, M. P.; Destefanis, M.; Ferrero, L.; Greco, M.; Maggiora, M.; Spataro, S.

    2013-04-01

    Fixed-target unpolarized Drell-Yan experiments often feature an acceptance depending on the polar angle of the lepton tracks in the laboratory frame. Typically leptons are detected in a defined angular range, with a dead zone in the forward region. If the cutoffs imposed by the angular acceptance are independent of the azimuth, at first sight they do not appear dangerous for a measurement of the cos(2 φ) asymmetry, which is relevant because of its association with the violation of the Lam-Tung rule and with the Boer-Mulders function. On the contrary, direct simulations show that up to 10 percent asymmetries are produced by these cutoffs. These artificial asymmetries present qualitative features that allow them to mimic the physical ones. They introduce some model dependence in the measurements of the cos(2 φ) asymmetry, since a precise reconstruction of the acceptance in the Collins-Soper frame requires a Monte Carlo simulation, that in turn requires some detailed physical input to generate event distributions. Although experiments in the eighties seem to have been aware of this problem, the possibility of using the Boer-Mulders function as an input parameter in the extraction of transversity has much increased the requirements of precision on this measurement. Our simulations show that the safest approach to these measurements is a strong cutoff on the Collins-Soper polar angle. This reduces statistics, but does not necessarily decrease the precision in a measurement of the Boer-Mulders function.

  13. Theta frequency background tunes transmission but not summation of spiking responses.

    PubMed

    Parameshwaran, Dhanya; Bhalla, Upinder S

    2013-01-01

    Hippocampal neurons are known to fire as a function of frequency and phase of spontaneous network rhythms, associated with the animal's behaviour. This dependence is believed to give rise to precise rate and temporal codes. However, it is not well understood how these periodic membrane potential fluctuations affect the integration of synaptic inputs. Here we used sinusoidal current injection to the soma of CA1 pyramidal neurons in the rat brain slice to simulate background oscillations in the physiologically relevant theta and gamma frequency range. We used a detailed compartmental model to show that somatic current injection gave comparable results to more physiological synaptically driven theta rhythms incorporating excitatory input in the dendrites, and inhibitory input near the soma. We systematically varied the phase of synaptic inputs with respect to this background, and recorded changes in response and summation properties of CA1 neurons using whole-cell patch recordings. The response of the cell was dependent on both the phase of synaptic inputs and frequency of the background input. The probability of the cell spiking for a given synaptic input was up to 40% greater during the depolarized phases between 30-135 degrees of theta frequency current injection. Summation gain, on the other hand, was not affected either by the background frequency or the phasic afferent inputs. This flat summation gain, coupled with the enhanced spiking probability during depolarized phases of the theta cycle, resulted in enhanced transmission of summed inputs during the same phase window of 30-135 degrees. Overall, our study suggests that although oscillations provide windows of opportunity to selectively boost transmission and EPSP size, summation of synaptic inputs remains unaffected during membrane oscillations.

  14. Retinal Origin of Direction Selectivity in the Superior Colliculus

    PubMed Central

    Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua

    2017-01-01

    Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly tuned retinal ganglion cells. The direction-selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394

  15. Emergence of binocular functional properties in a monocular neural circuit

    PubMed Central

    Ramdya, Pavan; Engert, Florian

    2010-01-01

    Sensory circuits frequently integrate converging inputs while maintaining precise functional relationships between them. For example, in mammals with stereopsis, neurons at the first stages of binocular visual processing show a close alignment of receptive-field properties for each eye. Still, basic questions about the global wiring mechanisms that enable this functional alignment remain unanswered, including whether the addition of a second retinal input to an otherwise monocular neural circuit is sufficient for the emergence of these binocular properties. We addressed this question by inducing a de novo binocular retinal projection to the larval zebrafish optic tectum and examining recipient neuronal populations using in vivo two-photon calcium imaging. Notably, neurons in rewired tecta were predominantly binocular and showed matching direction selectivity for each eye. We found that a model based on local inhibitory circuitry that computes direction selectivity using the topographic structure of both retinal inputs can account for the emergence of this binocular feature. PMID:19160507

  16. Synthetic Biology Platform for Sensing and Integrating Endogenous Transcriptional Inputs in Mammalian Cells.

    PubMed

    Angelici, Bartolomeo; Mailand, Erik; Haefliger, Benjamin; Benenson, Yaakov

    2016-08-30

    One of the goals of synthetic biology is to develop programmable artificial gene networks that can transduce multiple endogenous molecular cues to precisely control cell behavior. Realizing this vision requires interfacing natural molecular inputs with synthetic components that generate functional molecular outputs. Interfacing synthetic circuits with endogenous mammalian transcription factors has been particularly difficult. Here, we describe a systematic approach that enables integration and transduction of multiple mammalian transcription factor inputs by a synthetic network. The approach is facilitated by a proportional amplifier sensor based on synergistic positive autoregulation. The circuits efficiently transduce endogenous transcription factor levels into RNAi, transcriptional transactivation, and site-specific recombination. They also enable AND logic between pairs of arbitrary transcription factors. The results establish a framework for developing synthetic gene networks that interface with cellular processes through transcriptional regulators.

  17. Environmental impacts of precision feeding programs applied in pig production.

    PubMed

    Andretta, I; Hauschild, L; Kipper, M; Pires, P G S; Pomar, C

    2017-12-04

    This study was undertaken to evaluate the effect that switching from conventional to precision feeding systems during the growing-finishing phase would have on the potential environmental impact of Brazilian pig production. Standard life-cycle assessment procedures were used, with a cradle-to-farm gate boundary. The inputs and outputs of each interface of the life cycle (production of feed ingredients, processing in the feed industry, transportation and animal rearing) were organized in a model. Grain production was independently characterized in the Central-West and South regions of Brazil, whereas the pigs were raised in the South region. Three feeding programs were applied for growing-finishing pigs: conventional phase feeding by group (CON); precision daily feeding by group (PFG) (whole herd fed the same daily adjusted diet); and precision daily feeding by individual (PFI) (diets adjusted daily to match individual nutrient requirements). Raising pigs (1 t pig BW at farm gate) in South Brazil under the CON feeding program using grain cultivated in the same region led to emissions of 1840 kg of CO2-eq, 13.1 kg of PO4-eq and 32.2 kg of SO2-eq. Simulations using grain from the Central-West region showed a greater climate change impact. Compared with the previous scenario, a 17% increase in climate change impact was found when simulating with soybeans produced in Central-West Brazil, whereas a 28% increase was observed when simulating with corn and soybeans from Central-West Brazil. Compared with the CON feeding program, the PFG and PFI programs reduced the potential environmental impact. Applying the PFG program mitigated the potential climate change impact and eutrophication by up to 4%, and acidification impact by up to 3% compared with the CON program. Making a further adjustment by feeding pigs according to their individual nutrient requirements mitigated the potential climate change impact by up to 6% and the potential eutrophication and acidification impact by up to 5% compared with the CON program. The greatest environmental gains associated with the adoption of precision feeding were observed when the diet combined soybeans from Central-West Brazil with corn produced in Southern Brazil. The results clearly show that precision feeding is an effective approach for improving the environmental sustainability of Brazilian pig production.

  18. Spectroscopic Factors From the Single Neutron Pickup Reaction ^64Zn(d,t)

    NASA Astrophysics Data System (ADS)

    Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Wirth, H.-F.; Herten-Berger, R.

    2008-10-01

    A great deal of attention has recently been paid to high precision superallowed β-decay Ft values. With the availability of extremely high precision (<0.1%) experimental data, the precision on Ft is now limited by the ~1% theoretical corrections [I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008)]. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking correction calculations become more difficult due to the truncated model space. Experimental data are needed to help constrain input parameters for these calculations, and thus experimental spectroscopic factors for these nuclei are important. Preliminary results from the single-nucleon-transfer reaction ^64Zn(d,t)^63Zn will be presented, and the implications for calculations of isospin-symmetry breaking in the superallowed 0+ decay of ^62Ga will be discussed.

  19. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter is designed to solve this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm.

  20. A rapid radiative transfer model for reflection of solar radiation

    NASA Technical Reports Server (NTRS)

    Xiang, X.; Smith, E. A.; Justus, C. G.

    1994-01-01

    A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands of times faster than that of the more precise model when its stream resolution is set to generate precise calculations.
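    The delta function transformation mentioned above is the standard delta-Eddington scaling, in which a fraction f = g² of the scattering is treated as unscattered forward radiation and removed from the phase function; a minimal sketch of the scaled optical properties:

    ```python
    def delta_eddington_scale(tau, omega, g):
        """Delta-Eddington scaling of optical depth tau, single-scattering
        albedo omega and asymmetry parameter g: the forward-peaked fraction
        f = g**2 of the scattering is treated as unscattered transmission."""
        f = g * g
        tau_s = (1.0 - omega * f) * tau
        omega_s = (1.0 - f) * omega / (1.0 - omega * f)
        g_s = (g - f) / (1.0 - f)   # algebraically equal to g / (1 + g)
        return tau_s, omega_s, g_s

    # Example: a moderately thick, strongly forward-scattering layer.
    tau_s, omega_s, g_s = delta_eddington_scale(tau=1.0, omega=0.9, g=0.8)
    ```

    The scaled parameters are then fed to an ordinary two-stream (Eddington) solver, which is what makes the irradiances of the model above equivalent to the delta-Eddington approximation.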

  1. Probing the limits to positional information

    PubMed Central

    Gregor, Thomas; Tank, David W.; Wieschaus, Eric F.; Bialek, William

    2008-01-01

    The reproducibility and precision of biological patterning is limited by the accuracy with which concentration profiles of morphogen molecules can be established and read out by their targets. We consider four measures of precision for the Bicoid morphogen in the Drosophila embryo: The concentration differences that distinguish neighboring cells, the limits set by the random arrival of Bicoid molecules at their targets (which depends on absolute concentration), the noise in readout of Bicoid by the activation of Hunchback, and the reproducibility of Bicoid concentration at corresponding positions in multiple embryos. We show, through a combination of different experiments, that all of these quantities are ~10%. This agreement among different measures of accuracy indicates that the embryo is not faced with noisy input signals and readout mechanisms; rather the system exerts precise control over absolute concentrations and responds reliably to small concentration differences, approaching the limits set by basic physical principles. PMID:17632062
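    The "limits set by the random arrival of Bicoid molecules" are of the Berg-Purcell type: a sensor of linear size a averaging for time T in a concentration c of molecules with diffusion constant D cannot do better than δc/c ~ (D a c T)^(-1/2). A sketch with purely illustrative, order-of-magnitude parameter values (not the paper's measured ones):

    ```python
    import math

    def fractional_concentration_error(D, a, c, T):
        """Berg-Purcell-type readout limit: fractional error in sensing a
        concentration c with a sensor of size a, diffusion constant D and
        averaging time T scales as 1/sqrt(D*a*c*T)."""
        return 1.0 / math.sqrt(D * a * c * T)

    # Hypothetical inputs: D = 1 um^2/s, a = 0.1 um,
    # c = 5 molecules/um^3, T = 100 s  ->  ~14% fractional error
    print(fractional_concentration_error(1.0, 0.1, 5.0, 100.0))
    ```

    With plausible embryonic numbers this limit lands near the ~10% figure quoted above, which is why the paper can argue the system operates close to the physical bound.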

  2. Phenomenological study of the interplay between IR-improved DGLAP-CS theory and the precision of an NLO ME matched parton shower MC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majhi, S.K., E-mail: tpskm@iacs.res.in; Mukhopadhyay, A., E-mail: aditi_mukhopadhyay@baylor.edu; Ward, B.F.L., E-mail: bfl_ward@baylor.edu

    2014-11-15

    We present a phenomenological study of the current status of the application of our approach of exact amplitude-based resummation in quantum field theory to precision QCD calculations, by realistic MC event generator methods, as needed for precision LHC physics. We discuss recent results as they relate to the interplay of the attendant IR-improved DGLAP-CS theory of one of us and the precision of exact NLO matrix-element matched parton shower MCs in the Herwig6.5 environment as determined by comparison to recent LHC experimental observations on single heavy gauge boson production and decay. The level of agreement between the new theory and the data continues to be a reason for optimism. In the spirit of completeness, we discuss as well other approaches to the same theoretical predictions that we make here from the standpoint of physical precision with an eye toward the (sub-)1% QCD⊗EW total theoretical precision regime for LHC physics. - Highlights: • Using LHC data, we show that IR-improved DGLAP-CS kernels with exact NLO Shower/ME matching improves MC precision. • We discuss other possible approaches in comparison with ours. • We propose experimental tests to discriminate between competing approaches.

  3. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  4. Is it working? A look at the changing nutrient practices in Oregon's Southern Willamette Valley Groundwater Management Area

    NASA Astrophysics Data System (ADS)

    Pearlstein, S.; Compton, J.; Eldridge, A.; Henning, A.; Selker, J. S.; Brooks, J. R.; Schmitz, D.

    2016-12-01

    Groundwater nitrate contamination affects thousands of households in the southern Willamette Valley and many more across the Pacific Northwest. The southern Willamette Valley Groundwater Management Area (SWV GWMA) was established in 2004 due to nitrate levels in the groundwater exceeding the human health standard of 10 mg nitrate-N L-1. Much of the nitrogen input to the GWMA comes from agricultural nitrogen use, and thus efforts to reduce N inputs to groundwater are focused on improving N management. Previous work in the 1990s in the Willamette Valley by researchers at Oregon State University determined the importance of cover crops and irrigation practices and made recommendations to the local farm community for reducing nitrogen (N) leaching. We are currently re-sampling many of the same fields studied by OSU to examine the influence of current crops and nutrient management practices on nitrate leaching below the rooting zone. This study represents important crops currently grown in the GWMA and includes four grass fields, three vegetable row-crop fields, two peppermint and wheat fields, and one each of hazelnuts and blueberries. New nutrient management practices include slow-release fertilizers and precision agriculture approaches in some of the fields. Results from the first two years of sampling show nitrate leaching is lower in some crops, like row crops grown for seed, and higher in others, like perennial rye grass seed, when compared to the 1990s data. We will use field-level N input-output balances in order to determine the N use efficiency and compare this across crops and over time. The goal of this project is to provide information and tools that will help farmers, managers and conservation groups quantify the water quality benefits of management practices they are conducting or funding.

  5. An all-sky catalogue of solar-type dwarfs for exoplanetary transit surveys

    NASA Astrophysics Data System (ADS)

    Nascimbeni, V.; Piotto, G.; Ortolani, S.; Giuffrida, G.; Marrese, P. M.; Magrin, D.; Ragazzoni, R.; Pagano, I.; Rauer, H.; Cabrera, J.; Pollacco, D.; Heras, A. M.; Deleuil, M.; Gizon, L.; Granata, V.

    2016-12-01

    Most future surveys designed to discover transiting exoplanets, including TESS and PLATO, will target bright (V ≲ 13) and nearby solar-type stars having a spectral type later than F5. In order to enhance the probability of identifying transits, these surveys must cover a very large area on the sky, because of the intrinsically low areal density of bright targets. Unfortunately, no existing catalogue of stellar parameters is both deep and wide enough to provide a homogeneous input list. As the first Gaia data release exploitable for this purpose is expected to be released not earlier than late 2017, we have devised an improved reduced-proper-motion (RPM) method to discriminate late field dwarfs and giants by combining the fourth U.S. Naval Observatory CCD Astrograph Catalog (UCAC4) proper motions with AAVSO Photometric All-Sky Survey DR6 photometry, and relying on Radial Velocity Experiment DR4 as an external calibrator. The output, named UCAC4-RPM, is a publicly available, complete all-sky catalogue of solar-type dwarfs down to V ≃ 13.5, plus an extension to log g > 3.0 subgiants. The relatively low amount of contamination (defined as the fraction of false positives; <30 per cent) also makes UCAC4-RPM a useful tool for the past and ongoing ground-based transit surveys, which need to discard candidate signals originating from early-type or giant stars. As an application, we show how UCAC4-RPM may support the preparation of the TESS (that will map almost the entire sky) input catalogue and the input catalogue of PLATO, planned to survey more than half of the whole sky with exquisite photometric precision.
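    The reduced-proper-motion statistic underlying such a method is H = m + 5 log10(μ) + 5, with μ in arcsec/yr, which acts as a crude absolute-magnitude proxy; a minimal sketch (the actual UCAC4-RPM dwarf/giant cuts are calibrated against RAVE and are not reproduced here):

    ```python
    import math

    def reduced_proper_motion(mag, pm_mas_per_yr):
        """Reduced proper motion H = m + 5*log10(mu) + 5, with the proper
        motion mu in arcsec/yr (the input here is given in mas/yr).
        At a given colour, dwarfs and giants separate in the (colour, H)
        plane because H statistically tracks absolute magnitude."""
        mu_arcsec = pm_mas_per_yr / 1000.0
        return mag + 5.0 * math.log10(mu_arcsec) + 5.0

    # A nearby dwarf-like case: V = 10, mu = 100 mas/yr  ->  H = 10.0
    print(reduced_proper_motion(10.0, 100.0))
    ```

    H equals the true absolute magnitude plus a term depending on tangential velocity, which is why a statistical cut in (colour, H) can discriminate late dwarfs from giants without individual distances.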

  6. Weighted statistical parameters for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
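    One simple weighting scheme in this spirit — a sketch, not necessarily the paper's exact formulation — assigns each measurement half the time span between its two neighbours (trapezoidal weights), so that clumped points share weight while isolated points count fully:

    ```python
    import numpy as np

    def gap_weighted_stats(t, x):
        """Weighted mean and standard deviation for an irregularly sampled
        series: each point is weighted by half the span between its
        neighbours, down-weighting clumps of nearly simultaneous samples."""
        t = np.asarray(t, dtype=float)
        x = np.asarray(x, dtype=float)
        order = np.argsort(t)
        t, x = t[order], x[order]
        dt = np.diff(t)
        w = np.empty_like(t)
        w[0] = dt[0] / 2.0
        w[-1] = dt[-1] / 2.0
        w[1:-1] = (dt[:-1] + dt[1:]) / 2.0
        w /= w.sum()
        mean = np.sum(w * x)
        # Variance with the usual reliability-weight correction factor.
        var = np.sum(w * (x - mean) ** 2) / (1.0 - np.sum(w ** 2))
        return mean, np.sqrt(var)

    # Three clumped points near t=0 plus two isolated ones:
    # the clump is down-weighted, mean ~7.5 (unweighted mean is 4.0).
    t = [0.0, 0.01, 0.02, 10.0, 20.0]
    x = [0.0, 0.0, 0.0, 10.0, 10.0]
    mean, std = gap_weighted_stats(t, x)
    ```

    The weights adapt automatically to the sampling density, which is the property the paper exploits to stabilize descriptive statistics on Hipparcos-like cadences.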

  7. Beyond the cortical column: abundance and physiology of horizontal connections imply a strong role for inputs from the surround.

    PubMed

    Boucsein, Clemens; Nawrot, Martin P; Schnepel, Philipp; Aertsen, Ad

    2011-01-01

    Current concepts of cortical information processing and most cortical network models largely rest on the assumption that well-studied properties of local synaptic connectivity are sufficient to understand the generic properties of cortical networks. This view seems to be justified by the observation that the vertical connectivity within local volumes is strong, whereas horizontally, the connection probability between pairs of neurons drops sharply with distance. Recent neuroanatomical studies, however, have emphasized that a substantial fraction of synapses onto neocortical pyramidal neurons stems from cells outside the local volume. Here, we discuss recent findings on the signal integration from horizontal inputs, showing that they could serve as a substrate for reliable and temporally precise signal propagation. Quantification of connection probabilities and parameters of synaptic physiology as a function of lateral distance indicates that horizontal projections constitute a considerable fraction, if not the majority, of inputs from within the cortical network. Taking these non-local horizontal inputs into account may dramatically change our current view on cortical information processing.

  8. Reliability of system for precise cold forging

    NASA Astrophysics Data System (ADS)

    Krušič, Vid; Rodič, Tomaž

    2017-07-01

    The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on tool life in a closed-die forging process is presented in this paper. The scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that still enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created containing the maximal scatter of the principal input parameters for the closed-die upsetting process that still ensures the desired dimensional accuracy of the product and optimal tool life. The adjustment of the process input parameters is illustrated with the example of a mass-produced inner race of a homokinetic joint. High productivity in the manufacture of elements by cold massive extrusion is often achieved with multiple forming operations performed simultaneously on the same press. By redesigning the time sequence of forming operations during the working stroke of the multistage forming of a starter barrel, the course of the resultant force is optimized.

  9. Heat input and accumulation for ultrashort pulse processing with high average power

    NASA Astrophysics Data System (ADS)

    Finger, Johannes; Bornschlegel, Benedikt; Reininghaus, Martin; Dohrn, Andreas; Nießen, Markus; Gillner, Arnold; Poprawe, Reinhart

    2018-05-01

    Materials processing using ultrashort pulsed laser radiation with pulse durations <10 ps is known to enable very precise processing with negligible thermal load. However, even for the application of picosecond and femtosecond laser radiation, not all of the absorbed energy is converted into ablation products; a distinct fraction remains as residual heat in the processed workpiece. For low average power and power densities, this heat is usually not relevant for the processing results and dissipates into the workpiece. In contrast, when higher average powers and repetition rates are applied to increase the throughput and upscale ultrashort pulse processing, this heat input becomes relevant and significantly affects the achieved processing results. In this paper, we outline the relevance of heat input for ultrashort pulse processing, starting with the heat input of a single ultrashort laser pulse. Heat accumulation during ultrashort pulse processing with high repetition rate is discussed as well as heat accumulation for materials processing using pulse bursts. In addition, the relevance of heat accumulation with multiple scanning passes and processing with multiple laser spots is shown.
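    The build-up of residual heat with repetition rate can be sketched with a lumped zero-dimensional model, assuming each pulse adds a fixed baseline temperature rise dT_pulse that decays exponentially with time constant tau between pulses; both parameter values and the exponential decay are illustrative assumptions, not the full heat-conduction solution:

    ```python
    import numpy as np

    def baseline_temperature_rise(n_pulses, dT_pulse, f_rep, tau):
        """Residual (baseline) temperature rise seen by each pulse, for a
        lumped model where every pulse adds dT_pulse and the spot cools
        exponentially with time constant tau between pulses.  The result
        is a geometric series with ratio k = exp(-1/(f_rep*tau))."""
        k = np.exp(-1.0 / (f_rep * tau))      # decay factor between pulses
        n = np.arange(1, n_pulses + 1)
        return dT_pulse * k * (1.0 - k ** (n - 1)) / (1.0 - k)

    # Example: 400 kHz repetition rate, 10 us thermal time constant.
    # The baseline saturates at dT_pulse * k/(1-k) after many pulses.
    rise = baseline_temperature_rise(1000, 1.0, 400e3, 10e-6)
    ```

    When f_rep*tau >> 1 the ratio k approaches 1 and the saturated baseline k/(1-k) grows roughly linearly with repetition rate, which is the regime in which heat accumulation starts to dominate the processing result.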

  10. Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition.

    PubMed

    Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A

    2016-08-01

    The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding.

  11. Autonomous molecular cascades for evaluation of cell surfaces

    NASA Astrophysics Data System (ADS)

    Rudchenko, Maria; Taylor, Steven; Pallavi, Payal; Dechkovskaia, Alesia; Khan, Safana; Butler, Vincent P., Jr.; Rudchenko, Sergei; Stojanovic, Milan N.

    2013-08-01

    Molecular automata are mixtures of molecules that undergo precisely defined structural changes in response to sequential interactions with inputs. Previously studied nucleic acid-based automata include game-playing molecular devices (MAYA automata) and finite-state automata for the analysis of nucleic acids, with the latter inspiring circuits for the analysis of RNA species inside cells. Here, we describe automata based on strand-displacement cascades directed by antibodies that can analyse cells by using their surface markers as inputs. The final output of a molecular automaton that successfully completes its analysis is the presence of a unique molecular tag on the cell surface of a specific subpopulation of lymphocytes within human blood cells.

  12. Voltage mode electronically tunable full-wave rectifier

    NASA Astrophysics Data System (ADS)

    Petrović, Predrag B.; Vesković, Milan; Đukić, Slobodan

    2017-01-01

    The paper presents a new realization of a bipolar full-wave rectifier for input sinusoidal signals, employing one MO-CCCII (multiple output current controlled current conveyor), a zero-crossing detector (ZCD), and one resistor connected to a fixed potential. The circuit provides an operating frequency of up to 10 MHz with increased linearity and precision in the processing of the input voltage signal, and very low harmonic distortion. The errors related to the signal processing, together with error bounds, were investigated and are provided in the paper. The PSpice simulations are depicted and agree well with the theoretical predictions. The maximum power consumption of the converter is approximately 2.83 mW at ±1.2 V supply voltages.

  13. New method of processing heat treatment experiments with numerical simulation support

    NASA Astrophysics Data System (ADS)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A newly proposed method of processing heat-treatment experiments is presented, which yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable to the cooling of larger parts. Results from this method of testing make the boundary conditions used for the real cooling process more accurate, and can also be used to improve software databases and to optimize computational models. The aim is to make the computation of temperature fields for large hardened parts more precise, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximal thickness of the processed part, and given cooling conditions. The paper also presents a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes mainly influence the distributions of temperature, metallurgical phases, hardness and stress. The experiment also provides not only input data and data enabling optimization of the computational model, but at the same time verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.

  14. Population pharmacokinetic modelling of tramadol using inverse Gaussian function for the assessment of drug absorption from prolonged and immediate release formulations.

    PubMed

    Brvar, Nina; Mateović-Rojnik, Tatjana; Grabnar, Iztok

    2014-10-01

    This study aimed to develop a population pharmacokinetic model for tramadol that combines different input rates with disposition characteristics. Data used for the analysis were pooled from two phase I bioavailability studies with immediate (IR) and prolonged release (PR) formulations in healthy volunteers. Tramadol plasma concentration-time data were described by an inverse Gaussian function to model the complete input process linked to a two-compartment disposition model with first-order elimination. Although polymorphic CYP2D6 appears to be a major enzyme involved in the metabolism of tramadol, application of a mixture model to test the assumption of two and three subpopulations did not reveal any improvement of the model. The final model estimated parameters with reasonable precision and was able to estimate the interindividual variability of all parameters except for the relative bioavailability of PR vs. IR formulation. Validity of the model was further tested using the nonparametric bootstrap approach. Finally, the model was applied to assess absorption kinetics of tramadol and predict steady-state pharmacokinetics following administration of both types of formulations. For both formulations, the final model yielded a stable estimate of the absorption time profiles. Steady-state simulation supports switching of patients from IR to PR formulation.
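    The inverse Gaussian input function referred to above has the standard form f(t) = sqrt(MAT/(2π CV² t³)) · exp(-(t - MAT)²/(2 CV² MAT t)), parameterized by the mean absorption time MAT and its relative dispersion CV². A sketch, with parameter values chosen purely for illustration:

    ```python
    import numpy as np

    def inverse_gaussian_input(t, mat, cv2, f_dose=1.0):
        """Inverse Gaussian input-rate function: mat is the mean
        absorption time, cv2 the relative dispersion (CV^2) of that time,
        and f_dose the bioavailable fraction of the dose (names chosen
        here for illustration)."""
        t = np.asarray(t, dtype=float)
        rate = np.zeros_like(t)
        pos = t > 0
        tp = t[pos]
        rate[pos] = f_dose * np.sqrt(mat / (2.0 * np.pi * cv2 * tp**3)) \
            * np.exp(-(tp - mat) ** 2 / (2.0 * cv2 * mat * tp))
        return rate

    # Sanity check: the input rate integrates to ~f_dose (trapezoid rule).
    t = np.linspace(0.0, 100.0, 200001)
    r = inverse_gaussian_input(t, mat=2.0, cv2=0.5)
    auc = float(np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t)))
    ```

    In a population model this rate is convolved with the two-compartment disposition function; its flexible, right-skewed shape is what lets one description cover both IR and PR input profiles.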

  15. Temporal patterns of inputs to cerebellum necessary and sufficient for trace eyelid conditioning.

    PubMed

    Kalmbach, Brian E; Ohyama, Tatsuya; Mauk, Michael D

    2010-08-01

    Trace eyelid conditioning is a form of associative learning that requires several forebrain structures and cerebellum. Previous work suggests that at least two conditioned stimulus (CS)-driven signals are available to the cerebellum via mossy fiber inputs during trace conditioning: one driven by and terminating with the tone and a second driven by medial prefrontal cortex (mPFC) that persists through the stimulus-free trace interval to overlap in time with the unconditioned stimulus (US). We used electric stimulation of mossy fibers to determine whether this pattern of dual inputs is necessary and sufficient for cerebellar learning to express normal trace eyelid responses. We find that presenting the cerebellum with one input that mimics persistent activity observed in mPFC and the lateral pontine nuclei during trace eyelid conditioning and another that mimics tone-elicited mossy fiber activity is sufficient to produce responses whose properties quantitatively match trace eyelid responses using a tone. Probe trials with each input delivered separately provide evidence that the cerebellum learns to respond to the mPFC-like input (that overlaps with the US) and learns to suppress responding to the tone-like input (that does not). This contributes to precisely timed responses and the well-documented influence of tone offset on the timing of trace responses. Computer simulations suggest that the underlying cerebellar mechanisms involve activation of different subsets of granule cells during the tone and during the stimulus-free trace interval. These results indicate that tone-driven and mPFC-like inputs are necessary and sufficient for the cerebellum to learn well-timed trace conditioned responses.

  16. Relative location prediction in CT scan images using convolutional neural networks.

    PubMed

    Guo, Jiajia; Du, Hongwei; Zhu, Jianyue; Yan, Ting; Qiu, Bensheng

    2018-07-01

    Relative location prediction in computed tomography (CT) scan images is a challenging problem. Many traditional machine learning methods have been applied in attempts to alleviate this problem. However, the accuracy and speed of these methods cannot meet the requirements of medical scenarios. In this paper, we propose a regression model based on one-dimensional convolutional neural networks (CNN) to determine the relative location of a CT scan image both quickly and precisely. In contrast to other common CNN models that use a two-dimensional image as an input, the input of this CNN model is a feature vector extracted by a shape context algorithm with spatial correlation. Normalization via z-score is first applied as a pre-processing step. Then, in order to prevent overfitting and improve the model's performance, 20% of the elements of the feature vectors are randomly set to zero. This CNN model consists primarily of three one-dimensional convolutional layers, three dropout layers and two fully-connected layers with appropriate loss functions. A public dataset is employed to validate the performance of the proposed model using 5-fold cross validation. Experimental results demonstrate an excellent performance of the proposed model when compared with contemporary techniques, achieving a median absolute error of 1.04 cm and a mean absolute error of 1.69 cm. The time taken for each relative location prediction is approximately 2 ms. Results indicate that the proposed CNN method can contribute to quick and accurate relative location prediction in CT scan images, which can improve the efficiency of the medical picture archiving and communication system in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
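The z-score pre-processing and the two error metrics reported above are standard; a minimal sketch (with made-up toy numbers, not the paper's data) shows both:

```python
import statistics

# Illustrative only: z-score normalization of a feature vector, plus the
# median/mean absolute error metrics used to evaluate location predictions.

def zscore(xs):
    mu = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def median_abs_error(pred, true):
    return statistics.median(abs(p - t) for p, t in zip(pred, true))

def mean_abs_error(pred, true):
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

features = zscore([12.0, 15.0, 9.0, 20.0, 14.0])
pred = [1.0, 2.5, 3.0]   # hypothetical predicted axial positions [cm]
true = [1.5, 2.0, 4.0]
med = median_abs_error(pred, true)   # absolute errors: 0.5, 0.5, 1.0
mae = mean_abs_error(pred, true)
```

Note that the median is robust to the single large error, which is why the paper's median absolute error (1.04 cm) is smaller than its mean absolute error (1.69 cm).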

  17. Inverse-consistent rigid registration of CT and MR for MR-based planning and adaptive prostate radiation therapy

    NASA Astrophysics Data System (ADS)

    Rivest-Hénault, David; Dowson, Nicholas; Greer, Peter; Dowling, Jason

    2014-03-01

    MRI-alone treatment planning and adaptive MRI-based prostate radiation therapy are two promising techniques that could significantly increase the accuracy of the curative dose delivery process while reducing the total radiation dose. State-of-the-art methods rely on the registration of a patient MRI with an MR-CT atlas for the estimation of pseudo-CT [5]. This atlas itself is generally created by registering many CT and MRI pairs. Most registration methods are not symmetric: the order of the input images influences the result [8]. The computed transformation is therefore biased, introducing unwanted variability. This work examines how much a symmetric algorithm improves the registration. Methods: A robust symmetric registration algorithm is proposed that simultaneously optimises a half-space transform and its inverse. During the registration process, the two input volumetric images are transformed to a common position in space, therefore minimising any computational bias. An asymmetrical implementation of the same algorithm was used for comparison purposes. Results: Whole pelvis MRI and CT scans from 15 prostate patients were registered, as in the creation of MR-CT atlases. In each case, two registrations were performed, with different input image orders, and the transformation error quantified. Mean residuals of 0.63 ± 0.26 mm (translation) and (8.7 ± 7.3) × 10⁻³ rad (rotation) were found for the asymmetrical implementation, with corresponding values of 0.038 ± 0.039 mm and (1.6 ± 1.3) × 10⁻³ rad for the proposed symmetric algorithm, a substantial improvement. Conclusions: The increased registration precision will enhance the generation of pseudo-CT from MRI for atlas-based MR planning methods.
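The half-transform idea can be shown on a toy 1-D problem (this is not the paper's algorithm): instead of warping image A onto image B, both are moved halfway to a common midpoint, so swapping the inputs simply flips the sign of the recovered shift rather than changing its magnitude.

```python
import math

# Toy 1-D illustration of symmetric registration via half-transforms.
# Signals, feature positions, and the grid search are all invented.

def shift(signal, s):
    """Translate a 1-D signal by s samples (linear interpolation, zero padding)."""
    out = []
    for i in range(len(signal)):
        x = i - s
        j = math.floor(x)
        frac = x - j
        v0 = signal[j] if 0 <= j < len(signal) else 0.0
        v1 = signal[j + 1] if 0 <= j + 1 < len(signal) else 0.0
        out.append((1.0 - frac) * v0 + frac * v1)
    return out

def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def register_symmetric(a, b, search=range(-40, 41)):
    """Grid-search the translation t minimising SSD between a moved by +t/2
    and b moved by -t/2 (both inputs transformed to a common position)."""
    return min(search, key=lambda t: ssd(shift(a, t / 2.0), shift(b, -t / 2.0)))

n = 128
a = [math.exp(-((i - 50) ** 2) / 18.0) for i in range(n)]  # feature at sample 50
b = [math.exp(-((i - 60) ** 2) / 18.0) for i in range(n)]  # same feature at 60
t_ab = register_symmetric(a, b)
t_ba = register_symmetric(b, a)
```

By construction the cost is identical under swapping the inputs and negating t, which is the inverse-consistency property the paper measures as reduced residuals.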

  18. The Role of Cognitive Style in the Collocational Knowledge Development of Iranian EFL Learners through Input Flood Treatment

    ERIC Educational Resources Information Center

    Mahvelati, Elaheh Hamed; Mukundan, Jayakaran

    2012-01-01

    The differences in cognitive style between individuals and the effect these differences can have on second language learning have long been recognized by educators and researchers. Hence, this issue is the focal center of the present study. More precisely, the purpose of this study was to investigate the role of participants' cognitive style…

  19. Task Effects on Linguistic Complexity and Accuracy: A Large-Scale Learner Corpus Analysis Employing Natural Language Processing Techniques

    ERIC Educational Resources Information Center

    Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar

    2017-01-01

    Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…

  20. The Role of Entrenchment in Children's and Adults' Performance on Grammaticality Judgment Tasks

    ERIC Educational Resources Information Center

    Theakston, Anna L.

    2004-01-01

    Between the ages of 3 and 7 years, children have been observed to produce verb argument structure overgeneralization errors (e.g., Don't giggle me; Bowerman, 1982, 1988; Pinker, 1989). A number of recent studies have begun to find evidence that the precise distributional properties of the input may provide an important part of the explanation for…

  1. Amplifier improvement circuit

    NASA Technical Reports Server (NTRS)

    Sturman, J.

    1968-01-01

    A stable input stage was designed for use with an integrated-circuit operational amplifier to provide improved performance as an instrumentation-type amplifier. The circuit provides high input impedance, stable gain, good common-mode rejection, very low drift, and low output impedance.

  2. Automatic Generation of Data Types for Classification of Deep Web Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.
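The precision/recall trade-off between the two learners can be made concrete with the standard definitions; the data-type labels below are invented, not from the paper:

```python
# Precision = |predicted ∩ relevant| / |predicted|,
# recall    = |predicted ∩ relevant| / |relevant|.
# Hypothetical data-type labels for a citation Web source.

def precision_recall(predicted, relevant):
    tp = len(predicted & relevant)
    return tp / len(predicted), tp / len(relevant)

relevant = {"author", "title", "year", "journal"}
brute_force = {"author", "title", "year", "journal", "price", "isbn"}  # over-generates
clustering = {"author", "title"}                                       # conservative

p_bf, r_bf = precision_recall(brute_force, relevant)
p_cl, r_cl = precision_recall(clustering, relevant)
```

Here the over-generating learner reaches full recall at reduced precision, while the conservative one is perfectly precise but incomplete, mirroring the Brute-Force vs. Clustering-based behaviour described above.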

  3. Programmable single-cell mammalian biocomputers.

    PubMed

    Ausländer, Simon; Ausländer, David; Müller, Marius; Wieland, Markus; Fussenegger, Martin

    2012-07-05

    Synthetic biology has advanced the design of standardized control devices that program cellular functions and metabolic activities in living organisms. Rational interconnection of these synthetic switches resulted in increasingly complex designer networks that execute input-triggered genetic instructions with precision, robustness and computational logic reminiscent of electronic circuits. Using trigger-controlled transcription factors, which independently control gene expression, and RNA-binding proteins that inhibit the translation of transcripts harbouring specific RNA target motifs, we have designed a set of synthetic transcription–translation control devices that could be rewired in a plug-and-play manner. Here we show that these combinatorial circuits integrated a two-molecule input and performed digital computations with NOT, AND, NAND and N-IMPLY expression logic in single mammalian cells. Functional interconnection of two N-IMPLY variants resulted in bitwise intracellular XOR operations, and a combinatorial arrangement of three logic gates enabled independent cells to perform programmable half-subtractor and half-adder calculations. Individual mammalian cells capable of executing basic molecular arithmetic functions isolated or coordinated to metabolic activities in a predictable, precise and robust manner may provide new treatment strategies and bio-electronic interfaces in future gene-based and cell-based therapies.
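The expression logic named above maps onto ordinary digital logic; a sketch with plain Python bits standing in for the gene-expression gates (purely illustrative, not a model of the biology) shows how two N-IMPLY variants yield XOR and how a half-adder follows:

```python
# Boolean sketch of the gate logic mentioned in the abstract.

def NOT(a): return 1 - a
def AND(a, b): return a & b
def NAND(a, b): return NOT(a & b)
def N_IMPLY(a, b): return a & NOT(b)          # a AND NOT b

def XOR(a, b):
    # Two N-IMPLY variants combined give bitwise XOR:
    return N_IMPLY(a, b) | N_IMPLY(b, a)

def half_adder(a, b):
    """Sum bit = a XOR b, carry bit = a AND b."""
    return XOR(a, b), AND(a, b)

table = {(a, b): half_adder(a, b) for a in (0, 1) for b in (0, 1)}
```

A half-subtractor differs only in computing the borrow as NOT(a) AND b instead of the carry.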

  4. Time-Based Readout of a Silicon Photomultiplier (SiPM) for Time of Flight Positron Emission Tomography (TOF-PET)

    NASA Astrophysics Data System (ADS)

    Powolny, F.; Auffray, E.; Brunner, S. E.; Garutti, E.; Goettlich, M.; Hillemanns, H.; Jarron, P.; Lecoq, P.; Meyer, T.; Schultz-Coulon, H. C.; Shen, W.; Williams, M. C. S.

    2011-06-01

    Time of flight (TOF) measurements in positron emission tomography (PET) are very challenging in terms of timing performance, and should ideally achieve less than 100 ps FWHM precision. We present a time-based differential technique to read out silicon photomultipliers (SiPMs) which has less than 20 ps FWHM electronic jitter. The novel readout is a fast front end circuit (NINO) based on a first stage differential current mode amplifier with 20 Ω input resistance. The amplifier inputs are therefore connected differentially to the SiPM's anode and cathode ports. The leading edge of the output signal provides the time information, while the trailing edge provides the energy information. Based on a Monte Carlo photon-generation model, HSPICE simulations were run with a 3 × 3 mm² SiPM model, read out with a differential current amplifier. The results of these simulations are presented here and compared with experimental data obtained with a 3 × 3 × 15 mm³ LSO crystal coupled to a SiPM. The measured time coincidence precision and the limitations in the overall timing accuracy are interpreted using Monte Carlo/SPICE simulation, Poisson statistics, and geometric effects of the crystal.
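The leading-edge/trailing-edge principle described above is essentially time-over-threshold discrimination; a minimal sketch (with a made-up triangular pulse and arbitrary units, not NINO's actual response) illustrates it:

```python
# Illustrative time-over-threshold discriminator: the leading-edge crossing
# gives the time stamp, the width over threshold encodes the energy.
# Pulse shape and units below are invented.

def discriminate(samples, dt, threshold):
    """Return (leading-edge time, time over threshold), linearly
    interpolating the two threshold crossings of a sampled pulse."""
    t_lead = t_trail = None
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if t_lead is None and lo < threshold <= hi:
            t_lead = (i - 1 + (threshold - lo) / (hi - lo)) * dt
        elif t_lead is not None and lo >= threshold > hi:
            t_trail = (i - 1 + (lo - threshold) / (lo - hi)) * dt
            break
    return t_lead, (t_trail - t_lead if t_trail is not None else None)

# Triangular test pulse: linear rise over 10 samples, linear fall over 20.
pulse = [i / 10.0 for i in range(11)] + [1.0 - i / 20.0 for i in range(1, 21)]
t0, tot = discriminate(pulse, dt=1.0, threshold=0.5)

# A larger pulse crosses earlier and stays above threshold longer:
big = [2.0 * x for x in pulse]
t0_big, tot_big = discriminate(big, dt=1.0, threshold=0.5)
```

The width-vs-amplitude dependence is what lets a single binary output carry both timing and energy information.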

  5. Control of cerebellar granule cell output by sensory-evoked Golgi cell inhibition

    PubMed Central

    Duguid, Ian; Branco, Tiago; Chadderton, Paul; Arlt, Charlotte; Powell, Kate; Häusser, Michael

    2015-01-01

    Classical feed-forward inhibition involves an excitation–inhibition sequence that enhances the temporal precision of neuronal responses by narrowing the window for synaptic integration. In the input layer of the cerebellum, feed-forward inhibition is thought to preserve the temporal fidelity of granule cell spikes during mossy fiber stimulation. Although this classical feed-forward inhibitory circuit has been demonstrated in vitro, the extent to which inhibition shapes granule cell sensory responses in vivo remains unresolved. Here we combined whole-cell patch-clamp recordings in vivo and dynamic clamp recordings in vitro to directly assess the impact of Golgi cell inhibition on sensory information transmission in the granule cell layer of the cerebellum. We show that the majority of granule cells in Crus II of the cerebrocerebellum receive sensory-evoked phasic and spillover inhibition prior to mossy fiber excitation. This preceding inhibition reduces granule cell excitability and sensory-evoked spike precision, but enhances sensory response reproducibility across the granule cell population. Our findings suggest that neighboring granule cells and Golgi cells can receive segregated and functionally distinct mossy fiber inputs, enabling Golgi cells to regulate the size and reproducibility of sensory responses. PMID:26432880

  6. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. 
Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
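The cost of storing state in half precision can be sketched with Python's IEEE binary16 (`struct` format `'e'`, available since Python 3.6), in the spirit of the emulation tool mentioned above; the ensemble below is invented, not Lorenz '96 output:

```python
import math
import struct

# Minimal emulation of reduced-precision storage: round a double to IEEE
# half precision and back. Illustrative only.

def to_half(x):
    return struct.unpack('e', struct.pack('e', x))[0]

# Half precision has a 10-bit significand, so round-to-nearest keeps the
# relative error of a normal number within 2**-11:
err_pi = abs(to_half(math.pi) - math.pi) / math.pi

# Storing a (made-up) ensemble of states in half precision barely moves
# its mean, which is why a cheaper, larger ensemble can win overall:
ensemble = [1.0 + 0.01 * k for k in range(-50, 51)]
mean_full = sum(ensemble) / len(ensemble)
mean_half = sum(to_half(x) for x in ensemble) / len(ensemble)
```

The per-member rounding error (~5 × 10⁻⁴ relative) is far below typical observation and model error in an assimilation system, which is the tolerance argument made above.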

  7. Head-target tracking control of well drilling

    NASA Astrophysics Data System (ADS)

    Agzamov, Z. V.

    2018-05-01

    The paper considers a method of directional drilling trajectory control for oil and gas wells that uses predictive models. The developed method does not rely on optimization and therefore does not require high-performance computing. Nevertheless, it allows following the well-plan with high precision while taking process input saturation into account. The controller output is calculated both from the current target reference point of the well-plan and from a prediction of the well trajectory using an analytical model. This method allows following a well-plan not only in angular but also in Cartesian coordinates. Simulation of the control system confirmed high precision and good operational performance under a wide range of random disturbances.

  8. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
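The MVFOSM propagation used above has a compact generic form: Var(f) ≈ Σ (∂f/∂xᵢ)² σᵢ², with each term ranking one input variable. A sketch with a made-up power-law transport relation (not one of the paper's equations) reproduces the qualitative finding that grain size dominates when all inputs carry equal relative uncertainty:

```python
# First-order variance propagation (MVFOSM) sketch. The transport relation
# below is a hypothetical power law used only to illustrate the ranking.

def mvfosm(f, means, variances, rel_step=1e-6):
    """Var(f) ≈ sum_i (df/dx_i)^2 * var_i, central-difference derivatives.
    Returns (total variance, per-variable contributions)."""
    contrib = {}
    for k in means:
        h = rel_step * max(abs(means[k]), 1.0)
        up, dn = dict(means), dict(means)
        up[k] += h
        dn[k] -= h
        dfdx = (f(**up) - f(**dn)) / (2.0 * h)
        contrib[k] = dfdx ** 2 * variances[k]
    return sum(contrib.values()), contrib

def qb(d, S, Q):
    # invented relation: bed load vs grain size d, slope S, discharge Q
    return 0.05 * Q ** 1.2 * S ** 1.1 * d ** -1.6

means = {"d": 0.05, "S": 0.02, "Q": 30.0}                   # [m], [-], [m^3/s]
variances = {k: (0.10 * v) ** 2 for k, v in means.items()}  # 10% std dev each
var_q, contrib = mvfosm(qb, means, variances)
```

For a power law with equal relative uncertainties, each contribution scales with the squared exponent, so the variable with the largest |exponent| (here grain size) dominates the output variance.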

  9. Characterization of Fluorescent Polystyrene Microspheres for Advanced Flow Diagnostics

    NASA Technical Reports Server (NTRS)

    Maisto, Pietro M. F.; Lowe, K. Todd; Byun, Guibo; Simpson, Roger; Vercamp, Max; Danley, Jason E.; Koh, Brian; Tiemsin, Pacita; Danehy, Paul M.; Wohl, Christopher J.

    2013-01-01

    Fluorescent dye-doped polystyrene latex microspheres (PSLs) are being developed for velocimetry and scalar measurements in variable property flows. Two organic dyes, Rhodamine B (RhB) and dichlorofluorescein (DCF), are examined to assess laser-induced fluorescence (LIF) properties for flow imaging applications and single-shot temperature measurements. A major interest in the current research is the application of safe dyes, thus DCF is of particular interest, while RhB is used as a benchmark. Success is demonstrated for single-point laser Doppler velocimetry (LDV) and also imaging fluorescence, excited via a continuous wave 2 W laser beam, for exposures down to 10 ms. In contrast, when exciting with a pulsed Nd:YAG laser at 200 mJ/pulse, no fluorescence was detected, even when integrating tens of pulses. We show that this is due to saturation of the LIF signal at relatively low excitation intensities, 4-5 orders of magnitude lower than the pulsed laser intensity. A two-band LIF technique is applied in a heated jet, indicating that the technique effectively removes interfering inputs such as particle diameter variation. Temperature measurement uncertainties are estimated based upon the variance measured for the two-band LIF intensity ratio and the achievable dye temperature sensitivity, indicating that particles developed to date may provide about ±12.5 °C precision, while future improvements in dye temperature sensitivity and signal quality may enable single-shot temperature measurements with sub-degree precision.

  10. Aerosol optical depth in the European Brewer Network

    NASA Astrophysics Data System (ADS)

    López-Solano, Javier; Redondas, Alberto; Carlund, Thomas; Rodriguez-Franco, Juan J.; Diémoz, Henri; León-Luis, Sergio F.; Hernández-Cruz, Bentorey; Guirado-Fuentes, Carmen; Kouremeti, Natalia; Gröbner, Julian; Kazadzis, Stelios; Carreño, Virgilio; Berjón, Alberto; Santana-Díaz, Daniel; Rodríguez-Valido, Manuel; De Bock, Veerle; Moreta, Juan R.; Rimmer, John; Smedley, Andrew R. D.; Boulkelia, Lamine; Jepsen, Nis; Eriksen, Paul; Bais, Alkiviadis F.; Shirotov, Vadim; Vilaplana, José M.; Wilson, Keith M.; Karppinen, Tomi

    2018-03-01

    Aerosols play an important role in key atmospheric processes and feature high spatial and temporal variabilities. This has motivated scientific interest in the development of networks capable of measuring aerosol properties over large geographical areas in near-real time. In this work we present and discuss results of an aerosol optical depth (AOD) algorithm applied to instruments of the European Brewer Network. This network is comprised of close to 50 Brewer spectrophotometers, mostly located in Europe and adjacent areas, although instruments operating at, for example, South America and Australia are also members. Although we only show results for instruments calibrated by the Regional Brewer Calibration Center for Europe, the implementation of the AOD algorithm described is intended to be used by the whole network in the future. Using data from the Brewer intercomparison campaigns in the years 2013 and 2015, and the period in between, plus comparisons with Cimel sun photometers and UVPFR instruments, we check the precision, stability, and uncertainty of the Brewer AOD in the ultraviolet range from 300 to 320 nm. Our results show a precision better than 0.01, an uncertainty of less than 0.05, and, for well-maintained instruments, a stability similar to that of the ozone measurements. We also discuss future improvements to our algorithm with respect to the input data, their processing, and the characterization of the Brewer instruments for the measurement of AOD.
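A sun-photometric AOD retrieval like the Brewer's reduces, in its simplest form, to a Beer-Lambert inversion. The sketch below is heavily simplified (one common air-mass factor for all terms; in practice the extraterrestrial constant comes from Langley-type calibration and ozone/Rayleigh terms use their own air masses), and all numbers are invented:

```python
import math

# Simplified Beer-Lambert AOD retrieval:
#   I = I0 * exp(-m * (tau_aerosol + tau_rayleigh + tau_ozone))
# Single air-mass factor; synthetic counts; illustrative only.

def aerosol_optical_depth(counts, counts_top, airmass, tau_rayleigh, tau_ozone):
    tau_total = math.log(counts_top / counts) / airmass
    return tau_total - tau_rayleigh - tau_ozone

# Round-trip check with synthetic counts at a UV wavelength:
i0, m = 1.0e6, 2.0
tau_aer, tau_ray, tau_o3 = 0.30, 1.20, 0.60
counts = i0 * math.exp(-m * (tau_aer + tau_ray + tau_o3))
retrieved = aerosol_optical_depth(counts, i0, m, tau_ray, tau_o3)
```

The subtraction structure shows why AOD uncertainty in the UV hinges on accurate Rayleigh and ozone optical depths as well as on the instrument calibration.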

  11. Robust Requirements Tracing via Internet Search Technology: Improving an IV and V Technique. Phase 2

    NASA Technical Reports Server (NTRS)

    Hayes, Jane; Dekhtyar, Alex

    2004-01-01

    There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large document collections (on the order of millions to tens of millions of documents or more), and therefore most successful methods somewhat sacrifice precision and recall in order to achieve efficiency. At the same time, typical IR systems treat all user queries as independent of each other and assume that relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document; and individual requirements considered as query input to IR methods are not necessarily independent of each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his/her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn would lead to increased recall and precision. We developed several new methods during Phase II. (2) IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool. (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
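The core of IR-based requirements tracing is a vector-space ranking: each high-level requirement is treated as a query against the low-level requirements. A minimal TF-IDF/cosine sketch (requirement texts and IDs below are invented, and this is not the toolkit's actual implementation) shows the idea:

```python
import math
from collections import Counter

# Vector-space sketch of requirements tracing: TF-IDF weights plus cosine
# similarity rank candidate trace links for one high-level requirement.

def tfidf_vectors(docs):
    toks = {d: text.lower().split() for d, text in docs.items()}
    df = Counter(w for ws in toks.values() for w in set(ws))
    n = len(docs)
    return {d: {w: tf * math.log(1.0 + n / df[w]) for w, tf in Counter(ws).items()}
            for d, ws in toks.items()}

def cosine(u, v):
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def trace(high_level_req, low_level_reqs):
    """Rank candidate low-level requirements for one high-level requirement."""
    vecs = tfidf_vectors({**low_level_reqs, "_query": high_level_req})
    q = vecs.pop("_query")
    return sorted(((cosine(q, v), d) for d, v in vecs.items()), reverse=True)

low = {
    "LR1": "the system shall log telemetry packets to nonvolatile storage",
    "LR2": "the display shall show altitude in meters",
}
ranked = trace("telemetry packets shall be logged by the system", low)
```

The analyst then vets the ranked candidate links, which is where the feedback between requirements (links already confirmed for one requirement informing another) can be exploited beyond this baseline.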

  12. Explanation and elaboration of the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines, V.2.0: examples of SQUIRE elements in the healthcare improvement literature.

    PubMed

    Goodman, Daisy; Ogrinc, Greg; Davies, Louise; Baker, G Ross; Barnsteiner, Jane; Foster, Tina C; Gali, Kari; Hilden, Joanne; Horwitz, Leora; Kaplan, Heather C; Leis, Jerome; Matulis, John C; Michie, Susan; Miltner, Rebecca; Neily, Julia; Nelson, William A; Niedner, Matthew; Oliver, Brant; Rutman, Lori; Thomson, Richard; Thor, Johan

    2016-12-01

    Since its publication in 2008, SQUIRE (Standards for Quality Improvement Reporting Excellence) has contributed to the completeness and transparency of reporting of quality improvement work, providing guidance to authors and reviewers of reports on healthcare improvement work. In the interim, enormous growth has occurred in understanding factors that influence the success, and failure, of healthcare improvement efforts. Progress has been particularly strong in three areas: the understanding of the theoretical basis for improvement work; the impact of contextual factors on outcomes; and the development of methodologies for studying improvement work. Consequently, there is now a need to revise the original publication guidelines. To reflect the breadth of knowledge and experience in the field, we solicited input from a wide variety of authors, editors and improvement professionals during the guideline revision process. This Explanation and Elaboration document (E&E) is a companion to the revised SQUIRE guidelines, SQUIRE 2.0. The product of collaboration by an international and interprofessional group of authors, this document provides examples from the published literature, and an explanation of how each reflects the intent of a specific item in SQUIRE. The purpose of the guidelines is to assist authors in writing clearly, precisely and completely about systematic efforts to improve the quality, safety and value of healthcare services. Authors can explore the SQUIRE statement, this E&E and related documents in detail at http://www.squire-statement.org. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  13. Precision of coherence analysis to detect cerebral autoregulation by near-infrared spectroscopy in preterm infants

    NASA Astrophysics Data System (ADS)

    Hahn, Gitte Holst; Christensen, Karl Bang; Leung, Terence S.; Greisen, Gorm

    2010-05-01

    Coherence between spontaneous fluctuations in arterial blood pressure (ABP) and the cerebral near-infrared spectroscopy signal can detect cerebral autoregulation. Because reliable measurement depends on signals with high signal-to-noise ratio, we hypothesized that coherence is more precisely determined when fluctuations in ABP are large rather than small. Therefore, we investigated whether adjusting for variability in ABP (variabilityABP) improves precision. We examined the impact of variabilityABP within the power spectrum in each measurement and between repeated measurements in preterm infants. We also examined total monitoring time required to discriminate among infants with a simulation study. We studied 22 preterm infants (GA<30) yielding 215 10-min measurements. Surprisingly, adjusting for variabilityABP within the power spectrum did not improve the precision. However, adjusting for the variabilityABP among repeated measurements (i.e., weighting measurements with high variabilityABP in favor of those with low) improved the precision. The evidence of drift in individual infants was weak. Minimum monitoring time needed to discriminate among infants was 1.3-3.7 h. Coherence analysis in low frequencies (0.04-0.1 Hz) had higher precision and statistically more power than in very low frequencies (0.003-0.04 Hz). In conclusion, a reliable detection of cerebral autoregulation takes hours and the precision is improved by adjusting for variabilityABP between repeated measurements.

  14. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

    In order to take full advantage of SAR images, it is necessary to obtain high precision geolocation for each image. During geometric correction of images, precise image geolocation is important to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high precision geolocation of each pixel in a digital SAR image. The method builds on the analytical geolocation method (AGM) proposed by X. K. Yuan for solving the range-Doppler (RD) model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high precision orthophoto, results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed and some recommendations are given for improving image location accuracy in future spaceborne SARs.

  15. The SLH framework for modeling quantum input-output networks

    DOE PAGES

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

    2017-09-04

    Many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g., atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.
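The central algebraic rule for cascading two nodes is the series product: S = S₂S₁, L = L₂ + S₂L₁, H = H₁ + H₂ + Im(L₂†S₂L₁). A scalar toy version (complex numbers standing in for the operators purely to illustrate the composition rule; a real calculation would use operator-valued triples) can be checked for associativity:

```python
import cmath

# Scalar stand-in for the SLH series product (single field channel).
# Node parameter values below are made up; |S| = 1 mimics unitary scattering.

def series(G2, G1):
    """Cascade the output field of G1 into G2:
    S = S2*S1,  L = L2 + S2*L1,  H = H1 + H2 + Im(conj(L2)*S2*L1)."""
    S1, L1, H1 = G1
    S2, L2, H2 = G2
    x = L2.conjugate() * S2 * L1
    return (S2 * S1, L2 + S2 * L1, H1 + H2 + ((x - x.conjugate()) / 2j).real)

G1 = (cmath.exp(0.3j), 1.0 + 0.5j, 0.20)
G2 = (cmath.exp(0.5j), 0.4 - 0.2j, 0.10)
G3 = (cmath.exp(-0.2j), 0.8 + 0.0j, 0.05)

left = series(G3, series(G2, G1))    # (G3 ◁ G2) vs G2 ◁ G1 grouping
right = series(series(G3, G2), G1)
```

Associativity of the series product is what makes hierarchical, modular network reduction possible: a sub-network can be collapsed to a single triple before being cascaded further.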

  17. Dynamical Adaptation in Photoreceptors

    PubMed Central

    Clark, Damon A.; Benichou, Raphael; Meister, Markus; Azeredo da Silveira, Rava

    2013-01-01

    Adaptation is at the heart of sensation and nowhere is it more salient than in early visual processing. Light adaptation in photoreceptors is doubly dynamical: it depends upon the temporal structure of the input and it affects the temporal structure of the response. We introduce a non-linear dynamical adaptation model of photoreceptors. It is simple enough that it can be solved exactly and simulated with ease; analytical and numerical approaches combined provide both intuition on the behavior of dynamical adaptation and quantitative results to be compared with data. Yet the model is rich enough to capture intricate phenomenology. First, we show that it reproduces the known phenomenology of light response and short-term adaptation. Second, we present new recordings and demonstrate that the model reproduces cone response with great precision. Third, we derive a number of predictions on the response of photoreceptors to sophisticated stimuli such as periodic inputs, various forms of flickering inputs, and natural inputs. In particular, we demonstrate that photoreceptors undergo rapid adaptation of response gain and time scale, over ∼300 ms, i.e., over the time scale of the response itself, and we confirm this prediction with data. For natural inputs, this fast adaptation can modulate the response gain more than tenfold and is hence physiologically relevant. PMID:24244119

  18. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145

  19. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Large surveys such as BOSS usually assign redshifts to the random samples by randomly drawing values from the measured redshift distribution of the data, which necessarily imprints the fluctuations of the data onto the random samples and weakens the BAO signals when cosmic variance cannot be ignored. We propose populating the random galaxy samples from a smooth function of redshift that fits the data well. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such smoothing should prove valuable for future measurements of galaxy clustering.
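    The proposed construction, fitting a smooth function to the data's n(z) and drawing the random redshifts from that fit rather than from the data themselves, can be sketched as follows (the polynomial fit and inverse-CDF sampler are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock "data" redshifts: a smooth trend plus sample variance.
z_data = rng.normal(0.5, 0.1, 20000)

# Histogram n(z), then fit a smooth low-order polynomial to it.
edges = np.linspace(0.1, 0.9, 41)
counts, _ = np.histogram(z_data, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
coef = np.polyfit(centers, counts, deg=6)
n_smooth = np.clip(np.polyval(coef, centers), 0.0, None)

# Populate the random catalogue by inverse-CDF sampling of the smooth n(z),
# so the randoms carry the selection function but not the data's fluctuations.
cdf = np.cumsum(n_smooth + 1e-12)
cdf /= cdf[-1]
z_random = np.interp(rng.random(100_000), cdf, centers)
```

    Any sufficiently flexible smooth fit (spline, polynomial, parametric n(z)) serves the same purpose; the essential point is that the randoms trace only the survey selection, not the realized large-scale structure.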

  20. Charge collection and non-ionizing radiation tolerance of CMOS pixel sensors using a 0.18 μm CMOS process

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Zhu, Hongbo; Zhang, Liang; Fu, Min

    2016-09-01

    The proposed Circular Electron Positron Collider (CEPC) is aimed primarily at precision measurements of the discovered Higgs boson. Its innermost vertex detector, which will play a critical role in heavy-flavor tagging, must be constructed with fine-pitched silicon pixel sensors with low power consumption and fast readout. The CMOS pixel sensor (CPS), one of the most promising candidate technologies, has already demonstrated excellent performance in several high energy physics experiments and has therefore been chosen for R&D for the CEPC vertex detector. In this paper, we present preliminary studies to improve the ratio of collected signal charge to equivalent input capacitance (Q/C), which is crucial for reducing the analog power consumption. We have performed detailed 3D device simulations and evaluated the potential impacts of diode geometry, epitaxial layer properties and non-ionizing radiation damage. We have proposed a new approach to improve the treatment of the boundary conditions in simulation. Alongside the TCAD simulations, we have designed an exploratory prototype utilizing the TowerJazz 0.18 μm CMOS imaging sensor process and will verify the simulation results with future measurements.
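    The reason Q/C is the figure of merit is elementary: the voltage step seen by the front-end is V = Q/C, so more collected charge or less node capacitance directly buys signal and relaxes analog power requirements. A back-of-envelope check (the electron count and node capacitance below are illustrative values, not the paper's):

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def signal_voltage(n_electrons: float, capacitance_farad: float) -> float:
    """Voltage step V = Q/C at the sensing node for a collected charge Q."""
    return n_electrons * E_CHARGE / capacitance_farad

# ~1000 collected electrons on a 2 fF sensing node give roughly an 80 mV step;
# halving the capacitance doubles the signal for the same collected charge.
v = signal_voltage(1000, 2e-15)
```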

  1. Natural methods for increasing reproductive efficiency in small ruminants.

    PubMed

    Martin, G B; Milton, J T B; Davidson, R H; Banchero Hunzicker, G E; Lindsay, D R; Blache, D

    2004-07-01

    This paper describes three strategies to improve the reproductive performance of small ruminants in ways that lead to "clean, green and ethical" animal production. The first is aimed at control of the timing of reproductive events for which we turn to the socio-sexual inputs of the "male effect" to induce synchronised ovulation in females that would otherwise be anovulatory. The second strategy, "focussed feeding", is based on our knowledge of the responses to nutrition and aims to develop short programs of nutritional supplements that are precisely timed and specifically designed for individual events in the reproductive process, such as gamete production, embryo survival, fetal programming and colostrum production. The third strategy aims to maximise offspring survival by a combination of management, nutrition and genetic selection for behavior (temperament). All of these approaches involve non-pharmacological manipulation of the endogenous control systems of the animals and complement the detailed information from ultrasound that is now becoming available. The use of such clean, green and ethical tools in the management of our animals can be cost-effective, increase productivity and, at the same time, greatly improve the image of meat and milk industries in society and the marketplace.

  2. MusiteDeep: a deep-learning framework for general and kinase-specific phosphorylation site prediction.

    PubMed

    Wang, Duolin; Zeng, Shuai; Xu, Chunhui; Qiu, Wangren; Liang, Yanchun; Joshi, Trupti; Xu, Dong

    2017-12-15

    Computational methods for phosphorylation site prediction play important roles in protein function studies and experimental design. Most existing methods are based on feature extraction, which may result in incomplete or biased features. Deep learning, as a cutting-edge machine learning method, can automatically discover complex representations of phosphorylation patterns from raw sequences, and hence provides a powerful tool for improving phosphorylation site prediction. We present MusiteDeep, the first deep-learning framework for predicting general and kinase-specific phosphorylation sites. MusiteDeep takes raw sequence data as input and uses convolutional neural networks with a novel two-dimensional attention mechanism. It achieves over a 50% relative improvement in the area under the precision-recall curve in general phosphorylation site prediction and obtains competitive results in kinase-specific prediction compared to other well-known tools on the benchmark data. MusiteDeep is provided as an open-source tool available at https://github.com/duolinwang/MusiteDeep. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
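    "Raw sequence data as input" typically means a fixed-length window of residues around each candidate site, one-hot encoded before it enters the convolutional layers. A minimal sketch of that encoding step (the 33-residue window and zero-padding behaviour are assumptions for illustration; see the MusiteDeep repository for the actual preprocessing):

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_window(seq: str, center: int, half: int = 16) -> np.ndarray:
    """One-hot encode a (2*half+1)-residue window centred on `center`.
    Positions outside the sequence (or unknown residues) stay all-zero."""
    window = np.zeros((2 * half + 1, len(AMINO_ACIDS)))
    for row, pos in enumerate(range(center - half, center + half + 1)):
        if 0 <= pos < len(seq) and seq[pos] in AA_INDEX:
            window[row, AA_INDEX[seq[pos]]] = 1.0
    return window

x = one_hot_window("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", center=10)
```

    The resulting (window length x 20) matrix is the kind of input a 2D convolution with an attention mechanism can operate on directly, with no hand-crafted features.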

  3. Laser technology for high precision satellite tracking

    NASA Technical Reports Server (NTRS)

    Plotkin, H. H.

    1974-01-01

    Fixed and mobile laser ranging stations have been developed to track satellites equipped with retro-reflector arrays. These have operated consistently at data rates of once per second with range precision better than 50 cm, using Q-switched ruby lasers with pulse durations of 20 to 40 nanoseconds. Enhancements are being incorporated to improve the precision to 10 cm and to permit ranging to more distant satellites; these include improved reflector array designs, processing and analysis of the received reflection pulses, and the use of sub-nanosecond pulse duration lasers.
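    The stated precisions translate directly into timing requirements through range = c·t/2 for a round-trip time t (simple kinematics, independent of the specific hardware in the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_precision_m(timing_precision_s: float) -> float:
    """One-way range precision implied by a round-trip timing precision."""
    return C * timing_precision_s / 2.0

def timing_precision_s(range_precision_m_: float) -> float:
    """Round-trip timing precision needed for a given range precision."""
    return 2.0 * range_precision_m_ / C

# 50 cm range precision corresponds to ~3.3 ns of timing precision, while the
# 10 cm goal needs ~0.7 ns, which is why sub-nanosecond pulses and
# received-pulse processing appear among the planned improvements.
```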

  4. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

    BACKGROUND Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods offer little user-interface flexibility, are expensive, or risk misalignment of the two data streams. NEW METHOD A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information: a low-frequency binary-counting signal and a high, randomly changing frequency. This enables the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISON WITH EXISTING METHODS Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS While on-line analysis and synchronization using specialized equipment may be ideal in some cases, the method presented here is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this setup makes it applicable to a wide variety of applications that require video recording. PMID:25256648

  5. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    PubMed

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods offer little user-interface flexibility, are expensive, or risk misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information: a low-frequency binary-counting signal and a high, randomly changing frequency. This enables the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be ideal in some cases, the method presented here is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this setup makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.
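    The alignment step comes down to locating the high-entropy audio pulse inside the camcorder's audio track, which a cross-correlation does at single-sample resolution; a minimal synthetic sketch (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 48_000                        # audio sampling rate, Hz
pulse = rng.standard_normal(2000)  # high-entropy reference signal

# Simulate the camcorder track: the pulse embedded at an unknown offset, plus noise.
true_lag = 12_345
track = 0.05 * rng.standard_normal(60_000)
track[true_lag:true_lag + len(pulse)] += pulse

# Recover the offset from the cross-correlation peak; from the recovered
# sample lag a sample-to-frame index can then be constructed.
corr = np.correlate(track, pulse, mode="valid")
lag = int(np.argmax(corr))
```

    Because the audio track is sampled tens of thousands of times per second, far faster than the video frame rate, the recovered lag localizes events well below one frame.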

  6. PRECISE:PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare

    PubMed Central

    Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian

    2015-01-01

    Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop interventions to improve the quality of care. However, the sharing of institutional information may be deterred by privacy concerns, as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols, including homomorphic encryption and Yao’s garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutions. We conducted experiments using the MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework. PMID:26146645

  7. PRECISE:PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare.

    PubMed

    Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian

    2014-10-01

    Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop interventions to improve the quality of care. However, the sharing of institutional information may be deterred by privacy concerns, as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols, including homomorphic encryption and Yao's garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutions. We conducted experiments using the MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework.
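    PRECISE's actual protocols rest on homomorphic encryption and Yao's garbled circuits; the underlying goal, computing pooled statistics without any party revealing its own value, can be illustrated with a much simpler additive secret-sharing sketch (an illustrative stand-in, not the paper's construction):

```python
import secrets

MOD = 2**61 - 1  # arithmetic modulo a large prime, so individual shares leak nothing

def share(value: int, n_parties: int):
    """Split `value` into n additive shares that sum to it modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Three institutions each secret-share a local statistic; the aggregator only
# ever sees shares, yet recovers the exact pooled total.
stats = [120, 75, 230]
all_shares = [share(s, 3) for s in stats]
pooled = sum(sum(col) for col in zip(*all_shares)) % MOD
# pooled equals sum(stats) exactly, while no single share reveals any input
```

    Real deployments like PRECISE need stronger machinery (to rank encrypted values, tolerate collusion, and so on), which is what the homomorphic-encryption and garbled-circuit protocols provide.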

  8. 42 CFR 460.138 - Committees with community input.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.138 Committees with... following: (a) Evaluate data collected pertaining to quality outcome measures. (b) Address the implementation of, and results from, the quality assessment and performance improvement plan. (c) Provide input...

  9. 42 CFR 460.138 - Committees with community input.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.138 Committees with... following: (a) Evaluate data collected pertaining to quality outcome measures. (b) Address the implementation of, and results from, the quality assessment and performance improvement plan. (c) Provide input...

  10. 42 CFR 460.138 - Committees with community input.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.138 Committees with... following: (a) Evaluate data collected pertaining to quality outcome measures. (b) Address the implementation of, and results from, the quality assessment and performance improvement plan. (c) Provide input...

  11. 42 CFR 460.138 - Committees with community input.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.138 Committees with... following: (a) Evaluate data collected pertaining to quality outcome measures. (b) Address the implementation of, and results from, the quality assessment and performance improvement plan. (c) Provide input...

  12. 42 CFR 460.138 - Committees with community input.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... FOR THE ELDERLY (PACE) Quality Assessment and Performance Improvement § 460.138 Committees with... following: (a) Evaluate data collected pertaining to quality outcome measures. (b) Address the implementation of, and results from, the quality assessment and performance improvement plan. (c) Provide input...

  13. Derivation and precision of mean field electrodynamics with mesoscale fluctuations

    NASA Astrophysics Data System (ADS)

    Zhou, Hongzhe; Blackman, Eric G.

    2018-06-01

    Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.
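    For orientation, the equations being generalized are the textbook mean-field induction equation and the two-scale closure of the turbulent electromotive force (standard form; the paper's contribution is the mesoscale correction terms and the precision analysis built on top of this):

```latex
\begin{aligned}
  \frac{\partial \overline{\mathbf{B}}}{\partial t}
    &= \nabla \times \left( \overline{\mathbf{V}} \times \overline{\mathbf{B}}
       + \boldsymbol{\mathcal{E}} \right) + \eta \nabla^{2} \overline{\mathbf{B}},\\[2pt]
  \boldsymbol{\mathcal{E}}
    &\equiv \overline{\mathbf{v} \times \mathbf{b}}
     \simeq \alpha \overline{\mathbf{B}} - \beta \nabla \times \overline{\mathbf{B}},
\end{aligned}
```

    where overbars denote the chosen average, lower-case fields the fluctuations, and α, β the usual turbulent transport coefficients. When the averaging scale is not far below the variation scale of the mean, correction terms in the ratio of the two scales appear, which is the regime the paper addresses.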

  14. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) provide automated testing for quantitative analysis; and d) develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  15. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, three more DORIS satellites were launched, and all DORIS results have improved significantly since then. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  16. Effects of synchronous irradiance monitoring and correction of current-voltage curves on the outdoor performance measurements of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Hishikawa, Yoshihiro; Doi, Takuya; Higa, Michiya; Ohshima, Hironori; Takenouchi, Takakazu; Yamagoe, Kengo

    2017-08-01

    Precise outdoor measurement of the current-voltage (I-V) curves of photovoltaic (PV) modules is desired for many applications, such as low-cost onsite performance measurement, monitoring, and diagnosis. Conventional outdoor measurement technologies suffer from low precision when the solar irradiance is unstable, limiting the opportunity for precise measurement to clear sunny days. The purpose of this study is to investigate an outdoor measurement procedure that improves both the measurement opportunity and the precision. Fast I-V curve measurements within 0.2 s, combined with synchronous measurement of irradiance using a PV module irradiance sensor, very effectively improved the precision. A small standard deviation (σ) of the module’s maximum output power (Pmax), in the range of 0.7-0.9%, is demonstrated on the basis of a 6-month experiment consisting mainly of partly sunny and cloudy days, during which the solar irradiance was unstable. The σ was further improved to 0.3-0.5% by correcting the curves for the small variation of irradiance. This indicates that the procedure of this study enables much more reproducible I-V curve measurements than the conventional procedure under various climatic conditions. Factors that affect the measurement results are discussed with a view to further improving the precision.
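    Correcting the curves for small irradiance variation is, to first order, a scaling of each measured current by the ratio of reference to measured irradiance (a simplified, temperature-free version of the standard IEC 60891-style translation; the paper's exact procedure may differ):

```python
def correct_iv_for_irradiance(currents, voltages, g_measured, g_reference=1000.0):
    """Scale measured currents to the reference irradiance (W/m^2).

    First-order correction only: current is taken as proportional to
    irradiance and voltage is left unchanged. Full procedures such as
    IEC 60891 add temperature and series-resistance terms.
    """
    k = g_reference / g_measured
    return [i * k for i in currents], list(voltages)

# A curve swept at 900 W/m^2, translated to the 1000 W/m^2 reference:
i_corr, v_corr = correct_iv_for_irradiance([8.1, 7.9, 0.0], [0.0, 30.0, 40.0],
                                           g_measured=900.0)
```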

  17. The effect of technical replicate (repeats) on Nix Pro Color Sensor™ measurement precision for meat: A case-study on aged beef colour stability.

    PubMed

    Holman, Benjamin W B; Collins, Damian; Kilgannon, Ashleigh K; Hopkins, David L

    2018-01-01

    The Nix Pro Colour Sensor™ (NIX) can potentially be used to measure meat colour, but procedural guidelines that assure measurement reproducibility and repeatability (precision) must first be established. The number of technical replicates (r) minimises response variation, measurable as the standard error of the predicted mean (SEM), and contributes to improved precision. Consequently, we aimed to explore the effects of r on NIX precision when measuring aged beef colour (colorimetrics; L*, a*, b*, hue and chroma values). Each colorimetric SEM declined with increasing r, indicating improved precision, and followed a diminishing rate of improvement that allowed us to recommend r = 7 for meat colour studies using the NIX. This recommendation was based on practical limitations and a* variability, as additional replicates would be required if other colorimetrics or more advanced levels of precision are necessary. Beef ageing and display period, holding temperature, loin and sampled portion were also found to contribute to colorimetric variation, but were incorporated within our definition of r. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
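    The diminishing-returns pattern behind the r = 7 recommendation falls out of SEM = s/√r (generic statistics; the replicate standard deviation below is an assumed illustrative value, not the study's a* estimate):

```python
import math

def sem(sd: float, r: int) -> float:
    """Standard error of the mean for r technical replicates."""
    return sd / math.sqrt(r)

sd_a_star = 1.0  # assumed replicate SD, illustrative units
gains = {r: sem(sd_a_star, r) for r in (1, 3, 5, 7, 9, 11)}
# SEM falls steeply at first (1.00 -> 0.58 -> 0.45) and then flattens:
# going from r = 7 to r = 11 improves the SEM by only ~0.08.
```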

  18. Sustainable management of agriculture activity on areas with soil vulnerability to compaction trough a developed decision support system (DSS)

    NASA Astrophysics Data System (ADS)

    Moretto, Johnny; Fantinato, Luciano; Rasera, Roberto

    2017-04-01

    One of the main environmental effects of agriculture is its negative impact on areas whose soils are vulnerable to compaction, and on subsurface water affected by the distribution of inputs and treatments. A solution may be represented by "Precision Farming". Precision Farming refers to a management concept focusing on (near-real-time) observation, measurement and response to inter- and intra-variability in crops, fields and animals. Potential benefits include increased crop yields and animal performance, cost and labour reduction and optimisation of process inputs, all of which would increase profitability. At the same time, Precision Farming should increase work safety and reduce the environmental impacts of agriculture and farming practices, thus contributing to the sustainability of agricultural production. The concept has been made possible by the rapid development of ICT-based sensor technologies and procedures, along with dedicated software that, in the case of arable farming, provides the link between spatially distributed variables and appropriate farming practices such as tillage, seeding, fertilisation, herbicide and pesticide application, and harvesting. Much progress has been made in terms of technical solutions, but major steps are still required to introduce this approach into common agricultural practice. There are currently a large number of sensors capable of collecting data for various applications (e.g. vegetation vigour indices, soil moisture, digital elevation models, meteorology). The resulting large volumes of data need to be standardised, processed and integrated using metadata analysis of spatial information to generate useful input for decision-support systems.
    In this context, a user-friendly IT application has been developed for organizing and processing large volumes of data from different types of remote sensing and meteorological sensors, and for integrating these data into farm management support systems. The application makes it possible to implement numerical models that advise the farm manager on the best time to work in the field and/or the best trajectory to follow, via a GPS navigation system, on soils vulnerable to compaction. In addition, it provides "as-applied maps" indicating the exact quantity of inputs and treatments needed in each part of the field. This new model of data management will allow more efficient resource use, contributing to more sustainable agriculture through greater economic benefit for farmers and reduced impacts on soil and subsurface water.

  19. Integration of cortical and pallidal inputs in the basal ganglia-recipient thalamus of singing birds

    PubMed Central

    Goldberg, Jesse H.; Farries, Michael A.

    2012-01-01

    The basal ganglia-recipient thalamus receives inhibitory inputs from the pallidum and excitatory inputs from cortex, but it is unclear how these inputs interact during behavior. We recorded simultaneously from thalamic neurons and their putative synaptically connected pallidal inputs in singing zebra finches. We find, first, that each pallidal spike produces an extremely brief (∼5 ms) pulse of inhibition that completely suppresses thalamic spiking. As a result, thalamic spikes are entrained to pallidal spikes with submillisecond precision. Second, we find that the number of thalamic spikes that discharge within a single pallidal interspike interval (ISI) depends linearly on the duration of that interval but does not depend on pallidal activity prior to the interval. In a detailed biophysical model, our results were not easily explained by the postinhibitory “rebound” mechanism previously observed in anesthetized birds and in brain slices, nor could most of our data be characterized as “gating” of excitatory transmission by inhibitory pallidal input. Instead, we propose a novel “entrainment” mechanism of pallidothalamic transmission that highlights the importance of an excitatory conductance that drives spiking, interacting with brief pulses of pallidal inhibition. Building on our recent finding that cortical inputs can drive syllable-locked rate modulations in thalamic neurons during singing, we report here that excitatory inputs affect thalamic spiking in two ways: by shortening the latency of a thalamic spike after a pallidal spike and by increasing thalamic firing rates within individual pallidal ISIs. We present a unifying biophysical model that can reproduce all known modes of pallidothalamic transmission—rebound, gating, and entrainment—depending on the amount of excitation the thalamic neuron receives. PMID:22673333
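    The entrainment picture (a strong excitatory conductance driving spiking, interrupted by a brief window of complete inhibition at each pallidal spike) can be caricatured with a leaky integrate-and-fire neuron; all parameters below are toy values chosen for illustration, not fits to the recordings:

```python
def thalamic_counts(pallidal_isis_ms, dt=0.05, tau=10.0, drive=2.0,
                    v_thresh=1.0, inh_ms=5.0):
    """Leaky integrate-and-fire neuron under constant excitatory drive.
    Each pallidal spike opens a brief inhibitory window (V clamped to 0);
    returns the number of thalamic spikes fired within each pallidal ISI."""
    counts = []
    for isi in pallidal_isis_ms:
        v, n, t = 0.0, 0, inh_ms          # complete suppression for inh_ms
        while t < isi:
            v += dt * (drive - v) / tau   # Euler step of dV/dt = (I - V)/tau
            if v >= v_thresh:
                n += 1
                v = 0.0                   # spike and reset
            t += dt
        counts.append(n)
    return counts

# Spike count grows roughly linearly with the ISI duration and carries no
# memory of activity before the interval, as in the recordings.
counts = thalamic_counts([20, 40, 80])
```

    Because the inhibitory pulse wipes the membrane state at the start of each ISI, spiking within an interval depends only on that interval's duration, reproducing the history independence reported above.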

  20. Novel Models of Visual Topographic Map Alignment in the Superior Colliculus

    PubMed Central

    El-Ghazawi, Tarek A.; Triplett, Jason W.

    2016-01-01

    The establishment of precise neuronal connectivity during development is critical for sensing the external environment and informing appropriate behavioral responses. In the visual system, many connections are organized topographically, which preserves the spatial order of the visual scene. The superior colliculus (SC) is a midbrain nucleus that integrates visual inputs from the retina and primary visual cortex (V1) to regulate goal-directed eye movements. In the SC, topographically organized inputs from the retina and V1 must be aligned to facilitate integration. Previously, we showed that retinal input instructs the alignment of V1 inputs in the SC in a manner dependent on spontaneous neuronal activity; however, the mechanism of activity-dependent instruction remains unclear. To begin to address this gap, we developed two novel computational models of visual map alignment in the SC that incorporate distinct activity-dependent components. First, a Correlational Model assumes that V1 inputs achieve alignment with established retinal inputs through simple correlative firing mechanisms. A second Integrational Model assumes that V1 inputs contribute to the firing of SC neurons during alignment. Both models accurately replicate in vivo findings in wild type, transgenic and combination mutant mouse models, suggesting either activity-dependent mechanism is plausible. In silico experiments reveal distinct behaviors in response to weakening retinal drive, providing insight into the nature of the system governing map alignment depending on the activity-dependent strategy utilized. Overall, we describe novel computational frameworks of visual map alignment that accurately model many aspects of the in vivo process and propose experiments to test them. PMID:28027309

  1. The effect of welding parameters on high-strength SMAW all-weld-metal. Part 1: AWS E11018-M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vercesi, J.; Surian, E.

    Three AWS A5.5-81 all-weld-metal test assemblies were welded with an E11018-M electrode from a standard production batch, varying the welding parameters in such a way as to obtain three energy inputs: high heat input and high interpass temperature (hot), medium heat input and medium interpass temperature (medium), and low heat input and low interpass temperature (cold). Mechanical properties and metallographic studies were performed in the as-welded condition, and it was found that only the tensile properties obtained with the test specimen made with the intermediate energy input satisfied the AWS E11018-M requirements. With the cold specimen, the maximal yield strength was exceeded, and with the hot one, neither the minimum yield nor the minimum tensile strength was achieved. The elongation and the impact properties were high enough to fulfill the minimal requirements, but the best Charpy-V notch values were obtained with the intermediate energy input. Metallographic studies showed that as the energy input increased, the percentage of the columnar zones decreased, the grain size became larger, and in the as-welded zone there was a slight increase in both acicular ferrite and ferrite with second phase, with a consequent decrease of primary ferrite. These results showed that this type of alloy is very sensitive to the welding parameters and that very precise instructions must be given to secure the desired tensile properties in the all-weld-metal test specimens and under actual working conditions.

  2. Fold-change detection and scalar symmetry of sensory input fields.

    PubMed

    Shoval, Oren; Goentoro, Lea; Hart, Yuval; Mayo, Avi; Sontag, Eduardo; Alon, Uri

    2010-09-07

    Recent studies suggest that certain cellular sensory systems display fold-change detection (FCD): a response whose entire shape, including amplitude and duration, depends only on fold changes in input and not on absolute levels. Thus, a step change in input from, for example, level 1 to 2 gives precisely the same dynamical output as a step from level 2 to 4, because the steps have the same fold change. We ask what the benefit of FCD is and show that FCD is necessary and sufficient for sensory search to be independent of multiplying the input field by a scalar. Thus, the FCD search pattern depends only on the spatial profile of the input and not on its amplitude. Such scalar symmetry occurs in a wide range of sensory inputs, such as source strength multiplying diffusing/convecting chemical fields sensed in chemotaxis, ambient light multiplying the contrast field in vision, and protein concentrations multiplying the output in cellular signaling systems. Furthermore, we show that FCD entails two features found across sensory systems, exact adaptation and Weber's law, but that these two features are not sufficient for FCD. Finally, we present a wide class of mechanisms that have FCD, including certain nonlinear feedback and feed-forward loops. We find that bacterial chemotaxis displays feedback within the present class and hence, is expected to show FCD. This can explain experiments in which chemotaxis searches are insensitive to attractant source levels. This study, thus, suggests a connection between properties of biological sensory systems and scalar symmetry stemming from physical properties of their input fields.
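    The scaling argument above can be checked numerically. The sketch below is an illustrative Euler simulation (not the authors' model) of a standard incoherent feed-forward loop known to exhibit FCD: the internal variable x adapts linearly to the input u, and the output y responds to the ratio u/x, so the response depends only on fold changes in u.

    ```python
    import numpy as np

    def simulate_step(u0, u1, dt=1e-3, t_end=10.0):
        """Euler-integrate an incoherent feed-forward loop with FCD:
        x adapts linearly to the input (x' = u - x), and the output y
        responds to the ratio u/x (y' = u/x - y). Scaling u by a constant
        scales x identically, so y sees only fold changes in u."""
        n = int(t_end / dt)
        x, y = float(u0), 1.0      # start at the pre-step steady state
        ys = np.empty(n)
        for i in range(n):
            x += dt * (u1 - x)
            y += dt * (u1 / x - y)
            ys[i] = y
        return ys

    resp_1_to_2 = simulate_step(1.0, 2.0)   # two-fold step from level 1
    resp_2_to_4 = simulate_step(2.0, 4.0)   # same fold change from level 2
    print(np.max(np.abs(resp_1_to_2 - resp_2_to_4)))  # ~0: the two responses coincide
    ```

    Both steps have fold change 2, so the transient pulse in y (rise and adaptation back to baseline) is identical, illustrating the step 1→2 versus 2→4 example in the abstract.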

  3. Noise Suppression and Surplus Synchrony by Coincidence Detection

    PubMed Central

    Schultze-Kraft, Matthias; Diesmann, Markus; Grün, Sonja; Helias, Moritz

    2013-01-01

    The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow modeling of spike synchrony, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: In the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity, but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises. As the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks. PMID:23592953

  4. Plasma arc welding repair of space flight hardware

    NASA Technical Reports Server (NTRS)

    Hoffman, David S.

    1993-01-01

    A technique to weld repair the main combustion chamber of Space Shuttle Main Engines has been developed. The technique uses the plasma arc welding process and active cooling to seal cracks and pinholes in the hot-gas wall of the main combustion chamber liner. The liner hot-gas wall is made of NARloy-Z, a copper alloy previously thought to be unweldable using conventional arc welding processes. The process must provide extensive heat input to melt the high conductivity NARloy-Z while protecting the delicate structure of the surrounding material. The higher energy density of the plasma arc process provides the necessary heat input while active water cooling protects the surrounding structure. The welding process is precisely controlled using a computerized robotic welding system.

  5. Multivariable control of a forward swept wing aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Quinn, W. W.

    1986-01-01

    The impact of independent canard and flaperon control of the longitudinal axis of a generic forward swept wing aircraft is examined. The Linear Quadratic Gaussian (LQG)/Loop Transfer Recovery (LTR) method is used to design three compensators: two single-input-single-output (SISO) systems, one with angle of attack as output and canard as control, the other with pitch attitude as output and canard as control, and a two-input-two-output system with both canard and flaperon controlling both the pitch attitude and angle of attack. The performances of the three systems are compared showing the addition of flaperon control allows the aircraft to perform in the precision control modes with very little loss of command following accuracy.

  6. Thalamic inhibition: diverse sources, diverse scales

    PubMed Central

    Halassa, Michael M.; Acsády, László

    2016-01-01

    The thalamus is the major source of cortical inputs shaping sensation, action and cognition. Thalamic circuits are targeted by two major inhibitory systems: the thalamic reticular nucleus (TRN) and extra-thalamic inhibitory (ETI) inputs. A unifying framework of how these systems operate is currently lacking. Here, we propose that TRN circuits are specialized to exert thalamic control at different spatiotemporal scales. Local inhibition of thalamic spike rates prevails during attentional selection whereas global inhibition more likely during sleep. In contrast, the ETI (arising from basal ganglia, zona incerta, anterior pretectum and pontine reticular formation) provides temporally-precise and focal inhibition, impacting spike timing. Together, these inhibitory systems allow graded control of thalamic output, enabling thalamocortical operations to dynamically match ongoing behavioral demands. PMID:27589879

  7. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

    Medical imaging is a subfield in image processing that deals with medical images. It is very crucial in visualizing the body parts in non-invasive way by using appropriate image processing techniques. Generally, image processing is used to enhance visual appearance of images for further interpretation. However, the pixel values of an image may not be precise as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varied time are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are implemented to the input images and the results are compared between these two approaches.

  8. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load sharing dynamometer is strongly non-linear at different loading points of a plane, so precise calibration of this non-linear relationship is essential for accurate force measurement. In this paper, firstly, calibration experiments at different loading points in a plane are performed on a piezoelectric load sharing dynamometer. Then the load sharing testing system is calibrated using both the BP algorithm and the ELM (Extreme Learning Machine) algorithm. Finally, the results show that the ELM calibration is better than BP at capturing the non-linear relationship between the input and output of the load sharing dynamometer at different loading points of a plane, which verifies that the ELM algorithm is feasible for solving this non-linear force measurement problem.
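    For readers unfamiliar with ELM, its appeal over BP here is that training reduces to a single linear least-squares solve over random, fixed hidden features. A minimal sketch on synthetic data (the dynamometer data itself is not available, so the non-linear target below is purely illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative stand-in for the calibration data (the dynamometer data
    # itself is not public): 2-D loading-point inputs, non-linear force target.
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

    # Extreme Learning Machine: input weights and biases are random and fixed;
    # only the output weights are fitted, via one linear least-squares solve.
    n_hidden = 50
    W = rng.normal(size=(2, n_hidden))            # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one-shot fit of output weights

    rmse = np.sqrt(np.mean((y - H @ beta) ** 2))
    print(f"training RMSE: {rmse:.4f}")
    ```

    Because no iterative gradient descent is involved, ELM calibration is typically much faster than BP training, at the cost of needing enough random hidden units to cover the non-linearity.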

  9. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4.

    PubMed

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-13

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores directly the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components; primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects, [Formula: see text] [Formula: see text], [Formula: see text] [Formula: see text], and [Formula: see text] [Formula: see text] for squared fields, and [Formula: see text] [Formula: see text] for an asymmetric rectangular field. Good agreement in terms of [Formula: see text] formalism, for 3%/3 mm and 2%/3 mm criteria, for each evaluated radiation field and photon beam was obtained within a computation time of 60 h on a single WorkStation for a 3 mm voxel matrix. Analyzing the VSM's precision in high dose gradient regions, using the distance to agreement concept (DTA), showed also satisfactory results. 
In all investigated cases, the mean DTA was less than 1 mm in build-up and penumbra regions. In terms of calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential Geant4 when running the same simulation code for both. The developed VSM for the widely used 6 MV/10 MV beams is a general concept that is easy to adapt in order to reconstruct comparable beam qualities for various linac configurations, facilitating its integration for MC treatment planning purposes.
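    The gamma formalism referenced above combines a dose-difference tolerance with a distance-to-agreement tolerance into one pass/fail metric per point. A minimal 1-D sketch on synthetic profiles (global 3%/3 mm criteria; this is an illustration of the standard gamma index, not the paper's 3-D implementation):

    ```python
    import numpy as np

    def gamma_1d(x, dose_ref, dose_eval, dose_tol=0.03, dist_tol=3.0):
        """Per-point 1-D gamma index: for each reference point, the minimum
        over evaluated points of the combined dose-difference / distance
        metric (global dose normalization, 3%/3 mm by default)."""
        dd = (dose_eval[None, :] - dose_ref[:, None]) / (dose_tol * dose_ref.max())
        dx = (x[None, :] - x[:, None]) / dist_tol
        return np.sqrt(dd ** 2 + dx ** 2).min(axis=1)

    x = np.linspace(0.0, 100.0, 201)           # positions in mm (0.5 mm grid)
    ref = np.exp(-(((x - 50.0) / 20.0) ** 2))  # synthetic reference profile
    ev = np.exp(-(((x - 50.5) / 20.0) ** 2))   # evaluated profile, shifted 0.5 mm
    gamma = gamma_1d(x, ref, ev)
    print(f"pass rate (gamma <= 1): {np.mean(gamma <= 1):.1%}")  # 100.0% here
    ```

    A pure 0.5 mm shift passes easily under a 3 mm distance tolerance; tightening `dose_tol` and `dist_tol` (e.g. to the paper's 2%/3 mm criterion) makes the comparison stricter.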

  10. [Application of target restoration space quantity and quantitative relation in precise esthetic prosthodontics].

    PubMed

    Haiyang, Yu; Tian, Luo

    2016-06-01

    Target restoration space (TRS) is the most precise space required for designing an optimal prosthesis. TRS consists of an internal or external tooth space to confirm the esthetics and function of the final restoration. Therefore, assisted with quantitative analysis transfer, TRS quantitative analysis is a significant improvement for minimum tooth preparation. This article presents TRS quantity-related measurement, analysis, transfer, and the internal relevance of the three TRS classifications. Results reveal the close bond between precision and minimally invasive treatment. This study can be used to improve the comprehension and execution of precise esthetic prosthodontics.

  11. Computer modelling of cyclic deformation of high-temperature materials. Technical progress report, 16 November 1992-15 February 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duesbery, M.S.

    1993-02-26

    This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism.

  12. Unintended and in situ amorphisation of pharmaceuticals.

    PubMed

    Priemel, P A; Grohganz, H; Rades, T

    2016-05-01

    Amorphisation of poorly water-soluble drugs is one approach that can be applied to improve their solubility and thus their bioavailability. Amorphisation is a process that usually requires deliberate external energy input. However, amorphisation can happen both unintentionally, as in process-induced amorphisation during manufacturing, or in situ during dissolution, vaporisation, or lipolysis. The systems in which unintended and in situ amorphisation has been observed normally contain a drug and a carrier. Common carriers include polymers and mesoporous silica particles. However, the precise mechanisms by which in situ amorphisation occurs are often not fully understood. In situ amorphisation can be exploited and performed before administration of the drug or possibly even within the gastrointestinal tract, as can be inferred from in situ amorphisation observed during in vitro lipolysis. The use of in situ amorphisation can thus confer the advantages of the amorphous form, such as higher apparent solubility and faster dissolution rate, without the disadvantage of its physical instability. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. International Standards for Genomes, Transcriptomes, and Metagenomes

    PubMed Central

    Mason, Christopher E.; Afshinnekoo, Ebrahim; Tighe, Scott; Wu, Shixiu; Levy, Shawn

    2017-01-01

    Challenges and biases in preparing, characterizing, and sequencing DNA and RNA can have significant impacts on research in genomics across all kingdoms of life, including experiments in single-cells, RNA profiling, and metagenomics (across multiple genomes). Technical artifacts and contamination can arise at each point of sample manipulation, extraction, sequencing, and analysis. Thus, the measurement and benchmarking of these potential sources of error are of paramount importance as next-generation sequencing (NGS) projects become more global and ubiquitous. Fortunately, a variety of methods, standards, and technologies have recently emerged that improve measurements in genomics and sequencing, from the initial input material to the computational pipelines that process and annotate the data. Here we review current standards and their applications in genomics, including whole genomes, transcriptomes, mixed genomic samples (metagenomes), and the modified bases within each (epigenomes and epitranscriptomes). These standards, tools, and metrics are critical for quantifying the accuracy of NGS methods, which will be essential for robust approaches in clinical genomics and precision medicine. PMID:28337071

  14. High-Precision Differential Predictions for Top-Quark Pairs at the LHC

    NASA Astrophysics Data System (ADS)

    Czakon, Michal; Heymes, David; Mitov, Alexander

    2016-02-01

    We present the first complete next-to-next-to-leading order (NNLO) QCD predictions for differential distributions in the top-quark pair production process at the LHC. Our results are derived from a fully differential partonic Monte Carlo calculation with stable top quarks which involves no approximations beyond the fixed-order truncation of the perturbation series. The NNLO corrections improve the agreement between existing LHC measurements [V. Khachatryan et al. (CMS Collaboration), Eur. Phys. J. C 75, 542 (2015)] and standard model predictions for the top-quark transverse momentum distribution, thus helping alleviate one long-standing discrepancy. The shape of the top-quark pair invariant mass distribution turns out to be stable with respect to radiative corrections beyond NLO which increases the value of this observable as a place to search for physics beyond the standard model. The results presented here provide essential input for parton distribution function fits, implementation of higher-order effects in Monte Carlo generators, as well as top-quark mass and strong coupling determination.

  15. High-Precision Differential Predictions for Top-Quark Pairs at the LHC.

    PubMed

    Czakon, Michal; Heymes, David; Mitov, Alexander

    2016-02-26

    We present the first complete next-to-next-to-leading order (NNLO) QCD predictions for differential distributions in the top-quark pair production process at the LHC. Our results are derived from a fully differential partonic Monte Carlo calculation with stable top quarks which involves no approximations beyond the fixed-order truncation of the perturbation series. The NNLO corrections improve the agreement between existing LHC measurements [V. Khachatryan et al. (CMS Collaboration), Eur. Phys. J. C 75, 542 (2015)] and standard model predictions for the top-quark transverse momentum distribution, thus helping alleviate one long-standing discrepancy. The shape of the top-quark pair invariant mass distribution turns out to be stable with respect to radiative corrections beyond NLO which increases the value of this observable as a place to search for physics beyond the standard model. The results presented here provide essential input for parton distribution function fits, implementation of higher-order effects in Monte Carlo generators, as well as top-quark mass and strong coupling determination.

  16. Development of the Software for 30 inch Telescope Control System at KHAO

    NASA Astrophysics Data System (ADS)

    Mun, B.-S.; Kim, S.-J.; Jang, M.; Min, S.-W.; Seol, K.-H.; Moon, K.-S.

    2006-12-01

    Even though the 30 inch optical telescope at Kyung Hee Astronomy Observatory has been used to produce a series of scientific achievements since its first light in 1992, numerous difficulties in the operation of the telescope have hindered the precise observations needed for further research. The currently used PC-TCS (Personal Computer based Telescope Control System) software, built on the outdated ISA-bus architecture, lacks a user-friendly interface and cannot be scaled. In addition, accumulated errors generated by the discordance between the input and output signals of the motion controller called for a new control system. We have therefore improved the telescope control system by updating the software and modifying mechanical parts. We applied a new BLDC (brushless DC) servo motor system to the mechanical parts of the telescope and developed control software using Visual Basic 6.0. As a result, we achieved high accuracy in the control of the telescope and a user-friendly GUI (Graphic User Interface).

  17. Seismic signal auto-detecting from different features by using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Zhou, Y.; Yue, H.; Zhou, S.

    2017-12-01

    We try Convolutional Neural Networks to detect several features of seismic data and compare their efficiency. The features include whether a signal is a seismic signal or noise, and the arrival times of the P and S phases; each feature corresponds to one Convolutional Neural Network. We first use the traditional STA/LTA method to recognize some events and then use template matching to find more events as a training set for the neural network. To make the training set more varied, we add noise to the seismic data and generate synthetic seismic data and noise. The 3-component raw signal and a time-frequency analysis are used as the input data for our neural network. Our training is performed on GPUs to achieve efficient convergence. Our method improved the precision in comparison with STA/LTA and template matching. We will move to recurrent neural networks to see whether this kind of network is better at detecting the P and S phases.
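    As context for the baseline above, the classic STA/LTA trigger compares average power in a short trailing window to that in a long trailing window. A minimal sketch on a synthetic trace (illustrative window lengths and threshold, not the study's settings):

    ```python
    import numpy as np

    def sta_lta(signal, n_sta, n_lta):
        """Classic short-term-average / long-term-average detector: the ratio
        of mean power in a short trailing window to that in a long trailing
        window rises sharply when an impulsive arrival enters the short window."""
        power = np.asarray(signal, dtype=float) ** 2
        csum = np.concatenate(([0.0], np.cumsum(power)))
        sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # windows ending at each sample
        lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
        return sta[n_lta - n_sta:] / (lta + 1e-12)     # aligned on the window end

    # Synthetic trace: unit-variance noise with a strong burst starting at 3000
    rng = np.random.default_rng(1)
    trace = rng.normal(0.0, 1.0, 5000)
    trace[3000:3200] += rng.normal(0.0, 8.0, 200)

    n_sta, n_lta, thresh = 50, 500, 5.0
    ratio = sta_lta(trace, n_sta, n_lta)
    onset = np.argmax(ratio > thresh) + n_lta - 1  # ratio[i] ends at sample i + n_lta - 1
    print(onset)  # close to the true onset at sample 3000
    ```

    The trigger fires a few samples after the true onset, once enough burst energy has entered the short window; lowering the threshold picks earlier but risks false triggers on noise.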

  18. Designed cell consortia as fragrance-programmable analog-to-digital converters.

    PubMed

    Müller, Marius; Ausländer, Simon; Spinnler, Andrea; Ausländer, David; Sikorski, Julian; Folcher, Marc; Fussenegger, Martin

    2017-03-01

    Synthetic biology advances the rational engineering of mammalian cells to achieve cell-based therapy goals. Synthetic gene networks have nearly reached the complexity of digital electronic circuits and enable single cells to perform programmable arithmetic calculations or to provide dynamic remote control of transgenes through electromagnetic waves. We designed a synthetic multilayered gaseous-fragrance-programmable analog-to-digital converter (ADC) allowing for remote control of digital gene expression with 2-bit AND-, OR- and NOR-gate logic in synchronized cell consortia. The ADC consists of multiple sampling-and-quantization modules sensing analog gaseous fragrance inputs; a gas-to-liquid transducer converting fragrance intensity into diffusible cell-to-cell signaling compounds; a digitization unit with a genetic amplifier circuit to improve the signal-to-noise ratio; and recombinase-based digital expression switches enabling 2-bit processing of logic gates. Synthetic ADCs that can remotely control cellular activities with digital precision may enable the development of novel biosensors and may provide bioelectronic interfaces synchronizing analog metabolic pathways with digital electronics.

  19. Multi-Target Angle Tracking Algorithm for Bistatic MIMO Radar Based on the Elements of the Covariance Matrix

    PubMed Central

    Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo

    2018-01-01

    In this paper, we consider the problem of tracking the direction of arrivals (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision tracking algorithm for target angle is proposed. First, the linear relationship between the covariance matrix difference and the angle difference of the adjacent moment was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements in the covariance matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance matrix element. Finally, the least square method was used to estimate the DOD and DOA. The algorithm realized the automatic correlation of the angle and provided better performance when compared with the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm. The algorithm provides the technical support for the practical application of MIMO radar. PMID:29518957

  20. Computational Re-design of Synthetic Genetic Oscillators for Independent Amplitude and Frequency Modulation.

    PubMed

    Tomazou, Marios; Barahona, Mauricio; Polizzi, Karen M; Stan, Guy-Bart

    2018-04-25

    To perform well in biotechnology applications, synthetic genetic oscillators must be engineered to allow independent modulation of amplitude and period. This need is currently unmet. Here, we demonstrate computationally how two classic genetic oscillators, the dual-feedback oscillator and the repressilator, can be re-designed to provide independent control of amplitude and period and improve tunability; that is, a broad dynamic range of periods and amplitudes accessible through the input "dials." Our approach decouples frequency and amplitude modulation by incorporating an orthogonal "sink module" where the key molecular species are channeled for enzymatic degradation. This sink module maintains fast oscillation cycles while alleviating the translational coupling between the oscillator's transcription factors and output. We characterize the behavior of our re-designed oscillators over a broad range of physiologically reasonable parameters, explain why this facilitates broader function and control, and provide general design principles for building synthetic genetic oscillators that are more precisely controllable. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
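    As background, the repressilator mentioned above is a three-gene ring in which each protein represses the next. A protein-only caricature shows the basic oscillation (illustrative parameters and equations; this is the textbook ring, not the authors' re-designed circuit with the sink module):

    ```python
    import numpy as np

    # Protein-only repressilator caricature: three species in a ring, each
    # repressing the next via a Hill function. Illustrative parameters only;
    # steep repression (n = 4) destabilizes the symmetric fixed point.
    beta, n = 20.0, 4
    dt, steps = 0.01, 20000
    x, y, z = 1.0, 1.5, 2.0            # asymmetric start, off the fixed point
    trace = np.empty(steps)
    for i in range(steps):
        dx = beta / (1.0 + z ** n) - x
        dy = beta / (1.0 + x ** n) - y
        dz = beta / (1.0 + y ** n) - z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace[i] = x

    # Sustained oscillation: the second half of the trace still swings widely
    late = trace[steps // 2:]
    print(f"late-time amplitude: {late.max() - late.min():.2f}")
    ```

    In this bare ring, changing the production rate `beta` or the degradation timescale shifts amplitude and period together; the paper's point is precisely that extra structure (the sink module) is needed to tune them independently.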

  1. Measurement of radon concentration in super-Kamiokande's buffer gas

    NASA Astrophysics Data System (ADS)

    Nakano, Y.; Sekiya, H.; Tasaka, S.; Takeuchi, Y.; Wendell, R. A.; Matsubara, M.; Nakahata, M.

    2017-09-01

    To precisely measure radon concentrations in purified air supplied to the Super-Kamiokande detector as a buffer gas, we have developed a highly sensitive radon detector with an intrinsic background as low as 0.33 ± 0.07 mBq/m3. In this article, we discuss the construction and calibration of this detector as well as results of its application to the measurement and monitoring of the buffer gas layer above Super-Kamiokande. In March 2013, the chilled activated charcoal system used to remove radon in the input buffer gas was upgraded. After this improvement, the radon concentration of the supply gas dropped dramatically, down to 0.08 ± 0.07 mBq/m3. Additionally, the Rn concentration of the in-situ buffer gas has been measured to be 28.8 ± 1.7 mBq/m3 using the new radon detector. Based on these measurements we have determined that the dominant source of Rn in the buffer gas is contamination from the Super-Kamiokande tank itself.

  2. Deep Space Network-Wide Portal Development: Planning Service Pilot Project

    NASA Technical Reports Server (NTRS)

    Doneva, Silviya

    2011-01-01

    The Deep Space Network (DSN) is an international network of antennas that supports interplanetary spacecraft missions and radio and radar astronomy observations for the exploration of the solar system and the universe. DSN provides the vital two-way communications link that guides and controls planetary explorers, and brings back the images and new scientific information they collect. In an attempt to streamline operations and improve the overall services provided by the Deep Space Network, a DSN-wide portal is under development. The project is one step in a larger effort to centralize the data collected from current missions, including user input parameters for spacecraft to be tracked. This information will be placed into a principal repository where all operations related to the DSN are stored. Furthermore, providing statistical characterization of data volumes will help identify technically feasible tracking opportunities and enable more precise mission planning by providing upfront scheduling proposals. Business intelligence tools are to be incorporated in the output to deliver data visualization.

  3. Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio; hide

    2016-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  4. Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.

    2014-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  5. Setting and changing feature priorities in visual short-term memory.

    PubMed

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.

  6. Using Precision in STEM Language: A Qualitative Look

    ERIC Educational Resources Information Center

    Capraro, Mary M.; Bicer, Ali; Grant, Melva R.; Lincoln, Yvonna S.

    2017-01-01

    Teachers need to develop a variety of pedagogical strategies that can encourage precise and accurate communication--an extremely important 21st century skill. Precision with STEM oral language is essential. Emphasizing oral communication with precise language in combination with increased spatial skills with modeling can improve the chances of…

  7. Data-based fault-tolerant control of high-speed trains with traction/braking notch nonlinearities and actuator failures.

    PubMed

    Song, Qi; Song, Yong-Duan

    2011-12-01

    This paper investigates the position and velocity tracking control problem of high-speed trains with multiple vehicles connected through couplers. A dynamic model reflecting nonlinear and elastic impacts between adjacent vehicles as well as traction/braking nonlinearities and actuation faults is derived. Neuroadaptive fault-tolerant control algorithms are developed to account for various factors such as input nonlinearities, actuator failures, and uncertain impacts of in-train forces in the system simultaneously. The resultant control scheme is essentially independent of system model and is primarily data-driven because with the appropriate input-output data, the proposed control algorithms are capable of automatically generating the intermediate control parameters, neuro-weights, and the compensation signals, literally producing the traction/braking force based upon input and response data only--the whole process does not require precise information on system model or system parameter, nor human intervention. The effectiveness of the proposed approach is also confirmed through numerical simulations.

  8. From Drought to Flood: An Analysis of the Water Balance of the Tuolumne River Basin During Extreme Conditions (2015 - 2017)

    NASA Astrophysics Data System (ADS)

    Hedrick, A. R.; Marks, D. G.; Havens, S.; Robertson, M.; Johnson, M.; Sandusky, M.; Bormann, K. J.; Painter, T. H.

    2017-12-01

    Closing the water balance of a snow-dominated mountain basin has long been a focal point of the hydrologic sciences. This study attempts to more precisely quantify the solid precipitation inputs to a basin using the iSnobal energy balance snowmelt model and assimilated snow depth information from the Airborne Snow Observatory (ASO). Throughout the ablation seasons of three highly dissimilar consecutive water years (2015 - 2017), the ASO captured high resolution snow depth snapshots over the Tuolumne River Basin in California's Central Sierra Nevada. These measurements were used to periodically update the snow depth state variable of iSnobal, thereby nudging the estimates of water storage (snow water equivalent, or SWE) and melt (surface water input, or SWI) toward a more accurate solution. Once precipitation inputs and streamflow outputs are better constrained, the additional loss terms of the water mass balance equation (i.e. groundwater recharge and evapotranspiration) can be estimated with less uncertainty.
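    Once the other terms are constrained, the residual estimation described above reduces to simple mass-balance arithmetic; the seasonal totals below are invented purely for illustration.

```python
# Basin mass balance with better-constrained inputs (all seasonal totals are
# invented, in mm of water over the basin):
#   precipitation = streamflow + change in snow storage + residual losses
precip = 1200.0       # solid + liquid precipitation input
streamflow = 700.0    # measured discharge out of the basin
delta_swe = 150.0     # net change in snow water equivalent storage

# Residual loss term: groundwater recharge + evapotranspiration.
residual_losses = precip - streamflow - delta_swe
```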

  9. Synaptic integration in dendrites: exceptional need for speed

    PubMed Central

    Golding, Nace L; Oertel, Donata

    2012-01-01

    Some neurons in the mammalian auditory system are able to detect and report the coincident firing of inputs with remarkable temporal precision. A strong, low-voltage-activated potassium conductance (gKL) at the cell body and dendrites gives these neurons sensitivity to the rate of depolarization by EPSPs, allowing neurons to assess the coincidence of the rising slopes of unitary EPSPs. Two groups of neurons in the brain stem, octopus cells in the posteroventral cochlear nucleus and principal cells of the medial superior olive (MSO), extract acoustic information by assessing coincident firing of their inputs over a submillisecond timescale and convey that information at rates of up to 1000 spikes s⁻¹. Octopus cells detect the coincident activation of groups of auditory nerve fibres by broadband transient sounds, compensating for the travelling wave delay by dendritic filtering, while MSO neurons detect coincident activation of similarly tuned neurons from each of the two ears through separate dendritic tufts. Each makes use of filtering that is introduced by the spatial distribution of inputs on dendrites. PMID:22930273


  10. Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.

    PubMed

    Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz

    2014-04-21

    We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our innovative data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period) but precise phase displacement between two frames is not required. Upon frames subtraction the input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on the empirical mode decomposition for the object spatial frequency selection (noise reduction and bias term removal). The second stage consists in calculating high contrast image using the two-dimensional spiral Hilbert transform. Our algorithm effectiveness is compared with the results calculated for the same input data using structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with SIM and HiLo techniques.
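    The two-shot demodulation chain can be illustrated in one dimension with synthetic data (a scalar FFT-based Hilbert envelope stands in for the paper's two-dimensional spiral Hilbert transform, the empirical-mode-decomposition stage is omitted, and all signal parameters are invented):

```python
import numpy as np

def analytic_envelope(x):
    # Amplitude envelope via the FFT-based analytic signal (1-D Hilbert trick).
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

# Synthetic 1-D scene: an in-focus object profile plus an out-of-focus bias.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
obj = 1.0 + 0.5 * np.exp(-((t - 0.5) ** 2) / 0.01)
bias = 2.0
grid = np.cos(2 * np.pi * 32 * t)

frame1 = bias + obj * (1 + grid)   # grid illumination
frame2 = bias + obj * (1 - grid)   # same grid shifted by pi

diff = frame1 - frame2             # = 2 * obj * grid: bias cancels, modulation doubles
sectioned = analytic_envelope(diff) / 2.0   # demodulated, background-rejected profile
```

    The subtraction removes the bias term without requiring a precise π shift between frames to be known in advance, and the envelope recovers the in-focus profile from the doubled grid modulation.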

  11. Direct Midbrain Dopamine Input to the Suprachiasmatic Nucleus Accelerates Circadian Entrainment.

    PubMed

    Grippo, Ryan M; Purohit, Aarti M; Zhang, Qi; Zweifel, Larry S; Güler, Ali D

    2017-08-21

    Dopamine (DA) neurotransmission controls behaviors important for survival, including voluntary movement, reward processing, and detection of salient events, such as food or mate availability. Dopaminergic tone also influences circadian physiology and behavior. Although the evolutionary significance of this input is appreciated, its precise neurophysiological architecture remains unknown. Here, we identify a novel, direct connection between the DA neurons of the ventral tegmental area (VTA) and the suprachiasmatic nucleus (SCN). We demonstrate that D1 dopamine receptor (Drd1) signaling within the SCN is necessary for properly timed resynchronization of activity rhythms to phase-shifted light:dark cycles and that elevation of DA tone through selective activation of VTA DA neurons accelerates photoentrainment. Our findings demonstrate a previously unappreciated role for direct DA input to the master circadian clock and highlight the importance of an evolutionarily significant relationship between the circadian system and the neuromodulatory circuits that govern motivational behaviors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Determining Complementary Properties with Quantum Clones.

    PubMed

    Thekkadath, G S; Saaltink, R Y; Giner, L; Lundeen, J S

    2017-08-04

    In a classical world, simultaneous measurements of complementary properties (e.g., position and momentum) give a system's state. In quantum mechanics, measurement-induced disturbance is largest for complementary properties and, hence, limits the precision with which such properties can be determined simultaneously. It is tempting to try to sidestep this disturbance by copying the system and measuring each complementary property on a separate copy. However, perfect copying is physically impossible in quantum mechanics. Here, we investigate using the closest quantum analog to this copying strategy, optimal cloning. The coherent portion of the generated clones' state corresponds to "twins" of the input system. Like perfect copies, both twins faithfully reproduce the properties of the input system. Unlike perfect copies, the twins are entangled. As such, a measurement on both twins is equivalent to a simultaneous measurement on the input system. For complementary observables, this joint measurement gives the system's state, just as in the classical case. We demonstrate this experimentally using polarized single photons.

  13. Determining Complementary Properties with Quantum Clones

    NASA Astrophysics Data System (ADS)

    Thekkadath, G. S.; Saaltink, R. Y.; Giner, L.; Lundeen, J. S.

    2017-08-01

    In a classical world, simultaneous measurements of complementary properties (e.g., position and momentum) give a system's state. In quantum mechanics, measurement-induced disturbance is largest for complementary properties and, hence, limits the precision with which such properties can be determined simultaneously. It is tempting to try to sidestep this disturbance by copying the system and measuring each complementary property on a separate copy. However, perfect copying is physically impossible in quantum mechanics. Here, we investigate using the closest quantum analog to this copying strategy, optimal cloning. The coherent portion of the generated clones' state corresponds to "twins" of the input system. Like perfect copies, both twins faithfully reproduce the properties of the input system. Unlike perfect copies, the twins are entangled. As such, a measurement on both twins is equivalent to a simultaneous measurement on the input system. For complementary observables, this joint measurement gives the system's state, just as in the classical case. We demonstrate this experimentally using polarized single photons.

  14. Accurate reliability analysis method for quantum-dot cellular automata circuits

    NASA Astrophysics Data System (ADS)

    Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo

    2015-10-01

    Probabilistic transfer matrix (PTM) is a widely used model in the reliability research of circuits. However, PTM model cannot reflect the impact of input signals on reliability, so it does not completely conform to the mechanism of the novel field-coupled nanoelectronic device which is called quantum-dot cellular automata (QCA). It is difficult to get accurate results when PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present the fault tree models of QCA fundamental devices according to different input signals. After that, the binary decision diagram (BDD) is used to quantitatively investigate the reliability of two QCA XOR gates depending on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly and the crucial components of a circuit can be found out precisely based on the importance values (IVs) of components. So this method is contributive to the construction of reliable QCA circuits.
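    The role of the fault-tree and BDD machinery can be sketched on a generic two-cut-set system (hypothetical; not an actual QCA XOR gate model): enumerate component states exactly, compute the system failure probability, and rank components by their Birnbaum importance values.

```python
from itertools import product

# Hypothetical fault tree with minimal cut sets {C1} and {C2, C3}: the system
# fails if C1 fails, or if C2 and C3 both fail. (Illustrative only; not an
# actual QCA XOR gate model.)
def system_fails(c1, c2, c3):
    return c1 or (c2 and c3)

def failure_prob(p):
    # Exact enumeration over all component states; a BDD encodes this compactly.
    q = 0.0
    for states in product((0, 1), repeat=len(p)):
        weight = 1.0
        for s, pi in zip(states, p):
            weight *= pi if s else 1.0 - pi
        if system_fails(*states):
            q += weight
    return q

def birnbaum(p, i):
    # Birnbaum importance of component i: Q(p_i = 1) - Q(p_i = 0).
    hi, lo = list(p), list(p)
    hi[i], lo[i] = 1.0, 0.0
    return failure_prob(hi) - failure_prob(lo)

p = [0.01, 0.05, 0.05]            # invented component failure probabilities
q_sys = failure_prob(p)           # 0.01 + 0.99 * 0.05 * 0.05 = 0.012475
ivs = [birnbaum(p, i) for i in range(3)]   # C1 is the crucial component here
```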

  15. The precision-processing subsystem for the Earth Resources Technology Satellite.

    NASA Technical Reports Server (NTRS)

    Chapelle, W. E.; Bybee, J. E.; Bedross, G. M.

    1972-01-01

    Description of the precision processor, a subsystem in the image-processing system for the Earth Resources Technology Satellite (ERTS). This processor is a special-purpose image-measurement and printing system, designed to process user-selected bulk images to produce 1:1,000,000-scale film outputs and digital image data, presented in a Universal-Transverse-Mercator (UTM) projection. The system will remove geometric and radiometric errors introduced by the ERTS multispectral sensors and by the bulk-processor electron-beam recorder. The geometric transformations required for each input scene are determined by resection computations based on reseau measurements and image comparisons with a special ground-control base contained within the system; the images are then printed and digitized by electronic image-transfer techniques.

  16. A Decision Support System for Optimum Use of Fertilizers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoskinson, Reed Louis; Hess, John Richard; Fink, Raymond Keith

    1999-07-01

    The Decision Support System for Agriculture (DSS4Ag) is an expert system being developed by the Site-Specific Technologies for Agriculture (SST4Ag) precision farming research project at the INEEL. DSS4Ag uses state-of-the-art artificial intelligence and computer science technologies to make spatially variable, site-specific, economically optimum decisions on fertilizer use. The DSS4Ag has an open architecture that allows for external input and addition of new requirements and integrates its results with existing agricultural systems’ infrastructures. The DSS4Ag reflects a paradigm shift in the information revolution in agriculture that is precision farming. We depict this information revolution in agriculture as an historic trend in the agricultural decision-making process.

  17. A Decision Support System for Optimum Use of Fertilizers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. Hoskinson; J. R. Hess; R. K. Fink

    1999-07-01

    The Decision Support System for Agriculture (DSS4Ag) is an expert system being developed by the Site-Specific Technologies for Agriculture (SST4Ag) precision farming research project at the INEEL. DSS4Ag uses state-of-the-art artificial intelligence and computer science technologies to make spatially variable, site-specific, economically optimum decisions on fertilizer use. The DSS4Ag has an open architecture that allows for external input and addition of new requirements and integrates its results with existing agricultural systems' infrastructures. The DSS4Ag reflects a paradigm shift in the information revolution in agriculture that is precision farming. We depict this information revolution in agriculture as an historic trend in the agricultural decision-making process.

  18. Evaluation of visible and near-infrared spectroscopy as a tool for assessing fiber fineness during mechanical preparation of dew-retted flax.

    PubMed

    Sharma, H S S; Reinard, N

    2004-12-01

    Flax fiber must be mechanically prepared to improve fineness and homogeneity of the sliver before chemical processing and wet-spinning. The changes in fiber characteristics are monitored by an airflow method, which is labor intensive and requires 90 minutes to process one sample. This investigation was carried out to develop robust visible and near-infrared calibrations that can be used as a rapid tool for quality assessment of input fibers and changes in fineness at the doubling (blending), first, second, third, and fourth drawing frames, and at the roving stage. The partial least squares (PLS) and principal component regression (PCR) methods were employed to generate models from different segments of the spectra (400-1100, 1100-1700, 1100-2498, 1700-2498, and 400-2498 nm) and a calibration set consisting of 462 samples obtained from the six processing stages. The calibrations were successfully validated with an independent set of 97 samples, and standard errors of prediction of 2.32 and 2.62 dtex were achieved with the best PLS (400-2498 nm) and PCR (1100-2498 nm) models, respectively. An optimized PLS model of the visible-near-infrared (vis-NIR) spectra explained 97% of the variation (R(2) = 0.97) in the sample set with a standard error of calibration (SEC) of 2.45 dtex and a standard error of cross-validation (SECV) of 2.51 dtex (R(2) = 0.96). The mean error of the reference airflow method was 1.56 dtex, which is more accurate than the NIR calibration. The improvement in fiber fineness of the validation set obtained from the six production lines was predicted with an error range of -6.47 to +7.19 dtex for input fibers, -1.44 to +5.77 dtex for blended fibers at the doubling, and -4.72 to +3.59 dtex at the drawing frame stages. This level of precision is adequate for wet-spinners to monitor the fineness of input fibers and of fibers during mechanical preparation. The advantage of vis-NIR spectroscopy is the potential capability of the technique to assess fineness and other important quality characteristics of a fiber sample simultaneously in less than 30 minutes; the disadvantages are the expensive instrumentation and the expertise required for operating the instrument compared to the reference method. These factors need to be considered by the industry before installing an off-line NIR system for predicting quality parameters of input materials and changes in fiber characteristics during mechanical processing.
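    The principal component regression step can be sketched on synthetic stand-in spectra (sample counts, wavelengths, noise levels, and the two-factor structure below are invented; PLS differs only in how the latent components are extracted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in "spectra": 60 samples x 40 wavelengths generated from two
# latent factors, with a fineness-like response that is linear in those factors.
scores = rng.normal(size=(60, 2))
loadings = rng.normal(size=(2, 40))
X = scores @ loadings + 0.01 * rng.normal(size=(60, 40))
y = scores @ np.array([3.0, -1.5]) + 0.01 * rng.normal(size=60)

# Principal component regression: project onto the top-k PCs, then regress.
Xc, yc = X - X.mean(axis=0), y - y.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
T = U[:, :k] * s[:k]                        # PC scores
b = np.linalg.lstsq(T, yc, rcond=None)[0]   # regress response on the scores
y_hat = T @ b + y.mean()
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

    Because the response here really is driven by the two latent factors, two components suffice; in practice the component count is chosen by cross-validation, as the SECV figures above reflect.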

  19. Tunable laser techniques for improving the precision of observational astronomy

    NASA Astrophysics Data System (ADS)

    Cramer, Claire E.; Brown, Steven W.; Lykke, Keith R.; Woodward, John T.; Bailey, Stephen; Schlegel, David J.; Bolton, Adam S.; Brownstein, Joel; Doherty, Peter E.; Stubbs, Christopher W.; Vaz, Amali; Szentgyorgyi, Andrew

    2012-09-01

    Improving the precision of observational astronomy requires not only new telescopes and instrumentation, but also advances in observing protocols, calibrations and data analysis. The Laser Applications Group at the National Institute of Standards and Technology in Gaithersburg, Maryland has been applying advances in detector metrology and tunable laser calibrations to problems in astronomy since 2007. Using similar measurement techniques, we have addressed a number of seemingly disparate issues: precision flux calibration for broad-band imaging, precision wavelength calibration for high-resolution spectroscopy, and precision PSF mapping for fiber spectrographs of any resolution. In each case, we rely on robust, commercially-available laboratory technology that is readily adapted to use at an observatory. In this paper, we give an overview of these techniques.

  20. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587
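    The contrast between NLIF-like and Poisson-like variability can be sketched with a toy spike-count comparison (all parameters are invented and not fitted to LGN data; this only illustrates that an integrate-and-fire process driven by steady input is less noisy than a rate-matched Poisson process):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spike-count variability comparison: a noisy leaky integrate-and-fire
# (NLIF) neuron with constant suprathreshold drive versus a rate-matched
# Poisson process. All parameters are invented.
dt, T, tau, v_th = 0.1, 500.0, 20.0, 1.0   # ms, ms, ms, normalized threshold

def nlif_count(mu=1.5, sigma=0.4):
    n = int(T / dt)
    noise = sigma * np.sqrt(dt / tau) * rng.standard_normal(n)
    v, count = 0.0, 0
    for xi in noise:
        v += (dt / tau) * (mu - v) + xi
        if v >= v_th:        # threshold crossing: spike and reset
            count += 1
            v = 0.0
    return count

counts_nlif = np.array([nlif_count() for _ in range(100)])
counts_pois = rng.poisson(counts_nlif.mean(), size=100)

fano_nlif = counts_nlif.var() / counts_nlif.mean()   # regular firing: well below 1
fano_pois = counts_pois.var() / counts_pois.mean()   # Poisson: near 1
```

    The Fano factor (count variance over count mean) is 1 for a Poisson process, while the drift-dominated NLIF neuron fires far more regularly, which is the sense in which Poisson approximations overstate input noise.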

  1. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.

  2. On the fusion of tuning parameters of fuzzy rules and neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Learning a fuzzy rule-based system with a neural network can lead to a precise, valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules from training input-output data usually end in a weak firing state, which weakens the fuzzy rules and makes them unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN) on training input-output data, based on the gradient descent method. The new learning algorithm addresses the problem of weak firing that arises with the conventional method. We illustrate the efficiency of the new algorithm by means of numerical examples, with MATLAB R2014(a) used for the simulations. The results show that the new method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, allowing a membership function to be used more than once in the fuzzy rule base.
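    The flavor of the gradient-descent tuning can be sketched with a generic RBF network fit (a stand-in for the paper's fuzzy-rule parameter updates; the target function, centers, widths, and learning rate are all invented):

```python
import numpy as np

# Generic gradient-descent tuning of the output weights of a radial basis
# function network (training loop sketch only; all settings are illustrative).
x = np.linspace(-3.0, 3.0, 60)
y = np.sin(x)

centers = np.linspace(-3.0, 3.0, 10)
width = 0.8
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

w = np.zeros(len(centers))
lr = 0.5
for _ in range(3000):
    err = Phi @ w - y
    w -= lr * (Phi.T @ err) / len(x)   # gradient of the mean squared error

mse = float(np.mean((Phi @ w - y) ** 2))
```

    In the paper's setting the tunable quantities would be fuzzy-rule parameters rather than plain output weights, but the update has the same shape: move each parameter against the gradient of the fitting error.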

  3. Routine and timely sub-picoNewton force stability and precision for biological applications of atomic force microscopy.

    PubMed

    Churnside, Allison B; Sullan, Ruby May A; Nguyen, Duc M; Case, Sara O; Bull, Matthew S; King, Gavin M; Perkins, Thomas T

    2012-07-11

    Force drift is a significant, yet unresolved, problem in atomic force microscopy (AFM). We show that the primary source of force drift for a popular class of cantilevers is their gold coating, even though they are coated on both sides to minimize drift. Drift of the zero-force position of the cantilever was reduced from 900 nm for gold-coated cantilevers to 70 nm (N = 10; rms) for uncoated cantilevers over the first 2 h after wetting the tip; a majority of these uncoated cantilevers (60%) showed significantly less drift (12 nm, rms). Removing the gold also led to ∼10-fold reduction in reflected light, yet short-term (0.1-10 s) force precision improved. Moreover, improved force precision did not require extended settling; most of the cantilevers tested (9 out of 15) achieved sub-pN force precision (0.54 ± 0.02 pN) over a broad bandwidth (0.01-10 Hz) just 30 min after loading. Finally, this precision was maintained while stretching DNA. Hence, removing gold enables both routine and timely access to sub-pN force precision in liquid over extended periods (100 s). We expect that many current and future applications of AFM can immediately benefit from these improvements in force stability and precision.

  4. Development of an ultrasonic linear motor with ultra-positioning capability and four driving feet.

    PubMed

    Zhu, Cong; Chu, Xiangcheng; Yuan, Songmei; Zhong, Zuojin; Zhao, Yanqiang; Gao, Shuning

    2016-12-01

    This paper presents a novel linear piezoelectric motor which is suitable for rapid ultra-precision positioning. The finite element analysis (FEA) was applied for optimal design and further analysis, then experiments were conducted to investigate its performance. By changing the input signal, the proposed motor was found capable of working in the fast driving mode as well as in the precision positioning mode. When working in the fast driving mode, the motor acts as an ultrasonic motor with a maximum no-load speed of up to 181.2 mm/s and a maximum thrust of 1.7 N at 200 Vp-p. When working in the precision positioning mode, the motor can be regarded as a flexible hinge piezoelectric actuator with arbitrary motion in the range of 8 μm. The measurable minimum output displacement was found to be 0.08 μm, but theoretically it can be even smaller. More importantly, the motor can be quickly and accurately positioned over a large stroke. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Orbit determination of the Next-Generation Beidou satellites with Intersatellite link measurements and a priori orbit constraints

    NASA Astrophysics Data System (ADS)

    Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe

    2017-11-01

    Intersatellite Link (ISL) technology helps to realize the automatic updating of broadcast ephemeris and clock error parameters for a Global Navigation Satellite System (GNSS). ISL constitutes an important approach with which to both improve the observation geometry and extend the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination might lead to constellation drift and rotation, and even to divergence of the orbit determination. Fortunately, predicted orbits with good precision can be used as a priori information with which to constrain the estimated satellite orbit parameters. Therefore, the precision of satellite autonomous orbit determination can be improved by consideration of a priori orbit information, and vice versa. However, the errors of rotation and translation in the a priori orbits will remain in the ultimate result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements eliminating satellite clock errors is presented, and the orbit determination precision is analyzed with different data processing backgrounds. The conclusions are as follows. (1) With ISLs, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of determined BDS orbits will be improved by constraints from more precise a priori orbits. The POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., orbits predicted by the telemetry, tracking, and control system), and better than 6 m with a priori orbit constraints of 10 m precision (e.g., orbits predicted by the international GNSS Monitoring & Assessment System (iGMAS)). (3) The POD precision is improved by additional ISLs. Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) The in-plane and out-of-plane links make different contributions to the observation configuration and system observability. POD with a weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.
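    The effect of an a priori orbit constraint can be sketched with a toy linear estimation problem (dimensions, noise levels, and the ISL-like observation matrix below are invented; real POD involves dynamic models and nonlinear measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear "orbit determination": estimate a 4-dimensional state x from only
# two ISL-like range observations y = A x + noise, stabilized by an a priori
# state x0 with covariance P0. All numbers are illustrative.
x_true = np.array([1.0, -2.0, 0.5, 3.0])
A = rng.normal(size=(2, 4))                 # under-determined without the prior
y = A @ x_true + 0.01 * rng.normal(size=2)

x0 = x_true + 0.1 * rng.normal(size=4)      # predicted orbit, ~0.1 accuracy
P0_inv = np.eye(4) / 0.1**2                 # a priori weight matrix
W = np.eye(2) / 0.01**2                     # observation weight matrix

# Constrained normal equations: (A^T W A + P0^-1) x = A^T W y + P0^-1 x0
x_hat = np.linalg.solve(A.T @ W @ A + P0_inv, A.T @ W @ y + P0_inv @ x0)

err_prior = float(np.linalg.norm(x0 - x_true))
err_post = float(np.linalg.norm(x_hat - x_true))
```

    Without the P0_inv term the 2x4 normal matrix is singular, which is the linear-algebra analog of the drift and divergence noted above; the prior both regularizes the solve and anchors the solution frame.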

  6. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
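    The static mechanism identified above is just the RC relation for a membrane: adding gKL increases total conductance and shrinks the resting time constant τ = C / g. The round numbers below are invented for illustration, not measurements from the paper.

```python
# Static effect of gKL on the resting membrane time constant: tau = C / g_total.
# Values are invented round numbers (1 pF / 1 nS = 1 ms).
C_m = 20.0     # membrane capacitance, pF
g_leak = 2.0   # resting leak conductance, nS
g_kl = 18.0    # low-voltage-activated K+ conductance, nS

tau_without_gkl = C_m / g_leak           # 10.0 ms: sluggish integration
tau_with_gkl = C_m / (g_leak + g_kl)     # 1.0 ms: brief coincidence window
```

    The in silico experiment in the abstract replaces g_kl with an equivalent static leak: the time constant above is unchanged, so any remaining difference in spike-time precision must come from the dynamic suppression of late inputs.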

  7. Improving the Rank Precision of Population Health Measures for Small Areas with Longitudinal and Joint Outcome Models

    PubMed Central

    Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.

    2015-01-01

    Objectives The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. 
This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health. PMID:26098858
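The rank-and-quartile uncertainty computation described above can be sketched directly from posterior samples. A minimal numpy illustration (hypothetical data; the hierarchical models that produce the samples are not reproduced):

```python
import numpy as np

def rank_quartile_probabilities(rate_samples):
    """Given posterior samples of county rates (n_draws x n_counties),
    assign each county a rank quartile and estimate the probability
    that its rank falls in that assigned quartile."""
    n_draws, n_counties = rate_samples.shape
    # Rank counties within each posterior draw (1 = lowest rate = best).
    ranks = rate_samples.argsort(axis=1).argsort(axis=1) + 1
    # Quartile (1..4) of each sampled rank.
    quartiles = np.ceil(4 * ranks / n_counties).astype(int)
    # Assign each county the quartile of its posterior median rank.
    median_ranks = np.median(ranks, axis=0)
    assigned = np.ceil(4 * median_ranks / n_counties).astype(int)
    # Probability that the sampled quartile equals the assigned one.
    probs = (quartiles == assigned).mean(axis=0)
    return assigned, probs
```

With well-separated rates the quartile probabilities approach 1; overlapping posteriors (small counties) drive them down, which is exactly the rank-precision signal the abstract compares across models.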

  8. Perceptual learning improves visual performance in juvenile amblyopia.

    PubMed

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
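The parsing of improvement into equivalent input noise and efficiency is commonly done with a two-point equivalent-noise (linear-amplifier) fit; a sketch under that assumption, with hypothetical threshold numbers (the paper's exact position-averaging model may differ):

```python
def equivalent_noise_fit(t0, tH, sigma_H):
    """Solve the two-point equivalent-noise model
         threshold^2 = (sigma_eq^2 + sigma_ext^2) / eta
    for efficiency eta and equivalent input noise sigma_eq, given
    thresholds at zero external noise (t0) and at high external
    noise sigma_H (tH)."""
    eta = sigma_H ** 2 / (tH ** 2 - t0 ** 2)
    sigma_eq = (t0 ** 2 * eta) ** 0.5
    return eta, sigma_eq
```

Fitting thresholds before and after training then shows whether learning raised eta, lowered sigma_eq, or both, mirroring the two factors reported above.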

  9. The relation between input-output transformation and gastrointestinal nematode infections on dairy farms.

    PubMed

    van der Voort, M; Van Meensel, J; Lauwers, L; Van Huylenbroeck, G; Charlier, J

    2016-02-01

    Efficiency analysis is used for assessing links between the technical efficiency (TE) of livestock farms and animal diseases. However, previous studies often do not make the link with the allocation of inputs and mainly present average effects that ignore the often huge differences among farms. In this paper, we studied the relationship between exposure to gastrointestinal (GI) nematode infections, TE and input allocation on dairy farms. Although the traditional cost allocative efficiency (CAE) indicator adequately measures how a given input allocation differs from the cost-minimising input allocation, it does not represent the unique input allocation of farms: similar CAE scores may be obtained for farms with different input allocations. Therefore, we propose an adjusted allocative efficiency index (AAEI) to measure the unique input allocation of farms. Combining this AAEI with the TE score allows the unique input-output position of each farm to be determined. The method is illustrated by estimating efficiency scores using data envelopment analysis (DEA) on a sample of 152 dairy farms in Flanders for which both accountancy and parasitic monitoring data were available. Three groups of farms with different input-output positions can be distinguished based on cluster analysis: (1) technically inefficient farms with a relatively low use of concentrates per 100 l milk and a high exposure to infection; (2) farms with an intermediate TE, relatively high use of concentrates per 100 l milk and a low exposure to infection; (3) farms with the highest TE, relatively low roughage use per 100 l milk and a relatively high exposure to infection. Correlation analysis indicates for each group how the level of exposure to GI nematodes is associated or not with improved economic performance. The results suggest that improving both economic performance and exposure to infection is of interest only for highly technically efficient farms. 
The findings indicate that current farm recommendations regarding GI nematode infections could be improved by also accounting for the allocation of inputs on the farm.
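The AAEI itself is not specified in the abstract, but the underlying DEA technical-efficiency score can be sketched as a small linear program (a standard input-oriented CCR model, not the authors' exact formulation):

```python
import numpy as np
from scipy.optimize import linprog

def dea_te(X, Y, o):
    """Input-oriented CCR technical efficiency of decision-making
    unit (DMU) o.  X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [theta, lam_1..lam_n]; minimise theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints:  X lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -Y lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun
```

A farm on the efficient frontier scores 1.0; a farm that could radially shrink all inputs while keeping its outputs scores below 1.0.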

  10. Segmentation of the Knee for Analysis of Osteoarthritis

    NASA Astrophysics Data System (ADS)

    Zerfass, Peter; Museyko, Oleg; Bousson, Valérie; Laredo, Jean-Denis; Kalender, Willi A.; Engelke, Klaus

    Osteoarthritis changes the load distribution within joints and also changes bone density and structure. Within typical timelines of clinical studies these changes can be very small. Therefore, precise definition of evaluation regions that are highly robust and show little to no inter- and intra-operator variance is essential for high-quality quantitative analysis. To achieve this goal, we have developed a system for the definition of such regions with minimal user input.

  11. Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

    Polyphemus/Polair3D, from which IRSN's operational model ldX derives, was used to simulate the atmospheric dispersion of radionuclides at the Japan scale after the Fukushima disaster. A previous study with the screening method of Morris had shown that: (1) the sensitivities depend strongly on the considered output; (2) only a few of the inputs are non-influential on all considered outputs; and (3) most influential inputs have either non-linear effects or are interacting. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators for each considered output were built in order to relieve this computational burden. Globally aggregated outputs proved to be easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purposes. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaved in very distinct regimes relative to some thresholds. Complementing the initial sample with wind perturbations set to the extreme values allowed for sensible improvement of some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies. 
Indeed, our goal is to characterize the model output uncertainty but too little information is available about input uncertainties. Hence, calibration of the input distributions with observation and a Bayesian approach seem necessary. This would probably involve methods such as MCMC which would be intractable without emulators.
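The Sobol' indices themselves can be estimated from paired Monte Carlo samples; a minimal sketch using the Jansen estimator on a cheap analytic test function (the Gaussian-process emulation step, which makes this affordable for a dispersion model, is omitted):

```python
import numpy as np

def sobol_first_order(model, d, n, rng):
    """Monte Carlo estimate of first-order Sobol' indices for a model
    with d independent uniform(0,1) inputs, via the Jansen estimator:
        S_i = 1 - E[(f(B) - f(A_B^i))^2] / (2 Var[f])."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # A with column i taken from B
        fABi = model(ABi)
        S[i] = 1.0 - np.mean((fB - fABi) ** 2) / (2.0 * var)
    return S
```

For the additive test model Y = X1 + 2*X2 the exact indices are 0.2 and 0.8, which the estimator recovers to Monte Carlo accuracy; interactions would show up as total-order indices exceeding these first-order values.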

  12. Propagating synchrony in feed-forward networks

    PubMed Central

    Jahnke, Sven; Memmesheimer, Raoul-Martin; Timme, Marc

    2013-01-01

    Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility to enable reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been theoretically and computationally studied. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of non-linear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities enable synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin-Huxley-type neurons. PMID:24298251
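The effect of non-additive summation can be illustrated with a toy leaky integrate-and-fire neuron: a synchronous compound input that is subthreshold under linear summation can still trigger a somatic spike once a dendritic-spike nonlinearity boosts it to a stereotyped amplitude (all parameters here are hypothetical, not the paper's):

```python
def lif_response(n_inputs, w, nonlinear, t_syn=100, steps=400, dt=0.1,
                 tau=10.0, v_th=2.0, d_th=1.2, d_amp=3.0):
    """Leaky integrate-and-fire neuron receiving n_inputs synchronous
    EPSPs of weight w at time step t_syn.  With non-additive
    (dendritic-spike) summation, a synchronous compound input whose
    linear sum exceeds d_th is replaced by the fixed amplitude d_amp."""
    v = 0.0
    drive = n_inputs * w
    if nonlinear and drive > d_th:
        drive = d_amp            # stereotyped dendritic spike
    for step in range(steps):
        v += dt * (-v / tau)     # leak
        if step == t_syn:
            v += drive           # synchronous compound EPSP
        if v >= v_th:
            return True          # somatic spike fired
    return False
```

Five inputs of weight 0.3 sum linearly to 1.5, below the somatic threshold of 2.0, so the linear neuron stays silent; the same input crosses the dendritic threshold and fires the non-additive neuron, which is the connectivity-relaxing mechanism described above.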

  13. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.

    1991-01-01

    The purpose is to develop algorithms and architectures for embedding artificial intelligence in aircraft guidance and control systems. With the approach adopted, AI-computing is used to create an outer guidance loop for driving the usual aircraft autopilot. That is, a symbolic processor monitors the operation and performance of the aircraft. Then, based on rules and other stored knowledge, commands are automatically formulated for driving the autopilot so as to accomplish desired flight operations. The focus is on developing a software system which can respond to linguistic instructions, input in a standard format, so as to formulate a sequence of simple commands to the autopilot. The instructions might be a fairly complex flight clearance, input either manually or by data-link. Emphasis is on a software system which responds much like a pilot would, employing not only precise computations but also knowledge which is less precise and more like common sense. The approach is based on prior work to develop a generic 'shell' architecture for an AI-processor, which may be tailored to many applications by describing the application in appropriate processor databases (libraries). Such descriptions include numerical models of the aircraft and flight control system, as well as symbolic (linguistic) descriptions of flight operations, rules, and tactics.

  14. A Unified Framework for Street-View Panorama Stitching

    PubMed Central

    Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei

    2016-01-01

    In this paper, we propose a unified framework to generate a pleasant and high-quality street-view panorama by stitching multiple panoramic images captured from the cameras mounted on the mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Since the input images are captured without a precisely common projection center, from scenes whose depths relative to the cameras differ to varying extents, such images cannot be precisely aligned in geometry. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignment. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm that matches the extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate the stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
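The final blending step is standard enough to sketch. A minimal numpy Laplacian-pyramid blend with box-filter pyramids (the paper's warping, color-correction and graph-cut stages are not reproduced, and a real implementation would use proper Gaussian filtering):

```python
import numpy as np

def down(img):
    # 2x2 box-filter downsample (assumes even dimensions)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def up(img):
    # nearest-neighbour upsample
    return np.kron(img, np.ones((2, 2)))

def blend(a, b, mask, levels=3):
    """Laplacian-pyramid blend of images a and b with weight mask
    (1 -> take a, 0 -> take b); the mask's own pyramid smooths the
    transition across the seam."""
    la, lb, gm = [], [], []
    for _ in range(levels):
        a2, b2 = down(a), down(b)
        la.append(a - up(a2))        # Laplacian band of a
        lb.append(b - up(b2))        # Laplacian band of b
        gm.append(mask)
        a, b, mask = a2, b2, down(mask)
    out = mask * a + (1 - mask) * b  # blend coarsest level
    for La, Lb, m in zip(reversed(la), reversed(lb), reversed(gm)):
        out = up(out) + m * La + (1 - m) * Lb
    return out
```

Blending each frequency band separately is what hides the residual photometric mismatch along the seam line that a direct per-pixel blend would leave visible.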

  15. Preliminary results of neural networks and zernike polynomials for classification of videokeratography maps.

    PubMed

    Carvalho, Luis Alberto

    2005-02-01

    Our main goal in this work was to develop an artificial neural network (NN) that could classify specific types of corneal shapes using Zernike coefficients as input. Other authors have implemented successful NN systems in the past and have demonstrated their efficiency using different parameters. Our claim is that, given the increasing popularity of Zernike polynomials among the eye care community, this may be an interesting choice to add complementary value and precision to existing methods. By using a simple and well-documented corneal surface representation scheme, which relies on corneal elevation information, one can generate simple NN input parameters that are independent of curvature definition and that are also efficient. We have used the Matlab Neural Network Toolbox (MathWorks, Natick, MA) to implement a three-layer feed-forward NN with 15 inputs and 5 outputs. A database from an EyeSys System 2000 (EyeSys Vision, Houston, TX) videokeratograph installed at the Escola Paulista de Medicina-Sao Paulo was used. This database contained an unknown number of corneal types. From this database, two specialists selected 80 corneas that could be clearly classified into five distinct categories: (1) normal, (2) with-the-rule astigmatism, (3) against-the-rule astigmatism, (4) keratoconus, and (5) post-laser-assisted in situ keratomileusis. The corneal height (SAG) information of the 80 data files was fit with the first 15 Vision Science and its Applications (VSIA) standard Zernike coefficients, which were individually used to feed the 15 neurons of the input layer. The five output neurons were associated with the five typical corneal shapes. A group of 40 cases was randomly selected from the larger group of 80 corneas and used as the training set. 
The NN responses were statistically analyzed in terms of sensitivity [true positive/(true positive + false negative)], specificity [true negative/(true negative + false positive)], and precision [(true positive + true negative)/total number of cases]. The mean values for these parameters were, respectively, 78.75%, 97.81%, and 94%. Although we have used a relatively small training and testing set, the results presented here should be considered promising. They are certainly an indication of the potential of Zernike polynomials as reliable parameters, at least in the cases presented here, as input data for artificial intelligence automation of the diagnosis process of videokeratography examinations. This technique should facilitate the implementation of, and add value to, the classification methods already available. We also discuss briefly certain special properties of Zernike polynomials that we think make them suitable as NN inputs for this type of application.
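The classifier architecture (15 Zernike-coefficient inputs, 5 class outputs) can be sketched in a few lines of numpy; the hidden-layer size and initialisation here are hypothetical, and the Matlab toolbox training procedure is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

def init(n_in=15, n_hidden=10, n_out=5):
    """Random weights for a three-layer feed-forward network
    (hidden size is an assumption, not taken from the paper)."""
    return {"W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
            "b1": np.zeros(n_hidden),
            "W2": rng.standard_normal((n_hidden, n_out)) * 0.1,
            "b2": np.zeros(n_out)}

def forward(params, Z):
    """Z: batch of 15 Zernike coefficients per cornea; returns
    softmax probabilities over the 5 corneal-shape classes."""
    h = np.tanh(Z @ params["W1"] + params["b1"])
    logits = h @ params["W2"] + params["b2"]
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each row of the output sums to one, so the most probable class can be read off with argmax, mirroring the five output neurons described above.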

  16. The Analog Revolution and Its On-Going Role in Modern Analytical Measurements.

    PubMed

    Enke, Christie G

    2015-12-15

    The electronic revolution in analytical instrumentation began when we first exceeded the two-digit resolution of panel meters and chart recorders and then took the first steps into automated control. It started with the first uses of operational amplifiers (op amps) in the analog domain 20 years before the digital computer entered the analytical lab. Their application greatly increased both accuracy and precision in chemical measurement and they provided an elegant means for the electronic control of experimental quantities. Later, laboratory and personal computers provided an unlimited readout resolution and enabled programmable control of instrument parameters as well as storage and computation of acquired data. However, digital computers did not replace the op amp's critical role of converting the analog sensor's output to a robust and accurate voltage. Rather it added a new role: converting that voltage into a number. These analog operations are generally the limiting portions of our computerized instrumentation systems. Operational amplifier performance in gain, input current and resistance, offset voltage, and rise time have improved by a remarkable 3-4 orders of magnitude since their first implementations. Each 10-fold improvement has opened the doors for the development of new techniques in all areas of chemical analysis. Along with some interesting history, the multiple roles op amps play in modern instrumentation are described along with a number of examples of new areas of analysis that have been enabled by their improvements.

  17. Techniques for precise energy calibration of particle pixel detectors

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Campbell-Ricketts, T.; Bahadori, A.; Empl, A.

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.
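The non-linear per-pixel response of Timepix-family detectors is commonly modelled with the surrogate function TOT(E) = a*E + b - c/(E - t); reconstructing deposited energy then amounts to inverting this curve, which reduces to a quadratic. A sketch with hypothetical calibration constants (the paper's high-energy characterisation method is not reproduced):

```python
import math

def tot_to_energy(y, a, b, c, t):
    """Invert the per-pixel surrogate calibration curve
         TOT(E) = a*E + b - c/(E - t)
    for the deposited energy E, taking the root above the
    threshold parameter t.  Multiplying through by (E - t) gives
         a*E^2 + (b - a*t - y)*E + (t*(y - b) - c) = 0."""
    B = b - a * t - y
    C = t * (y - b) - c
    disc = B * B - 4.0 * a * C
    return (-B + math.sqrt(disc)) / (2.0 * a)
```

The linear term a*E dominates at high energies, while the -c/(E - t) term captures the non-linearity near threshold; per-pixel (a, b, c, t) sets come from the X-ray calibration discussed above.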

  18. Techniques for precise energy calibration of particle pixel detectors.

    PubMed

    Kroupa, M; Campbell-Ricketts, T; Bahadori, A; Empl, A

    2017-03-01

    We demonstrate techniques to improve the accuracy of the energy calibration of Timepix pixel detectors, used for the measurement of energetic particles. The typical signal from such particles spreads among many pixels due to charge sharing effects. As a consequence, the deposited energy in each pixel cannot be reconstructed unless the detector is calibrated, limiting the usability of such signals for calibration. To avoid this shortcoming, we calibrate using low energy X-rays. However, charge sharing effects still occur, resulting in part of the energy being deposited in adjacent pixels and possibly lost. This systematic error in the calibration process results in an error of about 5% in the energy measurements of calibrated devices. We use FLUKA simulations to assess the magnitude of charge sharing effects, allowing a corrected energy calibration to be performed on several Timepix pixel detectors and resulting in substantial improvement in energy deposition measurements. Next, we address shortcomings in calibration associated with the huge range (from kiloelectron-volts to megaelectron-volts) of energy deposited per pixel which result in a nonlinear energy response over the full range. We introduce a new method to characterize the non-linear response of the Timepix detectors at high input energies. We demonstrate improvement using a broad range of particle types and energies, showing that the new method reduces the energy measurement errors, in some cases by more than 90%.

  19. 'Scalp coordinate system': a new tool to accurately describe cutaneous lesions on the scalp: a pilot study.

    PubMed

    Alexander, William; Miller, George; Alexander, Preeya; Henderson, Michael A; Webb, Angela

    2018-06-12

    Skin cancers are extremely common and the incidence increases with age. Care for patients with multiple or complicated skin cancers often requires multidisciplinary input involving a general practitioner, dermatologist, plastic surgeon and/or radiation oncologist. Timely, efficient care of these patients relies on precise and effective communication between all parties. Until now, descriptions of the location of lesions on the scalp have been inaccurate, which can lead to errors such as the incorrect lesion being excised or biopsied. A novel technique for accurately and efficiently describing the location of lesions on the scalp, using a coordinate system, is described (the 'scalp coordinate system' (SCS)). This method was tested in a pilot study by clinicians typically involved in the care of patients with cutaneous malignancies. A mannequin scalp was used in the study. The SCS significantly improved the accuracy of both describing and locating lesions on the scalp. This improved accuracy comes at a minor time cost. The direct and indirect costs arising from poor communication between medical subspecialties (particularly relevant in surgical procedures) are immense. An effective tool used by all involved clinicians is long overdue, particularly in patients with scalps with extensive actinic damage, scarring or innocuous biopsy sites. The SCS provides the opportunity to improve outcomes for both the patient and the healthcare system. © 2018 Royal Australasian College of Surgeons.

  20. Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.

    PubMed

    Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay

    2015-12-01

    In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter is used to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and dual mode constant modulus algorithms, in terms of both convergence performance and SER performance, for nonlinear equalization.
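The classic blind baseline named above, the constant modulus algorithm (CMA), is easy to sketch; a minimal numpy version on a toy QPSK link with a mildly dispersive linear channel (all parameters hypothetical, and this is not the proposed BG-IOD algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# QPSK symbols through a mildly dispersive two-tap channel
s = (rng.integers(0, 2, 4000) * 2 - 1) + 1j * (rng.integers(0, 2, 4000) * 2 - 1)
x = np.convolve(s, [1.0, 0.25], mode="full")[:4000]

# Constant modulus algorithm: adapt equalizer taps w so the output
# modulus approaches R = E|s|^4 / E|s|^2 (R = 2 for this QPSK set).
n_taps, mu, R = 7, 1e-3, 2.0
w = np.zeros(n_taps, complex)
w[n_taps // 2] = 1.0                   # centre-spike initialisation
cost = []
for k in range(n_taps, len(x)):
    u = x[k - n_taps:k][::-1]          # equalizer input vector
    y = w @ u                          # equalizer output
    e = y * (abs(y) ** 2 - R)          # CMA error term
    w -= mu * e * u.conj()             # stochastic gradient step
    cost.append((abs(y) ** 2 - R) ** 2)
```

CMA adapts using only the equalizer output, which is exactly the limitation the input-decision information in BG-IOD is meant to address; here the dispersion cost falls as the taps converge.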

  1. Improved disturbance rejection for predictor-based control of MIMO linear systems with input delay

    NASA Astrophysics Data System (ADS)

    Shi, Shang; Liu, Wenhui; Lu, Junwei; Chu, Yuming

    2018-02-01

    In this paper, we are concerned with the predictor-based control of multi-input multi-output (MIMO) linear systems with input delay and disturbances. By taking the future values of disturbances into consideration, a new improved predictive scheme is proposed. Compared with the existing predictive schemes, our proposed predictive scheme can achieve a finite-time exact state prediction for some smooth disturbances including the constant disturbances, and a better disturbance attenuation can also be achieved for a large class of other time-varying disturbances. The attenuation of mismatched disturbances for second-order linear systems with input delay is also investigated by using our proposed predictor-based controller.
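The classical (disturbance-free) predictor that such schemes extend computes the future state from the current state and the stored input history; a numerical sketch with a trapezoidal integral (the paper's disturbance-anticipating terms are not reproduced):

```python
import numpy as np
from scipy.linalg import expm

def predict_state(A, B, x, u_hist, h, dt):
    """Classical predictor for x' = A x + B u(t - h):
       p(t) = e^{A h} x(t) + int_0^h e^{A (h - s)} B u(t - h + s) ds,
    so that p(t) = x(t + h).  u_hist holds u over [t - h, t] sampled
    every dt (len = h/dt + 1); the integral uses the trapezoidal rule."""
    n = len(u_hist) - 1
    p = expm(A * h) @ x
    vals = [expm(A * (h - k * dt)) @ B @ u_hist[k] for k in range(n + 1)]
    integral = sum((vals[k] + vals[k + 1]) * 0.5 * dt for k in range(n))
    return p + integral
```

Feeding back on the predicted state p(t) instead of x(t) compensates the input delay exactly for known inputs; handling future disturbance values, as in the improved scheme above, requires additional terms.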

  2. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight components, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in an overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
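One hypothetical way to contrast the standard-mandated minimum rule with a sensitivity-informed score (illustrative only; the scoring scale, threshold, and rule below are assumptions, not taken from NASA-STD-7009 or the presentation):

```python
def pedigree_score(qualities, sensitivities, threshold=0.1):
    """Compare two input-pedigree scores.
    conservative: the NASA-STD-7009-style minimum over all input
    quality scores.  informed (hypothetical variant): ignore inputs
    whose normalised output sensitivity falls below a threshold
    before taking the minimum, so that low-quality but
    low-influence inputs cannot drag the score down."""
    conservative = min(qualities)
    relevant = [q for q, s in zip(qualities, sensitivities) if s >= threshold]
    informed = min(relevant) if relevant else conservative
    return conservative, informed
```

With qualities [3, 1, 4] and sensitivities [0.5, 0.02, 0.3], the conservative rule reports 1 while the sensitivity-informed rule reports 3, because the low-quality input barely affects the output.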

  3. FPGA-Based Networked Phasemeter for a Heterodyne Interferometer

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2009-01-01

    A document discusses a component of a laser metrology system designed to measure displacements along the line of sight with precision on the order of a tenth the diameter of an atom. This component, the phasemeter, measures the relative phase of two electrical signals and transfers that information to a computer. Because the metrology system measures the differences between two optical paths, the phasemeter has two inputs, called measure and reference. The reference signal is nominally a perfect square wave with a 50-percent duty cycle (though only rising edges are used). As the metrology system detects motion, the difference between the reference and measure signal phases is proportional to the displacement of the motion. The phasemeter, therefore, counts the elapsed time between rising edges in the two signals, and converts the time into an estimate of phase delay. The hardware consists of a circuit board that plugs into a COTS (commercial, off-the-shelf) Spartan-III FPGA (field-programmable gate array) evaluation board. It has two BNC inputs (reference and measure), a CMOS logic chip to buffer the inputs, and an Ethernet jack for transmitting reduced data to a PC. Two extra BNC connectors can be attached for future expandability, such as external synchronization. Each phasemeter handles one metrology channel. A bank of six phasemeters (and two zero-crossing detector cards) with an Ethernet switch can monitor the rigid-body motion of an object. This device is smaller and cheaper than existing zero-crossing phasemeters. Also, because it uses Ethernet for communication with a computer instead of a VME bridge, it is much easier to use. The phasemeter is a key part of the Precision Deployable Apertures and Structures strategic R&D effort to design large, deployable, segmented space telescopes.
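The edge-timing-to-phase conversion the phasemeter performs is simple to sketch (a generic formula, not the FPGA implementation):

```python
import math

def phase_delay(t_ref, t_meas, period):
    """Convert the elapsed time between corresponding rising edges of
    the reference and measure signals into a phase estimate (radians),
    wrapped to [0, 2*pi)."""
    return (2.0 * math.pi * (t_meas - t_ref) / period) % (2.0 * math.pi)
```

A measure edge arriving a quarter period after the reference edge maps to pi/2, and the wrap makes whole-cycle offsets indistinguishable, which is why displacement tracking must unwrap phase continuously across samples.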

  4. The IVS data input to ITRF2014

    NASA Astrophysics Data System (ADS)

    Nothnagel, Axel; Alef, Walter; Amagai, Jun; Andersen, Per Helge; Andreeva, Tatiana; Artz, Thomas; Bachmann, Sabine; Barache, Christophe; Baudry, Alain; Bauernfeind, Erhard; Baver, Karen; Beaudoin, Christopher; Behrend, Dirk; Bellanger, Antoine; Berdnikov, Anton; Bergman, Per; Bernhart, Simone; Bertarini, Alessandra; Bianco, Giuseppe; Bielmaier, Ewald; Boboltz, David; Böhm, Johannes; Böhm, Sigrid; Boer, Armin; Bolotin, Sergei; Bougeard, Mireille; Bourda, Geraldine; Buttaccio, Salvo; Cannizzaro, Letizia; Cappallo, Roger; Carlson, Brent; Carter, Merri Sue; Charlot, Patrick; Chen, Chenyu; Chen, Maozheng; Cho, Jungho; Clark, Thomas; Collioud, Arnaud; Colomer, Francisco; Colucci, Giuseppe; Combrinck, Ludwig; Conway, John; Corey, Brian; Curtis, Ronald; Dassing, Reiner; Davis, Maria; de-Vicente, Pablo; De Witt, Aletha; Diakov, Alexey; Dickey, John; Diegel, Irv; Doi, Koichiro; Drewes, Hermann; Dube, Maurice; Elgered, Gunnar; Engelhardt, Gerald; Evangelista, Mark; Fan, Qingyuan; Fedotov, Leonid; Fey, Alan; Figueroa, Ricardo; Fukuzaki, Yoshihiro; Gambis, Daniel; Garcia-Espada, Susana; Gaume, Ralph; Gaylard, Michael; Geiger, Nicole; Gipson, John; Gomez, Frank; Gomez-Gonzalez, Jesus; Gordon, David; Govind, Ramesh; Gubanov, Vadim; Gulyaev, Sergei; Haas, Ruediger; Hall, David; Halsig, Sebastian; Hammargren, Roger; Hase, Hayo; Heinkelmann, Robert; Helldner, Leif; Herrera, Cristian; Himwich, Ed; Hobiger, Thomas; Holst, Christoph; Hong, Xiaoyu; Honma, Mareki; Huang, Xinyong; Hugentobler, Urs; Ichikawa, Ryuichi; Iddink, Andreas; Ihde, Johannes; Ilijin, Gennadiy; Ipatov, Alexander; Ipatova, Irina; Ishihara, Misao; Ivanov, D. 
V.; Jacobs, Chris; Jike, Takaaki; Johansson, Karl-Ake; Johnson, Heidi; Johnston, Kenneth; Ju, Hyunhee; Karasawa, Masao; Kaufmann, Pierre; Kawabata, Ryoji; Kawaguchi, Noriyuki; Kawai, Eiji; Kaydanovsky, Michael; Kharinov, Mikhail; Kobayashi, Hideyuki; Kokado, Kensuke; Kondo, Tetsuro; Korkin, Edward; Koyama, Yasuhiro; Krasna, Hana; Kronschnabl, Gerhard; Kurdubov, Sergey; Kurihara, Shinobu; Kuroda, Jiro; Kwak, Younghee; La Porta, Laura; Labelle, Ruth; Lamb, Doug; Lambert, Sébastien; Langkaas, Line; Lanotte, Roberto; Lavrov, Alexey; Le Bail, Karine; Leek, Judith; Li, Bing; Li, Huihua; Li, Jinling; Liang, Shiguang; Lindqvist, Michael; Liu, Xiang; Loesler, Michael; Long, Jim; Lonsdale, Colin; Lovell, Jim; Lowe, Stephen; Lucena, Antonio; Luzum, Brian; Ma, Chopo; Ma, Jun; Maccaferri, Giuseppe; Machida, Morito; MacMillan, Dan; Madzak, Matthias; Malkin, Zinovy; Manabe, Seiji; Mantovani, Franco; Mardyshkin, Vyacheslav; Marshalov, Dmitry; Mathiassen, Geir; Matsuzaka, Shigeru; McCarthy, Dennis; Melnikov, Alexey; Michailov, Andrey; Miller, Natalia; Mitchell, Donald; Mora-Diaz, Julian Andres; Mueskens, Arno; Mukai, Yasuko; Nanni, Mauro; Natusch, Tim; Negusini, Monia; Neidhardt, Alexander; Nickola, Marisa; Nicolson, George; Niell, Arthur; Nikitin, Pavel; Nilsson, Tobias; Ning, Tong; Nishikawa, Takashi; Noll, Carey; Nozawa, Kentarou; Ogaja, Clement; Oh, Hongjong; Olofsson, Hans; Opseth, Per Erik; Orfei, Sandro; Pacione, Rosa; Pazamickas, Katherine; Petrachenko, William; Pettersson, Lars; Pino, Pedro; Plank, Lucia; Ploetz, Christian; Poirier, Michael; Poutanen, Markku; Qian, Zhihan; Quick, Jonathan; Rahimov, Ismail; Redmond, Jay; Reid, Brett; Reynolds, John; Richter, Bernd; Rioja, Maria; Romero-Wolf, Andres; Ruszczyk, Chester; Salnikov, Alexander; Sarti, Pierguido; Schatz, Raimund; Scherneck, Hans-Georg; Schiavone, Francesco; Schreiber, Ulrich; Schuh, Harald; Schwarz, Walter; Sciarretta, Cecilia; Searle, Anthony; Sekido, Mamoru; Seitz, Manuela; Shao, Minghui; Shibuya, Kazuo; Shu, 
Fengchun; Sieber, Moritz; Skjaeveland, Asmund; Skurikhina, Elena; Smolentsev, Sergey; Smythe, Dan; Sousa, Don; Sovers, Ojars; Stanford, Laura; Stanghellini, Carlo; Steppe, Alan; Strand, Rich; Sun, Jing; Surkis, Igor; Takashima, Kazuhiro; Takefuji, Kazuhiro; Takiguchi, Hiroshi; Tamura, Yoshiaki; Tanabe, Tadashi; Tanir, Emine; Tao, An; Tateyama, Claudio; Teke, Kamil; Thomas, Cynthia; Thorandt, Volkmar; Thornton, Bruce; Tierno Ros, Claudia; Titov, Oleg; Titus, Mike; Tomasi, Paolo; Tornatore, Vincenza; Trigilio, Corrado; Trofimov, Dmitriy; Tsutsumi, Masanori; Tuccari, Gino; Tzioumis, Tasso; Ujihara, Hideki; Ullrich, Dieter; Uunila, Minttu; Venturi, Tiziana; Vespe, Francesco; Vityazev, Veniamin; Volvach, Alexandr; Vytnov, Alexander; Wang, Guangli; Wang, Jinqing; Wang, Lingling; Wang, Na; Wang, Shiqiang; Wei, Wenren; Weston, Stuart; Whitney, Alan; Wojdziak, Reiner; Yatskiv, Yaroslav; Yang, Wenjun; Ye, Shuhua; Yi, Sangoh; Yusup, Aili; Zapata, Octavio; Zeitlhoefler, Reinhard; Zhang, Hua; Zhang, Ming; Zhang, Xiuzhong; Zhao, Rongbing; Zheng, Weimin; Zhou, Ruixian; Zubko, Nataliya

    2015-01-01

Very Long Baseline Interferometry (VLBI) is a primary space-geodetic technique for determining precise coordinates on the Earth, for monitoring the variable Earth rotation and orientation with highest precision, and for deriving many other parameters of the Earth system. The International VLBI Service for Geodesy and Astrometry (IVS, http://ivscc.gsfc.nasa.gov/) is a service of the International Association of Geodesy (IAG) and the International Astronomical Union (IAU). The datasets published here are the results of individual VLBI sessions in the form of normal equations in SINEX 2.0 format (http://www.iers.org/IERS/EN/Organization/AnalysisCoordinator/SinexFormat/sinex.html; the SINEX 2.0 description is attached as a PDF), provided by the IVS as input for the next release of the International Terrestrial Reference Frame (ITRF): ITRF2014, the successor of the ITRF2008 release (Bockmann et al., 2009). For each session file, the normal equation systems contain elements for the coordinate components of all stations that participated in the respective session, as well as for the Earth orientation parameters (x-pole, y-pole, UT1, and their time derivatives, plus offsets to the IAU2006 precession-nutation components dX, dY; https://www.iau.org/static/resolutions/IAU2006_Resol1.pdf). The terrestrial part is free of datum. The datasets are the result of a weighted combination of the input of several IVS Analysis Centers. The IVS contribution to ITRF2014 is described in Bachmann et al. (2015); Schuh and Behrend (2012) provide a general overview of the VLBI method; details on the internal data handling can be found in Behrend (2013).

  5. Does gravity influence the visual line bisection task?

    PubMed

    Drakul, A; Bockisch, C J; Tarnutzer, A A

    2016-08-01

    The visual line bisection task (LBT) is sensitive to perceptual biases of visuospatial attention, showing slight leftward (for horizontal lines) and upward (for vertical lines) errors in healthy subjects. It may be solved in an egocentric or allocentric reference frame, and there is no obvious need for graviceptive input. However, for other visual line adjustments, such as the subjective visual vertical, otolith input is integrated. We hypothesized that graviceptive input is incorporated when performing the LBT and predicted reduced accuracy and precision when roll-tilted. Twenty healthy right-handed subjects repetitively bisected Earth-horizontal and body-horizontal lines in darkness. Recordings were obtained before, during, and after roll-tilt (±45°, ±90°) for 5 min each. Additionally, bisections of Earth-vertical and oblique lines were obtained in 17 subjects. When roll-tilted ±90° ear-down, bisections of Earth-horizontal (i.e., body-vertical) lines were shifted toward the direction of the head (P < 0.001). However, after correction for vertical line-bisection errors when upright, shifts disappeared. Bisecting body-horizontal lines while roll-tilted did not cause any shifts. The precision of Earth-horizontal line bisections decreased (P ≤ 0.006) when roll-tilted, while no such changes were observed for body-horizontal lines. Regardless of the trial condition and paradigm, the scanning direction of the bisecting cursor (leftward vs. rightward) significantly (P ≤ 0.021) affected line bisections. Our findings reject our hypothesis and suggest that gravity does not modulate the LBT. Roll-tilt-dependent shifts are instead explained by the headward bias when bisecting lines oriented along a body-vertical axis. Increased variability when roll-tilted likely reflects larger variability when bisecting body-vertical than body-horizontal lines. Copyright © 2016 the American Physiological Society.

  6. Probabilistic switching circuits in DNA

    PubMed Central

    Wilhelm, Daniel; Bruck, Jehoshua

    2018-01-01

A natural feature of molecular systems is their inherent stochastic behavior. A fundamental challenge related to the programming of molecular information processing systems is to develop a circuit architecture that controls the stochastic states of individual molecular events. Here we present a systematic implementation of probabilistic switching circuits, using DNA strand displacement reactions. Exploiting the intrinsic stochasticity of molecular interactions, we developed a simple, unbiased DNA switch: An input signal strand binds to the switch and releases an output signal strand with probability one-half. Using this unbiased switch as a molecular building block, we designed DNA circuits that convert an input signal to an output signal with any desired probability. Further, this probability can be switched between 2^n different values by simply varying the presence or absence of n distinct DNA molecules. We demonstrated several DNA circuits that have multiple layers and feedback, including a circuit that converts an input strand to an output strand with eight different probabilities, controlled by the combination of three DNA molecules. These circuits combine the advantages of digital and analog computation: They allow a small number of distinct input molecules to control a diverse signal range of output molecules, while keeping the inputs robust to noise and the outputs at precise values. Moreover, arbitrarily complex circuit behaviors can be implemented with just a single type of molecular building block. PMID:29339484
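The cascade construction this record describes, in which n control molecules select the output probability from 2^n dyadic values, can be sketched as a toy Monte Carlo simulation. The function names below are hypothetical, and the code models only the probability logic, not the strand-displacement chemistry:

```python
import random

def convert(bits, rng=random.random):
    """Simulate one signal passing through a cascade of fair switches.

    At stage i the signal takes the exit branch with probability 1/2;
    that branch releases an output only if control bit bits[i] is 1.
    Otherwise the signal continues to the next stage (or is absorbed
    after the last stage without producing an output).
    """
    for b in bits:
        if rng() < 0.5:
            return b == 1
    return False

def output_probability(bits):
    # Exact output probability: p = sum_i bits[i] / 2**(i+1)
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))
```

With three control bits the cascade realizes the eight dyadic probabilities 0, 1/8, ..., 7/8, mirroring the three-molecule, eight-probability circuit demonstrated in the paper.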

  7. Measuring circadian and acute light responses in mice using wheel running activity.

    PubMed

    LeGates, Tara A; Altimus, Cara M

    2011-02-04

Circadian rhythms are physiological functions that cycle over a period of approximately 24 hours (circadian: from circa, "about," and diem, "day"). They are responsible for timing our sleep/wake cycles and hormone secretion. Since this timing is not precisely 24 hours, it is synchronized to the solar day by light input. This is accomplished via photic input from the retina to the suprachiasmatic nucleus (SCN), which serves as the master pacemaker synchronizing peripheral clocks in other regions of the brain and peripheral tissues to the environmental light-dark cycle. The alignment of rhythms to this environmental light-dark cycle organizes particular physiological events into the correct temporal niche, which is crucial for survival. For example, mice sleep during the day and are active at night. This ability to consolidate activity to either the light or dark portion of the day is referred to as circadian photoentrainment and requires light input to the circadian clock. The activity of mice at night is robust, particularly in the presence of a running wheel. Measuring this behavior is a minimally invasive method that can be used to evaluate the functionality of the circadian system as well as light input to this system. The methods covered here are used to examine the circadian clock, light input to this system, and the direct influence of light on wheel running behavior.

  8. Deletion of Ten-m3 Induces the Formation of Eye Dominance Domains in Mouse Visual Cortex

    PubMed Central

    Merlin, Sam; Horng, Sam; Marotte, Lauren R.; Sur, Mriganka; Sawatari, Atomu

    2013-01-01

    The visual system is characterized by precise retinotopic mapping of each eye, together with exquisitely matched binocular projections. In many species, the inputs that represent the eyes are segregated into ocular dominance columns in primary visual cortex (V1), whereas in rodents, this does not occur. Ten-m3, a member of the Ten-m/Odz/Teneurin family, regulates axonal guidance in the retinogeniculate pathway. Significantly, ipsilateral projections are expanded in the dorsal lateral geniculate nucleus and are not aligned with contralateral projections in Ten-m3 knockout (KO) mice. Here, we demonstrate the impact of altered retinogeniculate mapping on the organization and function of V1. Transneuronal tracing and c-fos immunohistochemistry demonstrate that the subcortical expansion of ipsilateral input is conveyed to V1 in Ten-m3 KOs: Ipsilateral inputs are widely distributed across V1 and are interdigitated with contralateral inputs into eye dominance domains. Segregation is confirmed by optical imaging of intrinsic signals. Single-unit recording shows ipsilateral, and contralateral inputs are mismatched at the level of single V1 neurons, and binocular stimulation leads to functional suppression of these cells. These findings indicate that the medial expansion of the binocular zone together with an interocular mismatch is sufficient to induce novel structural features, such as eye dominance domains in rodent visual cortex. PMID:22499796

  9. Active disturbance rejection control based robust output feedback autopilot design for airbreathing hypersonic vehicles.

    PubMed

    Tian, Jiayi; Zhang, Shifeng; Zhang, Yinhui; Li, Tong

    2018-03-01

Since the motion control plant (y^(n) = f(⋅) + d) was repeatedly used to exemplify how active disturbance rejection control (ADRC) works when it was proposed, the integral chain system subject to matched disturbances is often regarded as a canonical form, and even misconstrued as the only form to which ADRC is applicable. In this paper, a systematic approach is first presented for applying ADRC to a generic nonlinear uncertain system with mismatched disturbances, and a robust output feedback autopilot for an airbreathing hypersonic vehicle (AHV) is devised on that basis. The key idea is to employ feedback linearization (FL) and the equivalent input disturbance (EID) technique to decouple the nonlinear uncertain system into several subsystems in canonical form, so that a classical or improved, linear or nonlinear ADRC controller can be designed directly for each subsystem. Notably, all disturbances are taken into account when implementing FL, rather than omitted as in previous research, which greatly enhances the controller's robustness against external disturbances. For the autopilot design, the ADRC strategy enables precise tracking of velocity and altitude reference commands in the presence of severe parametric perturbations and atmospheric disturbances, using only measurable output information. Bounded-input, bounded-output (BIBO) stability of the closed-loop system is analyzed. To illustrate the feasibility and superiority of this novel design, a series of comparative simulations with prominent and representative methods is carried out on a benchmark longitudinal AHV model. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Geometrical quality evaluation in laser cutting of Inconel-718 sheet by using Taguchi based regression analysis and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shrivastava, Prashant Kumar; Pandey, Arun Kumar

    2018-03-01

Inconel-718 is one of the most demanding advanced engineering materials because of its superior properties. Conventional machining techniques face many problems in cutting intricate profiles in this material owing to its low thermal conductivity, low elasticity, and high chemical affinity at elevated temperatures. Laser beam cutting is one of the advanced cutting methods that may be used to achieve geometrical accuracy with greater precision through suitable management of the input process parameters. In this research work, an experimental investigation of the pulsed Nd:YAG laser cutting of Inconel-718 has been carried out. The experiments were conducted using a well-planned L27 orthogonal array. The experimentally measured values of different quality characteristics have been used to develop second-order regression models of bottom kerf deviation (KD), bottom kerf width (KW), and kerf taper (KT). The developed models of the different quality characteristics have been utilized as quality functions for single-objective optimization using the particle swarm optimization (PSO) method. The optimum results obtained by the proposed hybrid methodology have been compared with experimental results; the comparison shows individual improvements of 75%, 12.67%, and 33.70% in bottom kerf deviation, bottom kerf width, and kerf taper, respectively. The parametric effects of the most significant input process parameters on the quality characteristics are also discussed.
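The optimization step this record describes, minimizing a fitted quality model with particle swarm optimization, can be illustrated with a generic PSO sketch. The objective used below is a stand-in quadratic, not the authors' kerf regression model, and all names are illustrative:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box given by bounds = [(lo, hi), ...]."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + cognitive pull (personal best) + social pull (global best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the study, f would be one of the fitted second-order regression models (KD, KW, or KT) and the bounds would be the ranges of the laser process parameters.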

  11. A Pilot Study of Contextual UMLS Indexing to Improve the Precision of Concept-based Representation in XML-structured Clinical Radiology Reports

    PubMed Central

    Huang, Yang; Lowe, Henry J.; Hersh, William R.

    2003-01-01

    Objective: Despite the advantages of structured data entry, much of the patient record is still stored as unstructured or semistructured narrative text. The issue of representing clinical document content remains problematic. The authors' prior work using an automated UMLS document indexing system has been encouraging but has been affected by the generally low indexing precision of such systems. In an effort to improve precision, the authors have developed a context-sensitive document indexing model to calculate the optimal subset of UMLS source vocabularies used to index each document section. This pilot study was performed to evaluate the utility of this indexing approach on a set of clinical radiology reports. Design: A set of clinical radiology reports that had been indexed manually using UMLS concept descriptors was indexed automatically by the SAPHIRE indexing engine. Using the data generated by this process the authors developed a system that simulated indexing, at the document section level, of the same document set using many permutations of a subset of the UMLS constituent vocabularies. Measurements: The precision and recall scores generated by simulated indexing for each permutation of two or three UMLS constituent vocabularies were determined. Results: While there was considerable variation in precision and recall values across the different subtypes of radiology reports, the overall effect of this indexing strategy using the best combination of two or three UMLS constituent vocabularies was an improvement in precision without significant impact of recall. Conclusion: In this pilot study a contextual indexing strategy improved overall precision in a set of clinical radiology reports. PMID:12925544

  12. A pilot study of contextual UMLS indexing to improve the precision of concept-based representation in XML-structured clinical radiology reports.

    PubMed

    Huang, Yang; Lowe, Henry J; Hersh, William R

    2003-01-01

    Despite the advantages of structured data entry, much of the patient record is still stored as unstructured or semistructured narrative text. The issue of representing clinical document content remains problematic. The authors' prior work using an automated UMLS document indexing system has been encouraging but has been affected by the generally low indexing precision of such systems. In an effort to improve precision, the authors have developed a context-sensitive document indexing model to calculate the optimal subset of UMLS source vocabularies used to index each document section. This pilot study was performed to evaluate the utility of this indexing approach on a set of clinical radiology reports. A set of clinical radiology reports that had been indexed manually using UMLS concept descriptors was indexed automatically by the SAPHIRE indexing engine. Using the data generated by this process the authors developed a system that simulated indexing, at the document section level, of the same document set using many permutations of a subset of the UMLS constituent vocabularies. The precision and recall scores generated by simulated indexing for each permutation of two or three UMLS constituent vocabularies were determined. While there was considerable variation in precision and recall values across the different subtypes of radiology reports, the overall effect of this indexing strategy using the best combination of two or three UMLS constituent vocabularies was an improvement in precision without significant impact of recall. In this pilot study a contextual indexing strategy improved overall precision in a set of clinical radiology reports.
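The exhaustive search this study describes amounts to scoring indexing precision and recall for every small combination of UMLS constituent vocabularies and keeping the best. A minimal sketch with made-up vocabulary and gold-standard concept sets (the names and data are illustrative, not the authors' system):

```python
from itertools import combinations

def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

def best_vocabulary_subset(concepts_by_vocab, gold, k=2):
    """Try every k-vocabulary combination; keep the one with the best precision."""
    best = None
    for subset in combinations(concepts_by_vocab, k):
        indexed = set().union(*(concepts_by_vocab[v] for v in subset))
        p, r = precision_recall(indexed, gold)
        if best is None or p > best[1]:
            best = (subset, p, r)
    return best
```

In the study this selection was done per document section, so different report sections could be indexed with different vocabulary subsets.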

  13. Overcoming gaps and bottlenecks to advance precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Maintaining a clear understanding of the technology gaps, knowledge needs, and training bottlenecks is required for improving adoption of precision agriculture. As an industry, precision agriculture embraces tools, methods, and practices that are constantly changing, requiring industry, education, a...

  14. Assembling Precise Truss Structures With Minimal Stresses

    NASA Technical Reports Server (NTRS)

    Sword, Lee F.

    1996-01-01

    Improved method of assembling precise truss structures involves use of simple devices. Tapered pins that fit in tapered holes indicate deviations from prescribed lengths. Method both helps to ensure precision of finished structures and minimizes residual stresses within structures.

  15. Fully automatic, multiorgan segmentation in normal whole body magnetic resonance imaging (MRI), using classification forests (CFs), convolutional neural networks (CNNs), and a multi-atlas (MA) approach.

    PubMed

    Lavdas, Ioannis; Glocker, Ben; Kamnitsas, Konstantinos; Rueckert, Daniel; Mair, Henrietta; Sandhu, Amandeep; Taylor, Stuart A; Aboagye, Eric O; Rockall, Andrea G

    2017-10-01

    As part of a program to implement automatic lesion detection methods for whole body magnetic resonance imaging (MRI) in oncology, we have developed, evaluated, and compared three algorithms for fully automatic, multiorgan segmentation in healthy volunteers. The first algorithm is based on classification forests (CFs), the second is based on 3D convolutional neural networks (CNNs) and the third algorithm is based on a multi-atlas (MA) approach. We examined data from 51 healthy volunteers, scanned prospectively with a standardized, multiparametric whole body MRI protocol at 1.5 T. The study was approved by the local ethics committee and written consent was obtained from the participants. MRI data were used as input data to the algorithms, while training was based on manual annotation of the anatomies of interest by clinical MRI experts. Fivefold cross-validation experiments were run on 34 artifact-free subjects. We report three overlap and three surface distance metrics to evaluate the agreement between the automatic and manual segmentations, namely the dice similarity coefficient (DSC), recall (RE), precision (PR), average surface distance (ASD), root-mean-square surface distance (RMSSD), and Hausdorff distance (HD). Analysis of variances was used to compare pooled label metrics between the three algorithms and the DSC on a 'per-organ' basis. A Mann-Whitney U test was used to compare the pooled metrics between CFs and CNNs and the DSC on a 'per-organ' basis, when using different imaging combinations as input for training. All three algorithms resulted in robust segmenters that were effectively trained using a relatively small number of datasets, an important consideration in the clinical setting. Mean overlap metrics for all the segmented structures were: CFs: DSC = 0.70 ± 0.18, RE = 0.73 ± 0.18, PR = 0.71 ± 0.14, CNNs: DSC = 0.81 ± 0.13, RE = 0.83 ± 0.14, PR = 0.82 ± 0.10, MA: DSC = 0.71 ± 0.22, RE = 0.70 ± 0.34, PR = 0.77 ± 0.15. 
Mean surface distance metrics for all the segmented structures were: CFs: ASD = 13.5 ± 11.3 mm, RMSSD = 34.6 ± 37.6 mm and HD = 185.7 ± 194.0 mm, CNNs: ASD = 5.48 ± 4.84 mm, RMSSD = 17.0 ± 13.3 mm and HD = 199.0 ± 101.2 mm, MA: ASD = 4.22 ± 2.42 mm, RMSSD = 6.13 ± 2.55 mm, and HD = 38.9 ± 28.9 mm. The pooled performance of CFs improved when all imaging combinations (T2w + T1w + DWI) were used as input, while the performance of CNNs deteriorated, but in neither case significantly. CNNs with T2w images as input performed significantly better than CFs with all imaging combinations as input for all anatomical labels, except for the bladder. Three state-of-the-art algorithms were developed and used to automatically segment major organs and bones in whole body MRI; good agreement with manual segmentations performed by clinical MRI experts was observed. CNNs perform favorably when using T2w volumes as input. Using multimodal MRI data as input to CNNs did not improve the segmentation performance. © 2017 American Association of Physicists in Medicine.
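The overlap metrics reported in this record (DSC, recall, precision) are straightforward to compute from voxel label sets; a minimal sketch, treating each segmentation as a set of voxel identifiers:

```python
def dice(auto, manual):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    auto, manual = set(auto), set(manual)
    return 2 * len(auto & manual) / (len(auto) + len(manual))

def recall(auto, manual):
    # Fraction of the manual (gold) segmentation recovered automatically.
    auto, manual = set(auto), set(manual)
    return len(auto & manual) / len(manual)

def precision(auto, manual):
    # Fraction of the automatic segmentation that is correct.
    auto, manual = set(auto), set(manual)
    return len(auto & manual) / len(auto)
```

The surface distance metrics (ASD, RMSSD, HD) additionally require the voxel coordinates of the two segmentation boundaries and are omitted here.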

  16. Syringe Injectable Electronics: Precise Targeted Delivery with Quantitative Input/Output Connectivity.

    PubMed

    Hong, Guosong; Fu, Tian-Ming; Zhou, Tao; Schuhmann, Thomas G; Huang, Jinlin; Lieber, Charles M

    2015-10-14

    Syringe-injectable mesh electronics with tissue-like mechanical properties and open macroporous structures is an emerging powerful paradigm for mapping and modulating brain activity. Indeed, the ultraflexible macroporous structure has exhibited unprecedented minimal/noninvasiveness and the promotion of attractive interactions with neurons in chronic studies. These same structural features also pose new challenges and opportunities for precise targeted delivery in specific brain regions and quantitative input/output (I/O) connectivity needed for reliable electrical measurements. Here, we describe new results that address in a flexible manner both of these points. First, we have developed a controlled injection approach that maintains the extended mesh structure during the "blind" injection process, while also achieving targeted delivery with ca. 20 μm spatial precision. Optical and microcomputed tomography results from injections into tissue-like hydrogel, ex vivo brain tissue, and in vivo brains validate our basic approach and demonstrate its generality. Second, we present a general strategy to achieve up to 100% multichannel I/O connectivity using an automated conductive ink printing methodology to connect the mesh electronics and a flexible flat cable, which serves as the standard "plug-in" interface to measurement electronics. Studies of resistance versus printed line width were used to identify optimal conditions, and moreover, frequency-dependent noise measurements show that the flexible printing process yields values comparable to commercial flip-chip bonding technology. Our results address two key challenges faced by syringe-injectable electronics and thereby pave the way for facile in vivo applications of injectable mesh electronics as a general and powerful tool for long-term mapping and modulation of brain activity in fundamental neuroscience through therapeutic biomedical studies.

  17. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, considering the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, the control usually introduces a significant deformation of the system state. We therefore propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information to improve the parameter estimation precision, and (2) minimizing the deformation of the system state to maintain its fidelity. Finally, simulations of a simplified ε-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.
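The ε-constrained treatment of two conflicting objectives can be sketched abstractly: maximize one objective subject to a bound on the other, then sweep the bound to trace a Pareto front. The candidate set and objective functions below are illustrative placeholders, not the paper's quantum model:

```python
def epsilon_constrained(candidates, f1, f2, eps):
    """Maximize f1 over candidates subject to the constraint f2(x) <= eps."""
    feasible = [x for x in candidates if f2(x) <= eps]
    return max(feasible, key=f1) if feasible else None

def pareto_trace(candidates, f1, f2, eps_values):
    # Sweeping eps over a range traces an approximation of the Pareto front
    # between the two objectives.
    return [(eps, epsilon_constrained(candidates, f1, f2, eps)) for eps in eps_values]
```

In the paper's setting, f1 would play the role of the Fisher information and f2 the state deformation, with ε bounding the allowed deformation.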

  18. Laser Annealing on the Surface Treatment of Thin Super Elastic NiTi Wire

    NASA Astrophysics Data System (ADS)

    Samal, S.; Heller, L.; Brajer, J.; Tyc, O.; Kadrevek, L.; Sittner, P.

    2018-05-01

The aim of this research is to anneal the surface of superelastic NiTi shape memory alloy wire with a solid-state laser beam. The laser surface treatment was carried out locally on the NiTi wire as a fast, selective surface heat treatment that enables precise tuning of localized material properties without precipitation. Both as-drawn (hard) and straight-annealed NiTi wires were considered for laser annealing at an input power of 3 W, with the laser beam precisely focused (beam height at 14.3% of the Z-axis) to a spot size of 1 mm. The straight-annealed wire is of particular interest because of its low-temperature shape-setting behavior and its use by companies for stent materials. Variable parameters such as the laser scanning speed and the tensile stress on the NiTi wire were optimized to observe the effect of the laser on the sample. Superelastic, straight-annealed NiTi wires (d: 0.10 mm) were held prestrained at the end of the superelastic plateau (ε: 5∼6.5%) by a tensile machine (Mitter miniature testing rig) at room temperature (RT). Simultaneously, the hardness of the wires along the cross-section was measured by the nano-indentation (NI) method, and hardness variations corresponding to phase changes were correlated with the NI tests. The laser-annealed NiTi wire shows better fatigue performance, with an improvement to 6500 cycles.

  19. Quality Matters™: An Educational Input in an Ongoing Design-Based Research Project

    ERIC Educational Resources Information Center

    Adair, Deborah; Shattuck, Kay

    2015-01-01

    Quality Matters (QM) has been transforming established best practices and online education-based research into an applicable, scalable course level improvement process for the last decade. In this article, the authors describe QM as an ongoing design-based research project and an educational input for improving online education.

  20. Olfactory Bulb Deep Short-Axon Cells Mediate Widespread Inhibition of Tufted Cell Apical Dendrites

    PubMed Central

    LaRocca, Greg

    2017-01-01

    In the main olfactory bulb (MOB), the first station of sensory processing in the olfactory system, GABAergic interneuron signaling shapes principal neuron activity to regulate olfaction. However, a lack of known selective markers for MOB interneurons has strongly impeded cell-type-selective investigation of interneuron function. Here, we identify the first selective marker of glomerular layer-projecting deep short-axon cells (GL-dSACs) and investigate systematically the structure, abundance, intrinsic physiology, feedforward sensory input, neuromodulation, synaptic output, and functional role of GL-dSACs in the mouse MOB circuit. GL-dSACs are located in the internal plexiform layer, where they integrate centrifugal cholinergic input with highly convergent feedforward sensory input. GL-dSAC axons arborize extensively across the glomerular layer to provide highly divergent yet selective output onto interneurons and principal tufted cells. GL-dSACs are thus capable of shifting the balance of principal tufted versus mitral cell activity across large expanses of the MOB in response to diverse sensory and top-down neuromodulatory input. SIGNIFICANCE STATEMENT The identification of cell-type-selective molecular markers has fostered tremendous insight into how distinct interneurons shape sensory processing and behavior. In the main olfactory bulb (MOB), inhibitory circuits regulate the activity of principal cells precisely to drive olfactory-guided behavior. However, selective markers for MOB interneurons remain largely unknown, limiting mechanistic understanding of olfaction. Here, we identify the first selective marker of a novel population of deep short-axon cell interneurons with superficial axonal projections to the sensory input layer of the MOB. 
Using this marker, together with immunohistochemistry, acute slice electrophysiology, and optogenetic circuit mapping, we reveal that this novel interneuron population integrates centrifugal cholinergic input with broadly tuned feedforward sensory input to modulate principal cell activity selectively. PMID:28003347

  1. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

Bugs in user input sanitization of software systems often lead to vulnerabilities. Many of them are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant semantics, using finite state transducers (FSTs). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FSTs. A compact representation of FSTs is implemented in SUSHI, a string constraint solver, and applied to detecting vulnerabilities in web applications.
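The greedy versus reluctant substitution semantics that this paper models can be seen directly in an ordinary regex engine. Python's re module is used here purely for illustration; the paper itself models these semantics with finite state transducers:

```python
import re

s = "<a><b>"

# Greedy: .* consumes as much as possible, so a single match
# spans the whole string and one replacement is made.
greedy = re.sub(r"<.*>", "X", s)      # -> "X"

# Reluctant (lazy): .*? consumes as little as possible, so each
# tag is matched and replaced separately.
reluctant = re.sub(r"<.*?>", "X", s)  # -> "XX"
```

The two semantics produce different outputs on the same input, which is exactly why a string constraint solver must model the intended replacement semantics precisely.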

  2. A Combined Hazard Index Fire Test Methodology for Aircraft Cabin Materials. Volume II.

    DTIC Science & Technology

    1982-04-01

Technical Center. The report was divided into two parts: Part I described the improved technology investigated to upgrade existing methods for testing...proper implementation of the computerized data acquisition and reduction programs will improve materials hazards measurement precision. Thus, other...the hold chamber before and after injection of a sample, will improve precision and repeatability of measurement. The listed data acquisition and

  3. Cognition-Based Approaches for High-Precision Text Mining

    ERIC Educational Resources Information Center

    Shannon, George John

    2017-01-01

    This research improves the precision of information extraction from free-form text via the use of cognitive-based approaches to natural language processing (NLP). Cognitive-based approaches are an important, and relatively new, area of research in NLP and search, as well as linguistics. Cognitive approaches enable significant improvements in both…

  4. Precision Mass Measurements of 129-131Cd and Their Impact on Stellar Nucleosynthesis via the Rapid Neutron Capture Process

    NASA Astrophysics Data System (ADS)

    Atanasov, D.; Ascher, P.; Blaum, K.; Cakirli, R. B.; Cocolios, T. E.; George, S.; Goriely, S.; Herfurth, F.; Janka, H.-T.; Just, O.; Kowalska, M.; Kreim, S.; Kisler, D.; Litvinov, Yu. A.; Lunney, D.; Manea, V.; Neidherr, D.; Rosenbusch, M.; Schweikhard, L.; Welker, A.; Wienholtz, F.; Wolf, R. N.; Zuber, K.

    2015-12-01

    Masses adjacent to the classical waiting-point nuclide 130Cd have been measured by using the Penning-trap spectrometer ISOLTRAP at ISOLDE/CERN. We find a significant deviation of over 400 keV from earlier values evaluated by using nuclear beta-decay data. The new measurements show the reduction of the N =82 shell gap below the doubly magic 132Sn. The nucleosynthesis associated with the ejected wind from type-II supernovae as well as from compact object binary mergers is studied, by using state-of-the-art hydrodynamic simulations. We find a consistent and direct impact of the newly measured masses on the calculated abundances in the A =128 - 132 region and a reduction of the uncertainties from the precision mass input data.

  5. Attosecond-resolution Hong-Ou-Mandel interferometry.

    PubMed

    Lyons, Ashley; Knee, George C; Bolduc, Eliot; Roger, Thomas; Leach, Jonathan; Gauger, Erik M; Faccio, Daniele

    2018-05-01

    When two indistinguishable photons are each incident on separate input ports of a beamsplitter, they "bunch" deterministically, exiting via the same port as a direct consequence of their bosonic nature. This two-photon interference effect has long held the potential for application in precision measurement of time delays, such as those induced by transparent specimens with unknown thickness profiles. However, the technique has never achieved resolutions significantly better than the few-femtosecond (micrometer) scale other than in a common-path geometry that severely limits applications. We develop the precision of Hong-Ou-Mandel interferometry toward the ultimate limits dictated by statistical estimation theory, achieving few-attosecond (or nanometer path length) scale resolutions in a dual-arm geometry, thus providing access to length scales pertinent to cell biology and monoatomic layer two-dimensional materials.
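
    As a sketch of the estimation problem described above, the hedged example below models the coincidence dip as a Gaussian (a common textbook form, not the paper's exact likelihood) and recovers an unknown delay by least squares; widths, delays, and noise levels are all illustrative.

```python
import numpy as np

def hom_coincidence(tau, tau0, sigma, visibility=1.0):
    """Model coincidence probability of a HOM dip with Gaussian wavepackets:
    P(tau) = 0.5 * (1 - V * exp(-(tau - tau0)^2 / (2 sigma^2)))."""
    return 0.5 * (1.0 - visibility * np.exp(-(tau - tau0) ** 2 / (2 * sigma ** 2)))

def estimate_delay(taus, counts, sigma, visibility=1.0):
    """Grid-search least-squares estimate of the unknown delay tau0."""
    grid = np.linspace(taus.min(), taus.max(), 2001)
    residuals = [np.sum((counts - hom_coincidence(taus, t0, sigma, visibility)) ** 2)
                 for t0 in grid]
    return grid[int(np.argmin(residuals))]

rng = np.random.default_rng(0)
sigma = 10.0        # dip width (illustrative units, e.g. fs)
true_delay = 3.2
taus = np.linspace(-50, 50, 201)
# Simulated normalized coincidence rates with small additive noise
data = hom_coincidence(taus, true_delay, sigma) + rng.normal(0, 0.005, taus.size)
delay_hat = estimate_delay(taus, data, sigma)
```

    In practice the paper pushes precision far beyond such a naive scan by working near the limits set by the Fisher information, but the least-squares picture above is the starting point.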

  6. Multichannel low power time-to-digital converter card with 21 ps precision and full scale range up to 10 μs

    NASA Astrophysics Data System (ADS)

    Tamborini, D.; Portaluppi, D.; Villa, F.; Tisa, S.; Tosi, A.

    2014-11-01

    We present a Time-to-Digital Converter (TDC) card with a compact form factor, suitable for multichannel timing instruments or for integration into more complex systems. The TDC Card provides 10 ps timing resolution over the whole measurement range, which is selectable from 160 ns up to 10 μs, reaching 21 ps rms precision, 1.25% LSB rms differential nonlinearity, up to 3 Mconversion/s with 400 mW power consumption. The I/O edge card connector provides timing data readout through either a parallel bus or a 100 MHz serial interface and further measurement information like input signal rate and valid conversion rate (typically useful for time-correlated single-photon counting application) through an independent serial link.

  7. Multichannel low power time-to-digital converter card with 21 ps precision and full scale range up to 10 μs.

    PubMed

    Tamborini, D; Portaluppi, D; Villa, F; Tisa, S; Tosi, A

    2014-11-01

    We present a Time-to-Digital Converter (TDC) card with a compact form factor, suitable for multichannel timing instruments or for integration into more complex systems. The TDC Card provides 10 ps timing resolution over the whole measurement range, which is selectable from 160 ns up to 10 μs, reaching 21 ps rms precision, 1.25% LSB rms differential nonlinearity, up to 3 Mconversion/s with 400 mW power consumption. The I/O edge card connector provides timing data readout through either a parallel bus or a 100 MHz serial interface and further measurement information like input signal rate and valid conversion rate (typically useful for time-correlated single-photon counting application) through an independent serial link.
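
    The card's headline figures can be sanity-checked numerically. In the hedged sketch below, only the 10 ps LSB and the ~21 ps rms precision are taken from the abstract; the repeated measurements of a fixed delay are synthetic.

```python
import numpy as np

LSB_PS = 10.0  # timing resolution: 10 ps per code (from the abstract)

def codes_to_ps(codes):
    """Convert raw TDC output codes to picoseconds."""
    return np.asarray(codes, dtype=float) * LSB_PS

def rms_precision(codes):
    """Single-shot rms precision over repeated measurements of one delay."""
    return float(np.std(codes_to_ps(codes)))

rng = np.random.default_rng(1)
true_ps = 1234.5   # hypothetical fixed delay under test
# Gaussian jitter of ~21 ps rms, quantized to the 10 ps LSB
samples = np.round((true_ps + rng.normal(0, 21.0, 10000)) / LSB_PS).astype(int)
measured_rms = rms_precision(samples)
```

    The measured rms comes out close to the 21 ps figure, slightly inflated by the LSB^2/12 quantization variance of the 10 ps code width.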

  8. Sentinel-2A: Orbit Modelling Improvements and their Impact on the Orbit Prediction

    NASA Astrophysics Data System (ADS)

    Peter, Heike; Otten, Michiel; Fernández Sánchez, Jaime; Fernández Martín, Carlos; Féménias, Pierre

    2016-07-01

    Sentinel-2A is the second satellite of the European Copernicus Programme. The satellite was launched on 23 June 2015 and has been operational since mid-October 2015. This optical mission carries a GPS receiver for precise orbit determination. The Copernicus POD (Precise Orbit Determination) Service is in charge of generating precise orbital products and auxiliary files for Sentinel-2A as well as for the Sentinel-1 and -3 missions. The accuracy requirements for the Sentinel-2A orbit products are not very stringent: 3 m in 3D (3 sigma) for the near real-time (NRT) orbit and 10 m in 2D (3 sigma) for the predicted orbit. Fulfilment of the orbit accuracy requirements is normally not an issue. The Copernicus POD Service aims, however, to provide the best possible orbits for all three Sentinel missions. Therefore, a sophisticated box-wing model is generated for the Sentinel-2 satellite, as is done for the other two missions. Additionally, the solar wing of the satellite is rewound during eclipse, which has to be modelled accordingly. The quality of the orbit prediction depends on the results of the orbit estimation performed before it. The value of the last estimate of each parameter is taken for the orbit propagation; i.e., when estimating ten atmospheric drag coefficients per 24 h, the value of the last coefficient is used as a fixed parameter for the subsequent orbit prediction. The question is whether the prediction might be stabilised by, e.g., using an average of all ten coefficients. This paper presents the status and the quality of the Sentinel-2 orbit determination in the operational environment of the Copernicus POD Service. The impact of the orbit model improvements on the NRT and predicted orbits is studied in detail. Changes in the orbit parametrization as well as in the settings for the orbit propagation are investigated. In addition, the impact of the quality of the input GPS orbit and clock product on the Sentinel-2A orbit prediction results is checked. The results of this study not only improve the Sentinel-2 orbit products but will also support the generation of reliable orbit predictions for the Sentinel-3 mission. The Sentinel-3 satellite is equipped with a laser retro-reflector, and reliable orbit predictions are therefore very important to guarantee continuous support of the satellite laser tracking stations.
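
    The last-coefficient-versus-average question raised in the abstract can be illustrated numerically. In this hedged sketch all numbers are invented (the real coefficients come from POD estimation): ten noisy per-24 h drag-coefficient estimates are reduced either to the last value or to their mean, and the rms error of each choice is compared over many synthetic trials.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 2.2        # hypothetical slowly varying drag coefficient (invented)
trials = 2000
sq_err_last = sq_err_mean = 0.0
for _ in range(trials):
    # Ten noisy per-24h estimates of the coefficient
    estimates = truth + rng.normal(0.0, 0.3, 10)
    sq_err_last += (estimates[-1] - truth) ** 2    # fix the last estimate
    sq_err_mean += (estimates.mean() - truth) ** 2  # average all ten
rms_last = (sq_err_last / trials) ** 0.5
rms_mean = (sq_err_mean / trials) ** 0.5
# Averaging suppresses estimation noise by roughly 1/sqrt(10)
```

    When the true coefficient varies slowly relative to the estimation noise, the average is the more stable choice; the trade-off reverses if the coefficient drifts quickly within the 24 h window.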

  9. Fusion of Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2006-09-01

    combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat...equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function, u0(t...differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random

  10. Generalized Helicopter Rotor Performance Predictions

    DTIC Science & Technology

    1977-09-01

    behavior. In order to use this routine, the user must input a negative number for the variable XITLIM, item 73...the values provided in Section E. It is realized that available data on airfoil behavior at large angles of attack are very limited, but so is the...where dynamic pressure is low, little precision is lost in performance calculation by using one common representation for most airfoil behavior. As a

  11. A Functional Examination of Intermediate Cognitive Processes.

    DTIC Science & Technology

    1986-01-01

    memory) and a body of accumulated information referred to as "semantic memory". In contrast with episodic memory (which most psychological experiments...referents of input signals. (p. 386) But as Eysenck (1984) points out, there is "no precise dividing line" between episodic and semantic memory and although...involve), Tulving suggests that semantic memory is the more crucial to performing everyday tasks: It is a mental thesaurus, the

  12. DORIS/Jason-2: Better than 10 cm on-board orbits available for Near-Real-Time Altimetry

    NASA Astrophysics Data System (ADS)

    Jayles, C.; Chauveau, J. P.; Rozo, F.

    2010-12-01

    DIODE (DORIS Immediate Orbit on-board Determination) is a real-time on-board orbit determination software package, embedded in the DORIS receiver. The purpose of this paper is to focus on DIODE performance. After a description of the recent DORIS evolutions, we detail how compliance with specifications is verified during extensive ground tests before the launch, then during the in-flight commissioning phase just after the launch, and how well the specifications are met in the routine phase and today. Future improvements are also discussed for Jason-2 as well as for the next missions. The complete DORIS ground validation using the DORIS simulator and new DORIS test equipment showed prior to the Jason-2 flight that every functional requirement was fulfilled, and also that better than 10 cm real-time DIODE orbits would be achieved on-board Jason-2. The first year of Jason-2 confirmed this, and after correction of a slowly evolving polar motion error at the end of the commissioning phase, the DIODE on-board orbits are indeed better than the 10 cm specification: at the beginning of the routine phase, the discrepancy was already 7.7 cm Root-Mean-Square (RMS) in the radial component as compared to the final Precise Orbit Ephemerides (POE) orbit. Since the first day of Jason-2 cycle 1, the real-time DIODE orbits have been delivered in the altimetry fast-delivery products. Their accuracy and their 100% availability make them a key input to fairly precise Near-Real-Time Altimetry processing. Time-tagging is at the microsecond level. In parallel, a few corrections (quaternion problem) and improvements have been gathered in an enhanced version of DIODE, which is already implemented and validated. With this new version, a 5 cm radial accuracy is achieved during ground validation over more than Jason-2's first year (cycles 1-43, from July 12th, 2008 to September 11th, 2009). The Seattle Ocean Surface Topography Science Team Meeting (OSTST) has recommended an upload of this v4.02 version on-board Jason-2 in order to benefit from more accurate real-time orbits. For the future, perhaps the most important point of this work is that a 9 mm consistency is observed on-ground between simulated and adjusted orbits, proving that the DORIS measurement is very precisely and properly modelled in the DIODE navigation software. This implies that improvement of DIODE accuracy is still possible and should be driven by enhancement of the physical models: forces and perturbations of the satellite motion, and radio-frequency phenomena perturbing the measurements. A 2 cm accuracy is possible with future versions, if analysis and model improvements continue to progress.

  13. Early segregation of layered projections from the lateral superior olivary nucleus to the central nucleus of the inferior colliculus in the neonatal cat

    PubMed Central

    Gabriele, Mark L.; Shahmoradian, Sarah H.; French, Christopher C.; Henkel, Craig K.; McHaffie, John G.

    2007-01-01

    The central nucleus of the inferior colliculus (IC) is a laminated structure that receives multiple converging afferent projections. These projections terminate in a layered arrangement and are aligned with dendritic arbors of the predominant disc-shaped neurons, forming fibrodendritic laminae. Within this structural framework, inputs terminate in a precise manner, establishing a mosaic of partially overlapping domains that likely define functional compartments. Although several of these patterned inputs have been described in the adult, relatively little is known about their organization prior to hearing onset. The present study used the lipophilic carbocyanine dyes DiI and DiD to examine the ipsilateral and contralateral projections from the lateral superior olivary (LSO) nucleus to the IC in a developmental series of paraformaldehyde-fixed kitten tissue. By birth, the crossed and uncrossed projections had reached the IC and were distributed across the frequency axis of the central nucleus. At this earliest postnatal stage, projections already exhibited a characteristic banded arrangement similar to that described in the adult. The heaviest terminal fields of the two inputs were always complementary in nature, with the ipsilateral input appearing slightly denser. This early arrangement of interdigitating ipsilateral and contralateral LSO axonal bands that occupy adjacent sublayers supports the idea that the initial establishment of this highly organized mosaic of inputs that defines distinct synaptic domains within the IC occurs largely in the absence of auditory experience. Potential developmental mechanisms that may shape these highly ordered inputs prior to hearing onset are discussed. PMID:17850770

  14. Precise locating approach of the beacon based on gray gradient segmentation interpolation in satellite optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan

    2017-03-01

    Accurate location computation for a beacon is an important factor of the reliability of satellite optical communications. However, location precision is generally limited by the resolution of CCD. How to improve the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. The method is called a "gradient segmentation interpolation approach," or simply, a GSI (gradient segmentation interpolation) algorithm. To take full advantage of the pixels of the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by the simulation experiment. Finally, an experiment is established to verify GSI and SSW algorithms. The results indicate that GSI and SSW algorithms can improve locating accuracy over that calculated by a traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.
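
    For reference, the traditional gray (intensity-weighted) centroid that the GSI and SSW algorithms improve upon fits in a few lines. The sketch below uses a synthetic Gaussian beacon spot with a known subpixel centre (all values illustrative); the paper's methods add gradient-based segmentation and interpolation on top of this baseline.

```python
import numpy as np

def gray_centroid(img):
    """Traditional intensity-weighted ("gray") centroid of a beacon image."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic beacon: Gaussian spot with a known subpixel centre (illustrative)
ys, xs = np.mgrid[0:15, 0:15]
cx, cy = 7.3, 6.8
spot = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * 1.5 ** 2))
x_hat, y_hat = gray_centroid(spot)   # recovers (cx, cy) to well under a pixel
```

    On a clean, well-sampled spot this baseline is already subpixel-accurate; the paper's contribution is in making the estimate robust when CCD resolution, noise, and the beacon's intensity profile limit the plain centroid.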

  15. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
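
    The input-consistency question generalizes to a toy problem. The hedged sketch below uses a scalar first-order system and a crude least-squares estimator, not the paper's aircraft model or maximum likelihood machinery, to compare the ensemble variance of a parameter estimate under square-wave versus sine-wave excitation.

```python
import numpy as np

def estimate_a(u, a_true=0.9, noise=0.05, rng=None):
    """One noisy run of x[k+1] = a x[k] + u[k], then a least-squares
    estimate of 'a' from the measured states (a crude stand-in for
    maximum likelihood estimation)."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.zeros(len(u) + 1)
    for k in range(len(u)):
        x[k + 1] = a_true * x[k] + u[k]
    y = x + rng.normal(0, noise, x.size)          # noisy measurements
    return float(np.sum(y[:-1] * (y[1:] - u)) / np.sum(y[:-1] ** 2))

rng = np.random.default_rng(3)
k = np.arange(100)
square = np.sign(np.sin(2 * np.pi * k / 25))
sine = np.sin(2 * np.pi * k / 25)
est_sq = [estimate_a(square, rng=rng) for _ in range(500)]
est_si = [estimate_a(sine, rng=rng) for _ in range(500)]
# The more energetic square-wave input excites the state more strongly,
# giving a smaller ensemble variance of the estimate.
```

    This mirrors the flight-test finding qualitatively: input shape changes how well a given parameter is excited, and hence the consistency of its estimate across repeated runs.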

  16. Use of controlled vocabularies to improve biomedical information retrieval tasks.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Vishnyakova, Dina; Ruch, Patrick; Lovis, Christian

    2013-01-01

    The high heterogeneity of biomedical vocabulary is a major obstacle for information retrieval in large biomedical collections. Therefore, using biomedical controlled vocabularies is crucial for managing these contents. We investigate the impact of query expansion based on controlled vocabularies to improve the effectiveness of two search engines. Our strategy relies on the enrichment of users' queries with additional terms, directly derived from such vocabularies applied to infectious diseases and chemical patents. We observed that query expansion based on pathogen names resulted in improvements of the top-precision of our first search engine, while the normalization of diseases degraded the top-precision. The expansion of chemical entities, which was performed on the second search engine, positively affected the mean average precision. We have shown that query expansion of some types of biomedical entities has a great potential to improve search effectiveness; therefore a fine-tuning of query expansion strategies could help improving the performances of search engines.
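
    A minimal form of the strategy, dictionary-backed expansion of user queries, can be sketched as follows (the vocabulary entries are invented for illustration and are far smaller than real controlled vocabularies such as MeSH):

```python
# Controlled-vocabulary query expansion, minimal sketch.
# Vocabulary entries are invented; real systems use resources such as MeSH.
CONTROLLED_VOCAB = {
    "mrsa": ["methicillin-resistant staphylococcus aureus"],
    "tb": ["tuberculosis", "mycobacterium tuberculosis"],
}

def expand_query(query):
    """Enrich a user query with synonyms from the controlled vocabulary."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(CONTROLLED_VOCAB.get(term, []))
    return " ".join(expanded)

expanded = expand_query("TB treatment")   # adds the vocabulary synonyms
```

    The paper's observation that some entity types (pathogens, chemicals) help while others (normalized disease names) hurt suggests that such expansion tables should be enabled per entity type, not globally.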

  17. Quantum dot SOA input power dynamic range improvement for differential-phase encoded signals.

    PubMed

    Vallaitis, T; Bonk, R; Guetlein, J; Hillerkuss, D; Li, J; Brenot, R; Lelarge, F; Duan, G H; Freude, W; Leuthold, J

    2010-03-15

    Experimentally we find a 10 dB input power dynamic range advantage for amplification of phase encoded signals with quantum dot SOA as compared to low-confinement bulk SOA. An analysis of amplitude and phase effects shows that this improvement can be attributed to the lower alpha-factor found in QD SOA.

  18. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    NASA Astrophysics Data System (ADS)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. For this purpose, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, DRNN is constructed by the stacks of the recurrent hidden layer to automatically extract the features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.
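
    The first step, turning a raw vibration signal into a sequence of spectra for the recurrent layers, is easy to sketch. The example below (segment count and sampling rate are assumed, not taken from the paper) splits a synthetic signal into segments and keeps each segment's magnitude spectrum, roughly halving the input size relative to the raw samples:

```python
import numpy as np

def spectrum_sequence(signal, n_segments=4):
    """Split a vibration record into segments and keep each segment's
    magnitude spectrum (the input-size-reducing preprocessing step)."""
    segments = np.array_split(np.asarray(signal, dtype=float), n_segments)
    return [np.abs(np.fft.rfft(s)) for s in segments]

fs = 1000.0                          # assumed sampling rate
t = np.arange(0, 1, 1 / fs)
raw = np.sin(2 * np.pi * 60 * t)     # synthetic bearing-like vibration
seq = spectrum_sequence(raw)         # 4 spectra of 250//2 + 1 = 126 bins each
```

    The resulting spectrum sequence would then be fed, timestep by timestep, into the stacked recurrent hidden layers the paper describes.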

  19. Evaluation of the Sentinel-3 Hydrologic Altimetry Processor prototypE (SHAPE) methods.

    NASA Astrophysics Data System (ADS)

    Benveniste, J.; Garcia-Mondéjar, A.; Bercher, N.; Fabry, P. L.; Roca, M.; Varona, E.; Fernandes, J.; Lazaro, C.; Vieira, T.; David, G.; Restano, M.; Ambrózio, A.

    2017-12-01

    Inland water scenes are highly variable, both in space and time, which leads to a much broader range of radar signatures than ocean surfaces. This applies to both LRM and "SAR" mode (SARM) altimetry. Nevertheless the enhanced along-track resolution of SARM altimeters should help improve the accuracy and precision of inland water height measurements from satellite. The SHAPE project - Sentinel-3 Hydrologic Altimetry Processor prototypE - which is funded by ESA through the Scientific Exploitation of Operational Missions Programme Element (contract number 4000115205/15/I-BG) aims at preparing for the exploitation of Sentinel-3 data over the inland water domain. The SHAPE Processor implements all of the steps necessary to derive rivers and lakes water levels and discharge from Delay-Doppler Altimetry and perform their validation against in situ data. The processor uses FBR CryoSat-2 and L1A Sentinel-3A data as input and also various ancillary data (proc. param., water masks, L2 corrections, etc.), to produce surface water levels. At a later stage, water level data are assimilated into hydrological models to derive river discharge. This poster presents the improvements obtained with the new methods and algorithms over the regions of interest (Amazon and Danube rivers, Vanern and Titicaca lakes).

  20. Fabrication of Silicon Backshorts with Improved Out-of-Band Rejection for Waveguide-Coupled Superconducting Detectors

    NASA Technical Reports Server (NTRS)

    Crowe, Erik J.; Bennett, Charles L.; Chuss, David T.; Denis, Kevin L.; Eimer, Joseph; Lourie, Nathan; Marriage, Tobias; Moseley, Samuel H.; Rostem, Karwan; Stevenson, Thomas R.

    2012-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a ground-based instrument that will measure the polarization of the cosmic microwave background to search for gravitational waves from a posited epoch of inflation early in the universe's history. This measurement will require integration of superconducting transition-edge sensors with microwave waveguide inputs with good control of systematic errors, such as unwanted coupling to stray signals at frequencies outside of a precisely defined microwave band. To address these needs we will present work on the fabrication of silicon quarter-wave backshorts for the CLASS 40 GHz focal plane. The 40 GHz backshort consists of three degenerately doped silicon wafers. Two spacer wafers are micromachined with through-wafer vias to provide a 2.0 mm long square waveguide. The third wafer acts as the backshort cap. The three wafers are bonded at the wafer level by Au-Au thermal compression bonding, then aligned and flip-chip bonded to the CLASS detector at the chip level. The micromachining techniques used have been optimized to create high-aspect-ratio waveguides, silicon pillars, and relief trenches with the goal of providing improved out-of-band signal rejection. We will discuss the fabrication of integrated CLASS superconducting detectors with silicon quarter-wave backshorts and present current measurement results.

  1. Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.

    PubMed

    Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin

    2016-08-18

    Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, the secure and trustworthy energy supply requires real-time supervising and online power quality assessing. Harmonics measurement is necessary in power quality evaluation. However, under the large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which is the result of latencies in sensing or the communication process and brings deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network gets the characteristics of the electrical harmonics from practical data rather than the kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement of the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
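
    The retrodiction idea can be sketched with a linear stand-in for the NARX network: fit y[t] from lagged outputs and lagged exogenous inputs by least squares, then use the fitted model to re-estimate a sample that arrived out of sequence. All signals and lag orders below are illustrative.

```python
import numpy as np

def fit_linear_narx(y, u, ny=2, nu=2):
    """Least-squares fit of a linear NARX model
    y[t] ~ a1*y[t-1] + ... + b1*u[t-1] + ...  (a simplified, linear
    stand-in for the paper's neural NARX network)."""
    start = max(ny, nu)
    rows = [np.concatenate([y[t - ny:t][::-1], u[t - nu:t][::-1]])
            for t in range(start, len(y))]
    coef, *_ = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)
    return coef

def retrodict(coef, y_hist, u_hist, ny=2, nu=2):
    """Re-estimate the sample following the given history, e.g. to fill in
    an out-of-sequence measurement."""
    x = np.concatenate([y_hist[-ny:][::-1], u_hist[-nu:][::-1]])
    return float(x @ coef)

# Synthetic harmonic-like output driven by an exogenous input (illustrative)
t = np.arange(400)
u = np.sin(2 * np.pi * t / 50)
y = 0.8 * np.roll(u, 1) + 0.1 * np.roll(u, 2)
coef = fit_linear_narx(y, u)
est = retrodict(coef, y[:200], u[:200])   # reconstructs y[200] from history
```

    The paper's point is that such a data-driven model, learned from practical measurements rather than kinematic equations, tracks the real electrical behaviour and so reorders late samples more accurately.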

  2. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
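
    The core geometric step, intersecting two camera rays to place a landmark, can be sketched with the classic closest-point (midpoint) construction; this is a plausible reading of the "optimal ray projection method", whose exact formulation the abstract does not give. The geometry below is invented for illustration.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest-point triangulation of a landmark seen along two camera rays.
    o1, o2 are camera centers; d1, d2 are ray directions.  Returns the
    midpoint of the shortest segment between the two (possibly skew) rays."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # -> 0 for (near-)parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1, p2 = o1 + s * d1, o2 + t * d2
    return 0.5 * (p1 + p2)

# Two cameras observing a landmark at (1, 2, 5) (illustrative geometry)
target = np.array([1.0, 2.0, 5.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
p = triangulate_midpoint(o1, target - o1, o2, target - o2)
```

    In the full pipeline, estimates from many frames are refined by alternating this landmark-position step with re-estimation of the camera poses, as the abstract describes.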

  3. From technological advances to biological understanding: The main steps toward high-precision RT in breast cancer.

    PubMed

    Leonardi, Maria Cristina; Ricotti, Rosalinda; Dicuonzo, Samantha; Cattani, Federica; Morra, Anna; Dell'Acqua, Veronica; Orecchia, Roberto; Jereczek-Fossa, Barbara Alicja

    2016-10-01

    Radiotherapy improves local control in breast cancer (BC) patients which increases overall survival in the long term. Improvements in treatment planning and delivery and a greater understanding of BC behaviour have laid the groundwork for high-precision radiotherapy, which is bound to further improve the therapeutic index. Precise identification of target volumes, better coverage and dose homogeneity have had a positive impact on toxicity and local control. The conformity of treatment dose due to three-dimensional radiotherapy and new techniques such as intensity modulated radiotherapy makes it possible to spare surrounding normal tissue. The widespread use of dose-volume constraints and histograms have increased awareness of toxicity. Real time image guidance has improved geometric precision and accuracy, together with the implementation of quality assurance programs. Advances in the precision of radiotherapy is also based on the choice of the appropriate fractionation and approach. Adaptive radiotherapy is not only a technical concept, but is also a biological concept based on the knowledge that different types of BC have distinctive patterns of locoregional spread. A greater understanding of cancer biology helps in choosing the treatment best suited to a particular situation. Biomarkers predictive of response play a crucial role. The combination of radiotherapy with molecular targeted therapies may enhance radiosensitivity, thus increasing the cytotoxic effects and improving treatment response. The appropriateness of an alternative fractionation, partial breast irradiation, dose escalating/de-escalating approaches, the extent of nodal irradiation have been examined for all the BC subtypes. The broadened concept of adaptive radiotherapy is vital to high-precision treatments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Comparing conventional and computer-assisted surgery baseplate and screw placement in reverse shoulder arthroplasty.

    PubMed

    Venne, Gabriel; Rasquinha, Brian J; Pichora, David; Ellis, Randy E; Bicknell, Ryan

    2015-07-01

    Preoperative planning and intraoperative navigation technologies have each been shown separately to be beneficial for optimizing screw and baseplate positioning in reverse shoulder arthroplasty (RSA) but to date have not been combined. This study describes development of a system for performing computer-assisted RSA glenoid baseplate and screw placement, including preoperative planning, intraoperative navigation, and postoperative evaluation, and compares this system with a conventional approach. We used a custom-designed system allowing computed tomography (CT)-based preoperative planning, intraoperative navigation, and postoperative evaluation. Five orthopedic surgeons defined common preoperative plans on 3-dimensional CT reconstructed cadaveric shoulders. Each surgeon performed 3 computer-assisted and 3 conventional simulated procedures. The 3-dimensional CT reconstructed postoperative units were digitally matched to the preoperative model for evaluation of entry points, end points, and angulations of screws and baseplate. Values were used to find accuracy and precision of the 2 groups with respect to the defined placement. Statistical analysis was performed by t tests (α = .05). Comparison of the groups revealed no difference in accuracy or precision of screws or baseplate entry points (P > .05). Accuracy and precision were improved with use of navigation for end points and angulations of 3 screws (P < .05). Accuracy of the inferior screw showed a trend of improvement with navigation (P > .05). Navigated baseplate end point precision was improved (P < .05), with a trend toward improved accuracy (P > .05). We conclude that CT-based preoperative planning and intraoperative navigation allow improved accuracy and precision for screw placement and precision for baseplate positioning with respect to a predefined placement compared with conventional techniques in RSA. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  5. Defense Acquisitions: Assessments of Selected Weapon Programs

    DTIC Science & Technology

    2017-03-01

    PAC-3 MSE) 81 Warfighter Information Network-Tactical (WIN-T) Increment 2 83 Improved Turbine Engine Program (ITEP) 85 Long Range Precision Fires...Unmanned Air System 05/2018 Joint Surveillance Target Attack Radar System Recapitalization 10/2017 Improved Turbine Engine Program TBD...Network-Tactical (WIN-T) Increment 2 83 1-page assessments Improved Turbine Engine Program (ITEP) 85 Long Range Precision Fires (LRPF) 86

  6. High precision locating control system based on VCM for Talbot lithography

    NASA Astrophysics Data System (ADS)

    Yao, Jingwei; Zhao, Lixin; Deng, Qian; Hu, Song

    2016-10-01

    Aiming at the high precision and efficiency requirements of Z-direction locating in Talbot lithography, a control system based on a Voice Coil Motor (VCM) was designed. In this paper, we build a mathematical model of the VCM and analyze its motion characteristics. A double closed-loop control strategy, comprising a position loop and a current loop, was implemented. The current loop was implemented in the driver, in order to achieve rapid following of the system current. The position loop was implemented in a digital signal processor (DSP), with position feedback provided by high-precision linear scales. Feed-forward control and Proportion-Integration-Differentiation (PID) position-feedback control were applied in order to compensate for dynamic lag and improve the response speed of the system. The high precision and efficiency of the system were verified by simulation and experiments. The results demonstrated that the performance of the Z-direction gantry was markedly improved, with high precision, quick response, strong real-time performance, and easy extension to higher precision.
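
    A double-loop structure like the one described (current loop in the driver, position loop in the DSP) reduces, on the position side, to PID feedback plus a feed-forward term. The sketch below is a generic discrete-time version with invented gains and a crude first-order actuator model, not the paper's VCM parameters.

```python
class PIDFeedForward:
    """Position-loop sketch: PID feedback plus velocity feed-forward to
    compensate dynamic lag. Gains and plant model are invented for
    illustration, not taken from the paper's VCM stage."""

    def __init__(self, kp, ki, kd, kff, dt):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, setpoint_rate, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # PID feedback plus feed-forward on the commanded velocity
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative + self.kff * setpoint_rate)

# Track a 2 s ramp to 1.0 (e.g. micrometres) with a crude first-order actuator
ctrl = PIDFeedForward(kp=2.0, ki=0.5, kd=0.05, kff=0.8, dt=0.001)
pos, vel = 0.0, 0.0
for k in range(20000):
    t = k * ctrl.dt
    sp = min(0.5 * t, 1.0)               # ramp setpoint, then hold
    rate = 0.5 if sp < 1.0 else 0.0      # known setpoint rate for feed-forward
    u = ctrl.update(sp, rate, pos)
    vel += (u - vel) * 0.01              # actuator lag (illustrative model)
    pos += vel * ctrl.dt
```

    The feed-forward term does the bulk of the work while the setpoint is moving, leaving the PID terms to correct only the residual error, which is how the lag compensation described in the abstract is usually achieved.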

  7. Facilitating mathematics learning for students with upper extremity disabilities using touch-input system.

    PubMed

    Choi, Kup-Sze; Chan, Tak-Yin

    2015-03-01

    To investigate the feasibility of using tablet device as user interface for students with upper extremity disabilities to input mathematics efficiently into computer. A touch-input system using tablet device as user interface was proposed to assist these students to write mathematics. User-switchable and context-specific keyboard layouts were designed to streamline the input process. The system could be integrated with conventional computer systems only with minor software setup. A two-week pre-post test study involving five participants was conducted to evaluate the performance of the system and collect user feedback. The mathematics input efficiency of the participants was found to improve during the experiment sessions. In particular, their performance in entering trigonometric expressions by using the touch-input system was significantly better than that by using conventional mathematics editing software with keyboard and mouse. The participants rated the touch-input system positively and were confident that they could operate at ease with more practice. The proposed touch-input system provides a convenient way for the students with hand impairment to write mathematics and has the potential to facilitate their mathematics learning. Implications for Rehabilitation Students with upper extremity disabilities often face barriers to learning mathematics which is largely based on handwriting. Conventional computer user interfaces are inefficient for them to input mathematics into computer. A touch-input system with context-specific and user-switchable keyboard layouts was designed to improve the efficiency of mathematics input. Experimental results and user feedback suggested that the system has the potential to facilitate mathematics learning for the students.

  8. On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.

    PubMed

    Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando

    2017-08-01

    Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real-time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis, as users often struggle to obtain the desired orientation, achieving it only after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate its usability: the proposed touchless interface improves spatial awareness and offers more fluent interaction with the 3D volume than traditional 2D input devices, as it requires fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations that is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. The added value of stochastic spatial disaggregation for short-term rainfall forecasts currently available in Canada

    NASA Astrophysics Data System (ADS)

    Gagnon, Patrick; Rousseau, Alain N.; Charron, Dominique; Fortin, Vincent; Audet, René

    2017-11-01

    Several businesses and industries rely on rainfall forecasts to support their day-to-day operations. To deal with the uncertainty associated with rainfall forecasts, some meteorological organisations have developed products such as ensemble forecasts. However, due to the intensive computational requirements of ensemble forecasts, their spatial resolution remains coarse. For example, Environment and Climate Change Canada's (ECCC) Global Ensemble Prediction System (GEPS) data are freely available on a 1-degree grid (about 100 km), while those of the so-called High Resolution Deterministic Prediction System (HRDPS) are available on a 2.5-km grid (about 40 times finer). Potential users are then left with the option of using either a high-resolution rainfall forecast without uncertainty estimation or an ensemble with a spectrum of plausible rainfall values, but at a coarser spatial scale. The objective of this study was to evaluate the added value of coupling the Gibbs Sampling Disaggregation Model (GSDM) with ECCC products to provide accurate, precise and consistent rainfall estimates at a fine spatial resolution (10 km) within a forecast framework (6 h). For 30 6-h rainfall events occurring within a 40,000-km2 area (Québec, Canada), results show that, using 100-km aggregated reference rainfall depths as input, statistics of the rainfall fields generated by GSDM were close to those of the 10-km reference field. However, in forecast mode, GSDM outcomes inherit the ECCC forecast biases, resulting in poor performance when GEPS data were used as input, mainly due to the inherent rainfall depth distribution of the latter product. Better performance was achieved when the Regional Deterministic Prediction System (RDPS), available on a 10-km grid and aggregated at 100 km, was used as input to GSDM. Nevertheless, most of the analyzed ensemble forecasts were weakly consistent. Some areas of improvement are identified herein.
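
    The core constraint of any such disaggregation, that ensemble members vary spatially while each conserves the coarse-cell depth, can be illustrated with a toy far simpler than GSDM. The exponential weighting below is an arbitrary choice for illustration, not the model's sampling scheme:

```python
import random

def disaggregate(coarse_depth_mm, n_cells=100, seed=7):
    """Split a coarse-cell areal rainfall depth over n_cells fine cells
    using random positive weights; the fine-cell mean equals the input."""
    rng = random.Random(seed)
    w = [rng.expovariate(1.0) for _ in range(n_cells)]  # toy spatial weights
    total = sum(w)
    # scale so the areal mean over the fine cells conserves coarse_depth_mm
    return [coarse_depth_mm * n_cells * wi / total for wi in w]

member = disaggregate(12.0)  # one ensemble member for a 12 mm coarse depth
```

    Different seeds yield different plausible fine-scale fields, which is what gives the disaggregated ensemble its spread.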

  10. Improved prescribed performance control for air-breathing hypersonic vehicles with unknown deadzone input nonlinearity.

    PubMed

    Wang, Yingyang; Hu, Jianbo

    2018-05-19

    An improved prescribed performance controller is proposed for the longitudinal model of an air-breathing hypersonic vehicle (AHV) subject to uncertain dynamics and input nonlinearity. Unlike the traditional non-affine model, which requires the non-affine functions to be differentiable, this paper utilizes a semi-decomposed non-affine model whose non-affine functions are locally semi-bounded and possibly non-differentiable. A new error transformation combined with novel prescribed performance functions is proposed to bypass the complex deductions caused by conventional error-constraint approaches and to circumvent high-frequency chattering in the control inputs. On the basis of the backstepping technique, an improved prescribed performance controller with low structural and computational complexity is designed. The methodology keeps the altitude and velocity tracking errors within transient and steady-state performance envelopes and exhibits excellent robustness against uncertain dynamics and deadzone input nonlinearity. Simulation results demonstrate the efficacy of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
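
    For readers unfamiliar with the prescribed-performance machinery, here is a generic sketch (the standard formulation, not the paper's improved version) of the decaying performance envelope and the usual logarithmic error transformation, whose divergence near the envelope is what enforces the constraint. The parameter values are hypothetical:

```python
import math

def envelope(t, rho0=2.0, rho_inf=0.05, decay=1.0):
    # exponentially decaying performance bound rho(t):
    # the tracking error must satisfy |e(t)| < rho(t)
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transform(err, rho):
    # standard logarithmic error transformation: finite while the
    # normalized error stays inside (-1, 1), diverging at the envelope,
    # which is what drives the controller to respect the bound
    z = err / rho
    return 0.5 * math.log((1 + z) / (1 - z))
```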

  11. Dendritic GIRK Channels Gate the Integration Window, Plateau Potentials, and Induction of Synaptic Plasticity in Dorsal But Not Ventral CA1 Neurons

    PubMed Central

    2017-01-01

    Studies comparing neuronal activity at the dorsal and ventral poles of the hippocampus have shown that the scale of spatial information increases and the precision with which space is represented declines from the dorsal to ventral end. These dorsoventral differences in neuronal output and spatial representation could arise due to differences in computations performed by dorsal and ventral CA1 neurons. In this study, we tested this hypothesis by quantifying the differences in dendritic integration and synaptic plasticity between dorsal and ventral CA1 pyramidal neurons of rat hippocampus. Using a combination of somatic and dendritic patch-clamp recordings, we show that the threshold for LTP induction is higher in dorsal CA1 neurons and that a G-protein-coupled inward-rectifying potassium channel-mediated regulation of dendritic plateau potentials and dendritic excitability underlies this gating. By contrast, similar regulation of LTP is absent in ventral CA1 neurons. Additionally, we show that generation of plateau potentials and LTP induction in dorsal CA1 neurons depends on the coincident activation of Schaffer collateral and temporoammonic inputs at the distal apical dendrites. The ventral CA1 dendrites, however, can generate plateau potentials in response to temporally dispersed excitatory inputs. Overall, our results highlight the dorsoventral differences in dendritic computation that could account for the dorsoventral differences in spatial representation. SIGNIFICANCE STATEMENT The dorsal and ventral parts of the hippocampus encode spatial information at very different scales. Whereas the place-specific firing fields are small and precise at the dorsal end of the hippocampus, neurons at the ventral end have comparatively larger place fields. Here, we show that the dorsal CA1 neurons have a higher threshold for LTP induction and require coincident timing of excitatory synaptic inputs for the generation of dendritic plateau potentials. 
By contrast, ventral CA1 neurons can integrate temporally dispersed inputs and have a lower threshold for LTP. Together, these dorsoventral differences in the threshold for LTP induction could account for the differences in scale of spatial representation at the dorsal and ventral ends of the hippocampus. PMID:28280255

  12. Dendritic GIRK Channels Gate the Integration Window, Plateau Potentials, and Induction of Synaptic Plasticity in Dorsal But Not Ventral CA1 Neurons.

    PubMed

    Malik, Ruchi; Johnston, Daniel

    2017-04-05

    Studies comparing neuronal activity at the dorsal and ventral poles of the hippocampus have shown that the scale of spatial information increases and the precision with which space is represented declines from the dorsal to ventral end. These dorsoventral differences in neuronal output and spatial representation could arise due to differences in computations performed by dorsal and ventral CA1 neurons. In this study, we tested this hypothesis by quantifying the differences in dendritic integration and synaptic plasticity between dorsal and ventral CA1 pyramidal neurons of rat hippocampus. Using a combination of somatic and dendritic patch-clamp recordings, we show that the threshold for LTP induction is higher in dorsal CA1 neurons and that a G-protein-coupled inward-rectifying potassium channel-mediated regulation of dendritic plateau potentials and dendritic excitability underlies this gating. By contrast, similar regulation of LTP is absent in ventral CA1 neurons. Additionally, we show that generation of plateau potentials and LTP induction in dorsal CA1 neurons depends on the coincident activation of Schaffer collateral and temporoammonic inputs at the distal apical dendrites. The ventral CA1 dendrites, however, can generate plateau potentials in response to temporally dispersed excitatory inputs. Overall, our results highlight the dorsoventral differences in dendritic computation that could account for the dorsoventral differences in spatial representation. SIGNIFICANCE STATEMENT The dorsal and ventral parts of the hippocampus encode spatial information at very different scales. Whereas the place-specific firing fields are small and precise at the dorsal end of the hippocampus, neurons at the ventral end have comparatively larger place fields. Here, we show that the dorsal CA1 neurons have a higher threshold for LTP induction and require coincident timing of excitatory synaptic inputs for the generation of dendritic plateau potentials. 
By contrast, ventral CA1 neurons can integrate temporally dispersed inputs and have a lower threshold for LTP. Together, these dorsoventral differences in the threshold for LTP induction could account for the differences in scale of spatial representation at the dorsal and ventral ends of the hippocampus. Copyright © 2017 the authors 0270-6474/17/373940-16$15.00/0.

  13. Silent Expectations: Dynamic Causal Modeling of Cortical Prediction and Attention to Sounds That Weren't.

    PubMed

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Shtyrov, Yury; Bekinschtein, Tristan A; Henson, Richard

    2016-08-10

    There is increasing evidence that human perception is realized by a hierarchy of neural processes in which predictions sent backward from higher levels result in prediction errors that are fed forward from lower levels, to update the current model of the environment. Moreover, the precision of prediction errors is thought to be modulated by attention. Much of this evidence comes from paradigms in which a stimulus differs from that predicted by the recent history of other stimuli (generating a so-called "mismatch response"). There is less evidence from situations where a prediction is not fulfilled by any sensory input (an "omission" response). This situation arguably provides a more direct measure of "top-down" predictions in the absence of confounding "bottom-up" input. We applied Dynamic Causal Modeling of evoked electromagnetic responses recorded by EEG and MEG to an auditory paradigm in which we factorially crossed the presence versus absence of "bottom-up" stimuli with the presence versus absence of "top-down" attention. Model comparison revealed that both mismatch and omission responses were mediated by increased forward and backward connections, differing primarily in the driving input. In both responses, modeling results suggested that the presence of attention selectively modulated backward "prediction" connections. Our results provide new model-driven evidence of the pure top-down prediction signal posited in theories of hierarchical perception, and highlight the role of attentional precision in strengthening this prediction. Human auditory perception is thought to be realized by a network of neurons that maintain a model of and predict future stimuli. Much of the evidence for this comes from experiments where a stimulus unexpectedly differs from previous ones, which generates a well-known "mismatch response." But what happens when a stimulus is unexpectedly omitted altogether? 
By measuring the brain's electromagnetic activity, we show that it also generates an "omission response" that is contingent on the presence of attention. We model these responses computationally, revealing that mismatch and omission responses only differ in the location of inputs into the same underlying neuronal network. In both cases, we show that attention selectively strengthens the brain's prediction of the future. Copyright © 2016 Chennu et al.

  14. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
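
    A minimal sketch of the two-stage idea, assuming normal inputs and a simple product measurement equation y = x1·x2 (the thermistor case study and the authors' exact resampling scheme are not reproduced here): the outer stage resamples plausible input means consistent with the finite samples, and the inner stage propagates draws through the measurement equation.

```python
import math
import random
import statistics

def two_stage_mc(sample1, sample2, n_outer=200, n_inner=100, seed=1):
    """Two-stage Monte Carlo 95% interval for y = x1 * x2, accounting
    for the finite samples behind the input distributions."""
    rng = random.Random(seed)
    m1, s1 = statistics.mean(sample1), statistics.stdev(sample1)
    m2, s2 = statistics.mean(sample2), statistics.stdev(sample2)
    ys = []
    for _ in range(n_outer):
        # stage 1: draw plausible input means, spread by the standard
        # error of the mean implied by the finite sample sizes
        mu1 = rng.gauss(m1, s1 / math.sqrt(len(sample1)))
        mu2 = rng.gauss(m2, s2 / math.sqrt(len(sample2)))
        for _ in range(n_inner):
            # stage 2: propagate input draws through the measurement equation
            ys.append(rng.gauss(mu1, s1) * rng.gauss(mu2, s2))
    ys.sort()
    return ys[int(0.025 * len(ys))], ys[int(0.975 * len(ys))]

sample1 = [1.9, 2.1, 2.0, 2.2, 1.8]   # hypothetical small calibration samples
sample2 = [2.9, 3.1, 3.0, 3.2, 2.8]
lo, hi = two_stage_mc(sample1, sample2)
```

    Treating the estimated means as exact would skip stage 1 and yield a narrower interval, which is precisely the understatement of uncertainty the paper addresses.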

  15. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    PubMed Central

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062

  16. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. First, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) is proposed to enhance the generalization performance of the FLNN. Unlike in the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. Instead, the original inputs are attached to a small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets, named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD), were selected. Then a hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
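
    The functional-link idea of expanding the inputs and keeping only expansions that correlate strongly with the output can be sketched as follows. This is a simplified stand-in, not the authors' IFLNN-PLS: ordinary least squares replaces the PLS weight calculation, and the trigonometric basis functions are an arbitrary illustrative choice.

```python
import numpy as np

def flnn_fit(x, y, keep=3):
    # functional-link expansion of a 1-D input with trigonometric links
    feats = np.column_stack([x, np.sin(np.pi * x), np.cos(np.pi * x), x**2])
    # rank expanded variables by absolute correlation with the output
    corr = [abs(np.corrcoef(f, y)[0, 1]) for f in feats.T]
    idx = np.argsort(corr)[::-1][:keep]
    # fit the selected expansions plus a bias term by least squares
    X = np.column_stack([feats[:, idx], np.ones(len(x))])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return idx, X @ w

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 0.5 * x + np.sin(np.pi * x)   # noise-free target for illustration
idx, yhat = flnn_fit(x, y)
```

    Because the target lies in the span of two of the expansions, the selected features recover it essentially exactly; on real data the correlation ranking simply prunes weakly relevant expansions.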

  17. Using Covariates to Improve Precision for Studies that Randomize Schools to Evaluate Educational Interventions

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Richburg-Hayes, Lashawn; Black, Alison Rebeck

    2007-01-01

    This article examines how controlling statistically for baseline covariates, especially pretests, improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Empirical findings from five urban school districts indicate that (1) pretests can reduce the number of randomized…
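
    The precision mechanism behind this finding can be demonstrated with simulated data: adjusting for a prognostic pretest shrinks the residual outcome variance by roughly the squared pretest-posttest correlation. All numbers below are hypothetical, not the districts' data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
pretest = rng.normal(0, 1, n)
# posttest variance = 0.8^2 + 0.6^2 = 1.0, of which the pretest explains 64%
posttest = 0.8 * pretest + rng.normal(0, 0.6, n)

var_raw = posttest.var()
# residual variance after regressing the posttest on the pretest,
# i.e. the error variance left for an impact estimate to contend with
slope = np.cov(pretest, posttest)[0, 1] / pretest.var()
resid = posttest - slope * pretest
var_adj = resid.var()
```

    Smaller residual variance translates directly into smaller standard errors for the estimated impact, which is why pretests reduce the number of schools a randomized study needs.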

  18. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    PubMed

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust the ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution, and that the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution, with the most improvement occurring in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
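
    A minimal sketch of rating adjustment in this spirit, using hypothetical data and simple mean-centering of rater severity rather than the study's full OLS model: each rater's estimated severity (rater mean minus grand mean) is removed from that rater's scores.

```python
import statistics

def adjust(ratings):
    """ratings: dict of rater -> scores for the same examinees, in order.
    Returns ratings with each rater's estimated severity removed."""
    grand = statistics.mean(s for scores in ratings.values() for s in scores)
    adjusted = {}
    for rater, scores in ratings.items():
        severity = statistics.mean(scores) - grand  # lenient > 0, severe < 0
        adjusted[rater] = [s - severity for s in scores]
    return adjusted

ratings = {  # hypothetical communication ratings for four examinees
    "R1": [6.0, 7.0, 5.0, 8.0],   # lenient rater
    "R2": [4.0, 5.0, 3.0, 6.0],   # severe rater
}
adjusted = adjust(ratings)
```

    In this constructed example the two raters disagree only in severity, so adjustment makes their scores coincide; real data also contain rater-by-examinee interaction, which is why the study's precision gains vary across the score scale.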

  19. Improved meteorology from an updated WRF/CMAQ modeling ...

    EPA Pesticide Factsheets

    Realistic vegetation characteristics and phenology from the Moderate Resolution Imaging Spectroradiometer (MODIS) products improve the simulation for the meteorology and air quality modeling system WRF/CMAQ (Weather Research and Forecasting model and Community Multiscale Air Quality model) that employs the Pleim-Xiu land surface model (PX LSM). Recently, PX LSM WRF/CMAQ has been updated in vegetation, soil, and boundary layer processes resulting in improved 2 m temperature (T) and mixing ratio (Q), 10 m wind speed, and surface ozone simulations across the domain compared to the previous version for a period around August 2006. Yearlong meteorology simulations with the updated system demonstrate that MODIS input helps reduce bias of the 2 m Q estimation during the growing season from April to September. Improvements follow the green-up in the southeast from April and move toward the west and north through August. From October to March, MODIS input does not have much influence on the system because vegetation is not as active. The greatest effects of MODIS input include more accurate phenology, better representation of leaf area index (LAI) for various forest ecosystems and agricultural areas, and realistically sparse vegetation coverage in the western drylands. Despite the improved meteorology, MODIS input causes higher bias for the surface O3 simulation in April, August, and October in areas where MODIS LAI is much less than the base LAI. Thus, improvement

  20. Ex vivo dissection of optogenetically activated mPFC and hippocampal inputs to neurons in the basolateral amygdala: implications for fear and emotional memory

    PubMed Central

    Hübner, Cora; Bosch, Daniel; Gall, Andrea; Lüthi, Andreas; Ehrlich, Ingrid

    2014-01-01

    Many lines of evidence suggest that a reciprocally interconnected network comprising the amygdala, ventral hippocampus (vHC), and medial prefrontal cortex (mPFC) participates in different aspects of the acquisition and extinction of conditioned fear responses and fear behavior. This could at least in part be mediated by direct connections from mPFC or vHC to amygdala to control amygdala activity and output. However, the interactions between mPFC and vHC afferents and their specific targets in the amygdala are currently still poorly understood. Here, we use an ex vivo optogenetic approach to dissect synaptic properties of inputs from mPFC and vHC to defined neuronal populations in the basal amygdala (BA), the area that we identify as a major target of these projections. We find that BA principal neurons (PNs) and local BA interneurons (INs) receive monosynaptic excitatory inputs from mPFC and vHC. In addition, both these inputs also recruit GABAergic feedforward inhibition in a substantial fraction of PNs; in some neurons this also comprises a slow GABAB component. Amongst the innervated PNs we identify neurons that project back to subregions of the mPFC, indicating a loop between neurons in mPFC and BA, and a pathway from vHC to mPFC via BA. Interestingly, mPFC inputs also recruit feedforward inhibition in a fraction of INs, suggesting that these inputs can activate disinhibitory circuits in the BA. A general feature of both mPFC and vHC inputs to local INs is that excitatory inputs display faster rise and decay kinetics than in PNs, which would enable temporally precise signaling. However, mPFC and vHC inputs to both PNs and INs differ in their presynaptic release properties, in that vHC inputs are more depressing. In summary, our data describe novel wiring and features of synaptic connections from mPFC and vHC to amygdala that could help to interpret functions of these interconnected brain areas at the network level. PMID:24634648
