Sample records for atcom automatically tuned

  1. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554

  2. The 1997 IDA Cost Research Symposium.

    DTIC Science & Technology

    1997-07-01

    Office/Organization Abbreviation Representative Office of the Director, Program Analysis and Evaluation Army Cost and Economic Analysis Center Naval...Robert Young NCCA Dr. Dan Nussbaum AFCAA COL Edward Weeks AMCRM Mr. Wayne Wesson ATAAC Mr. Russell F. Feury SSDC Ms. Carolyn S. Thompson ATCOM Mr...development by the RAND Corporation, an Army model developed by the Army Cost and Economic Analysis Center, and three models developed by the Institute for

  3. The Effects of Groundwater Samplers on Water Quality. A Literature Review

    DTIC Science & Technology

    1993-10-01

    …Nacht 1983): borehole and sampler diameter, sampling depth, ease of cleaning, initial… devices operate by applying negative pressure, or vacuum, at… come in contact with any atmospheric gases and are subject to only a slight negative pressure… …al. 1974, Barcelona et al. 1985) have shown that… and selenium. They felt the degassing was due to the partial vacuum exerted by the pump for lift… 20% for the three most volatile compounds at the highest…

  4. Design of Complex BPF with Automatic Digital Tuning Circuit for Low-IF Receivers

    NASA Astrophysics Data System (ADS)

    Kondo, Hideaki; Sawada, Masaru; Murakami, Norio; Masui, Shoichi

    This paper describes the architecture and implementation of an automatic digital tuning circuit for a complex bandpass filter (BPF) in a low-power and low-cost transceiver for applications such as personal authentication and wireless sensor network systems. The architectural design analysis demonstrates that an active RC filter in a low-IF architecture can be at least 47.7% smaller in area than a conventional gm-C filter; in addition, it allows a simple implementation of the associated tuning circuit. The principle of simultaneously tuning both the center frequency and bandwidth through calibration of a capacitor array is illustrated based on an analysis of the filter characteristics, and a scalable automatic digital tuning circuit with simple analog blocks and control logic of only 835 gates is introduced. The developed capacitor tuning technique can achieve a tuning error of less than ±3.5% and reduce peaking in the passband filter characteristics. An experimental complex BPF using 0.18µm CMOS technology successfully reduces the tuning error from an initial value of -20% to less than ±2.5% after tuning. The filter block dimensions are 1.22mm × 1.01mm; measurements of the developed complex BPF with the automatic digital tuning circuit show a current consumption of 705µA and an image rejection ratio of 40.3dB. Complete evaluation of the BPF indicates that this technique can be applied to low-power, low-cost transceivers.
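    The capacitor-array calibration idea can be sketched as a small digital search over switch codes; the resistance, unit capacitance, bit width, and target frequency below are illustrative assumptions, not the paper's design:

```python
import math

def tune_capacitor_array(r_ohms, c_unit_f, c_fixed_f, n_bits, f_target_hz):
    """Pick the capacitor-array code whose RC center frequency is closest
    to the target, by exhaustive search over all 2**n_bits codes."""
    best_code, best_err = 0, float("inf")
    for code in range(2 ** n_bits):
        c_total = c_fixed_f + code * c_unit_f        # switched unit caps in parallel
        f_c = 1.0 / (2.0 * math.pi * r_ohms * c_total)
        err = abs(f_c - f_target_hz)
        if err < best_err:
            best_code, best_err = code, err
    return best_code, best_err / f_target_hz         # code and relative error

# Hypothetical component values; the search picks the code that lands
# the RC corner nearest 1 MHz.
code, rel_err = tune_capacitor_array(
    r_ohms=100e3, c_unit_f=0.05e-12, c_fixed_f=1.0e-12,
    n_bits=5, f_target_hz=1.0e6)
```

    With these assumed values the residual error is set by the unit-capacitor granularity, which is the same trade-off the paper's ±3.5% figure reflects.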

  5. Adaptive Self-Tuning Networks

    NASA Astrophysics Data System (ADS)

    Knox, H. A.; Draelos, T.; Young, C. J.; Lawry, B.; Chael, E. P.; Faust, A.; Peterson, M. G.

    2015-12-01

    The quality of automatic detections from seismic sensor networks depends on a large number of data processing parameters that interact in complex ways. The largely manual process of identifying effective parameters is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. Yet, achieving superior automatic detection of seismic events is closely related to these parameters. We present an automated sensor tuning (AST) system that learns near-optimal parameter settings for each event type using neuro-dynamic programming (reinforcement learning) trained with historic data. AST learns to test the raw signal against all event-settings and automatically self-tunes to an emerging event in real-time. The overall goal is to reduce the number of missed legitimate event detections and the number of false event detections. Reducing false alarms early in the seismic pipeline processing will have a significant impact on this goal. Applicable both for existing sensor performance boosting and new sensor deployment, this system provides an important new method to automatically tune complex remote sensing systems. Systems tuned in this way will achieve better performance than is currently possible by manual tuning, and with much less time and effort devoted to the tuning process. With ground truth on detections in seismic waveforms from a network of stations, we show that AST increases the probability of detection while decreasing false alarms.

  6. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beam-forming configurations are all possible with RAS techniques, and when combined with high-definition video imagery they can help provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often accompanied by a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low-pixel-count photodiode-based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream, including the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings.
However, doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the capabilities needed for researching video-acoustic signal extraction. ATCOM is currently a powerful tool for the visual enhancement of telescopic views distorted by atmospheric turbulence. To explore the potential of acoustic signal recovery from video imagery, we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to show where each has advantages.
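    The video-readout path described here amounts to turning per-frame scene brightness into a waveform whose sample rate is the frame rate. A minimal sketch, with a synthetic high-frame-rate clip standing in for real telescope footage (the ROI, frame rate, and modulation depth are assumptions):

```python
import numpy as np

def audio_from_frames(frames, roi):
    """Recover an audio waveform as the mean brightness of a region of
    interest (ROI) in each frame, with the strong DC bias term removed.
    The effective sample rate equals the video frame rate."""
    y0, y1, x0, x1 = roi
    signal = np.array([f[y0:y1, x0:x1].mean() for f in frames])
    return signal - signal.mean()            # strip the bias term

# Synthetic test: a 1000 fps clip whose ROI brightness is weakly
# modulated by a 50 Hz tone, mimicking an in-scene acousto-optic modulator.
fps, n = 1000, 1000
t = np.arange(n) / fps
frames = [np.full((8, 8), 128.0) + 2.0 * np.sin(2 * np.pi * 50 * t[i])
          for i in range(n)]
audio = audio_from_frames(frames, roi=(0, 8, 0, 8))
peak_hz = np.abs(np.fft.rfft(audio)).argmax() * fps / n
```

    The dominant spectral peak of the recovered waveform sits at the modulating tone, illustrating both the principle and the sample-rate limitation noted above.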

  7. An automatically tuning intrusion detection system.

    PubMed

    Yu, Zhenwei; Tsai, Jeffrey J P; Weigert, Thomas

    2007-04-01

    An intrusion detection system (IDS) is a security layer used to detect ongoing intrusive activities in information systems. Traditionally, intrusion detection relies on extensive knowledge of security experts, in particular, on their familiarity with the computer system to be protected. To reduce this dependence, various data-mining and machine learning techniques have been deployed for intrusion detection. An IDS usually works in a dynamically changing environment, which forces continuous tuning of the intrusion detection model to maintain sufficient performance. The manual tuning process required by current systems depends on the system operators to work out the tuning solution and to integrate it into the detection model. In this paper, an automatically tuning IDS (ATIDS) is presented. The proposed system automatically tunes the detection model on-the-fly according to the feedback provided by the system operator when false predictions are encountered. The system is evaluated using the KDDCup'99 intrusion detection dataset. Experimental results show that the system achieves up to 35% improvement in terms of misclassification cost compared with a system lacking the tuning feature. If only 10% of the false predictions are used to tune the model, the system still achieves about 30% improvement. Moreover, when tuning is not delayed too long, the system can achieve about 20% improvement with only 1.3% of the false predictions used to tune the model. The results of the experiments show that a practical system can be built based on ATIDS: system operators can focus on verification of predictions with low confidence, as only those predictions determined to be false will be used to tune the detection model.
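    The feedback-driven tuning loop can be illustrated with a deliberately simplified stand-in for the detection model; ATIDS itself is not a linear classifier, so treat the update rule below as a sketch of the idea (update only on operator-flagged false predictions), not the paper's algorithm:

```python
def tune_on_feedback(w, stream, epochs=50, lr=0.1):
    """Sketch of feedback-driven tuning: score events with a linear
    detector; whenever the operator flags a prediction as false, nudge
    the weights toward the correct label (perceptron-style update)."""
    for _ in range(epochs):
        for x, y in stream:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != y:      # operator reports a false prediction
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
    return w

# Toy event features (value, bias) with ground-truth intrusion labels.
stream = [((0.0, 1.0), 0), ((0.2, 1.0), 0), ((0.8, 1.0), 1), ((1.0, 1.0), 1)]
w = tune_on_feedback([0.0, 0.0], stream)
preds = [1 if w[0] * x[0] + w[1] * x[1] > 0 else 0 for x, _ in stream]
```

    Correct predictions trigger no update, which mirrors the paper's point that only the (small) set of false predictions needs operator attention.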

  8. Meta-Learning Approach for Automatic Parameter Tuning: A Case Study with Educational Datasets

    ERIC Educational Resources Information Center

    Molina, M. M.; Luna, J. M.; Romero, C.; Ventura, S.

    2012-01-01

    This paper proposes the use of a meta-learning approach for automatic parameter tuning of a well-known decision tree algorithm by using past information about algorithm executions. Fourteen educational datasets were analysed using various combinations of parameter values to examine the effects of the parameter values on classification accuracy.…

  9. Improvement of the matching speed of AIMS for development of an automatic totally tuning system for hyperthermia treatment using a resonant cavity applicator.

    PubMed

    Shindo, Y; Kato, K; Tsuchiya, K; Hirashima, T; Suzuki, M

    2009-01-01

    In this paper, we discuss improving the speed of AIMS (Automatic Impedance Matching System), which automatically performs impedance matching for a re-entrant resonant cavity applicator used in non-invasive hyperthermia treatment of deep brain tumors. We have already discussed the effectiveness of the heating method using AIMS in experiments heating agar phantoms; however, the operating time of AIMS was about 30 minutes. This must be improved before an ATT System (Automatic Totally Tuning System) that includes automatic frequency tuning can be developed, because the ATT System uses AIMS repeatedly to find the resonant frequency. In order to improve the speed of impedance matching, we developed a new automatic impedance matching program (AIMS2). In AIMS, stepping motors were connected to the impedance matching unit's dials, and these dials were turned to reduce the reflected power. AIMS consists of two phases: all-range searching and detailed searching. We focused on three factors affecting the operating speed and improved them: first, the interval between the turning of the motors and the A/D conversion; second, the step size of the motors during all-range searching; and third, the starting position of the motors during detailed searching. We also developed a simple ATT System (ATT-beta) based on AIMS2. To evaluate the developed AIMS2 and ATT-beta, experiments with an agar phantom were performed. From these results, we found that the operating time of AIMS2 is about 4 minutes, approximately 12% of that of AIMS. The ATT-beta results showed that it is possible to tune the frequency and automatically match the impedance with the program based on AIMS2.
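    The two-phase dial search can be sketched as a coarse all-range sweep followed by a fine sweep centred on the coarse optimum; the dial ranges, step sizes, and reflected-power surface below are hypothetical:

```python
def two_phase_match(reflected_power, coarse=8, fine=1, span=100):
    """Two-phase impedance-matching search over two motor-driven dials:
    an all-range sweep with a coarse step, then a detailed sweep with a
    fine step centred on the coarse optimum."""
    # Phase 1: all-range search with coarse motor steps.
    grid = range(0, span, coarse)
    a0, b0 = min(((a, b) for a in grid for b in grid),
                 key=lambda p: reflected_power(*p))
    # Phase 2: detailed search starting near the coarse optimum.
    grid_a = range(max(0, a0 - coarse), min(span, a0 + coarse) + 1, fine)
    grid_b = range(max(0, b0 - coarse), min(span, b0 + coarse) + 1, fine)
    return min(((a, b) for a in grid_a for b in grid_b),
               key=lambda p: reflected_power(*p))

# Hypothetical reflected-power surface with its minimum at dials (37, 62).
dials = two_phase_match(lambda a, b: (a - 37) ** 2 + (b - 62) ** 2)
```

    The speed gain claimed for AIMS2 comes from exactly these knobs: fewer coarse evaluations, and a fine search that starts where the coarse phase left off instead of from scratch.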

  10. Development and validation of a blade-element mathematical model for the AH-64A Apache helicopter

    NASA Technical Reports Server (NTRS)

    Mansur, M. Hossein

    1995-01-01

    A high-fidelity blade-element mathematical model for the AH-64A Apache Advanced Attack Helicopter has been developed by the Aeroflightdynamics Directorate of the U.S. Army's Aviation and Troop Command (ATCOM) at Ames Research Center. The model is based on the McDonnell Douglas Helicopter Systems' (MDHS) Fly Real Time (FLYRT) model of the AH-64A (acquired under contract) which was modified in-house and augmented with a blade-element-type main-rotor module. This report describes, in detail, the development of the rotor module, and presents some results of an extensive validation effort.

  11. Automatic tuned MRI RF coil for multinuclear imaging of small animals at 3T.

    PubMed

    Muftuler, L Tugan; Gulsen, Gultekin; Sezen, Kumsal D; Nalcioglu, Orhan

    2002-03-01

    We have developed an MRI RF coil whose tuning can be adjusted automatically between 120 and 128 MHz for sequential spectroscopic imaging of hydrogen and fluorine nuclei at a field strength of 3 T. Variable-capacitance (varactor) diodes were placed on each rung of an eight-leg low-pass birdcage coil to change the tuning frequency of the coil. The diode junction capacitance is controlled by the amount of applied reverse bias voltage. Impedance matching was also done automatically by another pair of varactor diodes to obtain the maximum SNR at each frequency. The same bias voltage was applied to the tuning varactors on all rungs to avoid perturbations in the coil. A network analyzer was used to monitor matching and tuning of the coil, and a Pentium PC controlled the analyzer through the GPIB bus. A program written in LabVIEW communicated with the network analyzer and adjusted the bias voltages of the varactors via serially programmed D/A converter devices. Isolation amplifiers were used together with RF choke inductors to isolate the RF coil from the DC bias lines. We acquired proton and fluorine images sequentially from a multicompartment phantom using the designed coil. Good matching and tuning were obtained at both resonance frequencies, and the coil could be switched from one resonance frequency to the other within 60 s. (c) 2002 Elsevier Science (USA).
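    The automatic retuning between the two resonance frequencies is, in essence, a feedback loop on the varactor bias. A sketch assuming a monotonic bias-to-frequency response (the response curve and voltage range are invented for illustration, standing in for the network-analyzer measurement):

```python
def tune_varactor_bias(measure_f0, f_target, v_lo=0.0, v_hi=12.0, tol=1e-3):
    """Bisect the varactor reverse-bias voltage until the measured
    resonance frequency reaches the target. Assumes frequency rises
    monotonically with bias, since junction capacitance falls."""
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if measure_f0(v_mid) < f_target:
            v_lo = v_mid          # need less capacitance: raise the bias
        else:
            v_hi = v_mid
    return 0.5 * (v_lo + v_hi)

# Stand-in for the analyzer readout: a hypothetical linear bias-to-
# frequency response spanning 120-128 MHz over 0-12 V.
response = lambda v: 120e6 + (8e6 / 12.0) * v
bias = tune_varactor_bias(response, f_target=128e6)
```

    Each bisection step is one analyzer query plus one D/A write, so the loop converges in a few dozen measurements, comfortably inside the 60 s switchover the paper reports.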

  12. Parameters-tuning of PID controller for automatic voltage regulators using the African buffalo optimization.

    PubMed

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam; Noraziah, A

    2017-01-01

    In this paper, an attempt is made to apply the African Buffalo Optimization (ABO) to tune the parameters of a PID controller for an effective Automatic Voltage Regulator (AVR). Existing metaheuristic tuning methods have proven quite successful, but there remain observable areas needing improvement, especially the system's gain overshoot and steady-state errors. Using the ABO algorithm, in which each buffalo location in the herd is a candidate solution of the Proportional-Integral-Derivative parameters, was very helpful in addressing these two areas of concern. The encouraging results obtained from simulation of the PID controller parameter tuning using the ABO, when compared with the performance of Genetic Algorithm PID (GA-PID), Particle Swarm Optimization PID (PSO-PID), Ant Colony Optimization PID (ACO-PID), plain PID, Bacteria-Foraging Optimization PID (BFO-PID), etc., make ABO-PID a good addition to solving PID controller tuning problems using metaheuristics.
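    The metaheuristic evaluation loop can be sketched by scoring candidate PID gains on the step response of a simple stand-in plant. Plain random search replaces ABO here, and the first-order plant model, gain ranges, and ITAE cost are all assumptions, not the paper's setup:

```python
import random

def avr_cost(kp, ki, kd, dt=0.01, t_end=5.0, tau=0.5):
    """Step-response cost (ITAE: time-weighted absolute error) of a PID
    loop around a first-order stand-in plant dy/dt = (u - y) / tau."""
    y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        e = 1.0 - y                          # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (u - y) / tau              # forward-Euler plant update
        cost += (k * dt) * abs(e) * dt
    return cost

def random_search(n=200, seed=1):
    """Random search standing in for ABO: each candidate 'buffalo
    location' is a (kp, ki, kd) triple scored by avr_cost."""
    rng = random.Random(seed)
    return min(((rng.uniform(0, 5), rng.uniform(0, 2), rng.uniform(0, 0.2))
                for _ in range(n)), key=lambda g: avr_cost(*g))

best = random_search()
```

    Any population metaheuristic (ABO, GA, PSO, ACO, BFO) plugs into the same structure: only the rule for proposing the next candidate gains differs.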

  13. A self-tuning automatic voltage regulator designed for an industrial environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, D.; Hogg, B.W.; Swidenbank, E.

    Examination of the performance of fixed parameter controllers has resulted in the development of self-tuning strategies for excitation control of turbogenerator systems. In conjunction with the advanced control algorithms, sophisticated measurement techniques have previously been adopted on micromachine systems to provide generator terminal quantities. In power stations, however, a minimalist hardware arrangement would be selected, leading to relatively simple measurement techniques. The performance of a range of self-tuning schemes is investigated on an industrial test-bed, employing a typical industrial hardware measurement system. Individual controllers are implemented on a standard digital automatic voltage regulator, as installed in power stations. This employs a VME platform, and the self-tuning algorithms are introduced by linking to a transputer network. The AVR includes all normal features, such as field forcing, VAR limiting and overflux protection. Self-tuning controller performance is compared with that of a fixed gain digital AVR.

  14. In-hardware demonstration of model-independent adaptive tuning of noisy systems with arbitrary phase drift

    DOE PAGES

    Scheinker, Alexander; Baily, Scott; Young, Daniel; ...

    2014-08-01

    In this work, an implementation of a recently developed model-independent adaptive control scheme for tuning uncertain and time-varying systems is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, its ability to handle an arbitrary number of components without increased complexity, and its extreme robustness to measurement noise, a property which is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm for simultaneous tuning of two buncher radio frequency (RF) cavities in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream of the two bunching cavities. The algorithm automatically responds to arbitrary shifts of the cavity phases, re-tuning the cavity settings and maximizing beam acceptance. Because it is model independent, it can be used for continuous adaptation to time variation of a large system, such as thermal drift or damage to components, in which case the remaining functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.

  15. A robust automatic phase correction method for signal dense spectra

    NASA Astrophysics Data System (ADS)

    Bao, Qingjia; Feng, Jiwen; Chen, Li; Chen, Fang; Liu, Zao; Jiang, Bin; Liu, Chaoyang

    2013-09-01

    A robust automatic phase correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. In this work, a new strategy combining 'coarse tuning' with 'fine tuning' is introduced to correct various spectra accurately. In the 'coarse tuning' procedure, a new robust baseline recognition method is proposed for determining the positions of the tail ends of the peaks, and preliminary phased spectra are obtained by minimizing an objective function based on the height difference of these tail ends. After the 'coarse tuning', the peaks in the preliminary corrected spectra can be categorized into three classes: positive, negative, and distorted. Based on this classification, a custom negative penalty function used in the 'fine tuning' step is constructed so that points belonging to the identified negative and distorted peaks are excluded from the penalty on negative points. Finally, the finely phased spectra are obtained by minimizing this custom negative penalty function. The method proves very robust: it tolerates low signal-to-noise ratios and large baseline distortion, and is independent of the starting search points of the phasing parameters. Experimental results on both 1D metabonomics spectra with over-crowded peaks and 2D spectra demonstrate the high efficiency of this automatic method.
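    The 'fine tuning' step can be sketched as minimising a negative-point penalty over the zero-order phase. The synthetic one-line spectrum, the grid search, and holding the first-order term at zero are simplifying assumptions for illustration:

```python
import numpy as np

def phase(spec, phi0, phi1):
    """Apply zero-order (phi0) and first-order (phi1) phase correction
    across the complex spectrum (angles in radians)."""
    ramp = phi0 + phi1 * np.arange(len(spec)) / len(spec)
    return (spec * np.exp(1j * ramp)).real

def negative_penalty(real_spec):
    """Penalty for the 'fine tuning' step: sum of squared negative
    points (assumes an all-positive-peak spectrum, no peak exclusion)."""
    neg = real_spec[real_spec < 0]
    return float((neg ** 2).sum())

def fine_tune_phi0(spec, grid=np.linspace(-np.pi, np.pi, 721)):
    """Grid-search phi0 (phi1 held at 0) to minimise the penalty."""
    return min(grid, key=lambda p: negative_penalty(phase(spec, p, 0.0)))

# Synthetic test: a complex Lorentzian line (absorption + i*dispersion)
# dephased by a known 0.8 rad; mis-phasing mixes in the dispersion
# lineshape, whose negative lobe the penalty detects.
x = np.linspace(-1, 1, 2048)
spec = (1.0 / (1.0 - 1j * (x / 0.02))) * np.exp(-1j * 0.8)
phi0 = fine_tune_phi0(spec)
```

    The recovered phi0 lands within the flat region of the penalty around the true mis-phase; on real spectra the peak-classification step above sharpens this by excluding genuine negative and distorted peaks from the penalty.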

  16. Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu

    2017-04-01

    In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though it is also known as history matching), avoids many of the common pitfalls of automatic tuning procedures based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
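    The rule-out logic of iterative refocussing can be sketched with a scalar implausibility measure; the toy simulator, the variance values, and the conventional cut-off of 3 below are illustrative assumptions:

```python
import numpy as np

def implausibility(sim_out, obs, obs_var, model_disc_var):
    """History-matching implausibility: distance between simulator output
    and observation, scaled by all acknowledged uncertainties."""
    return np.abs(obs - sim_out) / np.sqrt(obs_var + model_disc_var)

def refocus(candidates, simulator, obs, obs_var=0.04, disc_var=0.01, cut=3.0):
    """One wave of iterative refocussing: rule out (rather than optimise
    away) every candidate whose implausibility exceeds the cut-off."""
    return [t for t in candidates
            if implausibility(simulator(t), obs, obs_var, disc_var) < cut]

# Toy one-parameter simulator whose 'true' parameter is 0.6 (obs = 1.2).
sim = lambda t: 2.0 * t
grid = np.linspace(0.0, 1.0, 101)
kept = refocus(grid, sim, obs=1.2)
```

    Note the contrast with cost-function optimisation: a whole band of parameters survives each wave, so partial observations shrink the space without collapsing it onto a single over-tuned point.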

  17. Adaptive Self Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Matthew; Draelos, Timothy; Knox, Hunter

    2017-05-02

    The AST software includes numeric methods to 1) adjust STA/LTA signal detector trigger level (TL) values and 2) filter detections for a network of sensors. AST adapts TL values to the current state of the environment by leveraging cooperation within a neighborhood of sensors. The key metric that guides the dynamic tuning is the consistency of each sensor with its nearest neighbors: TL values are automatically adjusted on a per-station basis to be more or less sensitive so as to produce consistent agreement of detections within the neighborhood. The AST algorithm adapts in near real-time to changing conditions in an attempt to automatically self-tune a signal detector to identify (detect) only signals from events of interest.
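    One neighbourhood-consistency pass can be sketched as follows; the step size, agreement target, and all-to-all voting scheme are assumptions for illustration, not the actual AST numerics:

```python
def adjust_trigger_levels(tl, detections, step=0.05, target=0.5):
    """One AST-style tuning pass: move per-station STA/LTA trigger
    levels (TL) toward consistency with the neighbourhood.
    `detections[i]` is 1 if station i declared a detection, else 0."""
    n = len(tl)
    new_tl = list(tl)
    for i in range(n):
        neighbours = [detections[j] for j in range(n) if j != i]
        agreement = sum(neighbours) / len(neighbours)
        if detections[i] and agreement < target:
            new_tl[i] += step      # lone detector: desensitise (raise TL)
        elif not detections[i] and agreement > target:
            new_tl[i] -= step      # lone non-detector: sensitise (lower TL)
    return new_tl

# Station 0 triggers alone, so only its trigger level is raised.
tl = adjust_trigger_levels([3.0, 3.0, 3.0, 3.0], [1, 0, 0, 0])
```

    Repeating this pass per detection window is what lets the network damp lone false triggers early in the pipeline while leaving consistent (likely real) detections untouched.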

  18. Fine-Tuning Neural Patient Question Retrieval Model with Generative Adversarial Networks.

    PubMed

    Tang, Guoyu; Ni, Yuan; Wang, Keqiang; Yong, Qin

    2018-01-01

    Online patient question and answering (Q&A) systems attract an increasing number of users in China. Patients post their questions and wait for doctors' responses. To avoid the lag time involved in waiting and to reduce the workload on the doctors, a better method is to automatically retrieve semantically equivalent questions from the archive. We present a Generative Adversarial Network (GAN) based approach to automatically retrieve patient questions. We apply supervised deep learning based approaches to determine the similarity between patient questions. Then a GAN framework is used to fine-tune the pre-trained deep learning models. The experimental results show that fine-tuning by GAN can improve the performance.

  19. Hydrogen maser frequency standard computer model for automatic cavity tuning servo simulations

    NASA Technical Reports Server (NTRS)

    Potter, P. D.; Finnie, C.

    1978-01-01

    A computer model of the JPL hydrogen maser frequency standard was developed. This model allows frequency stability data to be generated, as a function of various maser parameters, many orders of magnitude faster than these data can be obtained by experimental test. In particular, the maser performance as a function of the various automatic tuning servo parameters may be readily determined. Areas of discussion include noise sources, first-order autotuner loop, second-order autotuner loop, and a comparison of the loops.

  20. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    PubMed

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared to common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and system performance is improving only slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models. Our hybrid resampling is performed on multi-views and multi-receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs with multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuning model. The multi-CNN-model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach to applying deep learning methods to computer-aided analysis of specific CT imaging signs with insufficient labeled images.

  1. Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert

    2013-04-20

    Enhancing the performance of SciDAC applications on petascale systems had high priority within DOE SC at the start of the second phase of the SciDAC program, SciDAC-2, as it continues to do today. Achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, the University of Southern California's Information Sciences Institute organized the Performance Engineering Research Institute (PERI). PERI implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high profile applications. Within PERI, USC's primary research activity was automatic tuning (autotuning) of scientific software. This activity was spurred by the strong user preference for automatic tools and was based on previous successful activities such as ATLAS, which automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our other major component was application engagement, to which we devoted approximately 30% of our effort to work directly with SciDAC-2 applications. This report is a summary of the overall results of the USC PERI effort.

  2. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
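    The "three-step" idea can be sketched on a toy objective: screen parameter sensitivity one at a time, then refine only the sensitive parameters. A simple coordinate search stands in for the downhill simplex step, and the objective function is invented for illustration:

```python
import numpy as np

def screen_sensitivity(f, x0, delta=0.1):
    """Step 1: one-at-a-time perturbation to rank parameter sensitivity."""
    base = f(x0)
    sens = []
    for i in range(len(x0)):
        x = x0.copy()
        x[i] += delta
        sens.append(abs(f(x) - base))
    return np.array(sens)

def tune(f, x0, n_sensitive=2, iters=60, step=0.2):
    """Steps 2-3 (sketch): keep only the most sensitive parameters and
    refine them with a shrinking coordinate search, standing in for the
    downhill simplex method used in the paper."""
    idx = np.argsort(screen_sensitivity(f, x0))[::-1][:n_sensitive]
    x = x0.copy()
    for _ in range(iters):
        for i in idx:
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):   # greedy accept
                    x = trial
        step *= 0.9                   # shrink the search step
    return x

# Toy skill metric: only parameters 0 and 2 actually matter.
cost = lambda p: (p[0] - 1.0) ** 2 + 100.0 * (p[2] + 0.5) ** 2
x = tune(cost, np.array([0.0, 0.0, 0.0, 0.0]))
```

    Screening keeps the expensive refinement stage in a lower-dimensional space, which is exactly where the computational savings over naive simplex tuning come from.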

  3. Fuzzy Logic-Based Audio Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, M.

    2008-11-01

    Audio and audio-pattern recognition is becoming one of the most important technologies for automatically controlling embedded systems. Fuzzy logic may be the most important enabling methodology due to its ability to model such applications rapidly and economically. An audio and audio-pattern recognition engine based on fuzzy logic has been developed for use in very low-cost and deeply embedded systems to automate human-to-machine and machine-to-machine interaction. This engine consists of simple digital signal-processing algorithms for feature extraction and normalization, and a set of pattern-recognition rules tuned manually or automatically by a self-learning process.

  4. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already-used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  5. Application of genetic algorithms to tuning fuzzy control systems

    NASA Technical Reports Server (NTRS)

    Espy, Todd; Vombrack, Endre; Aldridge, Jack

    1993-01-01

    Real number genetic algorithms (GAs) were applied to tuning the fuzzy membership functions of three controller applications. The first application is our 'Fuzzy Pong' demonstration, a controller for a very responsive system. The performance of the automatically tuned membership functions exceeded that of manually tuned membership functions, both when the algorithm started from randomly generated functions and when it started from the best manually tuned functions. The second GA tunes input membership functions to achieve a specified control surface. The third application is a practical one, a motor controller for a printed circuit manufacturing system. The GA alters the positions and overlaps of the membership functions to accomplish the tuning. The applications, the real number GA approach, the fitness function and population parameters, and the performance improvements achieved are discussed. Directions for further research in tuning input and output membership functions and in tuning fuzzy rules are described.
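A real-number GA of the kind described can be sketched as follows; the fitness function, genetic operators, and parameter values here are illustrative stand-ins, not the paper's actual setup:

```python
import random

def real_ga(fitness, bounds, pop_size=40, gens=120, mut_sigma=0.1, seed=1):
    """Minimal real-number GA: tournament selection, blend crossover,
    Gaussian mutation, two-member elitism. Minimizes `fitness`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def clip(x, i):
        lo, hi = bounds[i]
        return min(max(x, lo), hi)

    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        children = list(scored[:2])            # keep the two best unchanged
        while len(children) < pop_size:
            # two tournament winners become parents
            a, b = (min(rng.sample(scored, 3), key=fitness) for _ in range(2))
            w = rng.random()                   # blend crossover weight
            child = [clip(w * x + (1 - w) * y + rng.gauss(0, mut_sigma), i)
                     for i, (x, y) in enumerate(zip(a, b))]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy stand-in for the paper's fitness: distance of two membership-function
# centers (hypothetical parameters) from their ideal positions.
best = real_ga(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
               bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the paper's setting the chromosome would instead encode membership-function positions and overlaps, with fitness measured on the closed-loop system.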

  6. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for the sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
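The "three-step" idea — screen out insensitive parameters, then hand only the sensitive ones to the downhill simplex — can be sketched as below. The toy metric, thresholds, and bare-bones Nelder-Mead are assumptions for illustration, not the GCM's actual evaluation metric or solver:

```python
def sensitive_params(f, x0, eps=1e-3, threshold=1e-6):
    """Step 1: keep only parameters whose perturbation changes the metric."""
    base = f(x0)
    idx = []
    for i in range(len(x0)):
        x = list(x0)
        x[i] += eps
        if abs(f(x) - base) > threshold:
            idx.append(i)
    return idx

def nelder_mead(f, x0, step=0.5, iters=300):
    """Step 3: bare-bones downhill simplex (reflect/expand/contract/shrink)."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            expa = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [(centroid[i] + worst[i]) / 2 for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:   # shrink all points toward the best
                simplex = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

# Toy evaluation metric over three "parameters": the third has no effect.
metric = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
idx = sensitive_params(metric, [0.0, 0.0, 0.0])   # step 1 drops p[2]

def reduced(q):   # optimize only over the sensitive subset
    p = [0.0, 0.0, 0.0]
    for k, i in enumerate(idx):
        p[i] = q[k]
    return metric(p)

best = nelder_mead(reduced, [0.0] * len(idx))
```

Step 2 (choosing good initial values for the sensitive parameters) would replace the zero starting point before the simplex runs.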

  7. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification without any input parameter values has long been an unattainable goal for remote sensing experts, who typically spend hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible, shareable and interoperable online. Building on these improvements, this paper presents the idea of parameterless automatic classification, which requires only an image and automatically outputs a labeled vector; no parameters or operations are needed from end consumers. An approach is proposed to realize the idea. It adopts an ontology database to store experts' experience in tuning classifier parameter values, and a sample database to record training samples of image segments. Geoprocessing Web services are used as functional blocks to perform the basic classification steps, and workflow technology turns the overall image classification into a fully automatic process. A Web-based prototype system named PACS (Parameterless Automatic Classification System) was implemented, and a number of images were fed into it for evaluation. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy, and that the classified results become more accurate as the quality of the two databases increases. Once the experience and samples in the databases accumulate to the level of a human expert's, the approach should achieve results of similar quality to those an expert can obtain. Since the approach is fully automatic and parameterless, it not only relieves remote sensing workers of heavy, time-consuming parameter tuning, but also significantly shortens the waiting time for consumers and makes it easier for them to engage in image classification activities. Currently, the approach has been used only on high-resolution optical three-band remote sensing imagery; the feasibility of using it on other kinds of remote sensing images, or of involving additional bands in classification, will be studied in future work.

  8. Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Mary

    2014-09-19

    Enhancing the performance of SciDAC applications on petascale systems has high priority within DOE SC. As we look to the future, achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, PERI has implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high profile applications. The PERI performance modeling and prediction activity is developing and refining performance models, significantly reducing the cost of collecting the data upon which the models are based, and increasing model fidelity, speed and generality. Our primary research activity is automatic tuning (autotuning) of scientific software. This activity is spurred by the strong user preference for automatic tools and is based on previous successful activities such as ATLAS, which has automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our third major component is application engagement, to which we are devoting approximately 30% of our effort to work directly with SciDAC-2 applications. This last activity not only helps DOE scientists meet their near-term performance goals, but also helps keep PERI research focused on the real challenges facing DOE computational scientists as they enter the Petascale Era.

  9. An automatic data system for vibration modal tuning and evaluation

    NASA Technical Reports Server (NTRS)

    Salyer, R. A.; Jung, E. J., Jr.; Huggins, S. L.; Stephens, B. L.

    1975-01-01

    A digitally based automatic modal tuning and analysis system developed to provide an operational capability beginning at 0.1 hertz is described. The elements of the system, which provides unique control features, maximum operator visibility, and rapid data reduction and documentation, are briefly described; and the operational flow is discussed to illustrate the full range of capabilities and the flexibility of application. The successful application of the system to a modal survey of the Skylab payload is described. Information about the Skylab test article, coincident-quadrature analysis of modal response data, orthogonality, and damping calculations is included in the appendixes. Recommendations for future application of the system are also made.

  10. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. The empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller which drives the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller for weights computed with the PSO-based technique.
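The upper level of the two-level process is a standard particle swarm optimizer. A minimal sketch follows, with a toy quadratic standing in for the closed-loop cost that would really come from simulating the turbine under the lower-level NMPC; all coefficient values are common defaults, not the paper's:

```python
import random

def pso(cost, bounds, n_particles=30, iters=150, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal global-best PSO minimizing `cost` over box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical upper-level cost over two NMPC weighting coefficients.
weights, score = pso(lambda q: (q[0] - 2.0) ** 2 + (q[1] - 0.5) ** 2,
                     bounds=[(0.0, 5.0), (0.0, 5.0)])
```

In the real setting each `cost` evaluation runs a full closed-loop NMPC simulation, which is what makes the weight search expensive.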

  11. Auto-tuning system for NMR probe with LabView

    NASA Astrophysics Data System (ADS)

    Quen, Carmen; Mateo, Olivia; Bernal, Oscar

    2013-03-01

    Typical manual NMR-tuning methods are not suitable for broadband spectra spanning linewidths of several megahertz. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission line reflections, to name a few. We present the design of an auto-tuning system using the graphical programming language LabVIEW to minimize these problems. The program is designed to analyze the detected power signal of an antenna near the NMR probe and use this analysis to automatically tune the sample coil to match the impedance of the spectrometer (50 Ω). The tuning capacitors of the probe are controlled by a stepper motor through a LabVIEW/computer interface. Our program calculates the area of the power signal as an indicator to control the motor, so disconnecting the coil to tune it through a network analyzer is unnecessary. Work supported by NSF-DMR 1105380.
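The control idea — step the capacitor motor so as to minimize the reflected-power indicator — can be sketched outside LabVIEW as a simple hill-descent loop. The power curve and its matched position below are invented stand-ins for the real antenna measurement:

```python
def reflected_power(pos):
    """Stand-in for the measured power-signal 'area': minimal when the tuning
    capacitor sits at the matched motor position (assumed to be step 500)."""
    return (pos - 500) ** 2 / 1000.0 + 1.0

def auto_tune(measure, pos=0, step=64, min_step=1):
    """Hill descent on the indicator: keep stepping the motor while the
    reading improves; reverse direction and halve the step when it worsens."""
    best = measure(pos)
    while abs(step) >= min_step:
        reading = measure(pos + step)
        if reading < best:
            pos, best = pos + step, reading
        else:
            step = -step // 2   # reverse and halve the step size
    return pos

matched = auto_tune(reflected_power)
```

With a convex indicator like the one assumed here, the loop homes in on the matched position without ever connecting a network analyzer, which is the point of the area-based feedback.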

  12. A Parameter Tuning Scheme of Sea-ice Model Based on Automatic Differentiation Technique

    NASA Astrophysics Data System (ADS)

    Kim, J. G.; Hovland, P. D.

    2001-05-01

    The automatic differentiation (AD) technique was used to illustrate a new approach to a parameter tuning scheme for an uncoupled sea-ice model. The atmospheric forcing field of 1992, obtained from NCEP data, was used as the forcing variables in the study. The simulation results were compared with the observed ice movement provided by the International Arctic Buoy Programme (IABP). All of the numerical experiments were based on a widely used dynamic and thermodynamic model for simulating the seasonal sea-ice change of the main Arctic ocean. We selected five dynamic and thermodynamic parameters for the tuning process, in which the cost function defined by the norm of the difference between observed and simulated ice drift locations was minimized. The selected parameters are the air and ocean drag coefficients, the ice strength constant, the turning angle at the ice-air/ocean interface, and the bulk sensible heat transfer coefficient. The drag coefficients were the major parameters controlling sea-ice movement and extent. The result of the study shows that more realistic simulations of the ice thickness distribution were produced by tuning the simulated ice drift trajectories. In the tuning process, the L-BFGS-B minimization algorithm of a quasi-Newton method was used. The derivative information required in the minimization iterations was provided by the AD-processed Fortran code. Compared with a conventional approach, the AD-generated derivative code provided fast and robust computation of derivative information.
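Forward-mode AD, the mechanism behind AD-generated derivative code, can be illustrated with dual numbers; the one-parameter "model" below is a deliberately trivial stand-in for the sea-ice simulation, and plain gradient descent stands in for L-BFGS-B:

```python
class Dual:
    """Forward-mode AD value: carries f and df/dp together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)
    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, other):
        o = self._lift(other)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, other):
        return self._lift(other).__sub__(self)
    def __mul__(self, other):   # product rule
        o = self._lift(other)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def grad(f, p):
    """df/dp at p via one forward sweep with seed dot = 1."""
    return f(Dual(p, 1.0)).dot

def cost(drag):
    """Toy cost: squared misfit between a 'simulated' drift position (linear
    in a drag-like parameter, an assumption) and an observed position."""
    simulated = drag * 3.0
    observed = 6.0
    return (simulated - observed) * (simulated - observed)

p = 0.0
for _ in range(100):          # gradient descent driven by the AD gradient
    p -= 0.05 * grad(cost, p)
```

The real scheme differentiates the full Fortran model, but the principle is identical: derivatives propagate through every operation alongside the values.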

  13. Automatic adjustment of astrochronologic correlations

    NASA Astrophysics Data System (ADS)

    Zeeden, Christian; Kaboth, Stefanie; Hilgen, Frederik; Laskar, Jacques

    2017-04-01

    Here we present an algorithm for the automated adjustment and optimisation of correlations between proxy data and an orbital tuning target (or similar datasets, e.g. ice models) for the R environment (R Development Core Team 2008), building on the 'astrochron' package (Meyers et al. 2014). The basis of this approach is an initial tuning on orbital (precession, obliquity, eccentricity) scale. We use filters of orbital frequency ranges of the data related to e.g. precession, obliquity or eccentricity and compare these filters to an ensemble of target data, which may consist of e.g. different combinations of obliquity and precession, different phases of precession and obliquity, a mix of orbital and other data (e.g. ice models), or different orbital solutions. This approach allows for the identification of an ideal mix of precession and obliquity to be used as the tuning target. In addition, the uncertainty related to different tuning tie points (and also the precession and obliquity contributions of the tuning target) can easily be assessed. Our message is to suggest an initial tuning and then obtain a reproducible tuned time scale, avoiding arbitrarily chosen tie points and replacing them with automatically chosen ones representing filter maxima (or minima). We present and discuss the approach outlined above and apply it to artificial and geological data. Artificial data are assessed to find optimal filter settings; real datasets are used to demonstrate the possibilities of such an approach. References: Meyers, S.R. (2014). Astrochron: An R Package for Astrochronology. http://cran.r-project.org/package=astrochron R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.

  14. Light shift effects in the Rb-87 maser

    NASA Technical Reports Server (NTRS)

    Busca, G.; Tetu, M.; Vanier, J.

    1973-01-01

    Previous work has shown the possibility to overcome the dependence of the Rb-87 maser frequency on light intensity by tuning the cavity at a proper setting. The conditions for this setting, called the light-independent frequency setting (LIFS), are carefully investigated. The results presented prove the existence of the LIFS and provide a new criterion for an automatic cavity tuning of the Rb maser.

  15. The 5K70SK automatically tuned, high power, S-band klystron

    NASA Technical Reports Server (NTRS)

    Goldfinger, A.

    1977-01-01

    Primary objectives include delivery of 44 5K70SK klystron amplifier tubes and 26 remote tuner assemblies with spare parts kits. Results of a reliability demonstration on a klystron test cavity are discussed, along with reliability tests performed on a remote tuning unit. Production problems and one design modification are reported and discussed. Results of PAT and DVT are included.

  16. Continuous Firefly Algorithm for Optimal Tuning of PID Controller in AVR System

    NASA Astrophysics Data System (ADS)

    Bendjeghaba, Omar

    2014-01-01

    This paper presents a tuning approach based on the continuous firefly algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters in an automatic voltage regulator (AVR) system. In the tuning process, the CFA is iterated to reach optimal or near-optimal PID controller parameters, with the main goal of improving the AVR step response characteristics. Conducted simulations show the effectiveness and efficiency of the proposed approach. Furthermore, the proposed approach can improve the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method gives better control system performance in terms of time-domain specifications and set-point tracking.
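A minimal continuous firefly algorithm can be sketched as follows; the cost function is a toy stand-in for the AVR step-response criterion, and the coefficients are common textbook defaults, not the paper's settings:

```python
import math
import random

def firefly(cost, bounds, n=25, gens=80, beta0=1.0, gamma=1.0, alpha=0.25, seed=7):
    """Minimal continuous firefly algorithm: each firefly moves toward every
    brighter (lower-cost) one with attractiveness beta0*exp(-gamma*r^2),
    plus a random walk that decays over the generations."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    costs = [cost(x) for x in xs]
    for g in range(gens):
        a = alpha * (0.97 ** g)                  # decaying randomization
        for i in range(n):
            for j in range(n):
                if costs[j] < costs[i]:          # j is brighter: move i toward j
                    r2 = sum((xs[i][d] - xs[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        lo, hi = bounds[d]
                        xs[i][d] += (beta * (xs[j][d] - xs[i][d])
                                     + a * (rng.random() - 0.5) * (hi - lo))
                        xs[i][d] = min(max(xs[i][d], lo), hi)
                    costs[i] = cost(xs[i])
    b = min(range(n), key=lambda i: costs[i])
    return xs[b], costs[b]

# Stand-in cost: distance of (Kp, Ki, Kd) from a hypothetical optimum; the
# paper's cost would instead be computed from the AVR step response.
gains, c = firefly(
    lambda k: (k[0] - 1.0) ** 2 + (k[1] - 0.4) ** 2 + (k[2] - 0.2) ** 2,
    bounds=[(0.0, 2.0)] * 3)
```

Swapping the stand-in cost for an ITAE- or overshoot-based measure of the simulated AVR step response recovers the paper's setup.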

  17. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.

  18. SU-F-T-342: Dosimetric Constraint Prediction Guided Automatic Multi-Objective Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, T; Zhou, L; Li, Y

    Purpose: For intensity modulated radiotherapy, plan optimization is time-consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm with the ability to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: The proposed multi-objective optimization framework contains three main components: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values, until no single endpoint has room for further improvement. Last, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 clinical IMRT prostate cases and compared them with our automatically generated plans in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran smoothly and automatically. The number of voxel weighting factor iterations varied from 10 to 30 under an updated constraint, and the number of constraint tuning rounds varied from 20 to 30 for every case until no stricter constraint was allowed. The average total time for the whole optimization procedure was ∼30 min. By comparing the DVHs, better OAR dose sparing was observed in the automatically generated plans for 13 of the 20 cases, while the others gave competitive results. Conclusion: We have successfully developed a fast and automatic multi-objective optimization for intensity modulated radiotherapy. This work is supported by the National Natural Science Foundation of China (No: 81571771)

  19. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of...automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool

  20. Automatic Blocking Of QR and LU Factorizations for Locality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Q; Kennedy, K; You, H

    2004-03-26

    QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To perform these computations efficiently on modern computers, the factorization algorithms need to be blocked when operating on large matrices, to effectively exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, more benefit can be gained by automatically generating blocked versions of the computations, such as automatic adaptation of different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, using reference BLAS, ATLAS BLAS and native BLAS specially tuned for the underlying machine architectures.
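Dependence hoisting itself is a compiler transformation, but the locality payoff of blocking can be illustrated with a hand-blocked kernel. Matrix multiply is used here only because it is the simplest loop nest that shows the tiling pattern; the block size is an arbitrary illustrative choice:

```python
def matmul_blocked(A, B, bs=2):
    """Cache-blocked matrix multiply: iterate over bs-by-bs tiles so each
    tile of B is reused while it is still hot in cache."""
    n, m, k = len(A), len(B[0]), len(B)
    C = [[0.0] * m for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, k, bs):
            for jj in range(0, m, bs):
                # the three inner loops stay inside one tile
                for i in range(ii, min(ii + bs, n)):
                    for p in range(kk, min(kk + bs, k)):
                        a = A[i][p]
                        for j in range(jj, min(jj + bs, m)):
                            C[i][j] += a * B[p][j]
    return C

def matmul_naive(A, B):
    """Unblocked reference version for checking correctness."""
    n, m, k = len(A), len(B[0]), len(B)
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

C = matmul_blocked([[1, 2], [3, 4], [5, 6]], [[7, 8, 9], [10, 11, 12]])
```

For factorizations the same idea applies, but the loop-carried dependences (pivots, Householder updates) are what make the blocking hard to derive automatically, which is the gap dependence hoisting addresses.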

  1. Tuning of active vibration controllers for ACTEX by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kwak, Moon K.; Denoyer, Keith K.

    1999-06-01

    This paper is concerned with the optimal tuning of digitally programmable analog controllers on the ACTEX-1 smart structures flight experiment. The programmable controllers for each channel include a third order Strain Rate Feedback (SRF) controller, a fifth order SRF controller, a second order Positive Position Feedback (PPF) controller, and a fourth order PPF controller. Optimal manual tuning of several control parameters can be a difficult task even though the closed-loop control characteristics of each controller are well known. Hence, the automatic tuning of individual control parameters using Genetic Algorithms is proposed in this paper. The optimal control parameters of each control law are obtained by imposing a constraint on the closed-loop frequency response functions using the ACTEX mathematical model. The tuned control parameters are then uploaded to the ACTEX electronic control electronics and experiments on the active vibration control are carried out in space. The experimental results on ACTEX will be presented.

  2. A simulator evaluation of an automatic terminal approach system

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1983-01-01

    The automatic terminal approach system (ATAS) is a concept for improving the pilot/machine interface with cockpit automation. The ATAS can automatically fly a published instrument approach by using stored instrument approach data to tune airplane avionics, control the airplane's autopilot, and display status information to the pilot. A piloted simulation study was conducted to determine the feasibility of an ATAS, determine pilot acceptance, and examine pilot/ATAS interaction. Seven instrument-rated pilots each flew four instrument approaches with a baseline heading-select autopilot mode. The ATAS runs resulted in lower flight technical error, lower pilot workload, and fewer blunders than with the baseline autopilot. The ATAS status display enabled the pilots to maintain situational awareness during the automatic approaches. The system was well accepted by the pilots.

  3. Automatic Spike Sorting Using Tuning Information

    PubMed Central

    Ventura, Valérie

    2011-01-01

    Current spike sorting methods focus on clustering neurons’ characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes’ identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only. PMID:19548802
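The gain from tuning information can be sketched with a toy classifier that adds a covariate-dependent firing-rate term to the waveform likelihood. The Gaussian waveform model and cosine tuning curves below are illustrative assumptions, not the letter's full EM algorithm:

```python
import math

def gauss_loglik(x, mu, sigma):
    """Log-likelihood of a 1-D waveform feature under a Gaussian model."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def classify_spike(feature, covariate, neurons):
    """Assign a spike to the neuron maximizing waveform log-likelihood plus
    the log of its covariate-dependent firing rate: the Poisson intensity
    acts as a prior over which neuron fired at this covariate value."""
    def score(nrn):
        rate = nrn["rate"](covariate)
        return gauss_loglik(feature, nrn["mu"], nrn["sigma"]) + math.log(rate)
    return max(neurons, key=score)["name"]

# Two hypothetical neurons with overlapping waveform amplitudes but opposite
# directional tuning (cosine tuning over a direction covariate).
neurons = [
    {"name": "A", "mu": 1.0, "sigma": 0.5,
     "rate": lambda th: 10.0 + 8.0 * math.cos(th)},
    {"name": "B", "mu": 1.4, "sigma": 0.5,
     "rate": lambda th: 10.0 - 8.0 * math.cos(th)},
]
# A waveform exactly midway between the means is ambiguous on waveform alone;
# the tuning term breaks the tie.
label = classify_spike(1.2, 0.0, neurons)
```

A waveform-only sorter would have to guess on this spike, which is precisely the misclassification the combined likelihood avoids.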

  4. Automatic spike sorting using tuning information.

    PubMed

    Ventura, Valérie

    2009-09-01

    Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.

  5. Advances in Modal Analysis Using a Robust and Multiscale Method

    NASA Astrophysics Data System (ADS)

    Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.

    2010-12-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  6. Dream controller

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L; Wang, Qiang; Chow, Andrew J

    2013-11-26

    A method and apparatus for intelligently controlling continuous process variables. A Dream Controller comprises an Intelligent Engine mechanism and a number of Model-Free Adaptive (MFA) controllers, each of which is suitable to control a process with specific behaviors. The Intelligent Engine can automatically select the appropriate MFA controller and its parameters so that the Dream Controller can be easily used by people with limited control experience and those who do not have the time to commission, tune, and maintain automatic controllers.

  7. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
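The feedback idea can be sketched with a 1-D stand-in: adapt a segmentation threshold until the result matches an abstract ground truth (here, an expected object count). The bisection rule below assumes the object count decreases monotonically as the threshold rises, which holds for the toy signal but is an assumption, not part of the paper's framework:

```python
def segment_count(signal, threshold):
    """Number of connected above-threshold runs in a 1-D signal."""
    count, inside = 0, False
    for v in signal:
        if v > threshold and not inside:
            count, inside = count + 1, True
        elif v <= threshold:
            inside = False
    return count

def adapt_threshold(signal, expected_objects, lo=0.0, hi=1.0, iters=30):
    """Feedback loop: bisect the threshold until the segmenter finds the
    number of objects the abstract ground truth promises."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        n = segment_count(signal, mid)
        if n == expected_objects:
            return mid
        if n > expected_objects:   # over-segmented: raise the threshold
            lo = mid
        else:                      # under-segmented: lower it
            hi = mid
    return (lo + hi) / 2

# Three peaks of different heights; abstract ground truth says "2 objects".
t = adapt_threshold([0, 0.3, 0, 0.6, 0, 0.9, 0], 2)
```

Replacing the threshold with a vector of pipeline parameters and the count with a quality metric gives the general feedback-based adaptation scheme.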

  8. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213

  9. An automatic molecular beam microwave Fourier transform spectrometer

    NASA Astrophysics Data System (ADS)

    Andresen, U.; Dreizler, H.; Grabow, J.-U.; Stahl, W.

    1990-12-01

    The general setup of an automatic MB-MWFT spectrometer for use in the 4-18 GHz range and its software details are discussed. The experimental control and data handling are performed on a personal computer using an interactive program. The parameters of the MW source and the resonator are controlled via IEEE bus and several serial interface ports. The tuning and measuring processes are automated and the efficiency is increased if unknown spectra are to be scanned. As an example, the spectrum of carbonyl sulfide has been measured automatically. The spectrometer is superior to all other kinds of rotational spectroscopic methods in both speed and unambiguity.

  10. Synthesis of multi-loop automatic control systems by the nonlinear programming method

    NASA Astrophysics Data System (ADS)

    Voronin, A. V.; Emelyanova, T. A.

    2017-01-01

    The article deals with the calculation of optimal tuning parameters for multi-loop control systems using numerical and nonlinear programming methods. For this purpose, the Optimization Toolbox of Matlab is used.
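As a hedged illustration of this style of tuning (not the authors' Matlab code), the problem can be posed in pure Python: simulate the closed loop, score it with an integral criterion, and hand the score to a derivative-free optimizer. The plant, criterion, and search method below are assumptions made for the sketch.

```python
def iae(kp, ki, tau=1.0, k=1.0, dt=0.01, t_end=10.0):
    """Integral absolute error of a unit step for a PI loop around a
    first-order plant G(s) = k / (tau*s + 1), simulated by Euler steps."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit set point
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (k * u - y) / tau      # plant: tau*y' = k*u - y
        cost += abs(e) * dt
    return cost

def coordinate_search(f, x, step=1.0, shrink=0.5, iters=40):
    """Derivative-free minimization: probe +/- step along each coordinate,
    keep any improvement, and shrink the step when none is found."""
    best = f(*x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] = max(0.0, trial[i] + d)  # keep gains nonnegative
                c = f(*trial)
                if c < best:
                    best, x, improved = c, trial, True
        if not improved:
            step *= shrink
    return x, best
```

A production setup would swap `coordinate_search` for a constrained NLP solver, which is the role the Matlab Optimization Toolbox plays in the article.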

  11. Enhanced efficiency of solid-state NMR investigations of energy materials using an external automatic tuning/matching (eATM) robot.

    PubMed

    Pecher, Oliver; Halat, David M; Lee, Jeongjae; Liu, Zigeng; Griffith, Kent J; Braun, Marco; Grey, Clare P

    2017-02-01

    We have developed and explored an external automatic tuning/matching (eATM) robot that can be attached to commercial and/or home-built magic angle spinning (MAS) or static nuclear magnetic resonance (NMR) probeheads. Complete synchronization and automation with Bruker and Tecmag spectrometers is ensured via transistor-transistor-logic (TTL) signals. The eATM robot enables an automated "on-the-fly" re-calibration of the radio frequency (rf) carrier frequency, which is beneficial whenever tuning/matching of the resonance circuit is required, e.g. variable temperature (VT) NMR, spin-echo mapping (variable offset cumulative spectroscopy, VOCS) and/or in situ NMR experiments of batteries. This allows a significant increase in efficiency for NMR experiments outside regular working hours (e.g. overnight) and, furthermore, enables measurements of quadrupolar nuclei which would not be possible in reasonable timeframes due to excessively large spectral widths. Additionally, different tuning/matching capacitor (and/or coil) settings for desired frequencies (e.g. 7Li and 31P at 117 and 122 MHz, respectively, at 7.05 T) can be saved and made directly accessible before automatic tuning/matching, thus enabling automated measurements of multiple nuclei for one sample with no manual adjustment required by the user. We have applied this new eATM approach in static and MAS spin-echo mapping NMR experiments in different magnetic fields on four energy storage materials, namely: (1) paramagnetic 7Li and 31P MAS NMR (without manual recalibration) of the Li-ion battery cathode material LiFePO4; (2) paramagnetic 17O VT-NMR of the solid oxide fuel cell cathode material La2NiO4+δ; (3) broadband 93Nb static NMR of the Li-ion battery material BNb2O5; and (4) broadband static 127I NMR of a potential Li-air battery product LiIO3. In each case, insight into local atomic structure and dynamics arises primarily from the highly broadened (1-25 MHz) NMR lineshapes that the eATM robot is uniquely suited to collect. These new developments in automation of NMR experiments are likely to advance the application of in and ex situ NMR investigations to an ever-increasing range of energy storage materials and systems. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Enhanced efficiency of solid-state NMR investigations of energy materials using an external automatic tuning/matching (eATM) robot

    NASA Astrophysics Data System (ADS)

    Pecher, Oliver; Halat, David M.; Lee, Jeongjae; Liu, Zigeng; Griffith, Kent J.; Braun, Marco; Grey, Clare P.

    2017-02-01

    We have developed and explored an external automatic tuning/matching (eATM) robot that can be attached to commercial and/or home-built magic angle spinning (MAS) or static nuclear magnetic resonance (NMR) probeheads. Complete synchronization and automation with Bruker and Tecmag spectrometers is ensured via transistor-transistor-logic (TTL) signals. The eATM robot enables an automated "on-the-fly" re-calibration of the radio frequency (rf) carrier frequency, which is beneficial whenever tuning/matching of the resonance circuit is required, e.g. variable temperature (VT) NMR, spin-echo mapping (variable offset cumulative spectroscopy, VOCS) and/or in situ NMR experiments of batteries. This allows a significant increase in efficiency for NMR experiments outside regular working hours (e.g. overnight) and, furthermore, enables measurements of quadrupolar nuclei which would not be possible in reasonable timeframes due to excessively large spectral widths. Additionally, different tuning/matching capacitor (and/or coil) settings for desired frequencies (e.g. 7Li and 31P at 117 and 122 MHz, respectively, at 7.05 T) can be saved and made directly accessible before automatic tuning/matching, thus enabling automated measurements of multiple nuclei for one sample with no manual adjustment required by the user. We have applied this new eATM approach in static and MAS spin-echo mapping NMR experiments in different magnetic fields on four energy storage materials, namely: (1) paramagnetic 7Li and 31P MAS NMR (without manual recalibration) of the Li-ion battery cathode material LiFePO4; (2) paramagnetic 17O VT-NMR of the solid oxide fuel cell cathode material La2NiO4+δ; (3) broadband 93Nb static NMR of the Li-ion battery material BNb2O5; and (4) broadband static 127I NMR of a potential Li-air battery product LiIO3. 
In each case, insight into local atomic structure and dynamics arises primarily from the highly broadened (1-25 MHz) NMR lineshapes that the eATM robot is uniquely suited to collect. These new developments in automation of NMR experiments are likely to advance the application of in and ex situ NMR investigations to an ever-increasing range of energy storage materials and systems.

  13. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, B.

    2013-01-01

    A high fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed as well as a case study highlighting the tool's effectiveness.
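A minimal global-best PSO of the kind the abstract describes fits in a few dozen lines. This is a generic sketch, not the Morpheus tool: it tunes a parameter vector to minimize a black-box match error, which in the paper's setting would be the discrepancy between simulation output and flight data.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best variant).

    f      : objective to minimize, f(list[float]) -> float
    bounds : (lo, hi) applied to every dimension
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Because only objective values are used, scale, linearity, and discontinuities of the underlying model need no special handling, which is the property the abstract highlights.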

  14. Virtual Surveyor based Object Extraction from Airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Habib, Md. Ahsan

    Topographic feature detection of land cover from LiDAR data is important in various fields - city planning, disaster response and prevention, soil conservation, infrastructure or forestry. In recent years, feature classification compliant with the Object-Based Image Analysis (OBIA) methodology has been gaining traction in remote sensing and geographic information science (GIS). In OBIA, the LiDAR image is first divided into meaningful segments called object candidates. This yields, in addition to spectral values, a wealth of new information such as aggregated spectral pixel values, morphology, texture, context and topology. Traditional nonparametric segmentation methods rely on segmentations at different scales to produce a hierarchy of semantically significant objects, so properly tuned scale parameters are imperative for successful subsequent classification. Recently, some progress has been made in the development of methods for tuning the parameters for automatic segmentation. However, researchers have found it very difficult to automatically refine the tuning with respect to each object class present in the scene. Moreover, due to the relative complexity of real-world objects, the intra-class heterogeneity is very high, which leads to over-segmentation, and the method then fails to deliver many of the new segment features correctly. In this dissertation, a new hierarchical 3D object segmentation algorithm called Automatic Virtual Surveyor based Object Extraction (AVSOE) is presented. AVSOE segments objects based on their distinct geometric concavity/convexity, achieved by strategically mapping the sloping surface that connects each object to its background. Further analysis produces a hierarchical decomposition of objects into their sub-objects at a single scale level. Extensive qualitative and quantitative results are presented to demonstrate the efficacy of this hierarchical segmentation approach.

  15. A model for tracking concentration of chemical compounds within a tank of an automatic film processor.

    PubMed

    Sobol, Wlad T

    2002-01-01

    A simple kinetic model that describes the time evolution of the chemical concentration of an arbitrary compound within the tank of an automatic film processor is presented. It provides insights into the kinetics of chemistry concentration inside the processor's tank; the results facilitate the tasks of processor tuning and quality control (QC). The model has successfully been used in several troubleshooting sessions of low-volume mammography processors for which maintaining consistent QC tracking was difficult due to fluctuations of bromide levels in the developer tank.
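The abstract does not reproduce the model's equations. As a hedged stand-in, the simplest one-compartment version treats the tank as a well-mixed volume in which replenisher at concentration c_in displaces tank solution at flow rate q, giving V dc/dt = q (c_in - c):

```python
def tank_concentration(c0, c_in, flow, volume, dt, t_end):
    """Well-mixed tank: V * dc/dt = flow * (c_in - c), integrated by Euler.

    Returns the concentration trace; c approaches c_in exponentially with
    time constant V / flow. Parameter names are illustrative, not the paper's.
    """
    c, trace = c0, [c0]
    for _ in range(int(t_end / dt)):
        c += dt * flow * (c_in - c) / volume
        trace.append(c)
    return trace
```

For QC troubleshooting, the useful quantity is the time constant V/flow: a low-volume processor replenishes slowly, so the developer concentration drifts for a long time after any change in film throughput, consistent with the bromide fluctuations the abstract describes.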

  16. Tuned grid generation with ICEM CFD

    NASA Technical Reports Server (NTRS)

    Wulf, Armin; Akdag, Vedat

    1995-01-01

    ICEM CFD is a CAD-based grid generation package that supports multiblock structured, unstructured tetrahedral, and unstructured hexahedral grids. Major development efforts have been spent to extend ICEM CFD's multiblock structured and hexahedral unstructured grid generation capabilities. The modules added are a parametric grid generation module and a semi-automatic hexahedral grid generation module. A fully automatic version of the hexahedral grid generation module, for grids around a set of predefined objects in rectilinear enclosures, has been developed. These modules will be presented, the procedures used will be described, and examples will be discussed.

  17. PI controller design of a wind turbine: evaluation of the pole-placement method and tuning using constrained optimization

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.

    2016-09-01

    PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole-placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In the traditional tuning approaches, the properties of different open loop and closed loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. Then a constrained optimization setup is suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2. Thereafter, the model is reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
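For a first-order model (a deliberately small stand-in; the reduced HAWCStab2 turbine model is far richer), pole placement gives the PI gains in closed form by matching the closed-loop characteristic polynomial to a target s² + 2ζωs + ω²:

```python
def pi_pole_placement(k, tau, omega, zeta):
    """PI gains placing the closed-loop poles of G(s) = k/(tau*s + 1)
    at the roots of s^2 + 2*zeta*omega*s + omega^2.

    Closed loop: tau*s^2 + (1 + k*kp)*s + k*ki = 0; divide by tau and
    match coefficients term by term against the target polynomial.
    """
    kp = (2.0 * zeta * omega * tau - 1.0) / k
    ki = omega ** 2 * tau / k
    return kp, ki
```

Choosing (ω, ζ) directly sets bandwidth and damping, which is the appeal of the method; the paper's point is that robustness measures such as Ms and Mt are not controlled by this choice and motivate the constrained-optimization tuning instead.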

  18. Automated Sensor Tuning for Seismic Event Detection at a Carbon Capture, Utilization, and Storage Site, Farnsworth Unit, Ochiltree County, Texas

    NASA Astrophysics Data System (ADS)

    Ziegler, A.; Balch, R. S.; Knox, H. A.; Van Wijk, J. W.; Draelos, T.; Peterson, M. G.

    2016-12-01

    We present results (e.g. seismic detections and STA/LTA detection parameters) from a continuous downhole seismic array in the Farnsworth Field, an oil field in Northern Texas that hosts an ongoing carbon capture, utilization, and storage project. Specifically, we evaluate data from a passive vertical monitoring array consisting of 16 levels of 3-component 15 Hz geophones installed in the field and continuously recording since January 2014. This detection database is directly compared to ancillary data (i.e. wellbore pressure) to determine if there is any relationship between seismic observables and CO2 injection and pressure maintenance in the field. Of particular interest is detection of relatively low-amplitude signals constituting long-period long-duration (LPLD) events that may be associated with slow shear-slip analogous to low-frequency tectonic tremor. While this category of seismic event provides great insight into the dynamic behavior of the pressurized subsurface, it is inherently difficult to detect. To automatically detect seismic events using effective data processing parameters, an automated sensor tuning (AST) algorithm developed by Sandia National Laboratories is being utilized. AST exploits ideas from neuro-dynamic programming (reinforcement learning) to automatically self-tune and determine optimal detection parameter settings. AST adapts in near real-time to changing conditions and automatically self-tunes a signal detector to identify only signals from events of interest, leading to a reduction in both the number of missed legitimate event detections and the number of false event detections. Funding for this project is provided by the U.S. Department of Energy's (DOE) National Energy Technology Laboratory (NETL) through the Southwest Regional Partnership on Carbon Sequestration (SWP) under Award No. DE-FC26-05NT42591. Additional support has been provided by site operator Chaparral Energy, L.L.C. and Schlumberger Carbon Services. 
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
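The STA/LTA detector the abstract refers to is a standard construct. Below is a minimal fixed-parameter version; the contribution of AST (not shown) is adapting `sta_len`, `lta_len`, and `threshold` on-line, which here are hand-picked constants.

```python
def sta_lta_triggers(signal, sta_len, lta_len, threshold):
    """Classic short-term-average / long-term-average event detector.

    Returns sample indices where the STA/LTA ratio first crosses the
    threshold from below (simple variant; production detectors add
    de-triggering logic, tapers, and recursive averages).
    """
    triggers, prev_ratio = [], 0.0
    for i in range(lta_len, len(signal)):
        sta = sum(abs(x) for x in signal[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in signal[i - lta_len:i]) / lta_len
        ratio = sta / lta if lta > 0 else 0.0
        if ratio >= threshold > prev_ratio:   # upward crossing only
            triggers.append(i)
        prev_ratio = ratio
    return triggers
```

The sensitivity of the result to the three parameters is exactly why the abstract's automated tuning matters: a threshold good for impulsive events will miss the low-amplitude LPLD signals of interest.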

  19. A light Higgs boson would invite supersymmetry

    NASA Astrophysics Data System (ADS)

    Ellis, J.; Ross, D.

    2001-05-01

    If the Higgs boson weighs about 115 GeV, the effective potential of the Standard Model becomes unstable above a scale of about 10⁶ GeV. This instability may be rectified only by new bosonic particles such as stop squarks. However, avoiding the instability requires fine-tuning of the model couplings, in particular if the theory is not to become non-perturbative before the Planck scale. Such fine-tuning is automatic in a supersymmetric model, but is lost if there are no higgsinos. A light Higgs boson would be prima facie evidence for supersymmetry in the top-quark and Higgs sectors.

  20. Design of a PID Controller for a PCR Micro Reactor

    ERIC Educational Resources Information Center

    Dinca, M. P.; Gheorghe, M.; Galvin, P.

    2009-01-01

    Proportional-integral-derivative (PID) controllers are widely used in process control, and consequently they are described in most of the textbooks on automatic control. However, rather than presenting the overall design process, the examples given in such textbooks are intended to illuminate specific focused aspects of selection, tuning and…

  1. Robot tracking system improvements and visual calibration of orbiter position for radiator inspection

    NASA Technical Reports Server (NTRS)

    Tonkay, Gregory

    1990-01-01

    The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.

  2. Development of an embedded atmospheric turbulence mitigation engine

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Bonnett, James; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Methods to reconstruct pictures from imagery degraded by atmospheric turbulence have been under development for decades. The techniques were initially developed for observing astronomical phenomena from the Earth's surface, but have more recently been modified for ground and air surveillance scenarios. Such applications can impose significant constraints on deployment options because they both increase the computational complexity of the algorithms themselves and often dictate a requirement for low size, weight, and power (SWaP) form factors. Consequently, embedded implementations must be developed that can perform the necessary computations on low-SWaP platforms. Fortunately, there is an emerging class of embedded processors driven by the mobile and ubiquitous computing industries. We have leveraged these processors to develop embedded versions of the core atmospheric correction engine found in our ATCOM software. In this paper, we will present our experience adapting our algorithms for embedded systems on a chip (SoCs), namely the NVIDIA Tegra that couples general-purpose ARM cores with their graphics processing unit (GPU) technology and the Xilinx Zynq which pairs similar ARM cores with their field-programmable gate array (FPGA) fabric.

  3. Optimizing Input/Output Using Adaptive File System Policies

    NASA Technical Reports Server (NTRS)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
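A toy version of the classification step (pattern names and policies are illustrative, not the paper's): classify the recent offset trace, then map the class to a caching/prefetching policy. The performance-sensor feedback described in the abstract would further tune each policy's parameters.

```python
def classify_access_pattern(offsets):
    """Classify a file-access trace so a policy can be chosen per pattern."""
    deltas = [b - a for a, b in zip(offsets, offsets[1:])]
    if not deltas:
        return "random"                 # too little history to classify
    if all(d == deltas[0] for d in deltas):
        return "sequential" if deltas[0] == 1 else "strided"
    return "random"

# Illustrative policy table (names invented for the sketch).
POLICY = {
    "sequential": "readahead",          # prefetch the next contiguous blocks
    "strided":    "stride-prefetch",    # prefetch at the detected stride
    "random":     "demand-paging-with-LRU",
}

def choose_policy(offsets):
    return POLICY[classify_access_pattern(offsets)]
```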

  4. Innovations in e-Business: Can Government Contracting be Adapted to Use Crowdsourcing and Open Innovation?

    DTIC Science & Technology

    2010-09-01

    cards and software in almost any computer can communicate with each other seamlessly. The cable modem protocol is another example of competing...streaming: it means that special client software applications known as podcatchers (such as Apple Inc.’s iTunes or Nullsoft’s Winamp) can automatically

  5. Substituted-Letter and Transposed-Letter Effects in a Masked Priming Paradigm with French Developing Readers and Dyslexics

    ERIC Educational Resources Information Center

    Lete, Bernard; Fayol, Michel

    2013-01-01

    The aim of the study was to undertake a behavioral investigation of the development of automatic orthographic processing during reading acquisition in French. Following Castles and colleagues' 2007 study ("Journal of Experimental Child Psychology, 97," 165-182) and their lexical tuning hypothesis framework, substituted-letter and…

  6. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    DTIC Science & Technology

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  7. Commercial applications

    NASA Technical Reports Server (NTRS)

    Togai, Masaki

    1990-01-01

    Viewgraphs on commercial applications of fuzzy logic in Japan are presented. Topics covered include: suitable application area of fuzzy theory; characteristics of fuzzy control; fuzzy closed-loop controller; Mitsubishi heavy air conditioner; predictive fuzzy control; the Sendai subway system; automatic transmission; fuzzy logic-based command system for antilock braking system; fuzzy feed-forward controller; and fuzzy auto-tuning system.

  8. Quantum vacuum energy in general relativity

    NASA Astrophysics Data System (ADS)

    Henke, Christian

    2018-02-01

    The paper deals with the scale discrepancy between the observed vacuum energy in cosmology and the theoretical quantum vacuum energy (cosmological constant problem). Here, we demonstrate that Einstein's equation and an analogy to particle physics leads to the first physical justification of the so-called fine-tuning problem. This fine-tuning could be automatically satisfied with the variable cosmological term Λ (a)=Λ_0+Λ_1 a^{-(4-ɛ)}, 0 < ɛ ≪ 1, where a is the scale factor. As a side effect of our solution of the cosmological constant problem, the dynamical part of the cosmological term generates an attractive force and solves the missing mass problem of dark matter.

  9. Arduino Due based tool to facilitate in vivo two-photon excitation microscopy.

    PubMed

    Artoni, Pietro; Landi, Silvia; Sato, Sebastian Sulis; Luin, Stefano; Ratto, Gian Michele

    2016-04-01

    Two-photon excitation spectroscopy is a powerful technique for the characterization of the optical properties of genetically encoded and synthetic fluorescent molecules. Excitation spectroscopy requires tuning the wavelength of the Ti:sapphire laser while carefully monitoring the delivered power. To assist laser tuning and the control of delivered power, we developed an Arduino Due based tool for the automatic acquisition of high quality spectra. This tool is portable, fast, affordable and precise. It allowed studying the impact of scattering and of blood absorption on two-photon excitation light. In this way, we determined the wavelength-dependent deformation of excitation spectra occurring in deep tissues in vivo.

  10. Recent advances in automatic alignment system for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Wilhelmsen, Karl; Awwal, Abdul A. S.; Kalantar, Dan; Leach, Richard; Lowe-Webb, Roger; McGuigan, David; Miller Kamm, Vicki

    2011-03-01

    The automatic alignment system for the National Ignition Facility (NIF) is a large-scale parallel system that directs all 192 laser beams along the 300-m optical path to a 50-micron focus at the target chamber in less than 50 minutes. The system automatically commands 9,000 stepping motors to adjust mirrors and other optics based upon images acquired from high-resolution digital cameras viewing beams at various locations. Forty-five control loops per beamline request image processing services running on a LINUX cluster to analyze these images of the beams and references, and automatically steer the beams toward the target. This paper discusses the upgrades to the NIF automatic alignment system to handle new alignment needs and evolving requirements as related to the various types of experiments performed. As NIF becomes a continuously operated system and more experiments are performed, performance monitoring is increasingly important for maintenance and commissioning work. Data collected during operations are analyzed for tuning of the laser and for targeting maintenance work. Handling evolving alignment and maintenance needs is expected over the planned 30-year operational life of NIF.

  11. Effect of Weight on the Resonant Tuning of Energy Harvesting Devices Using Giant Magnetostrictive Materials.

    PubMed

    Mori, Kotaro; Horibe, Tadashi; Ishikawa, Shigekazu

    2018-04-10

    This study deals with the numerical and experimental study of the effect of weight on the resonant tuning and energy harvesting characteristics of energy harvesting devices using giant magnetostrictive materials. The energy harvesting device is made in a cantilever shape using a thin Terfenol-D layer, stainless steel (SUS) layer and a movable proof mass, among other things. In this study, two types of movable proof mass were prepared, and the device was designed to adjust its own resonant frequency automatically to match external vibration frequency in real time. Three-dimensional finite element analysis (FEA) was performed, and the resonant frequency, tip displacement, and output voltage in the devices were predicted and measured, and the simulation and experiment results were compared. The effects of the weight of the proof mass on self-tuning ability and time-varying behavior were then considered in particular.

  12. Model-independent particle accelerator tuning

    DOE PAGES

    Scheinker, Alexander; Pang, Xiaoying; Rybarcyk, Larry

    2013-10-21

    We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: 1) it has the ability to handle unknown, time-varying systems; 2) it gives known bounds on parameter update rates; 3) we give an analytic proof of its convergence and its stability; and 4) it has a simple digital implementation through a control system such as the Experimental Physics and Industrial Control System (EPICS). Because this technique is model independent, it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multi-particle simulation results demonstrating the scheme's ability to simultaneously adaptively adjust the set points of twenty-two quadrupole magnets and two RF buncher cavities in the Los Alamos Neutron Science Center Linear Accelerator's transport region, while the beam properties and RF phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also present an outline of how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
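The rotation rate method itself is specific to the paper, but the family it belongs to (model-independent, perturbation-based tuning) is easy to sketch. Classic sinusoidal extremum seeking, shown below with illustrative parameters, uses only scalar cost readings (here, standing in for a beam-current-derived measure) and no model of the machine.

```python
import math

def extremum_seek(cost, theta0, omega=50.0, amp=0.2, gain=2.0, dt=0.001, t_end=40.0):
    """Sinusoidal extremum seeking: perturb the parameter, correlate the
    measured cost with the perturbation, and descend the resulting gradient
    estimate. Only cost readings are used; no system model is required.
    """
    theta, t = theta0, 0.0
    while t < t_end:
        probe = amp * math.sin(omega * t)
        j = cost(theta + probe)                        # scalar measurement only
        theta -= gain * dt * j * math.sin(omega * t)   # demodulated update
        t += dt
    return theta
```

Averaging over the probe period, the update behaves like gradient descent with effective gain `gain * amp / 2`, so the parameter settles at the cost minimum even if the cost function drifts slowly in time.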

  13. Experimental test of an online ion-optics optimizer

    NASA Astrophysics Data System (ADS)

    Amthor, A. M.; Schillaci, Z. M.; Morrissey, D. J.; Portillo, M.; Schwarz, S.; Steiner, M.; Sumithrarachchi, Ch.

    2018-07-01

    A technique has been developed and tested to automatically adjust multiple electrostatic or magnetic multipoles on an ion optical beam line - according to a defined optimization algorithm - until an optimal tune is found. This approach simplifies the process of determining high-performance optical tunes, satisfying a given set of optical properties, for an ion optical system. The optimization approach is based on the particle swarm method and is entirely model independent, thus the success of the optimization does not depend on the accuracy of an extant ion optical model of the system to be optimized. Initial test runs of a first order optimization of a low-energy (<60 keV) all-electrostatic beamline at the NSCL show reliable convergence of nine quadrupole degrees of freedom to well-performing tunes within a reasonable number of trial solutions, roughly 500, with full beam optimization run times of roughly two hours. Improved tunes were found both for quasi-local optimizations and for quasi-global optimizations, indicating a good ability of the optimizer to find a solution with or without a well defined set of initial multipole settings.

  14. Minimization of betatron oscillations of electron beam injected into a time-varying lattice via extremum seeking

    DOE PAGES

    Scheinker, Alexander; Huang, Xiaobiao; Wu, Juhao

    2017-02-20

    Here, we report on a beam-based experiment performed at the SPEAR3 storage ring of the Stanford Synchrotron Radiation Lightsource at the SLAC National Accelerator Laboratory, in which a model-independent extremum-seeking optimization algorithm was utilized to minimize betatron oscillations in the presence of a time-varying kicker magnetic field, by automatically tuning the pulsewidth, voltage, and delay of two other kicker magnets, and the current of two skew quadrupole magnets, simultaneously, in order to optimize injection kick matching. Adaptive tuning was performed on eight parameters simultaneously. The scheme was able to continuously maintain the match of a five-magnet lattice while the field strength of a kicker magnet was continuously varied at a rate much higher (±6% sinusoidal voltage change over 1.5 h) than typically experienced in operation. Lastly, the ability to quickly tune or compensate for time variation of coupled components, as demonstrated here, is very important for the more general, more difficult problem of global accelerator tuning to quickly switch between various experimental setups.

  15. Limitations of the Motivational Intensity Model of Attentional Tuning: Reply to Harmon-Jones, Gable, and Price (2011)

    ERIC Educational Resources Information Center

    Friedman, Ronald S.; Forster, Jens

    2011-01-01

    In an integrative review, we concluded that implicit affective cues--rudimentary stimuli associated with the onset of arousing positive or negative emotional states and/or with appraisals that the environment is benign or threatening--automatically moderate the scope of attention (Friedman & Forster, 2010). In their comment, Harmon-Jones, Gable,…

  16. Podcasting and the Long Tail

    ERIC Educational Resources Information Center

    Bull, Glen

    2005-01-01

    Podcasting allows distribution of audio files through an RSS feed. This permits users to subscribe to a series of podcasts that are automatically sent to their computer or MP3 player. The capability to receive podcasts is built into freely distributed software such as iPodder as well as the most recent version of iTunes, a free download. In this…

  17. AUTOMATIC FREQUENCY CONTROL SYSTEM

    DOEpatents

    Hansen, C.F.; Salisbury, J.D.

    1961-01-10

    A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency-determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be reconnected to the cavity.

  18. Arduino Due based tool to facilitate in vivo two-photon excitation microscopy

    PubMed Central

    Artoni, Pietro; Landi, Silvia; Sato, Sebastian Sulis; Luin, Stefano; Ratto, Gian Michele

    2016-01-01

    Two-photon excitation spectroscopy is a powerful technique for the characterization of the optical properties of genetically encoded and synthetic fluorescent molecules. Excitation spectroscopy requires tuning the wavelength of the Ti:sapphire laser while carefully monitoring the delivered power. To assist laser tuning and the control of delivered power, we developed an Arduino Due based tool for the automatic acquisition of high-quality spectra. This tool is portable, fast, affordable and precise. It allowed us to study the impact of scattering and of blood absorption on two-photon excitation light, and in this way we determined the wavelength-dependent deformation of excitation spectra occurring in deep tissues in vivo. PMID:27446677

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hao; Garzoglio, Gabriele; Ren, Shangping

    FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic, on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources to different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and wasted resources. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware best-fit resource allocation algorithm can significantly improve VM launching time when a large number of VMs are launched simultaneously.
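    The flavor of an overhead-aware best-fit policy can be sketched as follows. The per-host overhead numbers are stand-ins for predictions from the tuned reference model, whose actual form is not given in this record; host names and capacity units are invented:

```python
def overhead_aware_best_fit(vm_demand, hosts, overhead):
    """Pick the host that fits the VM with the least leftover capacity,
    penalizing hosts with a high predicted launching overhead."""
    best, best_score = None, None
    for host, free in hosts.items():
        if free < vm_demand:
            continue  # host cannot fit this VM at all
        # Classic best-fit term (leftover capacity) plus overhead penalty.
        score = (free - vm_demand) + overhead[host]
        if best_score is None or score < best_score:
            best, best_score = host, score
    return best

hosts = {"node1": 4, "node2": 8, "node3": 5}           # free capacity units
predicted_overhead = {"node1": 0.5, "node2": 0.1, "node3": 2.0}
choice = overhead_aware_best_fit(4, hosts, predicted_overhead)
```

    With a demand of 4 units, node1 fits exactly and wins despite a moderate overhead; a demand of 6 would leave node2 as the only feasible host.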

  20. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
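    The adaptive step-size rule, scaling the TV descent step by the size of the preceding POCS update, can be illustrated on a 1-D toy problem (box-constrained denoising rather than CT; the box half-width, step fraction, and iteration counts below are invented for the example):

```python
def tv(x):
    """Total variation of a 1-D signal."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_grad(x, eps=1e-8):
    """Smoothed gradient of the total variation."""
    g = [0.0] * len(x)
    for i in range(len(x) - 1):
        d = x[i + 1] - x[i]
        s = d / (d * d + eps) ** 0.5
        g[i] -= s
        g[i + 1] += s
    return g

def adaptive_tv_pocs(y, delta, n_iter=100, tv_iters=5, beta=0.1):
    """Alternate a POCS stage (projection onto the data-fidelity box
    |x[i] - y[i]| <= delta) with steepest-descent TV minimization whose
    step size is scaled by the size of the preceding POCS update."""
    clip = lambda x: [min(max(xi, yi - delta), yi + delta)
                      for xi, yi in zip(x, y)]
    x = [0.0] * len(y)
    for _ in range(n_iter):
        x_old = x[:]
        x = clip(x)  # data-fidelity (POCS) stage
        d_pocs = sum((u - v) ** 2 for u, v in zip(x, x_old)) ** 0.5
        for _ in range(tv_iters):  # TV stage with adaptive step size
            g = tv_grad(x)
            gn = sum(gi * gi for gi in g) ** 0.5
            if gn == 0 or d_pocs == 0:
                break
            x = [xi - (beta * d_pocs / gn) * gi for xi, gi in zip(x, g)]
    return clip(x)

# Noisy step signal: the box keeps x near the data while the adaptive
# TV stage flattens the noise; the TV step shrinks as POCS updates shrink.
y = [(0.0 if i < 10 else 1.0) + 0.08 * (-1) ** i for i in range(20)]
xr = adaptive_tv_pocs(y, delta=0.1)
```

    Tying the TV step to the POCS update size is what removes the hand-tuned step-size schedule: as the iterates approach data consistency, the regularization steps automatically shrink with them.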

  1. Designed tools for analysis of lithography patterns and nanostructures

    NASA Astrophysics Data System (ADS)

    Dervillé, Alexandre; Baderot, Julien; Bernard, Guilhem; Foucher, Johann; Grönqvist, Hanna; Labrosse, Aurélien; Martinez, Sergio; Zimmermann, Yann

    2017-03-01

    We introduce a set of designed tools for the analysis of lithography patterns and nanostructures. The classical metrological analysis of these objects has the drawbacks of being time-consuming, requiring manual tuning, and lacking robustness and user-friendliness. With the goal of improving the current situation, we propose new image processing tools at different levels: semi-automatic, automatic, and machine-learning-enhanced tools. The complete set of tools has been integrated into a software platform designed to transform the lab into a virtual fab. The underlying idea is to master nano processes at the research and development level by accelerating access to knowledge and hence speeding up implementation in product lines.

  2. 99Tc atom counting by quadrupole ICP-MS. Optimisation of the instrumental response

    NASA Astrophysics Data System (ADS)

    Más, José L.; Garcia-León, Manuel; Bolívar, Juan P.

    2002-05-01

    In this paper, extensive work is done on the specific tuning of a conventional ICP-MS for 99Tc atom counting. For this, two methods have been used and compared: the partial variable control method and the 5D Simplex method. Instrumental limits of detection of 0.2 and 0.8 ppt, respectively, were obtained. They are noticeably lower than that found with the automatic tune method of the spectrometer, 47 ppt, which shows the need for specific tuning when very low levels of 99Tc have to be determined. A study is presented on the mass interferences for 99Tc. Our experiments show that the formation of polyatomic ions or refractory oxides, as well as 98Mo hydrides, seems to be irrelevant for 99Tc atom counting. The opposite occurs with the presence of isobaric interferences, i.e. 99Ru, and the effect of abundance sensitivity, or low-mass resolution, which can modify the response at m/z = 99 to a non-negligible extent.
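    The simplex method referred to above is the Nelder-Mead downhill simplex. A compact sketch is given below, applied to a stand-in two-parameter cost; the paper's real objective (the 99Tc count rate as a function of five instrument settings) is not reproducible here, so the quadratic cost and starting point are invented:

```python
def nelder_mead(f, x0, step=0.5, max_iter=200,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal Nelder-Mead downhill simplex minimization of f."""
    n = len(x0)
    # Initial simplex: x0 plus one vertex perturbed per dimension.
    simplex = [list(x0)]
    for i in range(n):
        v = list(x0)
        v[i] += step
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # Centroid of all vertices except the worst.
        cen = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [cen[i] + alpha * (cen[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):
            # Reflection is the new best: try expanding further.
            exp = [cen[i] + gamma * (refl[i] - cen[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # Contract toward the centroid, or shrink toward the best.
            con = [cen[i] + rho * (worst[i] - cen[i]) for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:
                simplex = [best] + [
                    [best[i] + sigma * (v[i] - best[i]) for i in range(n)]
                    for v in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Stand-in cost with minimum at (1.0, -2.0); a real tune would instead
# minimize the negative count rate over the instrument parameters.
opt = nelder_mead(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                  [0.0, 0.0])
```

    The appeal for instrument tuning is that the simplex needs only measured responses, no derivatives, which suits noisy count-rate objectives.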

  3. A simulation evaluation of a pilot interface with an automatic terminal approach system

    NASA Technical Reports Server (NTRS)

    Hinton, David A.

    1987-01-01

    The pilot-machine interface with cockpit automation is a critical factor in achieving the benefits of automation and reducing pilot blunders. To improve this interface, an automatic terminal approach system (ATAS) was conceived that can automatically fly a published instrument approach by using stored instrument approach data to automatically tune airplane radios and control an airplane autopilot and autothrottle. The emphasis in the ATAS concept is a reduction in pilot blunders and work load by improving the pilot-automation interface. A research prototype of an ATAS was developed and installed in the Langley General Aviation Simulator. A piloted simulation study of the ATAS concept showed fewer pilot blunders, but no significant change in work load, when compared with a baseline heading-select autopilot mode. With the baseline autopilot, pilot blunders tended to involve loss of navigational situational awareness or instrument misinterpretation. With the ATAS, pilot blunders tended to involve a lack of awareness of the current ATAS mode state or deficiencies in the pilots' mental model of how the system operated. The ATAS display provided adequate approach status data to maintain situational awareness.

  4. preAssemble: a tool for automatic sequencer trace data processing.

    PubMed

    Adzhubei, Alexei A; Laerdahl, Jon K; Vlasova, Anna V

    2006-01-17

    Trace or chromatogram files (raw data) are produced by automatic nucleic acid sequencing equipment, or sequencers. Each file contains information which can be interpreted by specialised software to reveal the sequence (base calling). This is done by the sequencer's proprietary software or by publicly available programs. Depending on the size of a sequencing project, the number of trace files can vary from just a few to thousands of files. Sequencing quality assessment on various criteria is important at the stage preceding clustering and contig assembly. Two major publicly available packages, Phred and Staden, are used by preAssemble to perform sequence quality processing. The preAssemble pre-assembly sequence processing pipeline has been developed for small- to large-scale automatic processing of DNA sequencer chromatogram (trace) data. The Staden Package Pregap4 module and the base-calling program Phred are utilized in the pipeline, which produces detailed and self-explanatory output that can be displayed with a web browser. preAssemble can be used successfully with very little previous experience; however, options for parameter tuning are provided for advanced users. preAssemble runs under UNIX and LINUX operating systems. It is available for downloading and will run as stand-alone software. It can also be accessed on the Norwegian Salmon Genome Project web site, where preAssemble jobs can be run on the project server. preAssemble is a tool for performing quality assessment of sequences generated by automatic sequencing equipment. It is flexible, since both interactive jobs on the preAssemble server and the stand-alone downloadable version are available. Virtually no previous experience is necessary to run a default preAssemble job; on the other hand, options for parameter tuning are provided. Consequently, preAssemble can be used as efficiently for just several trace files as for large-scale sequence processing.

  5. Finite element modelling and updating of a lively footbridge: The complete process

    NASA Astrophysics Data System (ADS)

    Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul

    2007-03-01

    The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The process of updating identifies the drawbacks in the FE modelling, and the updated FE model can be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering, where it can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process consisting of the four phases is outlined and brief theory is presented as appropriate. Then, the procedure is implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed.
This interpretation is often missing in the published literature. It was found that the composite slabs were less stiff than originally assumed and that the asphalt layer contributed considerably to the deck stiffness.

  6. Automatic detection of white-light flare kernels in SDO/HMI intensitygrams

    NASA Astrophysics Data System (ADS)

    Mravcová, Lucia; Švanda, Michal

    2017-11-01

    Solar flares with broadband emission in the white-light range of the electromagnetic spectrum belong to the most enigmatic phenomena on the Sun. The origin of the white-light emission is not entirely understood. We aim to systematically study the visible-light emission connected to solar flares in SDO/HMI observations. We developed a code for the automatic detection of flare kernels with HMI intensity brightenings and study the properties of the detected candidates. The code was tuned and tested, and with a little effort it could be applied to any suitable data set. By studying a few flare examples, we found indications that the HMI intensity brightening might be an artefact of the simplified procedure used to compute HMI observables.

  7. Stiffness control of magnetorheological gels for adaptive tunable vibration absorber

    NASA Astrophysics Data System (ADS)

    Kim, Hyun Kee; Kim, Hye Shin; Kim, Young-Keun

    2017-01-01

    In this study, a stiffness feedback control system for magnetorheological (MR) gel, a smart material of variable stiffness, is proposed, toward the design of a tunable vibration absorber that can adaptively tune to a time-varying disturbance in real time. A PID controller was designed to track the required stiffness of the MR gel by controlling the magnitude of the target external magnetic field pervading the MR gel. This paper proposes a novel magnetic field generator that can produce a variable magnetic field with low energy consumption. The performance of the MR gel stiffness control was validated through experiments, which showed that the MR gel absorber system could be automatically tuned from 56 Hz to 67 Hz under a field of 100 mT to minimize the vibration of the primary system.
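    A discrete PID stiffness-tracking loop of the kind described might be sketched as follows. The first-order plant, the gains, and the time constants are invented stand-ins, not the MR-gel dynamics from the paper:

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integ = 0.0
        self.prev_err = None

    def update(self, err):
        self.integ += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integ + self.kd * deriv

# Hypothetical first-order plant: stiffness follows the applied field
# magnitude with a lag of tau seconds.
dt, tau, gain = 0.01, 0.2, 1.0
stiffness, target = 0.0, 5.0          # arbitrary stiffness units
pid = PID(kp=4.0, ki=8.0, kd=0.05, dt=dt)
for _ in range(1000):                 # 10 s of simulated time
    field = pid.update(target - stiffness)   # commanded field magnitude
    stiffness += dt * (gain * field - stiffness) / tau
```

    The integral term removes the steady-state offset, so the tracked stiffness settles on the target; in the absorber application the target would come from the frequency of the measured disturbance.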

  8. Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance

    NASA Astrophysics Data System (ADS)

    Speck, Richard P.; Herz, Norman E., Jr.

    2000-06-01

    Automatic test and calibration has become a valuable feature in many consumer products, ranging from antilock braking systems to auto-tuning TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in HMDs and thereby making binocular displays a practical reality. A suitcase-sized, field-portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage and component replacement. Planning on this would yield large savings through relaxed component specifications and reduced logistics costs, yet the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual `beyond visual range' operations. Some versions of the ATE described are in production, and examples of high-resolution optical test data will be discussed.

  9. Development of testing and training simulator for CEDMCS in KSNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, C. H.; Park, C. Y.; Nam, J. I.

    2006-07-01

    This paper presents a newly developed testing and training simulator (TTS) for automatically diagnosing and tuning the Control Element Drive Mechanism Control System (CEDMCS). TTS includes a new automatic diagnostic method for logic control cards and a new tuning method for phase synchronous pulse cards. In Korea Standard Nuclear Power Plants (KSNP), reactor trips occasionally occur due to a damaged logic control card in CEDMCS. However, there is no pre-diagnostic tester available to detect a damaged card in CEDMCS before it causes a reactor trip. Even after a reactor trip occurs, it is difficult to find the damaged card. To find the damaged card, ICT is usually used. ICT is an automated, computer-controlled testing system with measurement capabilities for testing active and passive components, or clusters of components, on printed circuit boards (PCB) and/or assemblies. However, ICT cannot detect a time-dependent fault correctly and requires removal of the waterproof coating to perform the test. Therefore, the additional procedure of re-coating the PCB card is required after the test. TTS for CEDMCS is designed based on real plant conditions, both electrically and mechanically. Therefore, the operator can operate the Control Element Drive Mechanism (CEDM), which is mounted on the closure head of the reactor vessel (RV), using the soft control panel in TTS, which duplicates the Main Control Board (MCB) in the Main Control Room (MCR). However, during the generation of electric power in a nuclear power plant, it is difficult to operate the CEDM, so a CEDM and Control Element Assembly (CEA) mock-up facility was developed to simulate a real plant CEDM. TTS was used for diagnosing and tuning control logic cards in CEDMCS in the Ulchin Nuclear Power Plant No. 4 during the plant overhaul period. It exhibited good performance in detecting the damaged cards and tuning the phase synchronous pulse cards. In addition, TTS was useful in training the CEDMCS operator by supplying detailed signal information from the logic cards. (authors)

  10. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  11. Automatic sleep stage classification using two-channel electro-oculography.

    PubMed

    Virkkala, Jussi; Hasan, Joel; Värri, Alpo; Himanen, Sari-Leena; Müller, Kiti

    2007-10-15

    An automatic method for the classification of wakefulness and the sleep stages SREM, S1, S2 and SWS was developed based on our two previous studies. The method is based on a two-channel electro-oculography (EOG) referenced to the left mastoid (M1). Synchronous electroencephalographic (EEG) activity in S2 and SWS was detected by calculating the cross-correlation and the peak-to-peak amplitude difference in the 0.5-6 Hz band between the two EOG channels. An automatic slow eye movement (SEM) estimation was used to indicate wakefulness, SREM and S1. Beta power (18-30 Hz) and alpha power (8-12 Hz) were also used for wakefulness detection. Synchronous 1.5-6 Hz EEG activity and the absence of large eye movements were used to separate S1 from SREM. Simple smoothing rules were also applied. Sleep EEG, EOG and EMG were recorded from 265 subjects. The system was tuned using data from 132 training subjects and then applied to data from 131 validation subjects different from the training subjects. Cohen's kappa between the visual scoring and the developed new automatic scoring in separating 30 s wakefulness, SREM, S1, S2 and SWS epochs was a substantial 0.62, with an epoch-by-epoch agreement of 72%. With automatic subject-specific alpha thresholds for offline applications, the results improved to 0.63 and 73%. The automatic method can be further developed and applied to ambulatory sleep recordings by using only four disposable, self-adhesive and self-applicable electrodes.
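    The polarity logic behind the two-channel detection can be sketched as follows. Band-pass filtering to 0.5-6 Hz and the paper's actual thresholds are omitted; the correlation threshold, synthetic signals, and sampling rate below are invented for the illustration:

```python
import math

def corr(a, b):
    """Zero-lag Pearson correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def looks_synchronous(eog_left, eog_right, r_thresh=0.8):
    """Crude stand-in for the S2/SWS indicator: synchronous EEG appears
    with the same polarity on both mastoid-referenced EOG channels (high
    positive correlation), whereas conjugate eye movements appear with
    opposite polarity (negative correlation)."""
    return corr(eog_left, eog_right) > r_thresh

# Synthetic 2 Hz "slow wave" seen with the same polarity on both
# channels, versus an eye movement mirrored between the channels.
t = [i / 100.0 for i in range(200)]               # 2 s at 100 Hz
wave = [math.sin(2 * math.pi * 2 * x) for x in t]
eye = [math.tanh(5 * (x - 1.0)) for x in t]
sync = looks_synchronous(wave, [0.9 * w for w in wave])
moves = looks_synchronous(eye, [-e for e in eye])
```

    This polarity asymmetry is why a pure EOG montage can stand in for EEG when separating synchronous slow activity from eye movements.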

  12. Neural Network-Based Self-Tuning PID Control for Underwater Vehicles

    PubMed Central

    Hernández-Alvarado, Rodrigo; García-Valdovinos, Luis Govinda; Salgado-Jiménez, Tomás; Gómez-Espinosa, Alfonso; Fonseca-Navarro, Fernando

    2016-01-01

    For decades, PID (Proportional + Integral + Derivative)-like controllers have been successfully used in academia and industry for many kinds of plants, thanks to their simplicity and suitable performance in linear or linearized plants and, under certain conditions, in nonlinear ones. A number of PID controller gain tuning approaches have been proposed in the literature in recent decades, most of them off-line techniques. However, in those cases wherein plants are subject to continuous parametric changes or external disturbances, online gain tuning is a desirable choice. This is the case for modular underwater ROVs (Remotely Operated Vehicles), where parameters (weight, buoyancy, added mass, among others) change according to the tool the vehicle is fitted with. In practice, some amount of time is dedicated to tuning the PID gains of a ROV; once the best set of gains has been achieved, the ROV is ready to work. However, when the vehicle changes its tool or is subject to ocean currents, its performance deteriorates, since the fixed set of gains is no longer valid for the new conditions. Thus, an online PID gain tuning algorithm should be implemented to overcome this problem. In this paper, an auto-tuned PID-like controller based on Neural Networks (NN) is proposed. The NN plays the role of automatically estimating the suitable set of PID gains that achieves stability of the system, adjusting the controller gains online to attain the smallest position tracking error. Simulation results are given considering an underactuated 6 DOF (degrees of freedom) underwater ROV. Real-time experiments on an underactuated mini ROV are conducted to show the effectiveness of the proposed scheme. PMID:27608018

  13. Neural Network-Based Self-Tuning PID Control for Underwater Vehicles.

    PubMed

    Hernández-Alvarado, Rodrigo; García-Valdovinos, Luis Govinda; Salgado-Jiménez, Tomás; Gómez-Espinosa, Alfonso; Fonseca-Navarro, Fernando

    2016-09-05

    For decades, PID (Proportional + Integral + Derivative)-like controllers have been successfully used in academia and industry for many kinds of plants, thanks to their simplicity and suitable performance in linear or linearized plants and, under certain conditions, in nonlinear ones. A number of PID controller gain tuning approaches have been proposed in the literature in recent decades, most of them off-line techniques. However, in those cases wherein plants are subject to continuous parametric changes or external disturbances, online gain tuning is a desirable choice. This is the case for modular underwater ROVs (Remotely Operated Vehicles), where parameters (weight, buoyancy, added mass, among others) change according to the tool the vehicle is fitted with. In practice, some amount of time is dedicated to tuning the PID gains of a ROV; once the best set of gains has been achieved, the ROV is ready to work. However, when the vehicle changes its tool or is subject to ocean currents, its performance deteriorates, since the fixed set of gains is no longer valid for the new conditions. Thus, an online PID gain tuning algorithm should be implemented to overcome this problem. In this paper, an auto-tuned PID-like controller based on Neural Networks (NN) is proposed. The NN plays the role of automatically estimating the suitable set of PID gains that achieves stability of the system, adjusting the controller gains online to attain the smallest position tracking error. Simulation results are given considering an underactuated 6 DOF (degrees of freedom) underwater ROV. Real-time experiments on an underactuated mini ROV are conducted to show the effectiveness of the proposed scheme.
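    An online gain-adaptation loop in the same spirit can be sketched with a simple gradient-style (MIT-rule-like) update standing in for the paper's neural network; the first-order plant, adaptation gain, and initial gains below are all invented for the example:

```python
# First-order stand-in plant: x' = (u - x) / tau. The kp and ki gains
# are adapted online from the tracking error, so no manual retuning is
# needed when conditions change.
dt, tau, gamma = 0.01, 0.5, 0.5     # gamma is the adaptation gain
target, x = 1.0, 0.0
kp, ki, integ = 0.5, 0.0, 0.0       # deliberately poor initial gains
for _ in range(10000):              # 100 s of simulated time
    e = target - x
    integ += e * dt
    u = kp * e + ki * integ
    # Gradient-style online adaptation: gains grow while error persists.
    kp += gamma * e * e * dt
    ki = max(0.0, ki + gamma * e * integ * dt)  # keep ki non-negative
    x += dt * (u - x) / tau
```

    With the initial proportional-only gain of 0.5 the steady tracking error would be about 0.67; the adaptation raises the gains until the error is driven close to zero, which is the behavior the NN tuner automates for the ROV.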

  14. Development of an automatic frequency control system for an X-band (=9300 MHz) RF electron linear accelerator

    NASA Astrophysics Data System (ADS)

    Cha, Sungsu; Kim, Yujong; Lee, Byung Cheol; Park, Hyung Dal; Lee, Seung Hyun; Buaphad, Pikad

    2017-05-01

    KAERI is developing a 6 MeV X-band radio frequency (RF) electron linear accelerator for medical purposes. The proposed X-band accelerator consists of an e-gun, an accelerating structure, two solenoid magnets, two steering magnets, a magnetron, a modulator, and an automatic frequency control (AFC) system. The accelerating structure is made of oxygen-free high-conductivity copper (OFHC); therefore, changes in the ambient temperature change its volume, and with it the resonance frequency of the accelerating structure. If the RF frequency of the 9300 MHz magnetron and the resonance frequency of the accelerating structure do not match, performance degrades: the output power decreases, the beam current is lowered, the X-ray dose rate decreases, the reflected power increases, and operation of the accelerator becomes unstable. Accelerator operation should be possible at any time during all four seasons, and to prevent humans from being exposed to radiation during operation, the accelerator should also be operable through remote monitoring and remote control. The AFC system is designed to meet these requirements; it is configured based on the concept of a phase-locked loop (PLL) model, which includes an RF section, an intermediate frequency (IF) [1-3] section, and a local oscillator (LO) section. Some resonance frequency controllers use a DC motor, chain, and potentiometer to store the position and tune the frequency [4,5]. Our AFC system uses a step motor to tune the RF frequency of the magnetron. The frequency tuning shaft of our magnetron allows a maximum of ten turns. Since the RF frequency of our magnetron is 9300±25 MHz, the full 50 MHz range over 10 turns gives 5 MHz of frequency tuning per turn. The rotation angle of our step motor is 0.72° per step, so one rotation takes 360°/0.72° = 500 steps. Therefore, the tuning resolution is 10 kHz per step (5 MHz per turn / 500 steps per turn). The developed system is a compact new resonance frequency control system. In addition, a frequency measuring part is included that can measure the real-time resonance frequency from the magnetron. We succeeded in stably providing RF power, recording a frequency deviation of 0.01% in the AFC during an RF test. Accordingly, in this paper, the detailed design, fabrication, and a high-power test of the AFC system for the X-band linac are presented.
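    The per-step tuning resolution quoted above follows directly from the magnetron and motor specifications given in the record:

```python
# Numbers from the record: a 9300 +/- 25 MHz magnetron, a 10-turn
# tuning shaft, and a step motor with a 0.72 degree step angle.
span_mhz = 2 * 25.0                 # full mechanical tuning range: 50 MHz
turns = 10
mhz_per_turn = span_mhz / turns                      # 5 MHz per shaft turn
steps_per_turn = 360.0 / 0.72                        # 500 motor steps per turn
khz_per_step = mhz_per_turn * 1000 / steps_per_turn  # 10 kHz per step
```

    So each motor step nudges the magnetron by 10 kHz, which is what lets the AFC hold the quoted 0.01% frequency deviation.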

  15. OpenSQUID: A Flexible Open-Source Software Framework for the Control of SQUID Electronics

    DOE PAGES

    Jaeckel, Felix T.; Lafler, Randy J.; Boyd, S. T. P.

    2013-02-06

    Commercially available computer-controlled SQUID electronics are usually delivered with software providing a basic user interface for adjustment of SQUID tuning parameters, such as bias current, flux offset, and feedback loop settings. However, in a research context it would often be useful to be able to modify this code and/or to have full control over all these parameters from researcher-written software. In the case of the STAR Cryoelectronics PCI/PFL family of SQUID control electronics, the supplied software contains modules for automatic tuning and noise characterization, but does not provide an interface for user code. On the other hand, the Magnicon SQUIDViewer software package includes a public application programming interface (API), but lacks auto-tuning and noise characterization features. To overcome these and other limitations, we are developing an open-source framework for controlling SQUID electronics which should provide maximal interoperability with user software, a unified user interface for electronics from different manufacturers, and a flexible platform for the rapid development of customized SQUID auto-tuning and other advanced features. We have completed a first implementation for the STAR Cryoelectronics hardware and have made the source code for this ongoing project available to the research community on SourceForge (http://opensquid.sourceforge.net) under the GNU public license.

  16. ANUBIS: artificial neuromodulation using a Bayesian inference system.

    PubMed

    Smith, Benjamin J H; Saaj, Chakravarthini M; Allouis, Elie

    2013-01-01

    Gain tuning is a crucial part of controller design and depends not only on an accurate understanding of the system in question, but also on the designer's ability to predict what disturbances and other perturbations the system will encounter throughout its operation. This letter presents ANUBIS (artificial neuromodulation using a Bayesian inference system), a novel biologically inspired technique for automatically tuning controller parameters in real time. ANUBIS is based on the Bayesian brain concept and modifies it by incorporating a model of the neuromodulatory system comprising four artificial neuromodulators. It has been applied to the controller of EchinoBot, a prototype walking rover for Martian exploration. ANUBIS has been implemented at three levels of the controller: gait generation, foot trajectory planning using Bézier curves, and foot trajectory tracking using a terminal sliding mode controller. We compare the results to a similar system that has been tuned using a multilayer perceptron (MLP). The use of Bayesian inference means that the system retains mathematical interpretability, unlike other intelligent tuning techniques, which use neural networks, fuzzy logic, or evolutionary algorithms. The simulation results show that ANUBIS provides significant improvements in efficiency and adaptability of the three controller components; it allows the robot to react to obstacles and uncertainties faster than the system tuned with the MLP, while maintaining stability and accuracy. As well as advancing rover autonomy, ANUBIS could also be applied to other situations where operating conditions are likely to change or cannot be accurately modeled in advance, such as process control. In addition, it demonstrates one way in which neuromodulation could fit into the Bayesian brain framework.

  17. The control of automatic imitation based on bottom-up and top-down cues to animacy: insights from brain and behavior.

    PubMed

    Klapper, André; Ramsey, Richard; Wigboldus, Daniël; Cross, Emily S

    2014-11-01

    Humans automatically imitate other people's actions during social interactions, building rapport and social closeness in the process. Although the behavioral consequences and neural correlates of imitation have been studied extensively, little is known about the neural mechanisms that control imitative tendencies. For example, the degree to which an agent is perceived as human-like influences automatic imitation, but it is not known how perception of animacy influences brain circuits that control imitation. In the current fMRI study, we examined how the perception and belief of animacy influence the control of automatic imitation. Using an imitation-inhibition paradigm that involves suppressing the tendency to imitate an observed action, we manipulated both bottom-up (visual input) and top-down (belief) cues to animacy. Results show divergent patterns of behavioral and neural responses. Behavioral analyses show that automatic imitation is equivalent when one or both cues to animacy are present but reduces when both are absent. By contrast, right TPJ showed sensitivity to the presence of both animacy cues. Thus, we demonstrate that right TPJ is biologically tuned to control imitative tendencies when the observed agent both looks like and is believed to be human. The results suggest that right TPJ may be involved in a specialized capacity to control automatic imitation of human agents, rather than a universal process of conflict management, which would be more consistent with generalist theories of imitative control. Evidence for specialized neural circuitry that "controls" imitation offers new insight into developmental disorders that involve atypical processing of social information, such as autism spectrum disorders.

  18. Automatic blocking of nested loops

    NASA Technical Reports Server (NTRS)

    Schreiber, Robert; Dongarra, Jack J.

    1990-01-01

    Blocked algorithms have much better data locality and can therefore be much more efficient than ordinary algorithms when a memory hierarchy is involved. On the other hand, they are very difficult to write and to tune for particular machines. The reorganization of nested loops through the use of known program transformations is considered, with the aim of creating blocked algorithms automatically. The program transformations used are strip mining, loop interchange, and a variant of loop skewing in which invertible linear transformations (with integer coordinates) of the loop indices are allowed. Several problems concerning the optimal application of these transformations are solved. It is shown, in a very general setting, how to choose a nearly optimal set of transformed indices. It is then shown, in one particular but rather frequently occurring situation, how to choose an optimal set of block sizes.
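The strip-mining and loop-interchange transformations named in the abstract can be seen in a hand-blocked kernel. This is a minimal illustrative sketch of the resulting blocked loop structure, not the paper's automatic transformation system; a matrix transpose is used because blocking it is simple and its cache benefit is well known.

```python
def transpose_blocked(a, n, bs):
    """Transpose an n x n matrix (list of lists) using bs x bs blocks.

    The two outer loops are the strip-mined block indices; the inner pair
    walks within one block, so each block of `a` is reused while it is
    still hot in cache (in a compiled language; Python only shows the shape)."""
    out = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):          # strip-mined row index
        for jj in range(0, n, bs):      # strip-mined column index
            for i in range(ii, min(ii + bs, n)):
                for j in range(jj, min(jj + bs, n)):
                    out[j][i] = a[i][j]
    return out
```

The unblocked version is the same code with `bs = n`; choosing `bs` so that a block of `a` and a block of `out` fit in cache together is exactly the block-size selection problem the paper addresses.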

  19. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    PubMed

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the F-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  20. Fast, precise, and widely tunable frequency control of an optical parametric oscillator referenced to a frequency comb.

    PubMed

    Prehn, Alexander; Glöckner, Rosa; Rempe, Gerhard; Zeppenfeld, Martin

    2017-03-01

    Optical frequency combs (OFCs) provide a convenient reference for the frequency stabilization of continuous-wave lasers. We demonstrate a frequency control method relying on tracking over a wide range and stabilizing the beat note between the laser and the OFC. The approach combines fast frequency ramps on a millisecond timescale in the entire mode-hop free tuning range of the laser and precise stabilization to single frequencies. We apply it to a commercially available optical parametric oscillator (OPO) and demonstrate tuning over more than 60 GHz with a ramping speed up to 3 GHz/ms. Frequency ramps spanning 15 GHz are performed in less than 10 ms, with the OPO instantly relocked to the OFC after the ramp at any desired frequency. The developed control hardware and software are able to stabilize the OPO to sub-MHz precision and to perform sequences of fast frequency ramps automatically.

  1. An automated method of tuning an attitude estimator

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1995-01-01

    Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft, one of the most common of which utilizes the Kalman filter to estimate the attitude. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
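The role of the filter parameter that such an auto-tuner adjusts can be seen in a minimal scalar Kalman filter. This is a generic sketch, not the paper's three-axis magnetometer filter: the process noise covariance `q` below stands in for the parameter a tuner like the PNCE would select, and the noise levels are arbitrary assumptions.

```python
import random

def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state.

    q: process noise covariance (the tuning knob: too small and the filter
       lags real changes, too large and it tracks measurement noise);
    r: measurement noise covariance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a constant true value (illustrative data):
random.seed(0)
truth = 5.0
zs = [truth + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(zs, q=1e-4, r=0.25)
```

Sweeping `q` and scoring the innovation sequence is one simple automatic-tuning criterion; the PNCE formalizes the choice of such parameters.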

  2. CO2 DIAL system: construction, measurements, and future development

    NASA Astrophysics Data System (ADS)

    Vicenik, Jiri

    1999-07-01

    A miniature CO2 DIAL system has been constructed. The dimensions of the system are 500 x 450 x 240 mm and its mass is only 28 kg. The system consists of two tunable TEA CO2 lasers, receiving optics, an IR detector, signal processing electronics, and a single-chip microcomputer with display. The lasers are tuned manually by means of a micrometric screw and are capable of generating pulses on more than 50 CO2 laser lines. The output energy is 50 mJ. The system was tested using various toxic gases and simulants, mostly at a range of 300 m; most of the measurements were done using a pyrodetector in the receiver. The system shows good sensitivity but exhibits substantial instability of the zero-concentration reading. In the next stage, the work will concentrate on the use of a high-sensitivity MCT detector in the receiver and on the implementation of automatic tuning of the lasers.

  3. Demonstration of multi-wavelength tunable fiber lasers based on a digital micromirror device processor.

    PubMed

    Ai, Qi; Chen, Xiao; Tian, Miao; Yan, Bin-bin; Zhang, Ying; Song, Fei-jun; Chen, Gen-xiang; Sang, Xin-zhu; Wang, Yi-quan; Xiao, Feng; Alameh, Kamal

    2015-02-01

    Using a digital micromirror device (DMD) processor as a multi-wavelength narrow-band tunable filter, we experimentally demonstrate a multi-port tunable fiber laser. The key property of this laser is that any lasing wavelength channel at any output port can be switched independently over the whole C-band, driven flexibly by a single DMD chip. All outputs display excellent tuning capacity and high consistency across the whole C-band, with a 0.02 nm linewidth, a 0.055 nm wavelength tuning step, and a side-mode suppression ratio greater than 60 dB. Owing to the automatic power control and polarization design, the output power uniformity is better than 0.008 dB and the wavelength fluctuation is below 0.02 nm over 2 h at room temperature.

  4. Decoupling PI Controller Design for a Normal Conducting RF Cavity Using a Recursive LEVENBERG-MARQUARDT Algorithm

    NASA Astrophysics Data System (ADS)

    Kwon, Sung-il; Lynch, M.; Prokop, M.

    2005-02-01

    This paper addresses the system identification and decoupling PI controller design for a normal conducting RF cavity. Based on open-loop measurement data from an SNS DTL cavity, the open-loop system's bandwidths and loop time delays are estimated using batch least squares. With the identified system, a PI controller is designed in such a way that it suppresses the time-varying klystron droop and decouples the in-phase and quadrature components of the cavity field. The Levenberg-Marquardt algorithm is applied to the resulting nonlinear least-squares problem to obtain the optimal PI controller parameters. The tuned PI controller gains are downloaded to the low-level RF system using Channel Access. The closed-loop system experiment is performed and its performance investigated. The proposed tuning method runs automatically in real time, interfacing a host computer with the controller hardware through ActiveX Channel Access.
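The batch least-squares identification step can be sketched generically. The first-order discrete model and square-wave excitation below are illustrative assumptions, not the SNS DTL measurement setup; the point is only how a plant model is recovered from input/output records by solving the normal equations.

```python
def identify_first_order(u, y):
    """Batch least-squares fit of y[k+1] = a*y[k] + b*u[k].

    Accumulates the 2x2 normal equations over the whole record
    and solves them in closed form. Returns (a, b)."""
    syy = suu = suy = sy1y = sy1u = 0.0
    for k in range(len(y) - 1):
        syy += y[k] * y[k]
        suu += u[k] * u[k]
        suy += u[k] * y[k]
        sy1y += y[k + 1] * y[k]
        sy1u += y[k + 1] * u[k]
    det = syy * suu - suy * suy
    a = (sy1y * suu - sy1u * suy) / det
    b = (sy1u * syy - sy1y * suy) / det
    return a, b

# Simulate a known plant (a=0.9, b=0.5) under a square-wave input, then recover it:
a_true, b_true = 0.9, 0.5
u = [1.0 if (k // 10) % 2 == 0 else -1.0 for k in range(100)]
y = [0.0]
for k in range(99):
    y.append(a_true * y[k] + b_true * u[k])
a_est, b_est = identify_first_order(u, y)
```

With noise-free data the fit is exact; with measurement noise the same normal equations return the least-squares estimate, which is why batch least squares is a standard first step before nonlinear refinement such as Levenberg-Marquardt.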

  5. Instrumentation for measuring aircraft noise and sonic boom

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J. (Inventor)

    1976-01-01

    Improved instrumentation suitable for measuring aircraft noise and sonic booms is described. An electric current proportional to the sound pressure level at a condenser microphone is produced and transmitted over a cable and amplified by a zero drive amplifier. The converter consists of a local oscillator, a dual-gate field-effect transistor mixer, and a voltage regulator/impedance translator. The improvements include automatic tuning compensation against changes in static microphone capacitance and means for providing a remote electrical calibration capability.

  6. Can climate models be tuned to simulate the global mean absolute temperature correctly?

    NASA Astrophysics Data System (ADS)

    Duan, Q.; Shi, Y.; Gong, W.

    2016-12-01

    The Intergovernmental Panel on Climate Change (IPCC) has already issued five assessment reports (ARs), which include simulations of the past climate and projections of the future climate under various scenarios. The participating models simulate reasonably well the trend in global mean temperature change, especially over the last 150 years. However, there is a large, constant discrepancy in the simulations of global mean absolute temperature over this period. This discrepancy remained in the same range between IPCC-AR4 and IPCC-AR5, amounting to about 3 °C between the coldest model and the warmest model. The discrepancy has great implications for land processes, particularly those related to the cryosphere, and casts doubt on whether land-atmosphere-ocean interactions are correctly represented in those models. This presentation aims to explore whether the discrepancy can be reduced through model tuning. We present an automatic model calibration strategy to tune the parameters of a climate model so that the simulated global mean absolute temperature matches the observed data over the last 150 years. An intermediate-complexity model known as LOVECLIM is used in the study. This presentation will show the preliminary results.

  7. An electronically tuned wideband probehead for NQR spectroscopy in the VHF range

    NASA Astrophysics Data System (ADS)

    Scharfetter, Hermann

    2016-10-01

    Nuclear quadrupole resonance spectroscopy is an analytical method for characterizing materials that contain quadrupolar nuclei, i.e. nuclei with spin ⩾ 1. The measurement technology is similar to that of NMR except that no static magnetic field is necessary. In contrast to NMR, however, it is frequently necessary to scan spectra over a very large bandwidth, with a span of several tens of percent of the central frequency, so as to localize unknown peaks. Standard NMR probeheads, which are typically constructed as resonators, must be tuned and matched to comparatively narrow bands and must thus be re-tuned and re-matched very frequently when scanning over a whole NQR spectrum. At low frequencies up to a few MHz, dedicated circuits without the need for tuning and matching have been developed, but many quadrupolar nuclei have transitions in the VHF range, between several tens and several hundreds of MHz. Currently available commercial NQR probeheads employ stepper motors to set mechanically tuneable capacitors in standard NMR resonators. These yield high quality factors (Q) and thus high SNR, but they are relatively large and clumsy and do not allow fast frequency sweeps. This article presents a new concept for an NQR probehead which combines a previously published no-tune no-match wideband concept for the transmit (TX) pulse with an electronically tuneable receive (RX) part employing varactor diodes. The prototype coil provides a TX frequency range of 57 MHz centered at 97.5 MHz with a return loss of ⩽ -15 dB. During RX, the resonator is tuned and matched automatically to the right frequency via control voltages read out from a previously generated lookup table, thus providing high SNR. The control voltages biasing the varactors settle very quickly and allow hopping to the next frequency point in the spectrum within less than 100 μs. Experiments with a test sample of ZnBr2 proved the feasibility of the method.

  8. Supernatural MSSM

    NASA Astrophysics Data System (ADS)

    Du, Guangle; Li, Tianjun; Nanopoulos, D. V.; Raza, Shabbar

    2015-07-01

    We point out that the electroweak fine-tuning problem in the supersymmetric standard models (SSMs) is mainly due to the high-energy definition of the fine-tuning measure. We propose supernatural supersymmetry, which automatically has an order-one high-energy fine-tuning measure. The key point is that all the mass parameters in the SSMs arise from a single supersymmetry breaking parameter. In this paper, we show explicitly that there is no supersymmetry electroweak fine-tuning problem in the minimal SSM (MSSM) with no-scale supergravity and the Giudice-Masiero mechanism. We demonstrate that the Z-boson mass, the supersymmetric Higgs mixing parameter μ at the unification scale, and the sparticle spectrum can be given as functions of the universal gaugino mass M_1/2. Because the light stau is the lightest supersymmetric particle (LSP) in the no-scale MSSM, to preserve R parity we introduce a non-thermally generated axino as the LSP dark matter candidate. We estimate the lifetime of the light stau by calculating its two-body and three-body decays to the LSP axino for several values of the axion decay constant f_a, and find that the light stau has a lifetime in the range [10^-4, 100] s for an f_a range of [10^9, 10^12] GeV. We show that our next-to-LSP stau solutions are consistent with all the current experimental constraints, including the sparticle mass bounds, B-physics bounds, Higgs mass, cosmological bounds, and the bounds on long-lived charged particles at the LHC.

  9. Simulation modeling of an automatic control system of steam pressure in the main steam collector with action on the main servomotor of a steam turbine

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Zverkov, V. P.; Kuzishchin, V. F.; Ryzhkov, O. S.; Sabanin, V. R.

    2017-11-01

    Results of research on and tuning of the "Do itself" automatic control system (ACS) for steam pressure in the main steam collector, with high-speed feedback on steam pressure in the turbine regulating stage, are presented. The ACS was tuned on a simulation model of the controlled object, developed for this purpose, with load-dependent static and dynamic characteristics and a nonlinear control algorithm with pulse control of the turbine's main servomotor. A method for tuning the nonlinear ACS with a numerical multiparameter optimization algorithm, together with a procedure for separate dynamic adjustment of the control devices in the two-loop ACS, is proposed and implemented. It is shown that the nonlinear ACS tuned with the proposed method, using constant regulator parameters, ensures reliable, high-quality operation without oscillations in transient processes over the operating range of turbine loads.

  10. Automatic vehicle location system

    NASA Technical Reports Server (NTRS)

    Hansen, G. R., Jr. (Inventor)

    1973-01-01

    An automatic vehicle detection system is disclosed in which each vehicle whose location is to be detected carries active means that interact with passive elements at each location to be identified. The passive elements comprise a plurality of passive loops arranged in a sequence along the travel direction. Each of the loops is tuned to a chosen frequency, so that the sequence of frequencies defines the location code. As the vehicle passes over each loop in the sequence, only signals at the frequency of that loop are coupled from a vehicle transmitter to a vehicle receiver. The frequencies of the received signals produce outputs in the receiver which together represent the code of the traversed location. The location code may also be defined by a painted pattern that reflects light to a vehicle-carried detector whose output is used to derive the code defined by the pattern.

  11. Adaptive Sensor Tuning for Seismic Event Detection in Environment with Electromagnetic Noise

    NASA Astrophysics Data System (ADS)

    Ziegler, Abra E.

    The goal of this research is to detect possible microseismic events at a carbon sequestration site. Data recorded on a continuous downhole microseismic array in the Farnsworth Field, an oil field in Northern Texas that hosts an ongoing carbon capture, utilization, and storage project, were evaluated using machine learning and reinforcement learning techniques to determine their effectiveness at seismic event detection on a dataset with electromagnetic noise. The data were recorded from a passive vertical monitoring array consisting of 16 levels of 3-component 15 Hz geophones installed in the field and continuously recording since January 2014. Electromagnetic and other noise recorded on the array has significantly impacted the utility of the data and it was necessary to characterize and filter the noise in order to attempt event detection. Traditional detection methods using short-term average/long-term average (STA/LTA) algorithms were evaluated and determined to be ineffective because of changing noise levels. To improve the performance of event detection and automatically and dynamically detect seismic events using effective data processing parameters, an adaptive sensor tuning (AST) algorithm developed by Sandia National Laboratories was utilized. AST exploits neuro-dynamic programming (reinforcement learning) trained with historic event data to automatically self-tune and determine optimal detection parameter settings. The key metric that guides the AST algorithm is consistency of each sensor with its nearest neighbors: parameters are automatically adjusted on a per station basis to be more or less sensitive to produce consistent agreement of detections in its neighborhood. The effects that changes in neighborhood configuration have on signal detection were explored, as it was determined that neighborhood-based detections significantly reduce the number of both missed and false detections in ground-truthed data. 
The performance of the AST algorithm was quantitatively evaluated under a variety of noise conditions, and seismic detections identified using AST were compared to ancillary injection data. During a period of CO2 injection in a well near the monitoring array, 82% of seismic events were accurately detected, 13% of events were missed, and 5% of detections were determined to be false. Additionally, seismic risk was evaluated from the stress field and faulting regime at the Farnsworth Unit (FWU) to determine the likelihood that pressure perturbations could trigger slip on previously mapped faults. Faults oriented NW-SE were identified as requiring the smallest pore pressure changes to trigger slip; faults oriented N-S may also be reactivated, although this is less likely.
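The STA/LTA baseline that the study found ineffective under changing noise can be sketched as follows. The window lengths, trigger threshold, and synthetic trace are hypothetical choices for illustration only; AST's reinforcement-learning tuning of these parameters is not reproduced here.

```python
import random

def sta_lta(signal, n_sta, n_lta):
    """Classic short-term-average / long-term-average ratio.

    A detection is declared wherever the ratio exceeds a trigger
    threshold; the windows are the parameters a tuner would adjust."""
    ratios = []
    for i in range(n_lta, len(signal)):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Synthetic trace: Gaussian noise with a short high-amplitude "event" inserted.
random.seed(1)
trace = [random.gauss(0, 1) for _ in range(500)]
trace[300:310] = [x + 20 for x in trace[300:310]]

r = sta_lta(trace, n_sta=5, n_lta=100)
triggered = any(v > 4.0 for v in r)   # hypothetical trigger threshold
```

With a fixed threshold, a rise in background (e.g. electromagnetic) noise inflates both averages and suppresses the ratio, which is why static STA/LTA settings fail when noise levels drift and per-station adaptive tuning helps.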

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael Allen; Marker, Bryan

    This report summarizes the progress made as part of a one-year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome of this work is a demonstration of the value of model-driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.

  13. Self-Tuning Methods for Multiple-Controller Systems.

    DTIC Science & Technology

    1981-08-01

    the model. The plant is governed by y(t) + A1 y(t-1) = B0 u(t-1) + e(t), where A1 is the identified coefficient matrix (entries include -0.99101, 8.80512 x 10^3, -0.80610, -0.77089, -0.89889, -4.59328 x 10...). ... AC-19, No. 5, Oct. 1974, pp. 518-524. [8] Bar-Shalom, Y. and Tse, E., "Dual Effect, Certainty Equivalence and Separation in Stochastic Control," IEEE Trans. on Automatic Control, Vol. AC-19, No. 5, Oct. 1974, pp. 494-500. [9] Bar-Shalom, Y. and Tse, E., "Concepts and Methods in Stochastic Control...

  14. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and are well suited to capture the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically a two- or three-component mixture. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data, and of the quantiles of the individual claims distribution in a non-life insurance application.
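The transform-smooth-back-transform recipe described above can be sketched with a toy model. The two-component exponential mixture below stands in for the fitted lognormal/Weibull/Pareto mixture, and the fixed bandwidth is an arbitrary assumption rather than the paper's bandwidth rule.

```python
import math
import random

def beta_pdf(x, a, b):
    """Beta(a, b) density, computed via log-gamma for numerical stability."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)

def beta_kernel_density(u, data_u, bw):
    """Beta-kernel density estimate on [0, 1]: the kernel at evaluation
    point u is a Beta(u/bw + 1, (1-u)/bw + 1) density applied to the data."""
    return sum(beta_pdf(v, u / bw + 1, (1 - u) / bw + 1) for v in data_u) / len(data_u)

# Toy claims model: a two-component exponential mixture (illustrative only).
w, lam1, lam2 = 0.7, 1.0, 0.1

def mix_cdf(y):
    return w * (1 - math.exp(-lam1 * y)) + (1 - w) * (1 - math.exp(-lam2 * y))

def mix_pdf(y):
    return w * lam1 * math.exp(-lam1 * y) + (1 - w) * lam2 * math.exp(-lam2 * y)

random.seed(2)
claims = [random.expovariate(lam1 if random.random() < w else lam2)
          for _ in range(1000)]
u_data = [mix_cdf(y) for y in claims]          # transform claims to [0, 1]

def claims_density(y, bw=0.05):
    """Back-transformed estimate: smooth on the unit interval, then apply
    the change of variables u = F(y), giving g(y) = f_hat(F(y)) * f(y)."""
    return beta_kernel_density(mix_cdf(y), u_data, bw) * mix_pdf(y)
```

If the parametric mixture were exactly right, the transformed data would be uniform and the smoothed factor close to 1; departures from uniformity are precisely the nonparametric fine-tuning the method supplies.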

  15. Substituted-letter and transposed-letter effects in a masked priming paradigm with French developing readers and dyslexics.

    PubMed

    Lété, Bernard; Fayol, Michel

    2013-01-01

    The aim of the study was to undertake a behavioral investigation of the development of automatic orthographic processing during reading acquisition in French. Following Castles and colleagues' 2007 study (Journal of Experimental Child Psychology, 97, 165-182) and their lexical tuning hypothesis framework, substituted-letter and transposed-letter primes were used in a masked priming paradigm with third graders, fifth graders, adults, and phonological dyslexics matched on reading level with the third graders. No priming effect was found in third graders. In adults, only a transposed-letter priming effect was found; there was no substituted-letter priming effect. Finally, fifth graders and dyslexics showed both substituted-letter and transposed-letter priming effects. Priming effects between the two groups were of the same magnitude after response time (RT) z-score transformation. Taken together, our results show that the pattern of priming effects found by Castles and colleagues in English normal readers emerges later in French normal readers. In other words, language orthographies seem to constrain the tuning of the orthographic system, with an opaque orthography producing faster tuning of orthographic processing than more transparent orthographies because of the high level of reliance on phonological decoding while learning to read. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Single Neuron Optimization as a Basis for Accurate Biophysical Modeling: The Case of Cerebellar Granule Cells.

    PubMed

    Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio

    2017-01-01

    In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (G_i-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to tune these parameters efficiently and automatically. Nonetheless, since similar firing patterns can be achieved through different combinations of G_i-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of G_i-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental G_i-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, that were not used as optimization targets. Thus, parameter optimization using genetic algorithms provides an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent construction of large-scale neuronal network models.
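The selection/mutation loop such optimizers use can be sketched with a toy objective. The `firing_error` map below is a made-up smooth stand-in for running an actual granule-cell simulation and scoring its discharge pattern; it is not the paper's feature-based fitness, and the parameter bounds are arbitrary.

```python
import random

def firing_error(params, target=40.0):
    """Toy fitness: a fabricated smooth map from two maximum conductances
    to a firing rate (Hz), scored against a target rate. In real use this
    would run a biophysical simulation and compare spike features."""
    g_na, g_k = params
    rate = 120.0 * g_na / (g_na + 1.0) - 30.0 * g_k
    return abs(rate - target)

def evolve(fitness, bounds, pop_size=40, generations=60, seed=3):
    """Minimal selection/mutation genetic algorithm with elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 4]            # selection: keep best quarter
        pop = parents[:]
        while len(pop) < pop_size:
            p = rng.choice(parents)
            child = [min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
                     for x, (lo, hi) in zip(p, bounds)]   # Gaussian mutation
            pop.append(child)
    return min(pop, key=fitness)

best = evolve(firing_error, bounds=[(0.0, 10.0), (0.0, 2.0)])
```

Note that many (g_na, g_k) pairs achieve the same rate in this toy objective, which mirrors the degeneracy the paper highlights: matching the firing pattern alone does not pin down a unique conductance set.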

  17. Unsupervised Transfer Learning via Multi-Scale Convolutional Sparse Coding for Biomedical Applications

    PubMed Central

    Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua

    2017-01-01

    The capabilities of (I) learning transferable knowledge across domains and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many existing transfer learning techniques are supervised approaches, among which deep learning has demonstrated the power of learning domain-transferable knowledge with large-scale networks trained on massive amounts of labeled data. However, in many biomedical tasks both the data and the corresponding labels can be very limited, so an unsupervised transfer learning capability is urgently needed. In this paper, we propose a novel multi-scale convolutional sparse coding (MSCSC) method that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148

  18. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-01-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710

  19. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.
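The block-size sweep behind such a study can be sketched as a timing loop. In pure Python the cache effects are far weaker than in the vectorized C++ Tomo3D code, so this only illustrates the methodology; the kernel, matrix size, and candidate block sizes are arbitrary assumptions.

```python
import time

def matmul_blocked(a, b, n, bs):
    """n x n matrix multiply (lists of lists) with bs x bs cache blocking."""
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            c[i][j] += aik * b[k][j]
    return c

def pick_block_size(n=64, candidates=(4, 8, 16, 32, 64)):
    """Empirically choose a quasi-optimal block size by timing each
    candidate on the same workload, mirroring the parameter sweep the
    data article describes."""
    a = [[(i * n + j) % 7 for j in range(n)] for i in range(n)]
    best_bs, best_t = None, float("inf")
    for bs in candidates:
        t0 = time.perf_counter()
        matmul_blocked(a, a, n, bs)
        dt = time.perf_counter() - t0
        if dt < best_t:
            best_bs, best_t = bs, dt
    return best_bs
```

Fitting a simple model of runtime versus block size to such sweep data is what allows the automatic, quasi-optimal tuning expressions the article derives.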

  20. Atlas-Guided Segmentation of Vervet Monkey Brain MRI

    PubMed Central

    Fedorov, Andriy; Li, Xiaoxing; Pohl, Kilian M; Bouix, Sylvain; Styner, Martin; Addicott, Merideth; Wyatt, Chris; Daunais, James B; Wells, William M; Kikinis, Ron

    2011-01-01

    The vervet monkey is an important nonhuman primate model that allows the study of isolated environmental factors in a controlled environment. Analysis of monkey MRI often suffers from lower quality images compared with human MRI because clinical equipment is typically used to image the smaller monkey brain and higher spatial resolution is required. This, together with the anatomical differences of the monkey brains, complicates the use of neuroimage analysis pipelines tuned for human MRI analysis. In this paper we developed an open source image analysis framework based on the tools available within the 3D Slicer software to support a biological study that investigates the effect of chronic ethanol exposure on brain morphometry in a longitudinally followed population of male vervets. We first developed a computerized atlas of vervet monkey brain MRI, which was used to encode the typical appearance of the individual brain structures in MRI and their spatial distribution. The atlas was then used as a spatial prior during automatic segmentation to process two longitudinal scans per subject. Our evaluation confirms the consistency and reliability of the automatic segmentation. The comparison of atlas construction strategies reveals that the use of a population-specific atlas leads to improved accuracy of the segmentation for subcortical brain structures. The contribution of this work is twofold. First, we describe an image processing workflow specifically tuned towards the analysis of vervet MRI that consists solely of the open source software tools. Second, we develop a digital atlas of vervet monkey brain MRIs to enable similar studies that rely on the vervet model. PMID:22253661

  1. A versatile valving toolkit for automating fluidic operations in paper microfluidic devices.

    PubMed

    Toley, Bhushan J; Wang, Jessica A; Gupta, Mayuri; Buser, Joshua R; Lafleur, Lisa K; Lutz, Barry R; Fu, Elain; Yager, Paul

    2015-03-21

    Failure to utilize valving and automation techniques has restricted the complexity of fluidic operations that can be performed in paper microfluidic devices. We developed a toolkit of paper microfluidic valves and methods for automatic valve actuation using movable paper strips and fluid-triggered expanding elements. To the best of our knowledge, this is the first functional demonstration of this valving strategy in paper microfluidics. After introduction of fluids on devices, valves can actuate automatically after a) a certain period of time, or b) the passage of a certain volume of fluid. Timing of valve actuation can be tuned with greater than 8.5% accuracy by changing lengths of timing wicks, and we present timed on-valves, off-valves, and diversion (channel-switching) valves. The actuators require ~30 μl fluid to actuate and the time required to switch from one state to another ranges from ~5 s for short to ~50 s for longer wicks. For volume-metered actuation, the size of a metering pad can be adjusted to tune actuation volume, and we present two methods - both methods can achieve greater than 9% accuracy. Finally, we demonstrate the use of these valves in a device that conducts a multi-step assay for the detection of the malaria protein PfHRP2. Although slightly more complex than devices that do not have moving parts, this valving and automation toolkit considerably expands the capabilities of paper microfluidic devices. Components of this toolkit can be used to conduct arbitrarily complex, multi-step fluidic operations on paper-based devices, as demonstrated in the malaria assay device.

  2. A versatile valving toolkit for automating fluidic operations in paper microfluidic devices

    PubMed Central

    Toley, Bhushan J.; Wang, Jessica A.; Gupta, Mayuri; Buser, Joshua R.; Lafleur, Lisa K.; Lutz, Barry R.; Fu, Elain; Yager, Paul

    2015-01-01

    Failure to utilize valving and automation techniques has restricted the complexity of fluidic operations that can be performed in paper microfluidic devices. We developed a toolkit of paper microfluidic valves and methods for automatic valve actuation using movable paper strips and fluid-triggered expanding elements. To the best of our knowledge, this is the first functional demonstration of this valving strategy in paper microfluidics. After introduction of fluids on devices, valves can actuate automatically a) after a certain period of time, or b) after the passage of a certain volume of fluid. Timing of valve actuation can be tuned with greater than 8.5% accuracy by changing lengths of timing wicks, and we present timed on-valves, off-valves, and diversion (channel-switching) valves. The actuators require ~30 μl fluid to actuate and the time required to switch from one state to another ranges from ~5 s for short to ~50 s for longer wicks. For volume-metered actuation, the size of a metering pad can be adjusted to tune actuation volume, and we present two methods – both methods can achieve greater than 9% accuracy. Finally, we demonstrate the use of these valves in a device that conducts a multi-step assay for the detection of the malaria protein PfHRP2. Although slightly more complex than devices that do not have moving parts, this valving and automation toolkit considerably expands the capabilities of paper microfluidic devices. Components of this toolkit can be used to conduct arbitrarily complex, multi-step fluidic operations on paper-based devices, as demonstrated in the malaria assay device. PMID:25606810

  3. MeSH indexing based on automatically generated summaries.

    PubMed

    Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto

    2013-06-26

    MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading.
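    The recall/precision trade-off discussed above can be made concrete with a micro-averaged scoring sketch. The headings below are hypothetical examples, and this is not NLM's actual evaluation code:

    ```python
    def precision_recall(suggested, gold):
        """Micro-averaged precision/recall for suggested MeSH headings against
        the indexer gold standard. `suggested` and `gold` are lists of sets of
        headings, one set per citation. Illustrative only."""
        tp = sum(len(s & g) for s, g in zip(suggested, gold))
        fp = sum(len(s - g) for s, g in zip(suggested, gold))
        fn = sum(len(g - s) for s, g in zip(suggested, gold))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # Toy example of the trend reported above: full text adds spurious headings
    # (hurting precision), while a summary keeps only the salient ones.
    gold = [{"Humans", "Neoplasms", "Prognosis"}]
    from_full_text = [{"Humans", "Neoplasms", "Prognosis", "Mice", "Rats"}]
    from_summary = [{"Humans", "Neoplasms"}]
    print(precision_recall(from_full_text, gold))  # precision 0.6, recall 1.0
    print(precision_recall(from_summary, gold))    # precision 1.0, recall 2/3
    ```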

  4. Tuning of automatic exposure control strength in lumbar spine CT.

    PubMed

    D'Hondt, A; Cornil, A; Bohy, P; De Maertelaer, V; Gevenois, P A; Tack, D

    2014-05-01

    To investigate the impact of tuning the automatic exposure control (AEC) strength curve (specific to Care Dose 4D®; Siemens Healthcare, Forchheim, Germany) from "average" to "strong" on image quality, radiation dose and operator dependency during lumbar spine CT examinations. Two hospitals (H1, H2), both using the same scanners, were considered for two time periods (P1 and P2). During P1, the AEC curve was "average" and radiographers had to select one of two protocols according to the body mass index (BMI): "standard" if BMI <30.0 kg m(-2) (120 kV-330 mAs) or "large" if BMI >30.0 kg m(-2) (140 kV-280 mAs). During P2, the AEC curve was changed to "strong", and all acquisitions were obtained with one protocol (120 kV and 270 mAs). Image quality was scored and patients' diameters calculated for both periods. 497 examinations were analysed. There was no significant difference in mean diameters according to hospitals and periods (p > 0.801) or in quality scores between periods (p > 0.172). There was a significant difference between hospitals regarding how often the "large" protocol was assigned [13 (10%)/132 patients in H1 vs 37 (28%)/133 in H2] (p < 0.001). During P1, volume CT dose index (CTDIvol) was higher in H2 (+13%; p = 0.050). In both hospitals, CTDIvol was reduced between periods (-19.2% in H1 and -29.4% in H2; p < 0.001). An operator dependency in protocol selection was observed that was neither explained by patient diameters nor reflected in image quality scores. Tuning the AEC curve from average to strong eliminates this operator dependency and the related dose increase, while preserving image quality. CT acquisition protocols based on weight are responsible for biases in protocol selection. Using an appropriate AEC strength curve reduces the number of protocols to one, and operator dependency of protocol selection is thereby eliminated.
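    The two-period protocol logic can be written out directly. The kV/mAs and BMI values are taken from the abstract; the AEC strength setting itself lives in the scanner and is not modeled here:

    ```python
    def p1_protocol(bmi):
        """Period-1 rule: radiographers chose between two protocols by body
        mass index, a manual step the study found to be operator dependent."""
        return ("large", 140, 280) if bmi > 30.0 else ("standard", 120, 330)

    def p2_protocol(bmi):
        """Period-2 rule: a single protocol for every patient; the 'strong'
        AEC curve adapts the dose to patient size automatically."""
        return ("single", 120, 270)

    assert p1_protocol(27.5) == ("standard", 120, 330)
    assert p1_protocol(32.0) == ("large", 140, 280)
    assert p2_protocol(32.0) == p2_protocol(27.5)  # no operator choice left
    ```

    Collapsing the BMI branch into one protocol is exactly what removes the operator dependency the study observed.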

  5. Scanner OPC signatures: automatic vendor-to-vendor OPE matching

    NASA Astrophysics Data System (ADS)

    Renwick, Stephen P.

    2009-03-01

    As 193nm lithography continues to be stretched and the k1 factor decreases, optical proximity correction (OPC) has become a vital part of the lithographer's tool kit. Unfortunately, as is now well known, the design variations of lithographic scanners from different vendors cause them to have slightly different optical-proximity effect (OPE) behavior, meaning that they print features through pitch in distinct ways. This in turn means that their response to OPC is not the same, and that an OPC solution designed for a scanner from Company 1 may or may not work properly on a scanner from Company 2. Since OPC is not inexpensive, this causes trouble for chipmakers using more than one brand of scanner. Clearly a scanner-matching procedure is needed to meet this challenge. Previously, automatic matching has only been reported for scanners of different tool generations from the same manufacturer. In contrast, scanners from different companies have been matched using expert tuning and adjustment techniques, frequently requiring laborious test exposures. Automatic matching between scanners from Company 1 and Company 2 has remained an unsettled problem. We have recently solved this problem and introduce a novel method to perform the automatic matching. The success in meeting this challenge required three enabling factors. First, we recognized the strongest drivers of OPE mismatch and are thereby able to reduce the information needed about a tool from another supplier to that information readily available from all modern scanners. Second, we developed a means of reliably identifying the scanners' optical signatures, minimizing dependence on process parameters that can cloud the issue. Third, we carefully employed standard statistical techniques, checking for robustness of the algorithms used and maximizing efficiency. The result is an automatic software system that can predict an OPC matching solution for scanners from different suppliers without requiring expert intervention.
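    Since the matching algorithm itself is not disclosed, a heavily simplified sketch of the "standard statistical techniques" step might fit one scanner's CD-through-pitch curve to another's by ordinary least squares. All CD values below are hypothetical:

    ```python
    def fit_ope_offset(cd_tool1, cd_tool2):
        """Fit CD_tool2 ≈ a*CD_tool1 + b across a set of pitches by ordinary
        least squares (closed form). A stand-in for the actual signature model,
        which the abstract describes only as standard statistical techniques."""
        n = len(cd_tool1)
        mx = sum(cd_tool1) / n
        my = sum(cd_tool2) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(cd_tool1, cd_tool2))
        sxx = sum((x - mx) ** 2 for x in cd_tool1)
        a = sxy / sxx
        b = my - a * mx
        return a, b

    # Hypothetical critical dimensions (nm) through pitch for two scanners:
    cd1 = [45.0, 46.2, 47.1, 48.0, 48.5]
    cd2 = [45.5, 46.8, 47.8, 48.8, 49.3]
    a, b = fit_ope_offset(cd1, cd2)
    residuals = [y - (a * x + b) for x, y in zip(cd1, cd2)]
    print(max(abs(r) for r in residuals))  # small residual -> signatures match well
    ```

    A real matching flow would of course fit a physically motivated signature model rather than a single line, but the robustness-checked regression step has the same shape.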

  6. Hierarchical hybrid control of manipulators: Artificial intelligence in large scale integrated circuits

    NASA Technical Reports Server (NTRS)

    Greene, P. H.

    1972-01-01

    Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.

  7. Atom based grain extraction and measurement of geometric properties

    NASA Astrophysics Data System (ADS)

    Martine La Boissonière, Gabriel; Choksi, Rustum

    2018-04-01

    We introduce an accurate, self-contained and automatic atom based numerical algorithm to characterize grain distributions in two dimensional Phase Field Crystal (PFC) simulations. We compare the method with hand segmented and known test grain distributions to show that the algorithm is able to extract grains and measure their area, perimeter and other geometric properties with high accuracy. Four input parameters must be set by the user and their influence on the results is described. The method is currently tuned to extract data from PFC simulations in the hexagonal lattice regime but the framework may be extended to more general problems.

  8. Development of Digital SLR Camera: PENTAX K-7

    NASA Astrophysics Data System (ADS)

    Kawauchi, Hiraku

    The DSLR "PENTAX K-7" comes in an easy-to-carry, minimal yet functional small form factor, a long-inherited identity of the PENTAX brand. Despite its compact body, this camera offers up-to-date, enhanced fundamental features such as a high-quality viewfinder, an enhanced shutter mechanism, extended continuous-shooting capabilities, reliable exposure control, and a fine-tuned AF system, as well as a string of new technologies such as movie recording and an automatic leveling function. The main focus of this article is to reveal the ideas behind the concept of this product and its distinguishing features.

  9. Is semantic priming (ir)rational? Insights from the speeded word fragment completion task.

    PubMed

    Heyman, Tom; Hutchison, Keith A; Storms, Gert

    2016-10-01

    Semantic priming, the phenomenon that a target is recognized faster if it is preceded by a semantically related prime, is a well-established effect. However, the mechanisms producing semantic priming are the subject of debate. Several theories assume that the underlying processes are controllable and tuned to prime utility. In contrast, purely automatic processes, like automatic spreading activation, should be independent of the prime's usefulness. The present study sought to disentangle both accounts by creating a situation where prime processing is actually detrimental. Specifically, participants were asked to quickly complete word fragments with either the letter a or e (e.g., sh_ve to be completed as shave). Critical fragments were preceded by a prime that was either related (e.g., push) or unrelated (write) to a prohibited completion of the target (e.g., shove). In 2 experiments, we found a significant inhibitory priming effect, which is inconsistent with purely "rational" explanations of semantic priming. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Sex-Specific Automatic Responses to Infant Cries: TMS Reveals Greater Excitability in Females than Males in Motor Evoked Potentials

    PubMed Central

    Messina, Irene; Cattaneo, Luigi; Venuti, Paola; de Pisapia, Nicola; Serra, Mauro; Esposito, Gianluca; Rigo, Paola; Farneti, Alessandra; Bornstein, Marc H.

    2016-01-01

    Neuroimaging reveals that infant cries activate parts of the premotor cortical system. To validate this effect in a more direct way, we used event-related transcranial magnetic stimulation (TMS). Here, we investigated the presence and the time course of modulation of motor cortex excitability in young adults who listened to infant cries. Specifically, we recorded motor evoked potentials (MEPs) from the biceps brachii (BB) and interosseus dorsalis primus (ID1) muscles as produced by TMS delivered from 0 to 250 ms after sound onset in six steps of 50 ms in 10 females and 10 males. We observed an excitatory modulation of MEPs at 100 ms from the onset of infant cry specific to females and to the ID1 muscle. We regard this modulation as a response to natural cry sounds because it was attenuated to stimuli increasingly different from natural cry and absent in a separate group of females who listened to non-cry stimuli physically matched to natural infant cries. Furthermore, the 100-ms latency of this response is not compatible with a voluntary reaction to the stimulus but suggests an automatic, bottom-up audiomotor association. The brains of adult females appear to be tuned to respond to infant cries with automatic motor excitation. PMID:26779061

  11. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that keeps the level set function close to a signed distance function, thereby removing the need for the expensive re-initialization procedure. The level set method is then applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and other abnormalities. It benefits the patient with speedier and more decisive disease control and fewer side effects. The geometrical shape, the size of a tumor and abnormal tissue growth can be calculated by segmentation of the image. Automatic segmentation in medical imaging is still a great challenge for researchers. Based on texture analysis, different images are processed by optimization of level set segmentation. Traditionally, optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation of the image is correlated with texture features, making it automatic and more effective. No initialization of parameters is needed, and the method works like an intelligent system: it segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  12. Automaticity of phonological and semantic processing during visual word recognition.

    PubMed

    Pattamadilok, Chotiga; Chanoine, Valérie; Pallier, Christophe; Anton, Jean-Luc; Nazarian, Bruno; Belin, Pascal; Ziegler, Johannes C

    2017-04-01

    Reading involves activation of phonological and semantic knowledge. Yet, the automaticity of the activation of these representations remains subject to debate. The present study addressed this issue by examining how different brain areas involved in language processing responded to a manipulation of bottom-up (level of visibility) and top-down information (task demands) applied to written words. The analyses showed that the same brain areas were activated in response to written words whether the task was symbol detection, rime detection, or semantic judgment. This network included posterior, temporal and prefrontal regions, which clearly suggests the involvement of orthographic, semantic and phonological/articulatory processing in all tasks. However, we also found interactions between task and stimulus visibility, which reflected the fact that the strength of the neural responses to written words in several high-level language areas varied across tasks. Together, our findings suggest that the involvement of phonological and semantic processing in reading is supported by two complementary mechanisms. First, an automatic mechanism that results from a task-independent spread of activation throughout a network in which orthography is linked to phonology and semantics. Second, a mechanism that further fine-tunes the sensitivity of high-level language areas to the sensory input in a task-dependent manner. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    NASA Astrophysics Data System (ADS)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006 the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut, and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12 000 to 15 000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore, wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, effect of wind and dampening value. The semi-automatic algorithm is based on supervised classification. An Artificial Neural Network Multilayer Perceptron (ANN MLP) is used as the classifier since it is more flexible and efficient than a conventional maximum likelihood classifier for multisource and multi-temporal data. The learning algorithm for the ANN MLP is chosen as Levenberg-Marquardt (LM). Training and test data for supervised classification are composed from the textural information created from the SAR images. This approach is semi-automatic because tuning the parameters of the classifier and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as underlining their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.
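    The per-pixel combination of probabilities from multiple images could be sketched as follows. The independence-based combination rule is an assumption; the abstract does not give the exact formula used:

    ```python
    def joint_slick_probability(pixel_probs):
        """Combine per-image slick probabilities for one pixel into a joint
        probability, assuming independent looks: P = 1 - prod(1 - p_i).
        The combination rule is an assumption, not taken from the paper."""
        q = 1.0
        for p in pixel_probs:
            q *= (1.0 - p)
        return 1.0 - q

    # Three acquisitions of the same pixel: a pixel that looks dark in every
    # image gains confidence, damping single-image atmospheric artifacts.
    print(joint_slick_probability([0.6, 0.7, 0.5]))  # ≈ 0.94
    ```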

  14. The Effect of NUMA Tunings on CPU Performance

    NASA Astrophysics Data System (ADS)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-12-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software.
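    As a small illustration of inspecting NUMA topology before pinning processes with numactl, the following parses `numactl --hardware`-style output into a node-to-CPU map. The sample text is illustrative, not from a specific machine:

    ```python
    def parse_numactl_hardware(text):
        """Extract the node -> CPU-list mapping from `numactl --hardware`-style
        output. Only the 'node N cpus:' lines are used."""
        nodes = {}
        for line in text.splitlines():
            line = line.strip()
            if line.startswith("node") and "cpus:" in line:
                head, _, cpus = line.partition("cpus:")
                node_id = int(head.split()[1])
                nodes[node_id] = [int(c) for c in cpus.split()]
        return nodes

    # Illustrative two-node topology:
    sample = """\
    available: 2 nodes (0-1)
    node 0 cpus: 0 1 2 3
    node 0 size: 32768 MB
    node 1 cpus: 4 5 6 7
    node 1 size: 32768 MB
    """
    print(parse_numactl_hardware(sample))  # {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}
    ```

    With such a map, a job launcher could bind a process and its memory to one node (e.g. `numactl --cpunodebind=0 --membind=0 ...`), which is the manual counterpart of what numad does automatically.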

  15. Model-Free Machine Learning in Biomedicine: Feasibility Study in Type 1 Diabetes

    PubMed Central

    Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G.

    2016-01-01

    Although reinforcement learning (RL) is suitable for highly uncertain systems, the applicability of this class of algorithms to medical treatment may be limited by the patient variability which dictates individualised tuning for their usually multiple algorithmic parameters. This study explores the feasibility of RL in the framework of artificial pancreas development for type 1 diabetes (T1D). In this approach, an Actor-Critic (AC) learning algorithm is designed and developed for the optimisation of insulin infusion for personalised glucose regulation. AC optimises the daily basal insulin rate and insulin:carbohydrate ratio for each patient, on the basis of his/her measured glucose profile. Automatic, personalised tuning of AC is based on the estimation of information transfer (IT) from insulin to glucose signals. Insulin-to-glucose IT is linked to patient-specific characteristics related to total daily insulin needs and insulin sensitivity (SI). The AC algorithm is evaluated using an FDA-accepted T1D simulator on a large patient database under a complex meal protocol, meal uncertainty and diurnal SI variation. The results showed that 95.66% of time was spent in normoglycaemia in the presence of meal uncertainty and 93.02% when meal uncertainty and SI variation were simultaneously considered. The time spent in hypoglycaemia was 0.27% in both cases. The novel tuning method reduced the risk of severe hypoglycaemia, especially in patients with low SI. PMID:27441367
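    A generic, textbook-style actor-critic update for a scalar action (such as a basal-rate adjustment) might look like the following. This is not the authors' algorithm; in particular, their information-transfer-based personalised tuning of the learning rates is not reproduced, and the environment is a toy stand-in:

    ```python
    import random

    def actor_critic_step(state, theta, w, env_step, alpha_actor=0.01,
                          alpha_critic=0.1, gamma=0.99):
        """One update of a minimal actor-critic: linear critic (value = w*state)
        and a Gaussian policy over a scalar action with mean theta*state.
        Generic sketch only, under the assumptions stated above."""
        mu = theta * state                        # actor proposes an action mean
        action = random.gauss(mu, 1.0)            # exploration noise, fixed sigma
        next_state, reward = env_step(state, action)
        td_error = reward + gamma * w * next_state - w * state
        w += alpha_critic * td_error * state                      # critic update
        theta += alpha_actor * td_error * (action - mu) * state   # actor update
        return next_state, theta, w

    # Toy, hypothetical environment: reward is higher when the action tracks the state.
    random.seed(0)

    def toy_env(state, action):
        return 0.5 * state + 0.5, -abs(action - state)

    state, theta, w = 1.0, 0.0, 0.0
    for _ in range(200):
        state, theta, w = actor_critic_step(state, theta, w, toy_env)
    ```

    In the study, the analogous updates act on daily basal rate and insulin:carbohydrate ratio, with the learning rates set per patient from insulin-to-glucose information transfer.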

  16. Automatic Tuning Matching Cycler (ATMC) in situ NMR spectroscopy as a novel approach for real-time investigations of Li- and Na-ion batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecher, Oliver; Bayley, Paul M.; Liu, Hao

    We have developed and explored the use of a new Automatic Tuning Matching Cycler (ATMC) in situ NMR probe system to track the formation of intermediate phases and investigate electrolyte decomposition during electrochemical cycling of Li- and Na-ion batteries (LIBs and NIBs). The new approach addresses many of the issues arising during in situ NMR, e.g., significantly different shifts of the multi-component samples, changing sample conditions (such as the magnetic susceptibility and conductivity) during cycling, signal broadening due to paramagnetism as well as interferences between the NMR and external cycler circuit that might impair the experiments. We provide practical insight into how to conduct ATMC in situ NMR experiments and discuss applications of the methodology to LiFePO4 (LFP) and Na3V2(PO4)2F3 cathodes as well as Na metal anodes. Automatic frequency sweep 7Li in situ NMR reveals significant changes of the strongly paramagnetic broadened LFP line shape in agreement with the structural changes due to delithiation. Additionally, 31P in situ NMR shows a full separation of the electrolyte and cathode NMR signals and is a key feature for a deeper understanding of the processes occurring during charge/discharge on the local atomic scale of NMR. 31P in situ NMR with “on-the-fly” re-calibrated, varying carrier frequencies on Na3V2(PO4)2F3 as a cathode in a NIB enabled the detection of different P signals within a huge frequency range of 4000 ppm. The experiments show a significant shift and changes in the number as well as intensities of 31P signals during desodiation/sodiation of the cathode. The in situ experiments reveal changes of local P environments that in part have not been seen in ex situ NMR investigations. Furthermore, we applied ATMC 23Na in situ NMR on symmetrical Na–Na cells during galvanostatic plating. An automatic adjustment of the NMR carrier frequency during the in situ experiment ensured on-resonance conditions for the Na metal and electrolyte peak, respectively. Thus, interleaved measurements with different optimal NMR set-ups for the metal and electrolyte, respectively, became possible. This allowed the formation of different Na metal species as well as a quantification of electrolyte consumption during the electrochemical experiment to be monitored. The new approach is likely to benefit a further understanding of Na-ion battery chemistries.

  17. Automatic Tuning Matching Cycler (ATMC) in situ NMR spectroscopy as a novel approach for real-time investigations of Li- and Na-ion batteries

    NASA Astrophysics Data System (ADS)

    Pecher, Oliver; Bayley, Paul M.; Liu, Hao; Liu, Zigeng; Trease, Nicole M.; Grey, Clare P.

    2016-04-01

    We have developed and explored the use of a new Automatic Tuning Matching Cycler (ATMC) in situ NMR probe system to track the formation of intermediate phases and investigate electrolyte decomposition during electrochemical cycling of Li- and Na-ion batteries (LIBs and NIBs). The new approach addresses many of the issues arising during in situ NMR, e.g., significantly different shifts of the multi-component samples, changing sample conditions (such as the magnetic susceptibility and conductivity) during cycling, signal broadening due to paramagnetism as well as interferences between the NMR and external cycler circuit that might impair the experiments. We provide practical insight into how to conduct ATMC in situ NMR experiments and discuss applications of the methodology to LiFePO4 (LFP) and Na3V2(PO4)2F3 cathodes as well as Na metal anodes. Automatic frequency sweep 7Li in situ NMR reveals significant changes of the strongly paramagnetic broadened LFP line shape in agreement with the structural changes due to delithiation. Additionally, 31P in situ NMR shows a full separation of the electrolyte and cathode NMR signals and is a key feature for a deeper understanding of the processes occurring during charge/discharge on the local atomic scale of NMR. 31P in situ NMR with "on-the-fly" re-calibrated, varying carrier frequencies on Na3V2(PO4)2F3 as a cathode in a NIB enabled the detection of different P signals within a huge frequency range of 4000 ppm. The experiments show a significant shift and changes in the number as well as intensities of 31P signals during desodiation/sodiation of the cathode. The in situ experiments reveal changes of local P environments that in part have not been seen in ex situ NMR investigations. Furthermore, we applied ATMC 23Na in situ NMR on symmetrical Na-Na cells during galvanostatic plating. An automatic adjustment of the NMR carrier frequency during the in situ experiment ensured on-resonance conditions for the Na metal and electrolyte peak, respectively. Thus, interleaved measurements with different optimal NMR set-ups for the metal and electrolyte, respectively, became possible. This allowed the formation of different Na metal species as well as a quantification of electrolyte consumption during the electrochemical experiment to be monitored. The new approach is likely to benefit a further understanding of Na-ion battery chemistries.
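    The "on-the-fly" carrier re-calibration idea can be sketched as a sweep-and-recenter step: acquire a spectrum around the current carrier, find the strongest peak, and move the carrier onto it. Here `acquire_spectrum` is a stand-in for the actual probe/console interface, which the abstract does not specify, and all frequencies are hypothetical:

    ```python
    def recenter_carrier(acquire_spectrum, carrier_hz, span_hz=200_000, points=41):
        """Sweep a frequency window around the current carrier, locate the
        strongest peak, and return it as the new on-resonance carrier."""
        step = span_hz / (points - 1)
        freqs = [carrier_hz - span_hz / 2 + i * step for i in range(points)]
        amplitudes = acquire_spectrum(freqs)
        peak_freq = max(zip(amplitudes, freqs))[1]
        return peak_freq

    # Toy spectrum: a single Lorentzian line offset +40 kHz from the carrier.
    def fake_spectrum(freqs, line_hz=100_040_000.0, width_hz=5_000.0):
        return [1.0 / (1.0 + ((f - line_hz) / width_hz) ** 2) for f in freqs]

    new_carrier = recenter_carrier(fake_spectrum, 100_000_000.0)
    print(new_carrier - 100_000_000.0)  # 40000.0 Hz shift onto resonance
    ```

    Running such a step interleaved for each species (metal, electrolyte) is the essence of keeping both peaks on resonance as their shifts move during cycling.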

  18. Design and development of a compact lidar/DIAL system for aerial surveillance of urban areas

    NASA Astrophysics Data System (ADS)

    Gaudio, P.; Gelfusa, M.; Malizia, A.; Richetta, M.; Antonucci, A.; Ventura, P.; Murari, A.; Vega, J.

    2013-10-01

    Surveying large areas automatically for early detection of harmful chemical agents has recently become a strategic objective of defence and public health organisations. Lidar-DIAL techniques are widely recognized as a cost-effective alternative for monitoring large portions of the atmosphere but, up to now, they have mainly been deployed as ground-based stations. The design reported in this paper concerns the development of a Lidar-DIAL system compact enough to be carried by a small airplane and capable of detecting sudden releases into the air of harmful and/or polluting substances. The proposed approach consists of continuous monitoring of the area under surveillance with a Lidar-type measurement. Once a significant increase in the density of backscattering substances is revealed, the system switches to the DIAL technique to identify the released chemicals and to determine their concentrations. In this paper, the design of the proposed system is described and the simulations carried out to determine its performance are reported. For the Lidar measurements, commercially available Nd:YAG laser sources have already been tested and their performance, in combination with avalanche photodiodes, has been experimentally verified to meet the required specifications. With regard to the DIAL measurements, new compact CO2 laser sources are being investigated. The most promising candidate delivers a typical pulse energy of about 50 mJ, sufficient for a range of at least 500 m. The laser also provides a so-called "agile tuning" option that allows the wavelength to be tuned quickly. To guarantee continuous, automatic surveying of large areas, innovative solutions are required for data acquisition, self-monitoring of the system and data analysis. The results of the design, the simulations and some preliminary tests illustrate the potential of the chosen, integrated approach.
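    The DIAL retrieval step follows a standard textbook relation between the on-line/off-line returns from two range gates and the mean number density in the range cell between them. The formula is generic, not taken from this paper, and the numbers below are purely hypothetical:

    ```python
    import math

    def dial_concentration(p_on_near, p_on_far, p_off_near, p_off_far,
                           delta_sigma_m2, range_cell_m):
        """Standard DIAL retrieval of mean number density (molecules/m^3):
        N = ln((P_off_far * P_on_near) / (P_on_far * P_off_near))
            / (2 * delta_sigma * delta_R),
        where delta_sigma is the differential absorption cross-section."""
        ratio = (p_off_far * p_on_near) / (p_on_far * p_off_near)
        return math.log(ratio) / (2.0 * delta_sigma_m2 * range_cell_m)

    # Hypothetical case: the on-line return is attenuated 10% more than the
    # off-line return over a 100 m cell, with delta_sigma = 1e-22 m^2.
    n = dial_concentration(1.0, 0.90, 1.0, 1.0, 1e-22, 100.0)
    print(n)  # ~5.3e18 molecules/m^3
    ```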

  19. The Optimization of Trained and Untrained Image Classification Algorithms for Use on Large Spatial Datasets

    NASA Technical Reports Server (NTRS)

    Kocurek, Michael J.

    2005-01-01

    The HARVIST project seeks to automatically provide an accurate, interactive interface to predict crop yield over the entire United States. In order to accomplish this goal, large images must be quickly and automatically classified by crop type. Current trained and untrained classification algorithms, while accurate, are highly inefficient when operating on large datasets. This project sought to develop new variants of two standard trained and untrained classification algorithms that are optimized to take advantage of the spatial nature of image data. The first algorithm, harvist-cluster, utilizes divide-and-conquer techniques to precluster an image in the hopes of increasing overall clustering speed. The second algorithm, harvistSVM, utilizes support vector machines (SVMs), a type of trained classifier. It seeks to increase classification speed by applying a "meta-SVM" to a quick (but inaccurate) SVM to approximate a slower, yet more accurate, SVM. Speedups were achieved by tuning the algorithm to quickly identify when the quick SVM was incorrect, and then reclassifying low-confidence pixels as necessary. Comparing the classification speeds of both algorithms to known baselines showed a slight speedup for large values of k (the number of clusters) for harvist-cluster, and a significant speedup for harvistSVM. Future work aims to automate the parameter tuning process required for harvistSVM, and further improve classification accuracy and speed. Additionally, this research will move documents created in Canvas into ArcGIS. The launch of the Mars Reconnaissance Orbiter (MRO) will provide a wealth of image data such as global maps of Martian weather and high resolution global images of Mars. The ability to store this new data in a georeferenced format will support future Mars missions by providing data for landing site selection and the search for water on Mars.
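    The quick-then-accurate cascade behind harvistSVM can be sketched generically: accept the fast classifier's label where its confidence is high, and reclassify only low-confidence pixels with the slower, more accurate model. The classifier and confidence functions below are stand-ins, not the project's actual SVMs:

    ```python
    def cascade_classify(pixels, quick_clf, slow_clf, confidence, threshold=0.8):
        """Two-stage classification: keep the quick classifier's label unless
        its confidence falls below `threshold`, in which case fall back to the
        slow (more accurate) classifier for that pixel only."""
        labels = []
        for px in pixels:
            label = quick_clf(px)
            if confidence(px) < threshold:
                label = slow_clf(px)   # reclassify only where the quick model is unsure
            labels.append(label)
        return labels

    # Toy 1-D stand-ins: quick model thresholds at 0.5, slow model at 0.55;
    # confidence is (scaled) distance from the quick decision boundary.
    quick = lambda x: int(x > 0.5)
    slow = lambda x: int(x > 0.55)
    conf = lambda x: abs(x - 0.5) * 2
    print(cascade_classify([0.1, 0.52, 0.9], quick, slow, conf))  # [0, 0, 1]
    ```

    The speedup comes from the slow model running on only the small fraction of pixels near the decision boundary, which mirrors the "meta-SVM" idea of identifying where the quick SVM is likely wrong.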

  20. [Design of High Frequency Signal Detecting Circuit of Human Body Impedance Used for Ultrashort Wave Diathermy Apparatus].

    PubMed

    Fan, Xu; Wang, Yunguang; Cheng, Haiping; Chong, Xiaochen

    2016-02-01

    This circuit was designed for the human tissue impedance tuning and matching device in ultrashort wave treatment equipment. To judge whether the circuit parameters place the energy emitter and receiver circuits in optimal resonance, we designed a high-frequency envelope detection circuit to work with the automatic adjustment device of the receiver circuit, achieving human tissue impedance matching and tuning. Using a sampling coil to receive the amplitude-modulated wave signal, we compared the voltage output of the envelope detection circuit with the current of the energy emitter circuit. The experiments showed that the signal transformed by the envelope detection circuit was stable, could be recognized by a low-speed analog-to-digital converter (ADC), and was proportional to the current of the energy emitter circuit. It can be concluded that the voltage produced by the envelope detection circuit reflects the true resonance state of the circuit and realizes the function of human tissue impedance measurement.

  1. Advanced Fire Detector for Space Applications

    NASA Technical Reports Server (NTRS)

    Kutzner, Joerg

    2012-01-01

    A document discusses an optical carbon monoxide sensor for early fire detection. During the sensor development, a concept was implemented to allow reliable carbon monoxide detection in the presence of interfering absorption signals. Methane interference is present in the operating wavelength range of the developed prototype sensor for carbon monoxide detection. The operating parameters of the prototype sensor have been optimized so that interference with methane is minimized. In addition, simultaneous measurement of methane is implemented, and the instrument automatically corrects the carbon monoxide signal at high methane concentrations. This is possible because VCSELs (vertical cavity surface emitting lasers) with extended current tuning capabilities are implemented in the optical device. The tuning capabilities of these new laser sources are sufficient to cover the wavelength range of several absorption lines. The delivered carbon monoxide sensor (COMA 1) reliably measures low carbon monoxide levels even in the presence of high methane signals. The signal bleed-over is determined during system calibration and is then accounted for in the system parameters. The sensor reports carbon monoxide concentrations reliably for (interfering) methane concentrations up to several thousand parts per million.
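The bleed-over correction described above is, in its simplest form, a linear subtraction whose coefficient is fixed during calibration. A minimal sketch follows; the linear model and all numbers are illustrative assumptions, not the instrument's actual calibration procedure.

```python
def calibrate_bleed(ch4_readings, co_readings):
    """Estimate the methane bleed-over coefficient from calibration
    runs with methane only (true CO = 0): least-squares slope of
    apparent CO versus methane, through the origin."""
    num = sum(c * m for c, m in zip(co_readings, ch4_readings))
    den = sum(m * m for m in ch4_readings)
    return num / den

def correct_co(co_raw_ppm, ch4_ppm, bleed_coeff):
    """Remove the methane contribution from the raw CO signal."""
    return co_raw_ppm - bleed_coeff * ch4_ppm
```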

  2. A versatile computer-controlled pulsed nuclear quadrupole resonance spectrometer

    NASA Astrophysics Data System (ADS)

    Fisher, Gregory; MacNamara, Ernesto; Santini, Robert E.; Raftery, Daniel

    1999-12-01

    A new, pulsed nuclear quadrupole resonance (NQR) spectrometer capable of performing a variety of pulsed and swept experiments is described. The spectrometer features phase locked, superheterodyne detection using a commercial spectrum analyzer and a fully automatic, computer-controlled tuning and matching network. The tuning and matching network employs stepper motors which turn high power air gap capacitors in a "moving grid" optimization strategy to minimize the reflected power from a directional coupler. In the duplexer circuit, digitally controlled relays are used to switch different lengths of coax cable appropriate for the different radio frequencies. A home-built pulse programmer card controls the timing of radio frequency pulses sent to the probe, while data acquisition and control software is written in Microsoft Quick Basic. Spin-echo acquisition experiments are typically used to acquire the data, although a variety of pulse sequences can be employed. Scan times range from one to several hours depending upon the step resolution and the spectral range required for each experiment. Pure NQR spectra of NaNO2 and 3-aminopyridine are discussed.
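The stepper-motor "moving grid" optimization can be sketched as a pattern search over the two capacitor settings: step to whichever neighbouring grid point gives lower reflected power, and shrink the grid when no move helps. The reflected-power function below is a toy stand-in for the directional-coupler measurement, and all values are illustrative.

```python
def reflected_power(tune, match):
    """Toy stand-in for the directional-coupler reading, with a
    minimum at capacitor positions (37.0, 12.5)."""
    return (tune - 37.0) ** 2 + 2.0 * (match - 12.5) ** 2

def moving_grid_minimize(measure, start, step=4.0, min_step=0.01):
    """Pattern search: try a 3x3 grid of steps around the current
    point, move while reflected power improves, halve the grid
    spacing when no move helps."""
    best = start
    best_p = measure(*best)
    while step > min_step:
        moved = False
        for dt in (-step, 0.0, step):
            for dm in (-step, 0.0, step):
                cand = (best[0] + dt, best[1] + dm)
                p = measure(*cand)
                if p < best_p:
                    best, best_p, moved = cand, p, True
        if not moved:
            step /= 2  # shrink the grid around the optimum
    return best
```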

  3. Fully automatic lesion segmentation in breast MRI using mean-shift and graph-cuts on a region adjacency graph.

    PubMed

    McClymont, Darryl; Mehnert, Andrew; Trakic, Adnan; Kennedy, Dominic; Crozier, Stuart

    2014-04-01

    To present and evaluate a fully automatic method for segmentation (i.e., detection and delineation) of suspicious tissue in breast MRI. The method, based on mean-shift clustering and graph-cuts on a region adjacency graph, was developed and its parameters tuned using multimodal (T1, T2, DCE-MRI) clinical breast MRI data from 35 subjects (training data). It was then tested using two data sets. Test set 1 comprises data for 85 subjects (93 lesions) acquired using the same protocol and scanner system used to acquire the training data. Test set 2 comprises data for eight subjects (nine lesions) acquired using a similar protocol but a different vendor's scanner system. Each lesion was manually delineated in three dimensions by an experienced breast radiographer to establish segmentation ground truth. The regions of interest identified by the method were compared with the ground truth, and the detection and delineation accuracies were quantitatively evaluated. One hundred percent of the lesions were detected with a mean of 4.5 ± 1.2 false positives per subject. This false-positive rate is nearly 50% better than previously reported for a fully automatic breast lesion detection system. The median Dice coefficient for Test set 1 was 0.76 (interquartile range, 0.17), and 0.75 (interquartile range, 0.16) for Test set 2. The results demonstrate the efficacy and accuracy of the proposed method as well as its potential for direct application across different MRI systems. It is (to the authors' knowledge) the first fully automatic method for breast lesion detection and delineation in breast MRI.

  4. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading. PMID:23802936
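A toy illustration of the extractive-summarization idea (scoring sentences by the frequency of their content words and keeping the top-scoring ones in document order); this is a generic sketch, not either of the summarizers actually evaluated in the paper.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Keep the n highest-scoring sentences, in document order.
    A sentence's score is the mean corpus frequency of its words."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(s):
        toks = re.findall(r'[a-z]+', s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)
    keep = sorted(ranked[:n_sentences])
    return ' '.join(sentences[i] for i in keep)
```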

  5. A 1-2 GHz pulsed and continuous wave electron paramagnetic resonance spectrometer

    NASA Astrophysics Data System (ADS)

    Quine, Richard W.; Rinard, George A.; Ghim, Barnard T.; Eaton, Sandra S.; Eaton, Gareth R.

    1996-07-01

    A microwave bridge has been constructed that performs three types of electron paramagnetic resonance experiments: continuous wave, pulsed saturation recovery, and pulsed electron spin echo. Switching between experiment types can be accomplished via front-panel switches without moving the sample. Design features and performance of the bridge and of a resonator used in testing the bridge are described. The bridge is constructed of coaxial components connected with semirigid cable. Particular attention has been paid to low-noise design of the preamplifier and stability of automatic frequency control circuits. The bridge incorporates a Smith chart display and phase adjustment meter for ease of tuning.

  6. Stabilizing operation point technique based on the tunable distributed feedback laser for interferometric sensors

    NASA Astrophysics Data System (ADS)

    Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu

    2016-02-01

    We describe an operation-point stabilization technique based on a tunable Distributed Feedback (DFB) laser for quadrature demodulation of interferometric sensors. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into the interferometric system, the operation point remains stable when the system suffers various environmental perturbations. To demonstrate the feasibility of this technique, experiments were performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that the quadrature point (Q-point) was effectively tracked.
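The quadrature-point locking loop can be illustrated with a toy simulation: the interferometer output is I = A + B*cos(phi), and the controller adjusts the DFB laser wavelength (hence phi) until the output sits at the quadrature point I = A, where sensitivity to vibration is maximal. The gain, fringe parameters and update law below are illustrative, not the paper's actual values.

```python
import math

def lock_q_point(phi0, gain=0.4, steps=200):
    """Integral-style feedback that drives the interference phase to
    the quadrature point phi = pi/2, where cos(phi) = 0."""
    A, B = 1.0, 0.8                  # fringe offset and visibility
    phi = phi0
    for _ in range(steps):
        intensity = A + B * math.cos(phi)
        error = (intensity - A) / B  # = cos(phi); zero at the Q-point
        phi += gain * error          # wavelength-tuning correction
    return phi
```

For any gain between 0 and 2 the loop is a contraction near the Q-point, so the phase converges regardless of slow environmental drift of phi0.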

  7. Testing Saliency Parameters for Automatic Target Recognition

    NASA Technical Reports Server (NTRS)

    Pandya, Sagar

    2012-01-01

    A bottom-up visual attention model (the saliency model) is tested to enhance the performance of Automated Target Recognition (ATR). JPL has developed an ATR system that identifies regions of interest (ROI) using a trained OT-MACH filter, and then classifies potential targets as true- or false-positives using machine-learning techniques. In this project, saliency is used as a pre-processing step to reduce the search space for OT-MACH filtering. Saliency parameters, such as output level and orientation weight, are tuned to detect known target features. Preliminary results are promising, and future work entails a rigorous, parameter-based search to gain maximum insight into this method.

  8. Self tuning system for industrial surveillance

    DOEpatents

    Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.

    2000-01-01

    A method and system for automatically establishing operational parameters of a statistical surveillance system. The method and system perform a frequency domain transformation on time dependent data; a first Fourier composite is formed, serial correlation is removed, a series of Gaussian whiteness tests are performed along with an autocorrelation test, Fourier coefficients are stored, and a second Fourier composite is formed. Pseudorandom noise is added, and a Monte Carlo simulation is performed to establish SPRT missed-alarm probabilities, tested with a synthesized signal. A false-alarm test is then empirically evaluated and, if less than a desired target value, the SPRT probabilities are used for performing surveillance.
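The SPRT at the core of such surveillance can be sketched as Wald's sequential test between "normal" and "degraded" Gaussian means; the alarm thresholds follow directly from the target false- and missed-alarm probabilities. All parameter values below are illustrative.

```python
import math

def sprt(samples, m0=0.0, m1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: accumulate the
    log-likelihood ratio of N(m1, sigma) vs N(m0, sigma) and stop
    when it crosses a threshold set by the target error rates
    (alpha = false alarm, beta = missed alarm)."""
    upper = math.log((1 - beta) / alpha)  # decide "degraded"
    lower = math.log(beta / (1 - alpha))  # decide "normal"
    llr = 0.0
    for i, x in enumerate(samples):
        llr += (m1 - m0) * (x - (m0 + m1) / 2) / sigma ** 2
        if llr >= upper:
            return 'degraded', i
        if llr <= lower:
            return 'normal', i
    return 'undecided', len(samples) - 1
```

Ambiguous data (near the midpoint of the two means) keeps the test running, which is exactly why the missed/false-alarm probabilities must be established empirically, as the patent describes.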

  9. Power suppression at large scales in string inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar, E-mail: mcicoli@ictp.it, E-mail: sddownes@physics.tamu.edu, E-mail: dutta@physics.tamu.edu

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  10. Natural SUSY and the Higgs boson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Peisi

    2014-01-01

    Supersymmetry (SUSY) solves the hierarchy problem by introducing a superpartner for each Standard Model (SM) particle. SUSY must be broken in nature, which means fine-tuning is reintroduced at some level. Natural SUSY models enjoy low fine-tuning by featuring a small superpotential parameter μ ~ 125 GeV, while the third generation squarks have mass less than 1.5 TeV. First and second generation sfermions can be at the multi-TeV level, which yields a decoupling solution to the SUSY flavor and CP problems. However, models of Natural SUSY have difficulty predicting m_h at 125 GeV, because the third generation is too light to give a large radiative correction to the Higgs mass. Models of Radiative Natural SUSY (RNS) address this problem by allowing for a high scale soft SUSY breaking Higgs mass m_Hu > m_0, which leads to automatic cancellation by the Renormalization Group (RG) running effect. Coupled with large mixing in the stop sector, RNS allows low fine-tuning at the 3-10% level and a 125 GeV SM-like Higgs. RNS can be reached at the LHC and a linear collider. If the strong CP problem is solved by the Peccei-Quinn mechanism, then RNS accommodates mixed axion-Higgsino cold dark matter, where the Higgsino-like WIMPs, which in this case make up only a fraction of the relic abundance, can be detectable at future WIMP detectors.

  11. Power suppression at large scales in string inflation

    NASA Astrophysics Data System (ADS)

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  12. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  13. Second-order sliding mode controller with model reference adaptation for automatic train operation

    NASA Astrophysics Data System (ADS)

    Ganesan, M.; Ezhilarasi, D.; Benni, Jijo

    2017-11-01

    In this paper, a new approach to model-reference-based adaptive second-order sliding mode control together with adaptive state feedback is presented to control the longitudinal dynamic motion of a high speed train for automatic train operation, with the objective of minimal-jerk travel for the passengers. The nonlinear dynamic model for the longitudinal motion of the train, comprising locomotive and coach subsystems, is constructed using a multiple point-mass model by considering the forces acting on the vehicle. An adaptation scheme using the Lyapunov criterion is derived to tune the controller gains by considering a linear, stable reference model that ensures the stability of the system in closed loop. The effectiveness of the controller's tracking performance is tested under uncertain passenger load, coupler-draft gear parameters, propulsion resistance coefficient variations and environmental disturbances due to side wind and wet rail conditions. The results demonstrate improved tracking performance of the proposed control scheme with the least jerk under maximum parameter uncertainties when compared to constant-gain second-order sliding mode control.
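Lyapunov-based gain adaptation against a stable reference model can be illustrated on a scalar toy problem rather than the full train model: plant dy/dt = -y + k*u with unknown gain k, reference model dy_m/dt = -y_m + r, control u = theta*r, and adaptation law dtheta/dt = -gamma*e*r with tracking error e = y - y_m. All numbers (gains, time step) are illustrative.

```python
def simulate_mrac(k=2.0, gamma=0.5, r=1.0, dt=0.01, steps=5000):
    """Forward-Euler simulation of a scalar model-reference adaptive
    loop; theta should converge to 1/k so the plant matches the
    reference model exactly."""
    y = ym = theta = 0.0
    for _ in range(steps):
        e = y - ym                      # tracking error
        u = theta * r                   # adaptive feedforward control
        y += dt * (-y + k * u)          # plant with unknown gain k
        ym += dt * (-ym + r)            # stable reference model
        theta += dt * (-gamma * e * r)  # Lyapunov adaptation law
    return y, ym, theta
```

Because the adaptation law is derived from a Lyapunov function of (e, theta), the error converges for any positive gamma; gamma only shapes how quickly and how oscillatory the convergence is.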

  14. An Auto-Tuning PI Control System for an Open-Circuit Low-Speed Wind Tunnel Designed for Greenhouse Technology.

    PubMed

    Espinoza, Karlos; Valera, Diego L; Torres, José A; López, Alejandro; Molina-Aiz, Francisco D

    2015-08-12

    Wind tunnels are a key experimental tool for the analysis of airflow parameters in many fields of application. Despite their great potential impact on agricultural research, few contributions have dealt with the development of automatic control systems for wind tunnels in the field of greenhouse technology. The objective of this paper is to present an automatic control system that provides precision and speed of measurement, as well as efficient data processing in low-speed wind tunnel experiments for greenhouse engineering applications. The system is based on an algorithm that identifies the system model and calculates the optimum PI controller. The validation of the system was performed on a cellulose evaporative cooling pad and on insect-proof screens to assess its response to perturbations. The control system provided an accuracy of <0.06 m·s^-1 for airflow speed and <0.50 Pa for pressure drop, thus permitting the reproducibility and standardization of the tests. The proposed control system also incorporates a fully-integrated software unit that manages the tests in terms of airflow speed and pressure drop set points.
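The identify-then-tune loop can be sketched in two steps: fit a first-order model K/(tau*s + 1) to a fan step response (gain from the steady state, time constant from the 63.2% rise time), then apply the IMC (lambda) PI tuning rules Kp = tau/(K*lambda), Ti = tau. This is a generic sketch under a first-order-dynamics assumption; the paper's identification algorithm may differ.

```python
def identify_first_order(t, y, step_size):
    """Fit a first-order model from step-response data: gain is the
    steady-state ratio, time constant is the time to reach 63.2% of
    the final value."""
    K = y[-1] / step_size
    target = 0.632 * y[-1]
    tau = next(ti for ti, yi in zip(t, y) if yi >= target)
    return K, tau

def imc_pi(K, tau, lam):
    """IMC (lambda) tuning of a PI controller for a first-order
    plant; lam sets the desired closed-loop time constant."""
    return tau / (K * lam), tau  # (Kp, Ti)
```

Choosing lambda trades speed for robustness: a small lambda gives an aggressive controller, a large one a sluggish but forgiving one.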

  15. Image simulation for automatic license plate recognition

    NASA Astrophysics Data System (ADS)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.

  16. Objective measures for quality assessment of automatic skin enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Ciuc, Mihai; Capata, Adrian; Florea, Corneliu

    2010-01-01

    Automatic portrait enhancement by attenuating skin flaws (pimples, blemishes, wrinkles, etc.) has received considerable attention from digital camera manufacturers thanks to its impact on the public. Subsequently, a number of algorithms have been developed to meet this need. One central aspect of developing such an algorithm is quality assessment: having a few numbers that precisely indicate the amount of beautification brought by an algorithm (as perceived by human observers) is of great help, as it circumvents time-costly human evaluation. In this paper, we propose a method to numerically evaluate the quality of a skin beautification algorithm. The most important aspects we take into account and quantify are the quality of the skin detector, the amount of smoothing performed by the method, the preservation of intrinsic skin texture, and the preservation of facial features. We combine these measures into two numbers that assess the quality of skin detection and beautification. The derived measures are highly correlated with human perception; therefore, they constitute a helpful tool for tuning and comparing algorithms.

  17. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  18. Energy savings modelling of re-tuning energy conservation measures in large office buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez, Nick; Katipamula, Srinivas; Wang, Weimin

    Today, many large commercial buildings use sophisticated building automation systems (BASs) to manage a wide range of building equipment. While the capabilities of BASs have increased over time, many buildings still do not fully use the BAS's capabilities and are not properly commissioned, operated or maintained, which leads to inefficient operation, increased energy use, and reduced lifetimes of the equipment. This paper investigates the energy savings potential of several common HVAC system re-tuning measures on a typical large office building, using the Department of Energy's building energy modeling software, EnergyPlus. The baseline prototype model uses roughly as much energy as an average large office building in existing building stock, but does not utilize any re-tuning measures. Individual re-tuning measures simulated against this baseline include automatic schedule adjustments, damper minimum flow adjustments, thermostat adjustments, as well as dynamic resets (set points that change continuously with building and/or outdoor conditions) to static pressure, supply-air temperature, condenser water temperature, chilled and hot water temperature, and chilled and hot water differential pressure set points. Six combinations of these individual measures have been formulated, each designed to conform to limitations to implementation of certain individual measures that might exist in typical buildings. All the individual measures and combinations were simulated in 16 climate locations representative of specific U.S. climate zones. The modeling results suggest that the most effective energy savings measures are those that affect the demand side of the building (air systems and schedules). Many of the demand-side individual measures were capable of reducing annual total HVAC system energy consumption by over 20% in most cities that were modeled. Supply-side measures affecting HVAC plant conditions were only modestly successful (less than 5% annual HVAC energy savings for most cities for all measures). Combining many of the re-tuning measures revealed deep savings potential. Some of the more aggressive combinations revealed 35-75% reductions in annual HVAC energy consumption, depending on climate and building vintage.

  19. Automatic feature learning using multichannel ROI based on deep structured algorithms for computerized lung cancer diagnosis.

    PubMed

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2017-10-01

    This study aimed to analyze the ability of automatically generated features extracted by deep structured algorithms in lung nodule CT image diagnosis, and to compare their performance with traditional computer-aided diagnosis (CADx) systems using hand-crafted features. All 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI based deep structured algorithms were designed and implemented in this study: convolutional neural network (CNN), deep belief network (DBN), and stacked denoising autoencoder (SDAE). For comparison purposes, we also implemented a CADx system using hand-crafted features including density, texture and morphological features. The performance of every scheme was evaluated using 10-fold cross-validation and the area under the receiver operating characteristic curve (AUC). The highest observed AUC was 0.899±0.018, achieved by CNN, which was significantly higher than traditional CADx with AUC = 0.848±0.026. The results from DBN were also slightly higher than CADx, while SDAE was slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy stroke detectors, in the deep structured schemes. The study results showed that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can outperform the current popular CADx. We believe deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well. Copyright © 2017. Published by Elsevier Ltd.

  20. Detecting REM sleep from the finger: an automatic REM sleep algorithm based on peripheral arterial tone (PAT) and actigraphy.

    PubMed

    Herscovici, Sarah; Pe'er, Avivit; Papyan, Surik; Lavie, Peretz

    2007-02-01

    Scoring of REM sleep based on polysomnographic recordings is a laborious and time-consuming process. The growing number of ambulatory devices designed for cost-effective home-based diagnostic sleep recordings necessitates the development of a reliable automatic REM sleep detection algorithm that is not based on the traditional trio of electroencephalographic, electrooculographic and electromyographic recordings. This paper presents an automatic REM detection algorithm based on the peripheral arterial tone (PAT) signal and actigraphy, which are recorded with an ambulatory wrist-worn device (Watch-PAT100). The PAT signal is a measure of the pulsatile volume changes at the fingertip reflecting sympathetic tone variations. The algorithm was developed using a training set of 30 patients recorded simultaneously with polysomnography and Watch-PAT100. Sleep records were divided into 5 min intervals and two time series were constructed from the PAT amplitudes and PAT-derived inter-pulse periods in each interval. A prediction function based on 16 features extracted from the above time series that determines the likelihood of detecting a REM epoch was developed. The coefficients of the prediction function were determined using a genetic algorithm (GA) optimization process tuned to maximize a price function depending on the sensitivity, specificity and agreement of the algorithm in comparison with the gold standard of polysomnographic manual scoring. Based on a separate validation set of 30 patients, the overall sensitivity, specificity and agreement of the automatic algorithm in identifying standard 30 s epochs of REM sleep were 78%, 92% and 89%, respectively. Deploying this REM detection algorithm in a wrist-worn device could be very useful for unattended ambulatory sleep monitoring. The innovative method of optimization using a genetic algorithm has been proven to yield robust results in the validation set.
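The genetic-algorithm step can be sketched generically: evolve the coefficients of a linear prediction function to maximize agreement with gold-standard labels. The features and labels below are synthetic and two-dimensional; the real algorithm used 16 PAT-derived features and a price function combining sensitivity, specificity and agreement.

```python
import random

def agreement(coeffs, features, labels):
    """Fraction of epochs where the thresholded linear score matches
    the gold-standard label."""
    correct = 0
    for x, lab in zip(features, labels):
        score = sum(c * xi for c, xi in zip(coeffs, x))
        correct += (1 if score > 0 else 0) == lab
    return correct / len(labels)

def evolve(features, labels, n_coeffs=2, pop=30, gens=40, seed=1):
    """Tiny GA: keep the fittest half (elitism), breed children by
    averaging two parents plus Gaussian mutation."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(n_coeffs)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: agreement(c, features, labels),
                        reverse=True)
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        population = parents + children
    return max(population, key=lambda c: agreement(c, features, labels))
```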

  1. AIRSAR Web-Based Data Processing

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne

    2007-01-01

The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. It also provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and compensation of anomalous data. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system that automatically generates co-registered multi-frequency images from both polarimetric and interferometric data collection modes at 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensing community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image from an entire 90-GB tape of SAR raw data (recorded at 32 MB/s) overnight without operator intervention. The software also allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user-friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates products according to each data processing request stored in the database via a queue management system. Users obtain automatic generation of co-registered multi-frequency images, as the software performs polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for any of the 12 radar modes.

  2. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains

    PubMed Central

    Souza, Junior Silva; da Silva, Gercina Gonçalves

    2016-01-01

The classification of pollen species and types is an important task in many areas, such as forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for Brazilian Savannah pollen types that can be used to train and test computer-vision-based automatic pollen classifiers. A first baseline of human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. To assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned and tested. The results of these tests are also presented in this paper. PMID:27276196

  3. MDTS: automatic complex materials design using Monte Carlo tree search.

    PubMed

    M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-01-01

Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the computer game of Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.
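A minimal sketch of the Monte Carlo tree search loop that a library like MDTS builds on, under assumptions of our own: a toy binary Si/Ge chain and an invented `property_score` stand in for the real structure representation and the black-box property calculation:

```python
import math
import random

N_SITES = 10
rng = random.Random(7)

def property_score(structure):
    """Invented black-box objective: rewards alternating Si(0)/Ge(1) sites."""
    return sum(structure[i] != structure[i + 1]
               for i in range(len(structure) - 1))

class Node:
    def __init__(self, prefix):
        self.prefix = prefix       # partial structure, e.g. [0, 1, 1]
        self.children = {}         # action (0 = Si, 1 = Ge) -> Node
        self.visits = 0
        self.total = 0.0

def ucb(parent, child, c=1.4):
    """Upper confidence bound balancing exploration and exploitation."""
    if child.visits == 0:
        return float("inf")
    return (child.total / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts(iterations=2000):
    root = Node([])
    best, best_score = None, -1.0
    for _ in range(iterations):
        node, path = root, [root]
        # selection: descend through fully expanded nodes by UCB
        while len(node.prefix) < N_SITES and len(node.children) == 2:
            node = max(node.children.values(), key=lambda ch: ucb(node, ch))
            path.append(node)
        # expansion: add one untried child action
        if len(node.prefix) < N_SITES:
            action = 0 if 0 not in node.children else 1
            node.children[action] = Node(node.prefix + [action])
            node = node.children[action]
            path.append(node)
        # simulation: random completion of the structure
        structure = node.prefix + [rng.randint(0, 1)
                                   for _ in range(N_SITES - len(node.prefix))]
        score = property_score(structure)
        if score > best_score:
            best, best_score = structure, score
        # backpropagation
        for n in path:
            n.visits += 1
            n.total += score
    return best, best_score
```

Note how the only problem-specific inputs are the action set and the scoring function; there are no tuning parameters beyond the standard UCB constant, which mirrors the "works autonomously" claim.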

  4. MDTS: automatic complex materials design using Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Dieb, Thaer M.; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-12-01

Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the computer game of Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.

  5. Supercomputer simulations of structure formation in the Universe

    NASA Astrophysics Data System (ADS)

    Ishiyama, Tomoaki

    2017-06-01

We describe the implementation and performance results of our massively parallel MPI/OpenMP hybrid TreePM code for large-scale cosmological N-body simulations. For domain decomposition, a recursive multi-section algorithm is used, and the sizes of the domains are automatically set so that the total calculation time is the same for all processes. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. For a benchmark simulation with two trillion particles, the average performance on the full system of the K computer (82,944 nodes, 663,552 cores in total) is 5.8 Pflops, which corresponds to 55% of the peak speed.
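The cost-balanced recursive decomposition can be illustrated with a small sketch. This simplifies the paper's recursive multi-section to recursive bisection with a cost-proportional cut, and uses per-particle weights where the real code balances measured calculation times:

```python
def decompose(particles, weights, n_domains, axis=0):
    """Recursively bisect along alternating axes so that each branch
    receives a share of the total per-particle calculation cost
    proportional to the number of domains it will contain."""
    if n_domains == 1:
        return [particles]
    n_left = n_domains // 2
    # sort particle indices along the current axis and cut at the
    # point where the accumulated cost reaches the left branch's share
    order = sorted(range(len(particles)), key=lambda i: particles[i][axis])
    target = sum(weights) * n_left / n_domains
    acc, cut = 0.0, 0
    for k, i in enumerate(order):
        acc += weights[i]
        cut = k + 1
        if acc >= target:
            break
    next_axis = (axis + 1) % len(particles[0])
    left_ids, right_ids = order[:cut], order[cut:]
    return (decompose([particles[i] for i in left_ids],
                      [weights[i] for i in left_ids], n_left, next_axis)
            + decompose([particles[i] for i in right_ids],
                        [weights[i] for i in right_ids],
                        n_domains - n_left, next_axis))
```

With uniform weights this reduces to an equal-count split; with timing-derived weights, slow regions get geometrically smaller domains.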

  6. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  7. Automatic theory generation from analyst text files using coherence networks

    NASA Astrophysics Data System (ADS)

    Shaffer, Steven C.

    2014-05-01

This paper describes a three-phase process of extracting knowledge from analyst textual reports. Phase 1 involves performing natural language processing on the source text to extract subject-predicate-object triples. In phase 2, these triples are fed into a coherence network analysis process using a genetic algorithm optimization. Finally, the highest-value subnetworks are processed into a semantic network graph for display. Initial work on a well-known data set (a Wikipedia article on Abraham Lincoln) has shown excellent results without any specific tuning. Next, we ran the process on the SYNthetic Counter-INsurgency (SYNCOIN) data set, developed at Penn State, yielding interesting and potentially useful results.

  8. Wind turbine extraction from high spatial resolution remote sensing images based on saliency detection

    NASA Astrophysics Data System (ADS)

    Chen, Jingbo; Yue, Anzhi; Wang, Chengyi; Huang, Qingqing; Chen, Jiansheng; Meng, Yu; He, Dongxu

    2018-01-01

The wind turbine is a device that converts the wind's kinetic energy into electrical power. Accurate and automatic extraction of wind turbines is useful for government departments when planning wind power plant projects. A hybrid and practical framework based on saliency detection for wind turbine extraction, using Google Earth images at a spatial resolution of 1 m, is proposed. It can be viewed as a two-phase procedure: coarse detection and fine extraction. In the first stage, we introduced a frequency-tuned saliency detection approach for initially detecting the area of interest of the wind turbines. This method exploits features of color and luminance, is simple to implement, and is computationally efficient. Taking into account the complexity of remote sensing images, in the second stage we proposed a fast method for fine-tuning results in the frequency domain and then extracted wind turbines from these salient objects by removing the irrelevant salient areas according to the special properties of the wind turbines. Experiments demonstrated that our approach consistently obtains higher precision and better recall rates. Our method was also compared with other techniques from the literature, which showed that it is more applicable and robust.
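The frequency-tuned saliency idea (in the spirit of Achanta et al.) can be sketched on a single channel: saliency at a pixel is the distance between the image's mean value and a blurred version of that pixel. The full method works on the Lab color space with a Gaussian blur; this toy version assumes one grayscale channel and a box blur:

```python
def box_blur(img, r=1):
    """Small box blur standing in for the Gaussian blur of the FT method."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def ft_saliency(img):
    """Frequency-tuned saliency on one channel: |mean(img) - blur(img)|.
    Pixels far from the global average (e.g. a bright turbine against
    uniform terrain) score high; smooth background scores low."""
    h, w = len(img), len(img[0])
    mean = sum(sum(row) for row in img) / (h * w)
    blurred = box_blur(img)
    return [[abs(mean - blurred[y][x]) for x in range(w)] for y in range(h)]
```

Thresholding the resulting map gives the coarse regions of interest that the second, fine-extraction stage would then filter by object properties.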

  9. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    PubMed Central

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scans is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning the classification performance of a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and seamless performance tuning. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. PMID:26346558

  10. Intervertebral disc detection in X-ray images using faster R-CNN.

    PubMed

    Ruhan Sa; Owens, William; Wiegand, Raymond; Studin, Mark; Capoferri, Donald; Barooha, Kenneth; Greaux, Alexander; Rattray, Robert; Hutton, Adam; Cintineo, John; Chaudhary, Vipin

    2017-07-01

Automatic identification of specific osseous landmarks on spinal radiographs can be used to automate calculations for correcting ligament instability and injury, which affect 75% of patients injured in motor vehicle accidents. In this work, we propose to use a deep learning based object detection method as the first step towards identifying landmark points in lateral lumbar X-ray images. The significant breakthrough of deep learning technology has made it a prevailing choice for perception-based applications; however, the lack of large annotated training datasets has brought challenges to utilizing the technology in the medical image processing field. In this work, we propose to fine-tune a deep network, Faster R-CNN, a state-of-the-art deep detection network in the natural image domain, using small annotated clinical datasets. In the experiments we show that, by using only 81 lateral lumbar X-ray training images, one can achieve much better performance than a traditional sliding-window detection method on hand-crafted features. Furthermore, we fine-tuned the network using 974 training images and tested it on 108 images, achieving an average precision of 0.905 with an average computation time of 3 seconds per image, greatly outperforming traditional methods in terms of accuracy and efficiency.

  11. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune'' project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.

  12. Lévy flight artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She

    2016-08-01

Artificial bee colony (ABC) optimisation algorithm is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity which helps exploration at the cost of exploitation of the search space. In the ABC, there is a high chance of skipping the true solution due to its large step sizes. In order to balance diversity and convergence in the ABC, a Lévy flight inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), has both local and global search capability simultaneously, achieved by tuning the Lévy flight parameters and thus automatically tuning the step sizes. In the LFABC, new solutions are generated around the best solution, which helps to enhance the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. The experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent variants of ABC, namely Gbest-guided ABC, best-so-far ABC and modified ABC, in most of the experiments.
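Heavy-tailed Lévy steps are commonly generated with Mantegna's algorithm, sketched below together with an LFABC-style move around the best solution. The update form and the `alpha` scaling are illustrative assumptions, not the exact LFABC equations from the paper:

```python
import math
import random

random.seed(3)

def levy_step(beta=1.5):
    """One Lévy-flight step via Mantegna's algorithm.

    u ~ N(0, sigma_u), v ~ N(0, 1), step = u / |v|^(1/beta); small beta
    gives heavier tails, i.e. more frequent long jumps.
    """
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_search_move(current, best, alpha=0.1, beta=1.5):
    """Generate a new candidate around the best solution with
    heavy-tailed step sizes: mostly local refinement, with occasional
    long exploratory jumps, so step sizes are tuned 'automatically'."""
    return [b + alpha * levy_step(beta) * (c - b)
            for c, b in zip(current, best)]
```

The heavy tail is what balances exploitation (many small steps near the best solution) against exploration (rare large jumps), without a hand-set step-size schedule.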

  13. Computer-aided classification of lung nodules on computed tomography images via deep learning technique.

    PubMed

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of an automatic exploitation feature and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain.

  14. Deep feature extraction and combination for synthetic aperture radar target classification

    NASA Astrophysics Data System (ADS)

    Amrani, Moussa; Jiang, Feng

    2017-10-01

    Feature extraction has always been a difficult problem in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). It is very important to select discriminative features to train a classifier, which is a prerequisite. Inspired by the great success of convolutional neural network (CNN), we address the problem of SAR target classification by proposing a feature extraction method, which takes advantage of exploiting the extracted deep features from CNNs on SAR images to introduce more powerful discriminative features and robust representation ability for them. First, the pretrained VGG-S net is fine-tuned on moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after a simple preprocessing is performed, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused by using a traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, K-nearest neighbors algorithm based on LogDet divergence-based metric learning triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.

  15. MCMAC-cVT: a novel on-line associative memory based CVT transmission control system.

    PubMed

    Ang, K K; Quek, C; Wahab, A

    2002-03-01

This paper describes a novel application of an associative memory called the Modified Cerebellar Articulation Controller (MCMAC) (Int. J. Artif. Intell. Engng, 10 (1996) 135) in a continuously variable transmission (CVT) control system. It allows on-line tuning of the associative memory and produces an effective gain schedule for the automatic selection of the CVT gear ratio. Various control algorithms are investigated to control the CVT gear ratio so as to maintain the engine speed within a narrow range of efficient operating speeds, independently of the vehicle velocity. Extensive simulation results are presented to evaluate the control performance of a direct digital PID control algorithm with auto-tuning (Trans. ASME, 64 (1942)) and an anti-windup mechanism. In particular, these results are contrasted against the control performance produced using the MCMAC with momentum, neighborhood learning and Averaged Trapezoidal Output (MCMAC-ATO) as the neural control algorithm for controlling the CVT. Simulation results show the reduced control fluctuations and improved learning capability of the MCMAC-ATO without incurring greater memory requirements. In particular, MCMAC-ATO is able to learn and control the CVT simultaneously while still maintaining acceptable control performance.
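A minimal discrete PID with output saturation and a back-calculation anti-windup term, in the spirit of the baseline controller evaluated in the paper. The gains, limits and the first-order plant in the usage note are illustrative assumptions, not the CVT model:

```python
class PID:
    """Discrete PID with back-calculation anti-windup: when the output
    saturates, the excess is bled out of the integrator so it does not
    wind up while the actuator is at its limit."""

    def __init__(self, kp, ki, kd, dt, out_min, out_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        u_sat = min(max(u, self.out_min), self.out_max)
        if u != u_sat:  # anti-windup: remove the saturated excess
            self.integral -= (u - u_sat) / max(self.ki, 1e-9)
        return u_sat
```

Driving a simple first-order plant `x' = -x + u` toward a setpoint of 1.0 with this controller settles near the setpoint within a few closed-loop time constants.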

  16. Transition-Tempered Metadynamics: Robust, Convergent Metadynamics via On-the-Fly Transition Barrier Estimation.

    PubMed

    Dama, James F; Rotskoff, Grant; Parrinello, Michele; Voth, Gregory A

    2014-09-09

    Well-tempered metadynamics has proven to be a practical and efficient adaptive enhanced sampling method for the computational study of biomolecular and materials systems. However, choosing its tunable parameter can be challenging and requires balancing a trade-off between fast escape from local metastable states and fast convergence of an overall free energy estimate. In this article, we present a new smoothly convergent variant of metadynamics, transition-tempered metadynamics, that removes that trade-off and is more robust to changes in its own single tunable parameter, resulting in substantial speed and accuracy improvements. The new method is specifically designed to study state-to-state transitions in which the states of greatest interest are known ahead of time, but transition mechanisms are not. The design is guided by a picture of adaptive enhanced sampling as a means to increase dynamical connectivity of a model's state space until percolation between all points of interest is reached, and it uses the degree of dynamical percolation to automatically tune the convergence rate. We apply the new method to Brownian dynamics on 48 random 1D surfaces, blocked alanine dipeptide in vacuo, and aqueous myoglobin, finding that transition-tempered metadynamics substantially and reproducibly improves upon well-tempered metadynamics in terms of first barrier crossing rate, convergence rate, and robustness to the choice of tuning parameter. Moreover, the trade-off between first barrier crossing rate and convergence rate is eliminated: the new method drives escape from an initial metastable state as fast as metadynamics without tempering, regardless of tuning.
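The well-tempered baseline can be sketched in one dimension: Gaussian hills are deposited along an overdamped Langevin trajectory with heights scaled down by the accumulated bias. The deposition interval, hill width and dynamics parameters below are toy assumptions; the transition-tempered variant described in the paper would replace the exponential tempering rule with a schedule driven by dynamical percolation between known end states (not modeled here):

```python
import math
import random

def run_well_tempered(force, x0=-1.0, n_steps=20000, w0=0.5, sigma=0.2,
                      delta_t=5.0, kT=1.0, dt=1e-3, gamma=0.1, seed=1):
    """Overdamped Langevin dynamics with well-tempered hill deposition.
    Hill heights shrink as exp(-V_bias / (kT * delta_t)), so sampling
    converges smoothly instead of overfilling visited basins."""
    rng = random.Random(seed)
    hills = []  # (center, height)

    def bias(x):
        return sum(h * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
                   for c, h in hills)

    def bias_force(x):  # -d(bias)/dx: pushes away from deposited hills
        return sum(h * (x - c) / sigma ** 2
                   * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
                   for c, h in hills)

    x, max_x = x0, x0
    for step in range(n_steps):
        if step % 500 == 0:  # periodic tempered deposition
            hills.append((x, w0 * math.exp(-bias(x) / (kT * delta_t))))
        f = force(x) + bias_force(x)
        x += f * dt / gamma + rng.gauss(0.0, math.sqrt(2 * kT * dt / gamma))
        max_x = max(max_x, x)
    return max_x, hills
```

On a double well `U(x) = (x^2 - 1)^2`, the accumulating bias drives escape from the initial basin while later hills shrink, illustrating the escape-speed/convergence trade-off that the transition-tempered schedule is designed to remove.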

  17. Sharing control with haptics: seamless driver support from manual to automatic control.

    PubMed

    Mulder, Mark; Abbink, David A; Boer, Erwin R

    2012-10-01

Haptic shared control was investigated as a human-machine interface that can intuitively share control between drivers and an automatic controller for curve negotiation. As long as automation systems are not fully reliable, a role remains for the driver to stay vigilant to the system and the environment in order to catch any automation errors. The conventional binary switch between supervisory and manual control has many known issues, and haptic shared control is a promising alternative. A total of 42 respondents of varying age and driving experience participated in a driving experiment in a fixed-base simulator, in which curve negotiation behavior during shared control was compared to behavior during manual control, as well as to three haptic tunings of an automatic controller without driver intervention. Under the experimental conditions studied, the main benefit of haptic shared control compared to manual control was that less control activity (16% lower steering wheel reversal rate, 15% lower standard deviation of steering wheel angle) was needed to realize improved safety performance (e.g., an 11% smaller peak lateral error). Full automation removed the need for any human control activity and improved safety performance further (e.g., a 35% smaller peak lateral error) but put the human in a supervisory position. Haptic shared control kept the driver in the loop, with enhanced performance at reduced control activity, mitigating the known issues that plague full automation. Haptic support for vehicular control ultimately seeks to intuitively combine human intelligence and creativity with the benefits of automation systems.

  18. Phase coherence adaptive processor for automatic signal detection and identification

    NASA Astrophysics Data System (ADS)

    Wagstaff, Ronald A.

    2006-05-01

A continuously adapting acoustic signal processor with an automatic detection/decision aid is presented. Its purpose is to preserve the signals of tactical interest, and filter out other signals and noise. It utilizes single sensor or beamformed spectral data and transforms the signal and noise phase angles into "aligned phase angles" (APA). The APA increase the phase temporal coherence of signals and leave the noise incoherent. Coherence thresholds are set, which are representative of the type of source "threat vehicle" and the geographic area or volume in which it is operating. These thresholds separate signals, based on the "quality" of their APA coherence. An example is presented in which signals from a submerged source in the ocean are preserved, while clutter signals from ships and noise are entirely eliminated. Furthermore, the "signals of interest" were identified by the processor's automatic detection aid. Similar performance is expected for air and ground vehicles. The processor's equations are formulated in such a manner that they can be tuned to eliminate noise and exploit signal, based on the "quality" of their APA temporal coherence. The mathematical formulation for this processor is presented, including the method by which the processor continuously self-adapts. Results show nearly complete elimination of noise, with only the selected category of signals remaining, and accompanying enhancements in spectral and spatial resolution. In most cases, the concept of signal-to-noise ratio loses significance, and the "adaptive automatic detection/decision aid" is more relevant.

  19. Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying each image based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework, which was originally developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one "strong classifier" show good accuracy, with high true positive rates. For the four categories the results are: TP = 90.38%, TN = 67.88%, FP = 32.12% and FN = 9.62%.
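The weak-to-strong combination via AdaBoost can be sketched as follows. The decision stumps and the interval-labeled toy data in the usage note are illustrative assumptions, not the MCA framework's texture features or decision trees:

```python
import math

def adaboost(samples, labels, stumps, rounds=30):
    """AdaBoost: combine 'weak classifiers' (weighted error below 50%)
    into a weighted-vote 'strong classifier'.

    Labels and stump outputs are in {-1, +1}.
    """
    n = len(samples)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, stump)
    for _ in range(rounds):
        # pick the weak learner with the lowest weighted error
        errs = [sum(wi for wi, x, y in zip(w, samples, labels) if s(x) != y)
                for s in stumps]
        best = min(range(len(stumps)), key=errs.__getitem__)
        e = min(max(errs[best], 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - e) / e)  # vote weight for this round
        stump = stumps[best]
        ensemble.append((alpha, stump))
        # boost the weight of misclassified samples, then renormalize
        w = [wi * math.exp(-alpha if stump(x) == y else alpha)
             for wi, x, y in zip(w, samples, labels)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * s(x) for a, s in ensemble) > 0 else -1
```

Even though no single threshold stump can separate an interval-shaped class, the boosted weighted vote learns it within a few rounds.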

  20. Endocavitary thermal therapy by MRI-guided phased-array contact ultrasound: experimental and numerical studies on the multi-input single-output PID temperature controller's convergence and stability.

    PubMed

    Salomir, Rares; Rata, Mihaela; Cadis, Daniela; Petrusca, Lorena; Auboiroux, Vincent; Cotton, François

    2009-10-01

Endocavitary high intensity contact ultrasound (HICU) may offer interesting therapeutic potential for fighting localized cancer in the esophageal or rectal wall. On-line MR guidance of the thermotherapy permits both excellent targeting of the pathological volume and accurate monitoring of the temperature elevation during the procedure. In this article, the authors address the issue of the automatic temperature control for endocavitary phased-array HICU and propose a tailor-made thermal model for this specific application. The convergence and stability of the feedback loop were investigated against tuning errors in the controller's parameters and against input noise, through ex vivo experimental studies and through numerical simulations in which nonlinear response of tissue was considered as expected in vivo. An MR-compatible, 64-element, cooled-tip, endorectal cylindrical phased-array applicator of contact ultrasound was integrated with fast MR thermometry to provide automatic feedback control of the temperature evolution. An appropriate phase law was applied per set of eight adjacent transducers to generate a quasiplanar wave, or a slightly convergent one (over the circular dimension). A 2D physical model, compatible with on-line numerical implementation, took into account (1) the ultrasound-mediated energy deposition, (2) the heat diffusion in tissue, and (3) the heat sink effect in the tissue adjacent to the tip-cooling balloon. This linear model was coupled to a PID compensation algorithm to obtain a multi-input single-output static-tuning temperature controller. Either the temperature at one static point in space (situated on the symmetry axis of the beam) or the maximum temperature in a user-defined ROI was tracked according to a predefined target curve. The convergence domain in the space of controller's parameters was experimentally explored ex vivo.
The behavior of the static-tuning PID controller was numerically simulated based on a discrete-time iterative solution of the bioheat transfer equation in 3D and considering temperature-dependent ultrasound absorption and blood perfusion. The intrinsic accuracy of the implemented controller was approximately 1% in ex vivo trials when providing correct estimates for energy deposition and heat diffusivity. Moreover, the feedback loop demonstrated excellent convergence and stability over a wide range of the controller's parameters, deliberately set to erroneous values. In the extreme case of strong underestimation of the ultrasound energy deposition in tissue, the temperature tracking curve alone, at the initial stage of the MR-controlled HICU treatment, was not a sufficient indicator for a globally stable behavior of the feedback loop. Our simulations predicted that the controller would be able to compensate for tissue perfusion and for temperature-dependent ultrasound absorption, although these effects were not included in the controller's equation. The explicit pattern of acoustic field was not required as input information for the controller, avoiding time-consuming numerical operations. The study demonstrated the potential advantages of PID-based automatic temperature control adapted to phased-array MR-guided HICU therapy. Further studies will address the integration of this ultrasound device with a miniature RF coil for high resolution MRI and, subsequently, the experimental behavior of the controller in vivo.

  1. An Auto-Tuning PI Control System for an Open-Circuit Low-Speed Wind Tunnel Designed for Greenhouse Technology

    PubMed Central

    Espinoza, Karlos; Valera, Diego L.; Torres, José A.; López, Alejandro; Molina-Aiz, Francisco D.

    2015-01-01

Wind tunnels are a key experimental tool for the analysis of airflow parameters in many fields of application. Despite their great potential impact on agricultural research, few contributions have dealt with the development of automatic control systems for wind tunnels in the field of greenhouse technology. The objective of this paper is to present an automatic control system that provides precision and speed of measurement, as well as efficient data processing in low-speed wind tunnel experiments for greenhouse engineering applications. The system is based on an algorithm that identifies the system model and calculates the optimum PI controller. The validation of the system was performed on a cellulose evaporative cooling pad and on insect-proof screens to assess its response to perturbations. The control system provided an accuracy of <0.06 m·s⁻¹ for airflow speed and <0.50 Pa for pressure drop, thus permitting the reproducibility and standardization of the tests. The proposed control system also incorporates a fully-integrated software unit that manages the tests in terms of airflow speed and pressure drop set points. PMID:26274962
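The identify-then-tune loop can be sketched in two steps: fit a first-order model to a recorded step response, then derive PI gains from it. The 63.2%-rise identification rule and the IMC-style tuning below are common textbook choices assumed for illustration, not necessarily the paper's exact algorithm:

```python
def identify_first_order(t, y, u_step):
    """Estimate process gain K and time constant tau from a recorded
    step response: K from the steady-state change, tau as the time at
    which the response has covered 63.2% of that change."""
    y0, yss = y[0], y[-1]
    K = (yss - y0) / u_step
    target = y0 + 0.632 * (yss - y0)
    tau = next(ti for ti, yi in zip(t, y) if yi >= target)
    return K, tau

def pi_from_model(K, tau, lam=None):
    """IMC-style PI tuning for a first-order process: lam is the desired
    closed-loop time constant (defaults to tau, a moderate choice)."""
    lam = tau if lam is None else lam
    kp = tau / (K * lam)
    ki = kp / tau
    return kp, ki
```

Re-running the identification before each test series lets the controller adapt to a different pad or screen installed in the tunnel, which is the essence of auto-tuning.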

  2. Development and Operation of an Automatic Rotor Trim Control System for the UH-60 Individual Blade Control Wind Tunnel Test

    NASA Technical Reports Server (NTRS)

    Theodore, Colin R.; Tischler, Mark B.

    2010-01-01

    An automatic rotor trim control system was developed and successfully used during a wind tunnel test of a full-scale UH-60 rotor system with Individual Blade Control (IBC) actuators. The trim control system allowed rotor trim to be set more quickly, precisely and repeatably than in previous wind tunnel tests. This control system also allowed the rotor trim state to be maintained during transients and drift in wind tunnel flow, and through changes in IBC actuation. The ability to maintain a consistent rotor trim state was key to quickly and accurately evaluating the effect of IBC on rotor performance, vibration, noise and loads. This paper presents details of the design and implementation of the trim control system including the rotor system hardware, trim control requirements, and trim control hardware and software implementation. Results are presented showing the effect of IBC on rotor trim and dynamic response, a validation of the rotor dynamic simulation used to calculate the initial control gains and tuning of the control system, and the overall performance of the trim control system during the wind tunnel test.

  3. Patient-Specific Deep Architectural Model for ECG Classification

    PubMed Central

    Luo, Kan; Cuschieri, Alfred

    2017-01-01

Heartbeat classification is a crucial step for arrhythmia diagnosis during electrocardiographic (ECG) analysis. The new scenario of wireless body sensor network- (WBSN-) enabled ECG monitoring puts forward a higher-level demand for this traditional ECG analysis task. Previously reported methods mainly addressed this requirement with shallow-structured classifiers and expert-designed features. In this study, the modified frequency slice wavelet transform (MFSWT) was first employed to produce a time-frequency image of the heartbeat signal, and a deep learning (DL) method was then applied for heartbeat classification. We propose a novel model incorporating automatic feature abstraction and a deep neural network (DNN) classifier. Features were automatically abstracted by a stacked denoising auto-encoder (SDA) from the transferred time-frequency image, and the DNN classifier was constructed from an encoder layer of the SDA and a softmax layer. In addition, a deterministic patient-specific heartbeat classifier was achieved by fine-tuning on heartbeat samples that included a small subset of individual samples. The performance of the proposed model was evaluated on the MIT-BIH arrhythmia database. Results showed that an overall accuracy of 97.5% was achieved using the proposed model, confirming that the proposed DNN model is a powerful tool for heartbeat pattern recognition. PMID:29065597

  4. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    PubMed

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis remain open. In this study, we address the problem of fitting the parameters of a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that cannot escape local performance maxima, such as the hill climbing algorithm, often terminate in a merely local maximum.
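The local-maxima pitfall described in the abstract is easy to reproduce on a toy one-dimensional "performance surface": plain hill climbing stalls at a local maximum, while random restarts usually find the global one. The surface below is invented for illustration:

```python
import math
import random

def score(x):
    """Toy multimodal 'segmentation quality' surface: a local maximum
    near x = 1 and the global maximum near x = 4."""
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.1, iters=200):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=score)
        if best == x:        # no neighbor improves: stuck on some maximum
            break
        x = best
    return x

def with_restarts(n, lo=0.0, hi=6.0, seed=0):
    rng = random.Random(seed)
    return max((hill_climb(rng.uniform(lo, hi)) for _ in range(n)), key=score)

local = hill_climb(0.0)      # starts in the basin of the local maximum
best = with_restarts(10)     # restarts can escape it
```

Genetic algorithms achieve the same escape through mutation and recombination; the restart loop is just the simplest strategy that can jump between basins.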

  5. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

Introduction: Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis remain open. Methods: In this study, we address the problem of fitting the parameters of a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that cannot escape local performance maxima, such as the hill climbing algorithm, often terminate in a merely local maximum. PMID:23766941

  6. Objective Quality Assessment for Color-to-Gray Image Conversion.

    PubMed

    Ma, Kede; Zhao, Tiesong; Zeng, Kai; Wang, Zhou

    2015-12-01

Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to comparing the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potential of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.
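A rough sketch of an SSIM-style luminance/contrast/structure comparison between a color image's luminance channel and a grayscale conversion. The global (non-windowed) statistics and the stability constant are simplifications for illustration, not the actual C2G-SSIM definition:

```python
import numpy as np

def gray_quality(color, gray, c=1e-4):
    """SSIM-style comparison: luminance (means), contrast (stds), and
    structure (correlation) between the color image's luminance channel
    and a grayscale conversion. Global statistics are a simplification."""
    y = 0.299 * color[..., 0] + 0.587 * color[..., 1] + 0.114 * color[..., 2]
    y, g = y.ravel(), gray.astype(float).ravel()
    lum = (2 * y.mean() * g.mean() + c) / (y.mean() ** 2 + g.mean() ** 2 + c)
    con = (2 * y.std() * g.std() + c) / (y.var() + g.var() + c)
    cov = ((y - y.mean()) * (g - g.mean())).mean()
    struct = (cov + c) / (y.std() * g.std() + c)
    return lum * con * struct

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
perfect = gray_quality(img, luma)                         # faithful conversion
flat = gray_quality(img, np.full((32, 32), luma.mean()))  # contrast destroyed
```

A faithful conversion scores near 1, while a constant-gray output is punished through the contrast term, which is the kind of discrimination an objective C2G metric needs.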

  7. High-temperature microphone system. [for measuring pressure fluctuations in gases at high temperature

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J. (Inventor)

    1979-01-01

Pressure fluctuations in air or other gases in an area of elevated temperature are measured using a condenser microphone located in the area of elevated temperature and electronics, for processing changes in the microphone capacitance, located outside the area and connected to the microphone by means of a high-temperature cable assembly. The microphone includes apparatus for decreasing the undesirable change in microphone sensitivity at high temperatures. The high-temperature cable assembly operates as a half-wavelength transmission line in an AM carrier system and maintains a large temperature gradient between the two ends of the cable assembly. The processing electronics utilizes a voltage-controlled oscillator for automatic tuning, thereby increasing the sensitivity of the measuring apparatus.

  8. An automatic experimental apparatus to study arm reaching in New World monkeys.

    PubMed

    Yin, Allen; An, Jehi; Lehew, Gary; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2016-05-01

    Several species of the New World monkeys have been used as experimental models in biomedical and neurophysiological research. However, a method for controlled arm reaching tasks has not been developed for these species. We have developed a fully automated, pneumatically driven, portable, and reconfigurable experimental apparatus for arm-reaching tasks suitable for these small primates. We have utilized the apparatus to train two owl monkeys in a visually-cued arm-reaching task. Analysis of neural recordings demonstrates directional tuning of the M1 neurons. Our apparatus allows automated control, freeing the experimenter from manual experiments. The presented apparatus provides a valuable tool for conducting neurophysiological research on New World monkeys. Copyright © 2016. Published by Elsevier B.V.

  9. Image quality enhancement for skin cancer optical diagnostics

    NASA Astrophysics Data System (ADS)

    Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey

    2017-12-01

This research presents an analysis of image quality problems and enhancement proposals for the biophotonics area. The sources of image quality problems are reviewed and analyzed, and those with the greatest impact are examined in terms of a specific biophotonic task: skin cancer diagnostics. The results indicate that the main problem for skin cancer analysis is uneven skin illumination. Since illumination problems often cannot be prevented, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show improved diagnostic results after applying the proposed filter. Moreover, the filter does not reduce the quality of diagnostic results for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters; further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
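A minimal sketch of this kind of correction, assuming that low-frequency filtering here means subtracting a blurred (low-pass) estimate of the illumination field; the kernel size and the synthetic image are invented for illustration:

```python
import numpy as np

def box_blur_1d(a, k):
    """Length-preserving moving average with edge-replicated padding (k odd)."""
    padded = np.pad(a, k // 2, mode="edge")
    return np.convolve(padded, np.ones(k) / k, mode="valid")

def remove_low_freq(img, k=31):
    """Subtract a box-blurred (low-frequency) illumination estimate and
    restore the original mean brightness; k is a hypothetical kernel size."""
    low = np.apply_along_axis(box_blur_1d, 0, img, k)   # blur down columns
    low = np.apply_along_axis(box_blur_1d, 1, low, k)   # then across rows
    return img - low + img.mean()

# synthetic "skin texture" under a strong left-to-right illumination ramp
yy, xx = np.mgrid[0:64, 0:64]
texture = 0.5 + 0.1 * ((xx // 4 + yy // 4) % 2)
img = texture * np.linspace(0.5, 1.5, 64)[None, :]
flat = remove_low_freq(img)
```

The slowly varying ramp ends up almost entirely in the blurred estimate, so subtracting it flattens the illumination while leaving the fine texture intact.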

  10. Automatic Setting Procedure for Exoskeleton-Assisted Overground Gait: Proof of Concept on Stroke Population

    PubMed Central

    Gandolla, Marta; Guanziroli, Eleonora; D'Angelo, Andrea; Cannaviello, Giovanni; Molteni, Franco; Pedrocchi, Alessandra

    2018-01-01

Stroke-related locomotor impairments are often associated with abnormal timing and intensity of recruitment of the affected and non-affected lower-limb muscles. Restoring proper lower-limb muscle activation is a key factor in facilitating recovery of gait capacity and performance, and in reducing maladaptive plasticity. Ekso is a wearable powered exoskeleton robot able to support over-ground gait training. The user controls the exoskeleton by triggering each single step during the gait cycle. Fine-tuning of the exoskeleton control system is crucial: it is set according to the residual functional abilities of the patient, and it needs to ensure that the powered gait of the lower limbs is as physiological as possible. This work focuses on the definition of an automatic calibration procedure able to detect the best Ekso setting for each patient. EMG activity was recorded from the Tibialis Anterior, Soleus, Rectus Femoris, and Semitendinosus muscles in a group of 7 healthy controls and 13 neurological patients. The EMG signals were processed so as to obtain muscle activation patterns, and the mean muscle activation pattern derived from the controls cohort was set as the reference. The developed automatic calibration procedure requires the patient to perform overground walking trials supported by the exoskeleton while the parameter settings are changed. A Gait Metric index is calculated for each trial: the closer the performance is to the normative muscle activation pattern, in terms of both relative amplitude and timing, the higher the Gait Metric index. The trial with the best Gait Metric index corresponds to the best parameter set. Notably, the automatic computational calibration procedure is based on the same number of overground walking trials, and the same experimental set-up, as the current manual calibration procedure. The proposed approach supports the rehabilitation team in the setting procedure. It has been demonstrated to be robust and to be in agreement with the current gold standard (i.e., manual calibration performed by an expert engineer). The use of a graphical user interface is a promising tool for the effective use of an automatic procedure in a clinical context. PMID:29615890
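A hypothetical sketch of a Gait-Metric-like index: per-muscle Pearson correlation between a trial's EMG activation pattern and the normative pattern, averaged over muscles and mapped to [0, 1]. The z-scoring below makes this a timing/shape measure only; the paper's actual formula, which also weighs relative amplitude, is not reproduced here:

```python
import numpy as np

def gait_metric(trial, reference):
    """Hypothetical similarity index in [0, 1]: mean per-muscle Pearson
    correlation between trial and normative EMG activation patterns."""
    scores = []
    for t, r in zip(trial, reference):
        t = (t - t.mean()) / t.std()          # z-score: removes amplitude
        r = (r - r.mean()) / r.std()
        scores.append(float((t * r).mean()))  # Pearson r in [-1, 1]
    return float(np.clip((np.mean(scores) + 1.0) / 2.0, 0.0, 1.0))

phase = np.linspace(0.0, 2.0 * np.pi, 100)
reference = [np.sin(phase) + 1.2, np.cos(phase) + 1.2]  # two "muscles"
good_trial = [1.5 * m for m in reference]               # right timing, scaled
shifted = [np.roll(m, 25) for m in reference]           # delayed activation
best = max([good_trial, shifted], key=lambda tr: gait_metric(tr, reference))
```

Picking the parameter set whose trial maximizes such an index is the selection step the calibration procedure automates.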

  11. Automatic Setting Procedure for Exoskeleton-Assisted Overground Gait: Proof of Concept on Stroke Population.

    PubMed

    Gandolla, Marta; Guanziroli, Eleonora; D'Angelo, Andrea; Cannaviello, Giovanni; Molteni, Franco; Pedrocchi, Alessandra

    2018-01-01

Stroke-related locomotor impairments are often associated with abnormal timing and intensity of recruitment of the affected and non-affected lower-limb muscles. Restoring proper lower-limb muscle activation is a key factor in facilitating recovery of gait capacity and performance, and in reducing maladaptive plasticity. Ekso is a wearable powered exoskeleton robot able to support over-ground gait training. The user controls the exoskeleton by triggering each single step during the gait cycle. Fine-tuning of the exoskeleton control system is crucial: it is set according to the residual functional abilities of the patient, and it needs to ensure that the powered gait of the lower limbs is as physiological as possible. This work focuses on the definition of an automatic calibration procedure able to detect the best Ekso setting for each patient. EMG activity was recorded from the Tibialis Anterior, Soleus, Rectus Femoris, and Semitendinosus muscles in a group of 7 healthy controls and 13 neurological patients. The EMG signals were processed so as to obtain muscle activation patterns, and the mean muscle activation pattern derived from the controls cohort was set as the reference. The developed automatic calibration procedure requires the patient to perform overground walking trials supported by the exoskeleton while the parameter settings are changed. A Gait Metric index is calculated for each trial: the closer the performance is to the normative muscle activation pattern, in terms of both relative amplitude and timing, the higher the Gait Metric index. The trial with the best Gait Metric index corresponds to the best parameter set. Notably, the automatic computational calibration procedure is based on the same number of overground walking trials, and the same experimental set-up, as the current manual calibration procedure. The proposed approach supports the rehabilitation team in the setting procedure. It has been demonstrated to be robust and to be in agreement with the current gold standard (i.e., manual calibration performed by an expert engineer). The use of a graphical user interface is a promising tool for the effective use of an automatic procedure in a clinical context.

  12. Modal characterization of the ASCIE segmented optics testbed: New algorithms and experimental results

    NASA Technical Reports Server (NTRS)

    Carrier, Alain C.; Aubrun, Jean-Noel

    1993-01-01

New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, as a new feature, the results are used to maintain high-quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response, to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.

  13. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    PubMed

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, the Intel Math Kernel Library (MKL), the GOTO numerical library, and the AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled with updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared with the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is a further 2.6% higher than that of the CL2.5 mode. The FP throughput is measured by the simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.
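The effect of a tuned BLAS such as ATLAS can be probed directly from NumPy, which dispatches matrix multiplication to whichever BLAS (ATLAS, MKL, OpenBLAS, ...) it was built against. A rough throughput probe, not the paper's benchmark methodology:

```python
import time
import numpy as np

def matmul_gflops(n=256, repeats=5):
    """Time an n x n double-precision matrix multiply; NumPy hands it to
    the tuned BLAS it was built with. Returns the best observed GFLOP/s."""
    a = np.random.default_rng(0).random((n, n))
    b = np.random.default_rng(1).random((n, n))
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        _ = a @ b
        best = min(best, time.perf_counter() - start)
    return 2.0 * n ** 3 / best / 1e9   # dgemm performs ~2*n^3 flops

rate = matmul_gflops()
```

Taking the best of several repeats reduces timer and cache-warmup noise, the same reason benchmark suites report peak rather than first-run figures.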

  14. Real-time Automatic Detectors of P and S Waves Using Singular Values Decomposition

    NASA Astrophysics Data System (ADS)

    Kurzon, I.; Vernon, F.; Rosenberger, A.; Ben-Zion, Y.

    2013-12-01

We implement a new method for the automatic detection of the primary P and S phases using Singular Value Decomposition (SVD) analysis. The method is based on the real-time iteration algorithm of Rosenberger (2010) for the SVD of three-component seismograms. Rosenberger's algorithm identifies the incidence angle by applying the SVD and separates the waveforms into their P and S components. We use the same algorithm with the modification that we filter the waveforms prior to the SVD, and then apply SNR (signal-to-noise ratio) detectors for picking the P and S arrivals on the new filtered, SVD-separated channels. A recent deployment in the San Jacinto Fault Zone (SJFZ) area provides a very dense seismic network that allows us to test the detection algorithm in diverse settings, such as events with different source mechanisms, stations with different site characteristics, and ray paths that diverge from the SVD approximation used in the algorithm (e.g., rays propagating within the fault and recorded on linear arrays crossing the fault). We have found that a Butterworth band-pass filter of 2-30 Hz, with four poles at each of the corner frequencies, shows the best performance over a large variety of events and stations within the SJFZ. Using the SVD detectors we obtain a similar number of P and S picks, which is rare for ordinary SNR detectors. For the actual real-time operation of the ANZA and SJFZ real-time seismic networks, the above filter (2-30 Hz) also shows very impressive performance, tested on many events and several aftershock sequences in the region, from the MW 5.2 of June 2005, through the MW 5.4 of July 2010, to the MW 4.7 of March 2013. Here we show the results of testing the detectors on the most complex and intense aftershock sequence, the MW 5.2 of June 2005, in which during the very first hour there were ~4 events a minute. This aftershock sequence was thoroughly reviewed by several analysts, who identified 294 events in the first hour, located in a condensed cluster around the main shock. We used this hour of events to fine-tune the automatic SVD detection, association, and location of the real-time system, reaching 37% automatic identification and location of events with a minimum of 10 stations per event; all events fall within the same condensed cluster, and there are no false events or large offsets in their locations. An ordinary SNR detector did not exceed 11% success, with a minimum of 8 stations per event, 2 false events, and a wider spread of events (not within the reviewed cluster). One of the main advantages of the SVD detectors for real-time operations is the actual separation between the P and S components, thereby significantly reducing the noise in picks detected by ordinary SNR detectors. The new method has been applied to a significant number of events within the SJFZ over the past 8 years, and is now in the final stage of real-time implementation at UCSD for the ANZA and SJFZ networks, tuned for automatic detection and location of local events.
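The SNR detector component can be sketched as a classic STA/LTA power-ratio trigger on a synthetic trace. The window lengths and threshold below are illustrative; the paper applies such detectors to band-passed, SVD-separated P and S channels:

```python
import numpy as np

def sta_lta(trace, sta=20, lta=200):
    """Short-term / long-term average power ratio, the classic SNR
    detector, computed with cumulative sums for O(n) cost."""
    power = np.asarray(trace, float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    sta_avg = (csum[sta:] - csum[:-sta]) / sta   # mean power, short window
    lta_avg = (csum[lta:] - csum[:-lta]) / lta   # mean power, long window
    # align both windows so they end at the same sample
    return sta_avg[lta - sta:] / (lta_avg + 1e-12)

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 2000)
trace[1200:1300] += 8 * np.sin(np.linspace(0, 20 * np.pi, 100))  # arrival
ratio = sta_lta(trace)
onset = int(np.argmax(ratio > 5)) + 200 - 1  # ratio[i] ends at sample i+lta-1
```

The trigger fires within a few samples of the synthetic arrival; on real data the pre-filtering and P/S separation determine how clean this ratio is.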

  15. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

    PubMed Central

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-01-01

An imaging-sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and can thus guide more precise lighting control. Before the system is deployed, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which the cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. In operation, the system first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control can be applied to achieve an optimal lighting effect. Many experimental results have shown that the proposed system can tune the LED lamp automatically according to environmental luminance changes. PMID:28208781

  16. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback.

    PubMed

    Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie

    2017-02-09

An imaging-sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and can thus guide more precise lighting control. Before the system is deployed, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which the cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. In operation, the system first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single-LEEM-based or the multiple-LEEMs-based control can be applied to achieve an optimal lighting effect. Many experimental results have shown that the proposed system can tune the LED lamp automatically according to environmental luminance changes.

  17. Comparison of quartz tuning forks and AlN-based extensional microresonators for viscosity measurements in oil/fuel mixtures

    NASA Astrophysics Data System (ADS)

    Toledo, J.; Manzaneque, T.; Hernando-García, J.; Vazquez, J.; Ababneh, A.; Seidel, H.; Lapuerta, M.; Sánchez-Rojas, J. L.

    2013-05-01

In-situ monitoring of the physical properties of liquids is of great interest in the automotive industry. For example, lubricants are subject to dilution with diesel fuel as a consequence of late-injection processes, which are necessary for regenerating diesel particulate filters. This dilution can be determined by tracking the viscosity and the density of the lubricant. Here we report tests of two resonators based on in-plane movement, exploring their capability to monitor oil dilution with diesel and biodiesel. One of the resonators is the commercially available millimeter-sized quartz tuning fork, working at 32.7 kHz. The second resonator is a state-of-the-art micron-sized AlN-based rectangular plate, actuated in the first extensional mode in the MHz range. Electrical impedance measurements were carried out to characterize the performance of the structures in various liquid media over a wide range of viscosities. These measurements were complemented by the development of low-cost electronic circuits that track the resonance frequency and the quality factor automatically; these two parameters allow the viscosity of the fluids under investigation to be obtained, as in the case of lubricant SAE 15W40 diluted with diesel and biodiesel.

  18. Optimizing the Placement of Burnable Poisons in PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Serkan; Ivanov, Kostadin; Levine, Samuel

    2005-07-15

The principal focus of this work is on developing a practical tool for designing the minimum amount of burnable poisons (BPs) for a pressurized water reactor using a typical Three Mile Island Unit 1 2-yr cycle as the reference design. The results of this study are to be applied to future reload designs. A new method, the Modified Power Shape Forced Diffusion (MPSFD) method, is presented that initially computes the BP cross section to force the power distribution into a desired shape. The method employs a simple formula that expresses the BP cross section as a function of the difference between the calculated radial power distributions (RPDs) and the limit set for the maximum RPD. This method places BPs into all fresh fuel assemblies (FAs) having an RPD greater than the limit. The MPSFD method then reduces the BP content by reducing the BPs in fresh FAs with the lowest RPDs. Finally, the minimum BP content is attained via a heuristic fine-tuning procedure. This new BP design program has been automated by incorporating the new MPSFD method in conjunction with the heuristic fine-tuning program. The program has automatically produced excellent results for the reference core, and has the potential to reduce fuel costs and save manpower.
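A toy rendition of the two-stage idea: place BPs wherever the power exceeds the target shape, then strip them back greedily from the lowest-power assemblies. The integer units, margin, and per-unit BP "worth" are invented for illustration, not reactor physics:

```python
def min_poison(powers, limit, worth=2, margin=5):
    """Toy two-stage BP loading heuristic in the spirit of the abstract.
    'worth' is a hypothetical power drop per BP unit (invented numbers)."""
    bp = [0] * len(powers)

    def peak():
        return max(p - worth * b for p, b in zip(powers, bp))

    # stage 1: force the power shape below the limit (with some margin)
    for i, p in enumerate(powers):
        while p - worth * bp[i] > limit - margin:
            bp[i] += 1
    # stage 2: heuristic fine-tuning, strip BP where power is lowest first
    for i in sorted(range(len(powers)), key=powers.__getitem__):
        while bp[i] > 0:
            bp[i] -= 1
            if peak() > limit:
                bp[i] += 1      # went one unit too far: put it back
                break
    return bp

# four fresh assemblies, relative powers x100, peaking limit 1.20
loading = min_poison([130, 118, 105, 95], limit=120)
```

Stage 1 deliberately over-poisons, and stage 2 recovers the excess, ending with poison only where the peaking limit actually demands it.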

  19. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  20. Crucial issues of multi-beam feed-back control with ECH/ECCD in fusion plasmas

    NASA Astrophysics Data System (ADS)

    Cirant, S.; Berrino, J.; Gandini, F.; Granucci, G.; Iannone, F.; Lazzaro, E.; D'Antona, G.; Farina, D.; Koppenburg, K.; Nowak, S.; Ramponi, G.

    2005-01-01

Proof of principle of feed-back controlled Electron Cyclotron Heating and Current Drive (ECH/ECCD), aiming at automatic limitation (or suppression) of Neoclassical Tearing Mode amplitude, has been achieved in a number of present machines. In addition to Neoclassical Tearing Mode stabilization, more applications of well-localized ECH/ECCD can be envisaged (saw-tooth crash control, current profile control, thermal barrier control, disruption mitigation). However, in order to be able to take a step forward towards the application of these techniques to burning plasmas, some crucial issues should be more deeply analyzed: multi-beam simultaneous action, control of the deposition radii r_dep, and diagnostics of the plasma reaction. So far, Electron Cyclotron Emission has been the most important tool for obtaining localized information on the plasma response, essential for recognition of both r_dep and r_island, but its use in very hot burning plasmas within automatic control loops should be carefully verified. Assuming that the plasma response is appropriately diagnosed, the next matter to be discussed concerns how to control r_dep, since all techniques so far used or proposed (plasma position, toroidal field, mechanical beam steering, gyrotron frequency tuning) have limitations or drawbacks. Finally, simultaneous multiple actions on many actuators (EC beams), concurring to automatic control of one single parameter (e.g. NTM amplitude), might be a challenging task for the controller, particularly in view of the fact that any effect of each beam becomes visible only when it is positioned very close to the right radius. All these interlinked aspects are discussed in the paper.

  1. Radiative natural supersymmetry: Reconciling electroweak fine-tuning and the Higgs boson mass

    NASA Astrophysics Data System (ADS)

    Baer, Howard; Barger, Vernon; Huang, Peisi; Mickelson, Dan; Mustafayev, Azar; Tata, Xerxes

    2013-06-01

    Models of natural supersymmetry seek to solve the little hierarchy problem by positing a spectrum of light Higgsinos (≲ 200-300 GeV) and light top squarks (≲ 600 GeV) along with very heavy squarks and TeV-scale gluinos. Such models have low electroweak fine-tuning and satisfy the LHC constraints. However, in the context of the minimal supersymmetric standard model, they predict too low a value of mh, are frequently in conflict with the measured b→sγ branching fraction, and the relic density of thermally produced Higgsino-like weakly interacting massive particles (WIMPs) falls well below dark matter measurements. We propose a framework dubbed radiative natural supersymmetry (RNS), which can be realized within the minimal supersymmetric standard model (avoiding the addition of extra exotic matter) and which maintains features such as gauge coupling unification and radiative electroweak symmetry breaking. The RNS model can be generated from supersymmetry (SUSY) grand unified theory type models with nonuniversal Higgs masses. Allowing for high-scale soft SUSY breaking Higgs mass mHu>m0 leads to automatic cancellations during renormalization group running and to radiatively-induced low fine-tuning at the electroweak scale. Coupled with large mixing in the top-squark sector, RNS allows for fine-tuning at the 3%-10% level with TeV-scale top squarks and a 125 GeV light Higgs scalar h. The model allows for at least a partial solution to the SUSY flavor, CP, and gravitino problems since first-/second-generation scalars (and the gravitino) may exist in the 10-30 TeV regime. We outline some possible signatures for RNS at the LHC, such as the appearance of low invariant mass opposite-sign isolated dileptons from gluino cascade decays. The smoking gun signature for RNS is the appearance of light Higgsinos at a linear e+e- collider.
If the strong CP problem is solved by the Peccei-Quinn mechanism, then RNS naturally accommodates mixed axion-Higgsino cold dark matter, where the light Higgsino-like WIMPs—which in this case make up only a fraction of the measured relic abundance—should be detectable at upcoming WIMP detectors.

  2. Triplet Tuning - a New ``BLACK-BOX'' Computational Scheme for Photochemically Active Molecules

    NASA Astrophysics Data System (ADS)

    Lin, Zhou; Van Voorhis, Troy

    2017-06-01

    Density functional theory (DFT) is an efficient computational tool that plays an indispensable role in the design and screening of π-conjugated organic molecules with photochemical significance. However, due to intrinsic problems in DFT such as self-interaction error, the accurate prediction of energy levels is still a challenging task. Functionals can be parameterized to correct these problems, but the parameters that make a well-behaved functional are system-dependent rather than universal in most cases. To alleviate both problems, optimally tuned range-separated hybrid functionals were introduced, in which the range-separation parameter, ω, can be adjusted to impose Koopmans' theorem, ɛ_{HOMO} = -I. These functionals turned out to be good estimators for asymptotic properties like ɛ_{HOMO} and ɛ_{LUMO}. In the present study, we propose a ``black-box'' procedure that allows an automatic construction of molecule-specific range-separated hybrid functionals following the idea of such optimal tuning. However, instead of focusing on ɛ_{HOMO} and ɛ_{LUMO}, we target more local, photochemistry-relevant energy levels such as the lowest triplet state, T_1. In practice, we minimize the difference between two estimates of E_{T_1} obtained from two DFT-based approaches, Δ-SCF and linear-response TDDFT. We achieve this minimization through a non-empirical adjustment of two parameters in the range-separated hybrid functional: ω, and the percentage of Hartree-Fock contribution in the short-range exchange, c_{HF}. We apply this triplet tuning scheme to a variety of organic molecules with important photochemical applications, including laser dyes, photovoltaics, and light-emitting diodes, and achieve good agreement with spectroscopic measurements for E_{T_1} and related local properties. A. Dreuw and M. Head-Gordon, Chem. Rev. 105, 4009 (2005). O. A. Vydrov and G. E. Scuseria, J. Chem. Phys. 125, 234109 (2006). L. Kronik, T. Stein, S. Refaely-Abramson, and R. Baer, J. Chem. Theory Comput. 8, 1515 (2012). Z. Lin and T. A. Van Voorhis, in preparation for submission to J. Chem. Theory Comput.
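The two-parameter minimization at the heart of this scheme can be sketched as a grid search. The energy functions below are invented toy surrogates (the real Δ-SCF and TD-DFT values require a quantum chemistry code); only the tuning loop reflects the described procedure.

```python
import itertools

# Toy surrogates for the two T1 estimates as functions of the range-separation
# parameter omega and the short-range HF fraction c_HF. The linear forms are
# illustrative only; in practice these come from Delta-SCF and TD-DFT runs.
def e_t1_delta_scf(omega, c_hf):
    return 2.0 + 0.8 * omega - 0.5 * c_hf

def e_t1_tddft(omega, c_hf):
    return 1.8 + 0.3 * omega + 0.2 * c_hf

def triplet_tune(omegas, c_hfs):
    """Pick the (omega, c_HF) pair minimizing the Delta-SCF/TD-DFT discrepancy."""
    return min(itertools.product(omegas, c_hfs),
               key=lambda p: abs(e_t1_delta_scf(*p) - e_t1_tddft(*p)))

omegas = [0.1 * i for i in range(1, 11)]   # illustrative parameter grid
c_hfs = [0.1 * i for i in range(0, 11)]
omega_opt, c_hf_opt = triplet_tune(omegas, c_hfs)
```

A dense grid is used here for clarity; any derivative-free optimizer over (ω, c_HF) would serve the same role.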

  3. Industrial implementation of spatial variability control by real-time SPC

    NASA Astrophysics Data System (ADS)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to get the wafer process set up correctly. The critical dimension of components decreases following Moore's law. At the same time, the intra-wafer dispersion linked to the spatial non-uniformity of tools' processes cannot decrease in the same proportion. APC systems (Advanced Process Control) are being developed in the waferfab to automatically adjust and tune wafer processing, based on extensive process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This allows us to bring spatial variability under control, in real time, by our SPC system (Statistical Process Control). This paper will outline the architecture of an integrated process control system for shape monitoring in 3D, implemented in the waferfab.
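Charting intra-wafer dispersion rather than the wafer mean is the core of spatial-variability SPC. A minimal sketch follows, with hypothetical site data and a simple sigma-based control limit (real systems use rational subgrouping and richer spatial statistics):

```python
import statistics

def spc_flags(wafer_maps, sigma_limit=3.0):
    """Chart each wafer's intra-wafer dispersion (std over measurement sites)
    and flag wafers whose dispersion exceeds the upper control limit."""
    dispersions = [statistics.pstdev(sites) for sites in wafer_maps]
    center = statistics.mean(dispersions)
    spread = statistics.pstdev(dispersions)
    ucl = center + sigma_limit * spread    # upper control limit
    return [d > ucl for d in dispersions]

# 19 uniform wafers plus one with abnormal spatial variability (toy numbers)
flags = spc_flags([[10.0, 10.1, 9.9]] * 19 + [[10.0, 12.0, 8.0]])
```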

  4. Frequency-agile THz-wave generation and detection system using nonlinear frequency conversion at room temperature.

    PubMed

    Guo, Ruixiang; Ikari, Tomofumi; Zhang, Jun; Minamide, Hiroaki; Ito, Hiromasa

    2010-08-02

    A surface-emitting THz parametric oscillator is set up to generate narrow-linewidth, nanosecond pulsed THz-wave radiation. The THz-wave radiation is coherently detected using frequency up-conversion in a MgO:LiNbO3 crystal. Fast frequency tuning and automatic achromatic THz-wave detection are achieved through a special optical design, including a variable-angle mirror and 1:1 telescope devices in the pump and THz-wave beams. We demonstrate a frequency-agile THz-wave parametric generation and coherent detection system. This system can be used as a frequency-domain THz-wave spectrometer operating at room temperature, and it has strong potential to be developed into a real-time two-dimensional THz spectral imaging system.

  5. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features.

    PubMed

    Abbas, Qaisar; Fondon, Irene; Sarmiento, Auxiliadora; Jiménez, Soledad; Alemany, Pedro

    2017-11-01

    Diabetic retinopathy (DR) is a leading cause of blindness among diabetic patients. Ophthalmologists require recognition of the severity level to detect and diagnose DR early. However, this is a challenging task for both medical experts and computer-aided diagnosis systems because it requires extensive domain expertise. In this article, a novel automatic recognition system for the five severity levels of diabetic retinopathy (SLDR) is developed, without any pre- or post-processing steps on retinal fundus images, through learning of deep visual features (DVFs). These DVF features are extracted from each image by using dense color scale-invariant feature transform and gradient location-orientation histogram techniques. To learn these DVF features, a semi-supervised multilayer deep-learning algorithm is utilized along with a new compressed layer and fine-tuning steps. The SLDR system was evaluated and compared with state-of-the-art techniques using the measures of sensitivity (SE), specificity (SP) and area under the receiver operating characteristic curve (AUC). On 750 fundus images (150 per category), an SE of 92.18%, SP of 94.50% and AUC of 0.924 were obtained on average. These results demonstrate that the SLDR system is appropriate for early detection of DR and can support timely treatment.
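The SE, SP and AUC measures quoted above can be computed from predictions as follows (toy data; the rank-based AUC is the standard probability that a positive case outscores a negative one):

```python
def se_sp(labels, preds):
    """Sensitivity and specificity for binary labels and predictions."""
    tp = sum(1 for l, p in zip(labels, preds) if l and p)
    tn = sum(1 for l, p in zip(labels, preds) if not l and not p)
    fn = sum(1 for l, p in zip(labels, preds) if l and not p)
    fp = sum(1 for l, p in zip(labels, preds) if not l and p)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Rank-based AUC: probability that a positive outscores a negative."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For the five-class SLDR setting, these binary measures are typically reported per class (one-vs-rest) and averaged.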

  6. A 3000 TNOs Survey Project at ESO La Silla

    NASA Astrophysics Data System (ADS)

    Boehnhardt, H.; Hainaut, O.

    We propose a wide-shallow TNO search to be done with the Wide Field Imager (WFI) instrument at the 2.2m MPG/ESO telescope at La Silla, Chile. The WFI is a half-degree camera equipped with an 8k x 8k CCD (0.24 arcsec/pixel). The telescope can support excellent seeing quality, down to 0.5 arcsec FWHM. A TNO search pilot project was run with the 2.2m+WFI in 1999: images covering just 1.6 sq. deg of sky, with a typical limiting magnitude of 24 mag, revealed 6 new TNOs when processed with our new automatic detection program MOVIE. The project is now continued on a somewhat larger scale in order to find more TNOs and to fine-tune the operational environment for fully automatic on-line detection, astrometry and photometry of the objects at the telescope. The future goal is to perform - with the 2.2m+WFI and in an international collaboration - an even larger TNO survey over a major part of the sky (typically 2000 sq. deg in and out of the Ecliptic) down to 24 mag. Follow-up astrometry and photometry of the expected more than 3000 discovered objects will secure their orbital and physical characterisation for synoptic dynamical and taxonomic studies of the Transneptunian population.

  7. Tuning time-frequency methods for the detection of metered HF speech

    NASA Astrophysics Data System (ADS)

    Nelson, Douglas J.; Smith, Lawrence H.

    2002-12-01

    Speech is metered if the stresses occur at a nearly regular rate. Metered speech is common in poetry, and it can occur naturally in speech if the speaker is spelling a word or reciting words or numbers from a list. In radio communications, the CQ request, call sign and other codes are frequently metered. In tactical communications and air traffic control, location, heading and identification codes may be metered. Moreover, metering may be expected to survive even in HF communications, which are corrupted by noise, interference and mistuning. For this environment, speech recognition and conventional machine-based methods are not effective. We describe time-frequency methods which have been adapted successfully to the problems of mitigating HF signal conditions and detecting metered speech. These methods are based on modeled time and frequency correlation properties of nearly harmonic functions. We derive these properties and demonstrate a performance gain over conventional correlation and spectral methods. Finally, HF single-sideband (SSB) communications raise the additional problems of carrier mistuning, interfering signals such as manual Morse, and fast automatic gain control (AGC). We demonstrate simple methods which may be used to blindly mitigate mistuning and narrowband interference, and to effectively invert the fast automatic gain function.
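The regular stress rate that defines metering shows up as a peak in the autocorrelation of an energy envelope. A toy sketch with a synthetic impulse-train envelope (this is illustrative only, not the authors' time-frequency correlation method):

```python
def autocorr_peak_lag(env, min_lag=2):
    """Lag (in frames) of the strongest autocorrelation peak of an energy
    envelope; for metered speech this lag is the stress period."""
    n = len(env)
    mean = sum(env) / n
    x = [e - mean for e in env]          # remove DC before correlating
    def r(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag))
    return max(range(min_lag, n // 2), key=r)

# synthetic envelope: a stress burst every 10 frames
env = [1.0 if i % 10 == 0 else 0.0 for i in range(100)]
lag = autocorr_peak_lag(env)
```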

  8. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.

  9. Automatic information extraction from unstructured mammography reports using distributed semantics.

    PubMed

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based, and, therefore, require substantial manual effort to build these systems. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to that of rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines dependency-based parse trees with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss.

    PubMed

    Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian

    2017-09-18

    The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids. Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and one comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performance. Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance. Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performance of an ASR-based system. In the future, it needs to be determined if the ASR system is similarly successful in predicting speech processing in noise and by older people with ARHL.

  11. Learning machines and sleeping brains: Automatic sleep stage classification using decision-tree multi-class support vector machines.

    PubMed

    Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim

    2015-07-30

    Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is still room for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique and a wide range of time- and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88 respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92 respectively. The DSVM framework outperforms classification with more standard multi-class "one-against-all" SVM and linear-discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.
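The dendrogram arrangement can be sketched as a cascade of binary decisions, each node standing in for one trained SVM. Simple thresholds on invented features replace the classifiers here; only the tree structure reflects the described approach:

```python
# A minimal sketch of dendrogram-style multi-class classification: each
# branch stands in for one binary SVM node. Feature names and thresholds
# are illustrative, not the trained model from the paper.
def classify(features):
    if features["emg"] > 0.5:      # node 1: Wake vs Sleep (muscle tone)
        return "Wake"
    if features["eog"] > 0.5:      # node 2: REM vs NREM (eye movements)
        return "REM"
    if features["delta"] > 0.5:    # node 3: deep vs light NREM (delta power)
        return "N3"
    return "N1/N2"
```

Compared to one-against-all schemes, each node only solves the binary split that the clustering found easiest to separate at that level.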

  12. An investigation into the design and performance of an automatic shape control system for a Sendzimir cold rolling mill

    NASA Astrophysics Data System (ADS)

    Dutton, Kenneth

    Shape (or flatness) control for rolled steel strip is becoming increasingly important as customer requirements become more stringent. Automatic shape control is now more or less mandatory on all new four-high cold mills, but no comprehensive scheme yet exists on a Sendzimir mill. This is due to the complexity of the control system design on such a mill, where many more degrees of freedom for control exist than is the case with four-high mills. The objective of the current work is to develop, from first principles, such a system, including automatic control of the As-U-Roll and first intermediate roll actuators in response to the measured strip shape. This thesis concerns itself primarily with the As-U-Roll control system. The material presented is extremely wide-ranging. Areas covered include the development of original static and dynamic mathematical models of the mill systems, and testing of the plant by data-logging to tune these models. A basic control system philosophy proposed by other workers is modified and developed to suit the practical system requirements and the data provided by the models. The control strategy is tested by comprehensive multivariable simulation studies. Finally, details are given of the practical problems faced when installing the system on the plant. These include problems of manual control interaction, bumpless transfer and integral desaturation. At the time of presentation of the thesis, system commissioning is still in progress and production results are therefore not yet available. Nevertheless, the simulation studies predict a successful outcome, although performance is expected to be limited until the first intermediate roll actuators are eventually included in the scheme.

  13. Provenance-Powered Automatic Workflow Generation and Composition

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lee, S.; Pan, L.; Lee, T. J.

    2015-12-01

    In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. Existing scientific workflow tools, created by computer scientists, require domain scientists to meticulously design their multi-step experiments before analyzing data. However, this is oftentimes contradictory to a domain scientist's daily routine of conducting research and exploration. We hope to resolve this dispute. Imagine this: An Earth scientist starts her day applying NASA Jet Propulsion Laboratory (JPL) published climate data processing algorithms over ARGO deep ocean temperature and AMSRE sea surface temperature datasets. Throughout the day, she tunes the algorithm parameters to study various aspects of the data. Suddenly, she notices some interesting results. She then turns to a computer scientist and asks, "can you reproduce my results?" By tracking and reverse engineering her activities, the computer scientist creates a workflow. The Earth scientist can now rerun the workflow to validate her findings, modify the workflow to discover further variations, or publish the workflow to share the knowledge. In this way, we aim to revolutionize computer-supported Earth science. We have developed a prototyping system to realize the aforementioned vision, in the context of service-oriented science. We have studied how Earth scientists conduct service-oriented data analytics research in their daily work, developed a provenance model to record their activities, and developed a technology to automatically generate workflows from user behavior, supporting the adaptation and reuse of these workflows for replicating and improving scientific studies. A data-centric repository infrastructure is established to capture richer provenance and further facilitate collaboration in the science community. We have also established a Petri-net-based verification instrument for provenance-based automatic workflow generation and recommendation.

  14. Optimized PID control of depth of hypnosis in anesthesia.

    PubMed

    Padula, Fabrizio; Ionescu, Clara; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio; Vivacqua, Giulio

    2017-06-01

    This paper addresses the use of proportional-integral-derivative controllers for regulating the depth of hypnosis in anesthesia by using propofol administration and the bispectral index as a controlled variable. In fact, introducing an automatic control system might provide significant benefits for the patient in reducing the risk for under- and over-dosing. In this study, the controller parameters are obtained through genetic algorithms by solving a min-max optimization problem. A set of 12 patient models representative of a large population variance is used to test controller robustness. The worst-case performance in the considered population is minimized considering two different scenarios: the induction case and the maintenance case. Our results indicate that including a gain scheduling strategy enables optimal performance for the induction and maintenance phases separately. Using a single tuning to address both tasks may result in a loss of performance of up to 102% in the induction phase and up to 31% in the maintenance phase. Furthermore, it is shown that a suitably designed low-pass filter on the controller output can handle the trade-off between performance and the effect of noise on the control variable. Optimally tuned PID controllers provide a fast induction time with an acceptable overshoot and a satisfactory disturbance rejection performance during maintenance. These features make them a very good tool for comparison when other control algorithms are developed. Copyright © 2017 Elsevier B.V. All rights reserved.
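The min-max tuning idea can be sketched with random search standing in for the genetic algorithm, using first-order toy "patient" models (the gains and time constants below are invented, not pharmacokinetic models):

```python
import random

def simulate(gain, tau, kp, ki, kd, dt=0.5, steps=200):
    """Integrated absolute error of a discrete PID driving a first-order
    toy 'patient' model y' = (gain*u - y)/tau to a unit setpoint."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = max(0.0, kp * err + ki * integ + kd * deriv)  # no negative infusion
        y += dt / tau * (gain * u - y)
        prev_err = err
        cost += abs(err) * dt
    return cost

def minmax_tune(patients, trials=300, seed=0):
    """Minimize the worst-case cost across the patient models by random
    search (a stand-in for the paper's genetic-algorithm optimization)."""
    rng = random.Random(seed)
    best_gains, best_cost = None, float("inf")
    for _ in range(trials):
        kp, ki, kd = rng.uniform(0, 2), rng.uniform(0, 1), rng.uniform(0, 1)
        worst = max(simulate(g, tau, kp, ki, kd) for g, tau in patients)
        if worst < best_cost:
            best_gains, best_cost = (kp, ki, kd), worst
    return best_gains, best_cost

patients = [(0.8, 4.0), (1.2, 6.0), (1.0, 8.0)]  # (gain, time constant) spread
gains, worst_cost = minmax_tune(patients)
```

The key structural point is the `max` inside the search loop: one gain set is scored by its worst patient, so the tuning is robust to the model spread.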

  15. Improving piezo actuators for nanopositioning tasks

    NASA Astrophysics Data System (ADS)

    Seeliger, Martin; Gramov, Vassil; Götz, Bernt

    2018-02-01

    In recent years, numerous applications emerged on the market with seemingly contradicting demands. On one side, the structure size decreased while on the other side, the overall sample size and speed of operation increased. Although the principal usage of piezoelectric positioning solutions has become a standard in the field of micro- and nanopositioning, surface inspection and manipulation, piezosystem jena has now enhanced performance beyond simple control-loop tuning and actuator design. In automated manufacturing machines, a given signal has to be tracked quickly and precisely. However, control systems naturally decrease the ability to follow this signal in real time. piezosystem jena developed a new signal feed-forward system bypassing the PID control. This way, we could reduce signal tracking errors by a factor of three compared to a conventionally optimized PID control. Of course, PID values still have to be adjusted to specific conditions, e.g. changing additional mass, to optimize the performance. This can now be done with a new automatic tuning tool designed to analyze the current setup, find the best-fitting configuration, and also gather and display theoretical as well as experimental performance data. Thus, the control quality of a mechanical setup can be improved within a few minutes without the need of external calibration equipment. Furthermore, new mechanical optimization techniques that focus not only on the positioning device, but also take the whole setup into account, prevent parasitic motion down to a few nanometers.
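The benefit of reference feed-forward that bypasses the feedback loop can be reproduced on a toy first-order actuator model: an inverse-model feed-forward term makes the ramp-tracking error all but vanish, while PI feedback alone leaves a steady velocity error (all constants illustrative):

```python
def run(feedforward, tau=0.05, dt=0.001, t_end=1.0, kp=5.0, ki=50.0):
    """Mean absolute ramp-tracking error of a first-order actuator
    tau*y' + y = u under PI control, optionally with the inverse-model
    feed-forward u_ff = r + tau*dr/dt added outside the loop."""
    y, integ, err_acc, n, t = 0.0, 0.0, 0.0, 0, 0.0
    while t < t_end:
        r, dr = t, 1.0            # ramp reference and its known slope
        err = r - y
        integ += err * dt
        u = kp * err + ki * integ
        if feedforward:
            u += r + tau * dr     # plant inverse, bypassing the PI loop
        y += dt * (u - y) / tau   # first-order actuator dynamics
        err_acc += abs(err)
        n += 1
        t += dt
    return err_acc / n

err_ff = run(feedforward=True)
err_pi = run(feedforward=False)
```

Because the feed-forward path injects the plant inverse directly, the feedback loop only has to correct model mismatch, which is why the tracking-error reduction can be large without retuning the PI gains.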

  16. Data-oriented scheduling for PROOF

    NASA Astrophysics Data System (ADS)

    Xu, Neng; Guan, Wen; Wu, Sau Lan; Ganis, Gerardo

    2011-12-01

    The Parallel ROOT Facility - PROOF - is a distributed analysis system optimized for I/O intensive analysis tasks of HEP data. With the LHC entering the analysis phase, PROOF has become a natural ingredient for computing farms at the Tier3 level. These analysis facilities will typically be used by a few tens of users, and can also be federated into a sort of analysis cloud corresponding to the Virtual Organization of the experiment. Proper scheduling is required to guarantee fair resource usage, to enforce priority policies and to optimize the throughput. In this paper we discuss an advanced priority system that we are developing for PROOF. The system has been designed to automatically adapt to unknown lengths of the tasks, and to take into account data location and availability (including distribution across geographically separated sites) and the {group, user} default priorities. In this system, every element - user, group, dataset, job slot and storage - gets its priority, and those priorities are dynamically linked with each other. In order to tune the interplay between the various components, we have designed and started implementing a simulation application that can model various types and sizes of PROOF clusters. In this application a monitoring package records all of these changes so that we can easily understand and tune the performance. We will discuss the status of our simulation and show examples of the results we are expecting from it.

  17. A Novel Controller Design for the Next Generation Space Electrostatic Accelerometer Based on Disturbance Observation and Rejection.

    PubMed

    Li, Hongyin; Bai, Yanzheng; Hu, Ming; Luo, Yingxin; Zhou, Zebing

    2016-12-23

    The state-of-the-art accelerometer technology has been widely applied in space missions. The performance of the next generation accelerometer in future geodesic satellites is pushed to 8 × 10^(-13) m/s^2/Hz^(1/2), which is close to the fundamental hardware limit. According to the instrument noise budget, the geodesic test mass must be kept in the center of the accelerometer within the bounds of 56 pm/Hz^(1/2) by the feedback controller. The unprecedented control requirements and the necessity of integrating calibration functions call for a new type of control scheme with more flexibility and robustness. A novel digital controller design for the next generation electrostatic accelerometers based on disturbance observation and rejection with the well-studied Embedded Model Control (EMC) methodology is presented. The parameters are optimized automatically using a non-smooth optimization toolbox and setting a weighted H-infinity norm as the target. The precise frequency performance requirement of the accelerometer is well met during the batch auto-tuning, and a series of controllers for multiple working modes is generated. Simulation results show that the novel controller could obtain not only better disturbance rejection performance than the traditional Proportional Integral Derivative (PID) controllers, but also new instrument functions, including: easier tuning procedure, separation of measurement and control bandwidth and smooth control parameter switching.
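The disturbance observation-and-rejection principle (shown here on a toy single-integrator plant, not the paper's EMC design) can be sketched as follows: the unexplained part of the motion is attributed to a disturbance estimate, which is then cancelled by the control input.

```python
def run(observer_gain, steps=2000, dt=0.01):
    """Residual position error of an integrator plant y' = u + d under
    proportional feedback plus disturbance-estimate cancellation."""
    y, d_hat, d_true, kp = 0.0, 0.0, 0.3, 2.0   # d_true is unknown to the loop
    for _ in range(steps):
        u = -kp * y - d_hat           # feedback plus cancellation term
        y_dot = u + d_true            # plant with constant disturbance
        # observer: attribute the unexplained motion to the disturbance
        d_hat += observer_gain * (y_dot - (u + d_hat)) * dt
        y += y_dot * dt
    return abs(y)

residual_with = run(observer_gain=5.0)     # disturbance rejected
residual_without = run(observer_gain=0.0)  # plain proportional control
```

With the observer disabled, the constant disturbance leaves the steady offset d/kp = 0.15 that pure proportional feedback cannot remove.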

  18. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
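The bending-energy-adaptive threshold can be illustrated in one dimension: residuals from a smoothed "surface" are accepted as ground only while they stay below a threshold that grows with local curvature, so rough terrain tolerates larger deviations than flat terrain (toy profile and constants, not the ASF's thin plate spline machinery):

```python
def adaptive_filter(z, base_thr=0.2, k=2.0):
    """Label each sample of a 1-D profile as ground (True) when its residual
    from a 3-point moving-average 'surface' stays under a threshold that
    grows with the squared second difference (a discrete bending energy)."""
    n = len(z)
    smooth = [(z[max(i - 1, 0)] + z[i] + z[min(i + 1, n - 1)]) / 3
              for i in range(n)]
    ground = []
    for i in range(n):
        im, ip = max(i - 1, 0), min(i + 1, n - 1)
        energy = (smooth[ip] - 2 * smooth[i] + smooth[im]) ** 2
        thr = base_thr + k * energy   # rougher terrain => larger tolerance
        ground.append(abs(z[i] - smooth[i]) <= thr)
    return ground

# flat profile with a single off-terrain spike at index 3
ground = adaptive_filter([0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0])
```

The spike is rejected while its neighbours survive: their large residuals coincide with large local bending energy, so the adaptive threshold tolerates them.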

  19. Tuning collective communication for Partitioned Global Address Space programming models

    DOE PAGES

    Nishtala, Rajesh; Zheng, Yili; Hargrove, Paul H.; ...

    2011-06-12

    Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared memory programming style combined with locality control necessary to run on large-scale distributed memory systems. Even within a PGAS language programmers often need to perform global communication operations such as broadcasts or reductions, which are best performed as collective operations in which a group of threads work together to perform the operation. In this study we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and implementation. In particular, PGAS collectives have semantic issues that are different than in send-receive style message passing programs, and different implementation approaches that take advantage of the one-sided communication style in these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared memory, distributed memory and hybrids. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. In conclusion, we demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and demonstrate that the GASNet collectives can deliver scalable performance on a variety of state-of-the-art parallel machines including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with InfiniBand interconnect.
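Automatic tuning over a set of collective algorithms can be sketched with simple latency/bandwidth cost models standing in for empirical timings (the algorithm set and cost constants are illustrative; real tuners like GASNet's measure actual performance):

```python
import math

ALPHA, BETA = 1e-6, 1e-9   # per-message latency (s) and per-byte time (s)

def cost_flat(p, nbytes):
    """Root sends the full message to each of the p-1 peers in turn."""
    return (p - 1) * (ALPHA + BETA * nbytes)

def cost_tree(p, nbytes):
    """Binomial tree: ceil(log2 p) communication rounds."""
    return math.ceil(math.log2(p)) * (ALPHA + BETA * nbytes)

def cost_scatter_allgather(p, nbytes):
    """Scatter pieces, then allgather them (favours large messages)."""
    return (math.ceil(math.log2(p)) + p - 1) * ALPHA \
        + 2 * (p - 1) / p * BETA * nbytes

CANDIDATES = (("flat", cost_flat), ("tree", cost_tree),
              ("scatter_allgather", cost_scatter_allgather))

def tune_broadcast(p, nbytes):
    """Select the cheapest broadcast algorithm for this configuration."""
    return min(CANDIDATES, key=lambda c: c[1](p, nbytes))[0]
```

The crossover behaviour is what the tuner captures: latency-bound small messages favour the tree, while bandwidth-bound large messages favour scatter/allgather.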

  20. A Novel Controller Design for the Next Generation Space Electrostatic Accelerometer Based on Disturbance Observation and Rejection

    PubMed Central

    Li, Hongyin; Bai, Yanzheng; Hu, Ming; Luo, Yingxin; Zhou, Zebing

    2016-01-01

    The state-of-the-art accelerometer technology has been widely applied in space missions. The performance of the next generation accelerometer in future geodesic satellites is pushed to 8×10^-13 m/s^2/Hz^(1/2), which is close to the hardware fundamental limit. According to the instrument noise budget, the geodesic test mass must be kept in the center of the accelerometer within the bounds of 56 pm/Hz^(1/2) by the feedback controller. The unprecedented control requirements and the need to integrate calibration functions call for a new type of control scheme with more flexibility and robustness. A novel digital controller design for the next generation electrostatic accelerometers, based on disturbance observation and rejection with the well-studied Embedded Model Control (EMC) methodology, is presented. The parameters are optimized automatically using a non-smooth optimization toolbox, with a weighted H-infinity norm as the target. The precise frequency-domain performance requirement of the accelerometer is well met during the batch auto-tuning, and a series of controllers for multiple working modes is generated. Simulation results show that the novel controller obtains not only better disturbance rejection than traditional Proportional Integral Derivative (PID) controllers, but also new instrument functions, including an easier tuning procedure, separation of the measurement and control bandwidths and smooth control-parameter switching. PMID:28025534

  1. Strange Beta: Chaotic Variations for Indoor Rock Climbing Route Setting

    NASA Astrophysics Data System (ADS)

    Phillips, Caleb; Bradley, Elizabeth

    2011-04-01

    In this paper we apply chaotic systems to the task of sequence variation for the purpose of aiding humans in setting indoor rock climbing routes. This work expands on prior work where similar variations were used to assist in dance choreography and music composition. We present a formalization for transcription of rock climbing problems and a variation generator that is tuned for this domain and addresses some confounding problems, including a new approach to automatic selection of initial conditions. We analyze our system with a large blinded study in a commercial climbing gym in cooperation with experienced climbers and expert route setters. Our results show that our system is capable of assisting a human setter in producing routes that are at least as good as, and in some cases better than, those produced traditionally.

  2. libdrdc: software standards library

    NASA Astrophysics Data System (ADS)

    Erickson, David; Peng, Tie

    2008-04-01

    This paper presents the libdrdc software standards library including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable C-function wrapped C++ / Object Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the Automatically Tuned Linear Algebra Software (ATLAS) and the Basic Linear Algebra Subprograms (BLAS) and to port to firmware and software. The library goal is to unify data collection and representation for various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and the Real-Time Executive for Multiprocessor Systems (RTEMS). This library is made available under the LGPL version 2.1 license.

  3. Dynamical approach to the cosmological constant.

    PubMed

    Mukohyama, Shinji; Randall, Lisa

    2004-05-28

    We consider a dynamical approach to the cosmological constant. There is a scalar field with a potential whose minimum occurs at a generic, but negative, value for the vacuum energy, and it has, in addition to the standard kinetic term, a nonstandard kinetic term whose coefficient diverges at zero curvature. Because of the divergent coefficient of the kinetic term, the lowest energy state is never achieved. Instead, the cosmological constant automatically stalls at or near zero. The merit of this model is that it is stable under radiative corrections and leads to stable dynamics, despite the singular kinetic term. The model is not complete, however, in that some reheating is required. Nonetheless, our approach can at the very least reduce fine-tuning by 60 orders of magnitude or provide a new mechanism for sampling possible cosmological constants and implementing the anthropic principle.

  4. Automatic control of the effluent turbidity from a chemically enhanced primary treatment with microsieving.

    PubMed

    Väänänen, J; Memet, S; Günther, T; Lilja, M; Cimbritz, M; la Cour Jansen, J

    2017-10-01

    For chemically enhanced primary treatment (CEPT) with microsieving, a feedback proportional-integral controller combined with a feedforward compensator was used at large pilot scale to control effluent water turbidity to desired set points. The effluent water turbidity from the microsieve was maintained at various set points in the range of 12-80 NTU, essentially independently of the studied variations in influent flow rate and influent wastewater composition. Effluent turbidity was highly correlated with effluent chemical oxygen demand (COD). Thus, for CEPT based on microsieving, controlling the removal of COD was possible. Thereby, incoming carbon can be optimally distributed between biological nitrogen removal and anaerobic digestion for biogas production. The presented method is based on common automation and control strategies; therefore, fine-tuning and optimization for specific requirements are simplified compared to model-based dosing control.
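
The control structure described, a feedback PI term on the turbidity error plus a feedforward term on a measured disturbance, can be sketched generically. The gains, set point, and the use of influent flow as the feedforward signal below are illustrative assumptions, not the paper's tuning:

```python
class PIFeedforward:
    """Feedback PI controller combined with a static feedforward compensator."""

    def __init__(self, kp, ki, kf, setpoint, dt):
        self.kp, self.ki, self.kf = kp, ki, kf
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0

    def dose(self, turbidity, influent_flow):
        error = turbidity - self.setpoint            # NTU above the set point
        self.integral += error * self.dt
        feedback = self.kp * error + self.ki * self.integral
        feedforward = self.kf * influent_flow        # react to load changes early
        return max(0.0, feedback + feedforward)      # dose cannot be negative

ctrl = PIFeedforward(kp=0.5, ki=0.05, kf=0.01, setpoint=30.0, dt=1.0)
d = ctrl.dose(turbidity=45.0, influent_flow=100.0)   # 0.5*15 + 0.05*15 + 1.0
```

The feedforward path acts before the turbidity error builds up, which is why the combination handles influent variations better than feedback alone.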

  5. Robust spike sorting of retinal ganglion cells tuned to spot stimuli.

    PubMed

    Ghahari, Alireza; Badea, Tudor C

    2016-08-01

    We propose an automatic spike sorting approach for the data recorded from a microelectrode array during visual stimulation of wild type retinas with tiled spot stimuli. The approach first detects individual spikes per electrode by their signature local minima. With the mixture probability distribution of the local minima estimated afterwards, it applies a minimum-squared-error clustering algorithm to sort the spikes into different clusters. A template waveform for each cluster per electrode is defined, and a number of reliability tests are performed on it and its corresponding spikes. Finally, a divisive hierarchical clustering algorithm is used to deal with the correlated templates per cluster type across all the electrodes. According to the measures of performance of the spike sorting approach, it is robust even in the cases of recordings with low signal-to-noise ratio.
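
The first two stages of the pipeline, detecting spikes as signature local minima and minimum-squared-error clustering of their amplitudes, can be illustrated on synthetic data. This is a toy sketch under stated assumptions: a 1-D k-means stands in for the clustering step, and the trace, threshold and spike amplitudes are invented.

```python
import numpy as np

def detect_spikes(trace, threshold):
    """Indices of local minima that fall below a negative threshold."""
    idx = []
    for i in range(1, len(trace) - 1):
        if trace[i] < threshold and trace[i] <= trace[i - 1] and trace[i] < trace[i + 1]:
            idx.append(i)
    return idx

def kmeans_1d(values, k, iters=50):
    """Minimum-squared-error clustering of spike amplitudes (1-D k-means)."""
    values = np.asarray(values, float)
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic trace: two spike populations riding on weak noise.
rng = np.random.default_rng(0)
trace = 0.01 * rng.standard_normal(3000)
for t in range(100, 3000, 200):
    trace[t] -= 1.0 if (t // 200) % 2 == 0 else 0.4   # alternating amplitudes
spikes = detect_spikes(trace, threshold=-0.2)
labels, centers = kmeans_1d(trace[spikes], k=2)
```

The recovered cluster centers sit near the two planted amplitudes, which is the information the template-building stage would then use.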

  6. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
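
A minimal version of the Gaussian-process idea, posterior-mean estimation of a rate surface from scattered noisy samples under a squared-exponential prior, is sketched below. The hyperparameters are fixed by hand here; the point of the paper is precisely that they can instead be fit by maximizing the marginal likelihood, so treat the values as placeholders.

```python
import numpy as np

def rbf_kernel(a, b, length, var):
    """Squared-exponential covariance between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior_mean(X, y, Xstar, length=0.1, var=1.0, noise=0.01):
    K = rbf_kernel(X, X, length, var) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xstar, X, length, var)
    return Ks @ np.linalg.solve(K, y)

# Noisy samples of a smooth 2-D "rate surface" at random positions.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1]) \
    + 0.05 * rng.standard_normal(200)
grid = np.array([[0.25, 0.5], [0.75, 0.5]])   # true surface values: -1 and +1
mu = gp_posterior_mean(X, y, grid)
```

The posterior mean smooths through the observation noise while tracking the underlying surface; posterior variances (omitted here) would provide the error bars mentioned in the abstract.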

  7. Clustering of color map pixels: an interactive approach

    NASA Astrophysics Data System (ADS)

    Moon, Yiu Sang; Luk, Franklin T.; Yuen, K. N.; Yeung, Hoi Wo

    2003-12-01

    The demand for digital maps continues to grow as mobile electronic devices become more popular. Instead of creating the entire map from scratch, we may convert a scanned paper map into a digital one. Color clustering is the very first step of the conversion process. Currently, most existing clustering algorithms are fully automatic. They are fast and efficient but may not work well in map conversion because of the numerous ambiguous issues associated with printed maps. Here we introduce two interactive approaches for color clustering on the map: color clustering with pre-calculated index colors (PCIC) and color clustering with pre-calculated color ranges (PCCR). We also introduce a memory model that can enhance and integrate different image processing techniques for fine-tuning the clustering results. Problems and examples of the algorithms are discussed in the paper.

  8. Toward a More Robust Pruning Procedure for MLP Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance for parameter estimation and network prediction. The widespread utilization of neural networks in modeling highlights an issue in human factors. The procedure of building neural models should find an appropriate level of model complexity in a more or less automatic fashion to make it less prone to human subjectivity. In this paper we present a Singular Value Decomposition based node elimination technique and enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.

  9. Solving large scale traveling salesman problems by chaotic neurodynamics.

    PubMed

    Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n2 for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Last, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
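
The tabu search that the chaotic network emulates can be written down directly. The sketch below is a conventional tabu search over 2-opt moves with a standard aspiration rule, not the chaotic neural version; in the paper the explicit tabu list is replaced by neuronal refractory effects. The instance size and tenure are illustrative.

```python
import itertools, math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tabu_search_tsp(pts, iters=200, tenure=8, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, pts)
    tabu = {}                                   # move -> iteration it expires
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
            length = tour_length(new, pts)
            if tabu.get((i, j), -1) >= it and length >= best_len:
                continue                        # tabu, and no aspiration
            candidates.append((length, (i, j), new))
        length, move, new = min(candidates)     # best admissible move
        tour = new
        tabu[move] = it + tenure                # forbid undoing it for a while
        if length < best_len:
            best, best_len = new[:], length
    return best, best_len

# Eight cities on a circle: the optimal tour is the circle itself.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
tour, length = tabu_search_tsp(pts)
```

Because the cities lie in convex position, every 2-opt local optimum is the hull cycle, so the search recovers the known optimum of 16·sin(π/8).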

  10. A Discriminative Sentence Compression Method as Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    In the study of automatic summarization, the main research topic was `important sentence extraction' but nowadays `sentence compression' is a hot research topic. Conventional sentence compression methods usually transform a given sentence into a parse tree or a dependency tree and modify it to obtain a shorter sentence. However, this approach is sometimes too rigid. In this paper, we regard sentence compression as a combinatorial optimization problem that extracts an optimal subsequence of words. Hori et al. also proposed a similar method, but they used only a small number of features and their weights were tuned by hand. We introduce a large number of features such as part-of-speech bigrams and word position in the sentence. Furthermore, we train the system by discriminative learning. According to our experiments, our method obtained a better score than other methods with statistical significance.

  11. Model Predictive Control of Type 1 Diabetes: An in Silico Trial

    PubMed Central

    Magni, Lalo; Raimondo, Davide M.; Bossi, Luca; Man, Chiara Dalla; De Nicolao, Giuseppe; Kovatchev, Boris; Cobelli, Claudio

    2007-01-01

    Background The development of the artificial pancreas has received a new impulse from recent technological advancements in subcutaneous continuous glucose monitoring and subcutaneous insulin pump delivery systems. However, the availability of innovative sensors and actuators, although essential, does not guarantee optimal glycemic regulation. Closed-loop control of blood glucose levels still poses technological challenges to the automatic control expert, most notable of which are the inevitable time delays between glucose sensing and insulin actuation. Methods A new in silico model is exploited for both design and validation of a linear model predictive control (MPC) glucose control system. The starting point is a recently developed meal glucose–insulin model in health, which is modified to describe the metabolic dynamics of a person with type 1 diabetes mellitus. The population distribution of the model parameters, originally obtained from 204 healthy subjects, is modified to describe diabetic patients. Individual models of virtual patients are extracted from this distribution. A discrete-time MPC is designed for all the virtual patients from a unique input–output-linearized approximation of the full model based on the average population values of the parameters. The in silico trial simulates 4 consecutive days, during which the patient receives breakfast, lunch, and dinner each day. Results Provided that the regulator undergoes some individual tuning, satisfactory results are obtained even if the control design relies solely on the average patient model. Only the weight on the glucose concentration error needs to be tuned, in a quite straightforward and intuitive way. The ability of the MPC to take advantage of meal announcement information is demonstrated. Imperfect knowledge of the amount of ingested glucose causes only marginal deterioration of performance.
In general, MPC results in better regulation than proportional integral derivative, limiting significantly the oscillation of glucose levels. Conclusions The proposed in silico trial shows the potential of MPC for artificial pancreas design. The main features are a capability to consider meal announcement information, delay compensation, and simplicity of tuning and implementation. PMID:19885152
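
For intuition, the unconstrained core of linear MPC can be written in a few lines: stack the predictions of a linear model over the horizon, minimize a weighted quadratic cost by least squares, and apply only the first move. This scalar sketch is illustrative only; the controller in the study is built on a full physiological model with meal announcement, and the numbers below are arbitrary.

```python
import numpy as np

def mpc_first_move_gain(a, b, horizon, qy, ru):
    """Unconstrained MPC for the scalar model x[k+1] = a*x[k] + b*u[k]:
    stack predictions over the horizon, minimize sum(qy*x^2) + sum(ru*u^2)
    by least squares, and return the gain K of the first move u0 = -K*x0."""
    F = np.array([[a ** k] for k in range(1, horizon + 1)])   # free response
    G = np.zeros((horizon, horizon))                          # forced response
    for k in range(1, horizon + 1):
        for j in range(k):
            G[k - 1, j] = a ** (k - 1 - j) * b
    H = G.T @ (qy * G) + ru * np.eye(horizon)
    f = G.T @ (qy * F)
    return np.linalg.solve(H, f)[0, 0]

K = mpc_first_move_gain(a=0.9, b=0.5, horizon=10, qy=1.0, ru=0.1)
x = 1.0
for _ in range(20):          # receding-horizon closed loop: x+ = (a - b*K) x
    x = (0.9 - 0.5 * K) * x
```

Re-solving at every step with the latest measurement is what lets MPC compensate for delays, and the weight qy plays the role of the single glucose-error weight tuned per patient in the abstract.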

  12. Robotic excavator trajectory control using an improved GA based PID controller

    NASA Astrophysics Data System (ADS)

    Feng, Hao; Yin, Chen-Bo; Weng, Wen-wen; Ma, Wei; Zhou, Jun-jing; Jia, Wen-hua; Zhang, Zi-li

    2018-05-01

    In order to achieve excellent trajectory tracking performance, an improved genetic algorithm (IGA) is presented to search for the optimal proportional-integral-derivative (PID) controller parameters for the robotic excavator. Firstly, the mathematical models of the kinematics and the electro-hydraulic proportional control system of the excavator are analyzed based on the mechanism modeling method. On this basis, the actual model of the electro-hydraulic proportional system is established by an identification experiment. Furthermore, the population, the fitness function, and the crossover and mutation probabilities of the standard genetic algorithm (SGA) are improved: the initial PID parameters are calculated by the Ziegler-Nichols (Z-N) tuning method and the initial population is generated near them; the fitness function is transformed to maintain the diversity of the population; and the crossover and mutation probabilities are adjusted automatically to avoid premature convergence. Moreover, a simulation study is carried out to evaluate the time-response performance of the proposed controller, i.e., the IGA-based PID against the SGA- and Z-N-based PID controllers, with a step signal. It was shown from the simulation study that the proposed controller provides the least rise time and settling time of 1.23 s and 1.81 s, respectively, against the other tested controllers. Finally, two types of trajectories are designed to validate the performances of the control algorithms, and experiments are performed on the excavator trajectory control experimental platform. It was demonstrated from the experimental work that the proposed IGA-based PID controller improves the trajectory accuracy of the horizontal line and slope line trajectories by 23.98% and 23.64%, respectively, in comparison to the SGA-tuned PID controller.
The results further indicate that the proposed IGA tuning based PID controller is effective for improving the tracking accuracy, which may be employed in the trajectory control of an actual excavator.
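
The Ziegler-Nichols seeding step mentioned above is simple enough to state concretely. Below is a hedged sketch using the classic closed-loop Z-N rules; the ultimate gain and period, the ±20% spread around the Z-N point, and the `seed_population` helper are all illustrative assumptions, not the paper's code.

```python
import random

def ziegler_nichols_pid(ku, tu):
    """Classic closed-loop Z-N rules: ku is the ultimate gain at which the
    loop sustains oscillation, tu is the oscillation period."""
    kp = 0.6 * ku
    ki = 1.2 * ku / tu        # equals kp / (tu / 2)
    kd = 0.075 * ku * tu      # equals kp * (tu / 8)
    return kp, ki, kd

def seed_population(ku, tu, size=20, spread=0.2, seed=0):
    """Hypothetical helper: GA individuals scattered around the Z-N point."""
    rng = random.Random(seed)
    base = ziegler_nichols_pid(ku, tu)
    return [tuple(g * (1 + rng.uniform(-spread, spread)) for g in base)
            for _ in range(size)]

kp, ki, kd = ziegler_nichols_pid(ku=2.0, tu=1.5)
pop = seed_population(ku=2.0, tu=1.5)
```

Seeding the GA near a classically tuned point, rather than uniformly at random, is what lets the improved search converge quickly without losing diversity.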

  13. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    PubMed

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

    CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for the CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large fraction of entries is missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
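
For orientation, the underlying CP factorization itself (without the Bayesian machinery) fits a sum of rank-one terms by alternating least squares. The sketch below is plain CP-ALS with a fixed, manually specified rank on a complete tensor, i.e. exactly the setting the paper's fully Bayesian treatment improves on by inferring the rank and handling missing entries. Dimensions, rank and iteration count are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization: rows indexed by the chosen mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: row (i*J + j) is A[i] * B[j]."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for m in range(3):
            o = [A[k] for k in range(3) if k != m]         # the other factors
            kr = khatri_rao(o[0], o[1])
            gram = (o[0].T @ o[0]) * (o[1].T @ o[1])       # kr.T @ kr, cheaply
            A[m] = unfold(T, m) @ kr @ np.linalg.pinv(gram)
    return A

# Exact rank-2 synthetic tensor, then recovery by ALS.
rng = np.random.default_rng(1)
U = [rng.standard_normal((d, 2)) for d in (6, 5, 4)]
T = np.einsum('ir,jr,kr->ijk', *U)
A = cp_als(T, rank=2)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', *A) - T) / np.linalg.norm(T)
```

Note that the rank must be supplied up front here; the sparsity-inducing priors in the paper are what make that choice automatic.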

  14. Real-time speech gisting for ATC applications

    NASA Astrophysics Data System (ADS)

    Dunkelberger, Kirk A.

    1995-06-01

    Command and control within the ATC environment remains primarily voice-based. Hence, automatic real-time, speaker-independent, continuous speech recognition (CSR) has many obvious applications and implied benefits to the ATC community: automated target tagging, aircraft compliance monitoring, controller training, automatic alarm disabling, display management, and many others. However, while current state-of-the-art CSR systems provide upwards of 98% word accuracy in laboratory environments, recent low-intrusion experiments in ATCT environments demonstrated less than 70% word accuracy in spite of significant investments in recognizer tuning. Acoustic channel irregularities and controller/pilot grammar vagaries impact current CSR algorithms at their weakest points. It will be shown herein, however, that real-time context- and environment-sensitive gisting can provide key command phrase recognition rates of greater than 95% using the same low-intrusion approach. The combination of real-time inexact syntactic pattern recognition techniques and a tight integration of the CSR, gisting, and ATC database accessor system components is the key to these high phrase recognition rates. A system concept for real-time gisting in the ATC context is presented herein. After establishing an application context, the discussion presents a minimal CSR technology context and then focuses on the gisting mechanism, desirable interfaces into the ATCT database environment, and data and control flow within the prototype system. Results of recent tests for a subset of the functionality are presented together with suggestions for further research.

  15. Midbrain-Driven Emotion and Reward Processing in Alcoholism

    PubMed Central

    Müller-Oehring, E M; Jung, Y-C; Sullivan, E V; Hawkes, W C; Pfefferbaum, A; Schulte, T

    2013-01-01

    Alcohol dependence is associated with impaired control over emotionally motivated actions, possibly associated with abnormalities in the frontoparietal executive control network and midbrain nodes of the reward network associated with automatic attention. To identify differences in the neural response to alcohol-related word stimuli, 26 chronic alcoholics (ALC) and 26 healthy controls (CTL) performed an alcohol-emotion Stroop Match-to-Sample task during functional MR imaging. Stroop contrasts were modeled for color-word incongruency (eg, word RED printed in green) and for alcohol (eg, BEER), positive (eg, HAPPY) and negative (eg, MAD) emotional word content relative to congruent word conditions (eg, word RED printed in red). During color-Stroop processing, ALC and CTL showed similar left dorsolateral prefrontal activation, and CTL, but not ALC, deactivated posterior cingulate cortex/cuneus. An interaction revealed a dissociation between alcohol-word and color-word Stroop processing: ALC activated midbrain and parahippocampal regions more than CTL when processing alcohol-word relative to color-word conditions. In ALC, the midbrain region was also invoked by negative emotional Stroop words thereby showing significant overlap of this midbrain activation for alcohol-related and negative emotional processing. Enhanced midbrain activation to alcohol-related words suggests neuroadaptation of dopaminergic midbrain systems. We speculate that such tuning is normally associated with behavioral conditioning to optimize responses but here contributed to automatic bias to alcohol-related stimuli. PMID:23615665

  16. Automated red blood cells extraction from holographic images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-10-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.

  17. Automatic control of the NMB level in general anaesthesia with a switching total system mass control strategy.

    PubMed

    Teixeira, Miguel; Mendonça, Teresa; Rocha, Paula; Rabiço, Rui

    2014-12-01

    This paper presents a model-based switching control strategy to drive the neuromuscular blockade (NMB) level of patients undergoing general anesthesia to a predefined reference. A single-input single-output Wiener system with only two parameters is used to model the effect of two different muscle relaxants, atracurium and rocuronium, and a switching controller is designed based on a bank of total system mass control laws. Each such law is tuned for an individual model from a bank chosen to represent the behavior of the whole population. The control law to be applied at each instant corresponds to the model whose NMB response is closest to the patient's response. Moreover, a scheme to improve the reference tracking quality based on the analysis of the patient's response, as well as a comparison between the switching strategy and the Extended Kalman Filter (EKF) technique, is presented. The results are illustrated by means of several simulations, where switching is shown to provide good results, both in theory and in practice, with the desired reference tracking. The reference tracking improvement technique is able to produce better reference tracking, and it also showed better performance than the EKF. Based on these results, the switching control strategy with a bank of total system mass control laws proved to be robust enough to be used as an automatic control system for the NMB level.
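
The switching rule, simulating every model in the bank against the recorded response and applying the control law of the closest one, can be sketched with toy first-order "patients". The model structure, parameter values and deadbeat-style law below are invented for illustration and are not the paper's two-parameter Wiener model.

```python
def pick_model(bank, u_past, y_measured):
    """Index of the bank model whose simulated output is closest to the data
    in the squared-error sense (the switching criterion)."""
    errors = []
    for simulate, _ in bank:
        y_pred = simulate(u_past)
        errors.append(sum((yp - ym) ** 2 for yp, ym in zip(y_pred, y_measured)))
    return errors.index(min(errors))

# Toy first-order "patients": y[k+1] = a*y[k] + (1-a)*u[k], differing in a.
def make_model(a):
    def simulate(u_past, y0=1.0):
        y, out = y0, []
        for u in u_past:
            y = a * y + (1 - a) * u
            out.append(y)
        return out
    control = lambda y, ref: (ref - a * y) / (1 - a)   # one-step deadbeat law
    return simulate, control

bank = [make_model(a) for a in (0.5, 0.8, 0.95)]
u_past = [0.2, 0.2, 0.2, 0.2]
truth, _ = make_model(0.8)                 # the "patient" matches bank model 1
idx = pick_model(bank, u_past, truth(u_past))
u0 = bank[idx][1](1.0, 0.5)                # one control step toward ref = 0.5
```

Because the matched model's law is exact for the matched patient, one step lands on the reference; with a real patient the bank only approximates the dynamics, which is why the paper adds the tracking-improvement scheme.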

  18. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images

    PubMed Central

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-01-01

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastic objects showed that the proposed system produced an effective solution with a wide FOV, recognition of all objects, and maximal positional and angular errors of 0.32 mm and 0.4° when all of the RGB (red, green and blue) channels were used for illumination and the R-channel image was used for recognition. Though full RGB illumination with grey-scale images also yielded recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665

  19. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images.

    PubMed

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-05-22

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastic objects showed that the proposed system produced an effective solution with a wide FOV, recognition of all objects, and maximal positional and angular errors of 0.32 mm and 0.4° when all of the RGB (red, green and blue) channels were used for illumination and the R-channel image was used for recognition. Though full RGB illumination with grey-scale images also yielded recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition.

  20. Automated red blood cells extraction from holographic images using fully convolutional neural networks

    PubMed Central

    Yi, Faliu; Moon, Inkyu; Javidi, Bahram

    2017-01-01

    In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm. PMID:29082078

  1. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
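
The core of the scale-selection idea, computing scale-normalized Laplacian-of-Gaussian responses over a discrete scale set and keeping the scale that responds most strongly, can be sketched with plain NumPy. The criterion below (largest peak response) is a stand-in for the four criteria the paper proposes; the image, spot size and scale set are invented.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing with a truncated 1-D kernel."""
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, out)

def log_response(img, sigma):
    """Scale-normalized LoG via a 5-point discrete Laplacian."""
    b = gaussian_blur(img, sigma)
    lap = (np.roll(b, 1, 0) + np.roll(b, -1, 0) +
           np.roll(b, 1, 1) + np.roll(b, -1, 1) - 4 * b)
    return sigma**2 * lap

# Four bright Gaussian spots of width 2.1 px; the matched scale should win.
img = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
for cy, cx in [(16, 16), (16, 48), (48, 16), (48, 48)]:
    img += np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 2.1**2))
scales = [1.0, 2.1, 4.0, 8.0]
responses = [abs(log_response(img, s).min()) for s in scales]
best = scales[int(np.argmax(responses))]
```

Bright spots give strongly negative LoG values at their centers, and the sigma^2 normalization makes responses comparable across scales, so the maximum-magnitude response picks out the dominant spot size; thresholding the LoG at that scale is then the segmentation step.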

  2. Automatic anatomy recognition on CT images with pathology

    NASA Astrophysics Data System (ADS)

    Huang, Lidong; Udupa, Jayaram K.; Tong, Yubing; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Body-wide anatomy recognition on CT images with pathology becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem because various diseases result in various abnormalities in object shape and intensity patterns. We previously developed an automatic anatomy recognition (AAR) system [1] whose applicability was demonstrated on near-normal diagnostic CT images of 35 organs in different body regions. The aim of this paper is to investigate strategies for adapting the previous AAR system to diagnostic CT images of patients with various pathologies, as a first step toward automated body-wide disease quantification. The AAR approach consists of three main steps: model building, object recognition, and object delineation. In this paper, within the broader AAR framework, we describe a new strategy for object recognition to handle abnormal images. In the model-building stage, an optimal threshold interval is learned from near-normal training images for each object. This threshold is optimally tuned to the pathological manifestation of the object in the test image. Recognition is performed following a hierarchical representation of the objects. Experimental results for the abdominal body region, based on 50 near-normal images used for model building and 20 abnormal images used for object recognition, show that object localization accuracy within 2 voxels for the liver and spleen and within 3 voxels for the kidneys can be achieved with the new strategy.

  3. Midbrain-driven emotion and reward processing in alcoholism.

    PubMed

    Müller-Oehring, E M; Jung, Y-C; Sullivan, E V; Hawkes, W C; Pfefferbaum, A; Schulte, T

    2013-09-01

    Alcohol dependence is associated with impaired control over emotionally motivated actions, possibly associated with abnormalities in the frontoparietal executive control network and midbrain nodes of the reward network associated with automatic attention. To identify differences in the neural response to alcohol-related word stimuli, 26 chronic alcoholics (ALC) and 26 healthy controls (CTL) performed an alcohol-emotion Stroop Match-to-Sample task during functional MR imaging. Stroop contrasts were modeled for color-word incongruency (eg, word RED printed in green) and for alcohol (eg, BEER), positive (eg, HAPPY) and negative (eg, MAD) emotional word content relative to congruent word conditions (eg, word RED printed in red). During color-Stroop processing, ALC and CTL showed similar left dorsolateral prefrontal activation, and CTL, but not ALC, deactivated posterior cingulate cortex/cuneus. An interaction revealed a dissociation between alcohol-word and color-word Stroop processing: ALC activated midbrain and parahippocampal regions more than CTL when processing alcohol-word relative to color-word conditions. In ALC, the midbrain region was also invoked by negative emotional Stroop words thereby showing significant overlap of this midbrain activation for alcohol-related and negative emotional processing. Enhanced midbrain activation to alcohol-related words suggests neuroadaptation of dopaminergic midbrain systems. We speculate that such tuning is normally associated with behavioral conditioning to optimize responses but here contributed to automatic bias to alcohol-related stimuli.

  4. Protocol for validation of the 4AT, a rapid screening tool for delirium: a multicentre prospective diagnostic test accuracy study.

    PubMed

    Shenkin, Susan D; Fox, Christopher; Godfrey, Mary; Siddiqi, Najma; Goodacre, Steve; Young, John; Anand, Atul; Gray, Alasdair; Smith, Joel; Ryan, Tracy; Hanley, Janet; MacRaild, Allan; Steven, Jill; Black, Polly L; Boyd, Julia; Weir, Christopher J; MacLullich, Alasdair Mj

    2018-02-10

    Delirium is a severe neuropsychiatric syndrome of rapid onset, commonly precipitated by acute illness. It is common in older people in the emergency department (ED) and acute hospital, but greatly under-recognised in these and other settings. Delirium and other forms of cognitive impairment, particularly dementia, commonly coexist. There is a need for a rapid delirium screening tool that can be administered by a range of professional-level healthcare staff to patients with sensory or functional impairments in a busy clinical environment, which also incorporates general cognitive assessment. We developed the 4 'A's Test (4AT) for this purpose. This study's primary objective is to validate the 4AT against a reference standard. Secondary objectives include (1) comparing the 4AT with another widely used test (the Confusion Assessment Method (CAM)); (2) determining if the 4AT is sensitive to general cognitive impairment; (3) assessing if 4AT scores predict outcomes; and (4) performing a health economic analysis. 900 patients aged 70 or over in EDs or acute general medical wards will be recruited in three sites (Edinburgh, Bradford and Sheffield) over 18 months. Each patient will undergo a reference standard delirium assessment and will be randomised to assessment with either the 4AT or the CAM. At 12 weeks, outcomes (length of stay, institutionalisation and mortality) and resource utilisation will be collected by a questionnaire and via the electronic patient record. Ethical approval was granted in Scotland and England. The study involves administering tests commonly used in clinical practice. The main ethical issue is the essential recruitment of people without capacity. Dissemination is planned via publication in high impact journals, presentation at conferences, social media and the website www.the4AT.com. ISRCTN53388093; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved.

  5. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.

  6. 'Requirements for an Automatic Collision Avoidance System'

    NASA Astrophysics Data System (ADS)

    Cooper, R. W.

    I am disturbed by Perkins and Redfern's paper in the May 1996 Journal. The COLREGS have been developed over many years of tinkering and tuning. They are not designed only for educated European masters driving big merchant ships. The users: fishermen, yachtsmen, oarsmen, tugmasters, Rhine bargemasters, Yangtse junkmen, and so on, are from all educational standards and from all the world's cultures. The COLREGS are now well-known. There is such a huge investment of time and effort, of learning by millions of different people, that the prospect of tampering with their fundamentals is horrific, even if it would suit a small class of user belonging to the more advanced countries. Change is painful, and too much change, too fast, kills. Compare the practical decision not to change the side of the road on which we British drive: it might be convenient, but it would cost too much. The same applies to the COLREGS.

  7. Fitting perception in and to cognition.

    PubMed

    Goldstone, Robert L; de Leeuw, Joshua R; Landy, David H

    2015-02-01

    Perceptual modules adapt at evolutionary, lifelong, and moment-to-moment temporal scales to better serve the informational needs of cognizers. Perceptual learning is a powerful way for an individual to become tuned to frequently recurring patterns in its specific local environment that are pertinent to its goals without requiring costly executive control resources to be deployed. Mechanisms like predictive coding, categorical perception, and action-informed vision allow our perceptual systems to interface well with cognition by generating perceptual outputs that are systematically guided by how they will be used. In classic conceptions of perceptual modules, people have access to the modules' outputs but no ability to adjust their internal workings. However, humans routinely and strategically alter their perceptual systems via training regimes that have predictable and specific outcomes. In fact, employing a combination of strategic and automatic devices for adapting perception is one of the most promising approaches to improving cognition. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Self-starting, self-regulating Fourier domain mode locked fiber laser for OCT imaging

    PubMed Central

    Murari, Kartikeya; Mavadia, Jessica; Xi, Jiefeng; Li, Xingde

    2011-01-01

    We present a Fourier domain mode locking (FDML) fiber laser with a feedback loop allowing automatic startup without a priori knowledge of the fundamental drive frequency. The feedback can also regulate the drive frequency making the source robust against environmental variations. A control system samples the energy of the light traversing the FDML cavity and uses a voltage controlled oscillator (VCO) to drive the tunable fiber Fabry-Perot filter in order to maximize that energy. We demonstrate a prototype self-starting, self-regulating FDML operating at 40 kHz with a full width tuning range of 140 nm around 1305 nm and a power output of ~40 mW. The laser starts up with no operator intervention in less than 5 seconds and exhibits improved spectral stability over a conventional FDML source. In OCT applications the source achieved over 120 dB detection sensitivity and an ~8.9-µm axial resolution. PMID:21750775
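    The startup idea of the feedback loop, climbing the cavity-energy curve by nudging the VCO drive frequency, can be sketched in a few lines (a hedged illustration: `measure_energy` stands in for the photodiode/ADC readout, and all names and parameters here are our own, not the authors' control system):

    ```python
    def tune_drive_frequency(measure_energy, f0, step=50.0, shrink=0.5, tol=1.0):
        """Hill-climb the drive frequency to maximise measured cavity energy:
        keep stepping in the improving direction, and on each reversal halve
        the step until it falls below a tolerance (Hz)."""
        f, direction = f0, 1.0
        e = measure_energy(f)
        while step > tol:
            f_next = f + direction * step
            e_next = measure_energy(f_next)
            if e_next > e:
                f, e = f_next, e_next                       # keep climbing this way
            else:
                direction, step = -direction, step * shrink  # reverse and refine
        return f
    ```

    On a synthetic energy curve peaked at the fundamental drive frequency, the loop converges to the peak without any a priori knowledge of it, mirroring the self-starting behaviour described in the abstract.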

  9. Rapid tuning shifts in human auditory cortex enhance speech intelligibility

    PubMed Central

    Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.

    2016-01-01

    Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement' in understanding speech. PMID:27996965

  10. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    PubMed

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software tool providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. http://bigwww.epfl.ch/algorithms/spotcaliper/ zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Multiplexed electronically programmable multimode ionization detector for chromatography

    DOEpatents

    Wise, M.B.; Buchanan, M.V.

    1988-05-19

    Method and apparatus for detecting and differentiating organic compounds based on their electron affinity. An electron capture detector cell (ECD) is operated in a plurality of multiplexed electronically programmable operating modes to alter the detector response during a single sampling cycle to acquire multiple simultaneous chromatograms corresponding to each of the different operating modes. The cell is held at a constant subatmospheric pressure while the electron collection bias voltage applied to the cell is modulated electronically to allow acquisition of multiple chromatograms for a single sample elution from a chromatograph representing three distinctly different response modes. A system is provided which automatically controls the programmed application of bias pulses at different intervals and/or amplitudes to switch the detector from an ionization mode to the electron capture mode and various degrees therebetween to provide an improved means of tuning an ECD for multimode detection and improved specificity. 6 figs.

  12. Multiplexed electronically programmable multimode ionization detector for chromatography

    DOEpatents

    Wise, Marcus B.; Buchanan, Michelle V.

    1989-01-01

    Method and apparatus for detecting and differentiating organic compounds based on their electron affinity. An electron capture detector cell (ECD) is operated in a plurality of multiplexed electronically programmable operating modes to alter the detector response during a single sampling cycle to acquire multiple simultaneous chromatograms corresponding to each of the different operating modes. The cell is held at a constant subatmospheric pressure while the electron collection bias voltage applied to the cell is modulated electronically to allow acquisition of multiple chromatograms for a single sample elution from a chromatograph representing three distinctly different response modes. A system is provided which automatically controls the programmed application of bias pulses at different intervals and/or amplitudes to switch the detector from an ionization mode to the electron capture mode and various degrees therebetween to provide an improved means of tuning an ECD for multimode detection and improved specificity.

  13. High-efficiency stable 213-nm generation for LASIK application

    NASA Astrophysics Data System (ADS)

    Wang, Zhenglin; Alameh, Kamal; Zheng, Rong

    2005-01-01

    213 nm solid-state laser technology provides an alternative to the toxic excimer laser in LASIK systems. In this paper, we report a compact fifth-harmonic generation system that generates high-pulse-energy 213 nm laser light from a Q-switched Nd:YAG laser for LASIK applications, based on a three-stage harmonic generation procedure. A novel crystal housing was specifically designed to hold the three crystals, with each crystal having an independent, precise angular adjustment structure and automatic tuning control. The crystal temperature is maintained at ~130°C to improve harmonic generation stability and crystal operating lifetime. An output pulse energy of 35 mJ is obtained at 213 nm, corresponding to a total conversion efficiency of ~10% from the 1064 nm pump laser. In system verification tests, the 213 nm output power drops less than 5% after 5 million pulse shots and no significant damage appears in the crystals.

  14. Aircraft-mounted crash-activated transmitter device

    NASA Technical Reports Server (NTRS)

    Manoli, R.; Ulrich, B. R. (Inventor)

    1976-01-01

    An aircraft crash location transmitter tuned to transmit on standard emergency frequencies is reported that is shock mounted in a sealed circular case atop the tail of an aircraft by means of a shear pin designed to fail under a G loading associated with a crash situation. The antenna for the transmitter is a metallic spring blade coiled like a spiral spring around the outside of the circular case. A battery within the case for powering the transmitter is kept trickle charged from the electrical system of the aircraft through a break away connector on the case. When a crash occurs, the resultant ejection of the case from the tail due to a failure of the shear pin releases the free end of the antenna which automatically uncoils. The accompanying separation of the connector effects closing of the transmitter key and results in commencement of transmission.

  15. Constructing Temporally Extended Actions through Incremental Community Detection

    PubMed Central

    Li, Ge

    2018-01-01

    Hierarchical reinforcement learning works on temporally extended actions or skills to facilitate learning. How to automatically form such abstraction is challenging, and many efforts tackle this issue in the options framework. While various approaches exist to construct options from different perspectives, few of them concentrate on options' adaptability during learning. This paper presents an algorithm to create options and enhance their quality online. Both aspects operate on detected communities of the learning environment's state transition graph. We first construct options from initial samples as the basis of online learning. Then a rule-based community revision algorithm is proposed to update graph partitions, based on which existing options can be continuously tuned. Experimental results in two problems indicate that options from initial samples may perform poorly in more complex environments, and our presented strategy can effectively improve options and get better results compared with flat reinforcement learning. PMID:29849543
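    The community-detection step on the state transition graph can be illustrated with a minimal spectral split (our stand-in: the paper's detector and rule-based revision are more elaborate, and `spectral_communities` / `doorway_states` are illustrative names):

    ```python
    import numpy as np

    def spectral_communities(A):
        """Split a state-transition graph into two communities using the
        Fiedler vector (eigenvector of the 2nd-smallest Laplacian eigenvalue).
        A is a symmetric 0/1 adjacency matrix; returns a boolean assignment."""
        L = np.diag(A.sum(axis=1)) - A
        vals, vecs = np.linalg.eigh(L)
        return vecs[:, 1] >= 0

    def doorway_states(A, comm):
        """States with an edge into the other community: natural subgoals for
        temporally extended actions (options) in the hierarchical RL setting."""
        n = len(A)
        return [i for i in range(n)
                if any(A[i, j] and comm[i] != comm[j] for j in range(n))]
    ```

    On a graph of two 4-cliques joined by one edge, the split recovers the two cliques and the two bridging states emerge as the option subgoals.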

  16. Enhanced Higgs boson to τ(+)τ(-) search with deep learning.

    PubMed

    Baldi, P; Sadowski, P; Whiteson, D

    2015-03-20

    The Higgs boson is thought to provide the interaction that imparts mass to the fundamental fermions, but while measurements at the Large Hadron Collider (LHC) are consistent with this hypothesis, current analysis techniques lack the statistical power to cross the traditional 5σ significance barrier without more data. Deep learning techniques have the potential to increase the statistical power of this analysis by automatically learning complex, high-level data representations. In this work, deep neural networks are used to detect the decay of the Higgs boson to a pair of tau leptons. A Bayesian optimization algorithm is used to tune the network architecture and training algorithm hyperparameters, resulting in a deep network of eight nonlinear processing layers that improves upon the performance of shallow classifiers even without the use of features specifically engineered by physicists for this application. The improvement in discovery significance is equivalent to an increase in the accumulated data set of 25%.
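    The hyperparameter-tuning interface the abstract describes can be sketched as below. Note the hedge: the paper uses Bayesian optimisation, while this sketch substitutes plain random search (a common baseline with the same interface); `random_search`, the space, and the objective are all illustrative:

    ```python
    import random

    def random_search(objective, space, n_trials=30, seed=1):
        """Sample configurations from `space` (a dict of candidate-value lists),
        score each with `objective` (e.g. validation AUC of the trained network),
        and return the best configuration found."""
        rng = random.Random(seed)
        best_cfg, best_score = None, float("-inf")
        for _ in range(n_trials):
            cfg = {k: rng.choice(v) for k, v in space.items()}
            score = objective(cfg)
            if score > best_score:
                best_cfg, best_score = cfg, score
        return best_cfg, best_score
    ```

    Bayesian optimisation improves on this by fitting a surrogate model to past trials and proposing promising configurations, which matters when each trial is an expensive deep-network training run.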

  17. Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features.

    PubMed

    Tervaniemi, Mari; Janhunen, Lauri; Kruck, Stefanie; Putkinen, Vesa; Huotilainen, Minna

    2015-01-01

    When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content.

  18. Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features

    PubMed Central

    Tervaniemi, Mari; Janhunen, Lauri; Kruck, Stefanie; Putkinen, Vesa; Huotilainen, Minna

    2016-01-01

    When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content. PMID:26779055

  19. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment.

    PubMed

    Mezgec, Simon; Koroušić Seljak, Barbara

    2017-06-27

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.
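    The reported top-five accuracy metric is easy to compute from raw class scores (a generic sketch; `top_k_accuracy` is our illustrative helper, not part of NutriNet):

    ```python
    import numpy as np

    def top_k_accuracy(scores, labels, k=5):
        """Fraction of samples whose true class is among the k highest-scoring
        classes. `scores` is (n_samples, n_classes); `labels` is the true class
        index per sample."""
        topk = np.argsort(scores, axis=1)[:, -k:]              # k best classes per row
        hits = (topk == np.asarray(labels)[:, None]).any(axis=1)
        return hits.mean()
    ```

    Top-k is the standard relaxation of accuracy for large class counts: a 520-class food recogniser gets credit when the true dish appears anywhere in its five best guesses.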

  20. Hyperspectral microscopic analysis of normal, benign and carcinoma microarray tissue sections

    NASA Astrophysics Data System (ADS)

    Maggioni, Mauro; Davis, Gustave L.; Warner, Frederick J.; Geshwind, Frank B.; Coppi, Andreas C.; DeVerse, Richard A.; Coifman, Ronald R.

    2006-02-01

    We apply a unique micro-optoelectromechanical tuned light source and new algorithms to the hyperspectral microscopic analysis of human colon biopsies. The tuned light prototype (Plain Sight Systems Inc.) transmits any combination of light frequencies in the range 440-700 nm, trans-illuminating H and E stained tissue sections of normal (N), benign adenoma (B) and malignant carcinoma (M) colon biopsies through a Nikon Biophot microscope. Hyperspectral photomicrographs, randomly collected at 400X magnification, are obtained with a CCD camera (Sensovation) from 59 different patient biopsies (20 N, 19 B, 20 M) mounted as a microarray on a single glass slide. The spectra of each pixel are normalized and analyzed to discriminate among tissue features: gland nuclei, gland cytoplasm and lamina propria/lumens. Spectral features permit the automatic extraction of 3298 nuclei with classification as N, B or M. When nuclei are extracted from each of the 59 biopsies, the average classification accuracy among N, B and M nuclei is 97.1%; classification of the biopsies, based on the average nuclei classification, is 100%. However, when the nuclei are extracted from a subset of biopsies and the prediction is made on nuclei in the remaining biopsies, there is a marked decrement in performance to 60% across the 3 classes. Similarly, the biopsy classification drops to 54%. In spite of these classification differences, which we believe are due to instrument and biopsy normalization issues, hyperspectral analysis has the potential to achieve the diagnostic efficiency needed for objective microscopic diagnosis.

  1. An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues

    PubMed Central

    Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis

    2011-01-01

    Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
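    A genetic algorithm tuning a segmentation parameter against a hand-traced template can be sketched as below. This is a deliberately reduced illustration: the paper evolves several segmentation parameters, while our `ga_tune_threshold` evolves a single grey-level threshold, with pixel-wise agreement with the template as the fitness.

    ```python
    import random

    def ga_tune_threshold(image, template, pop_size=20, gens=30, seed=0):
        """Evolve one binarisation threshold so that (pixel >= t) best matches
        a hand-traced boolean template. `image` is a flat list of grey levels."""
        rng = random.Random(seed)

        def fitness(t):
            agree = sum((px >= t) == tpl for px, tpl in zip(image, template))
            return agree / len(image)

        pop = [rng.uniform(0, 255) for _ in range(pop_size)]
        for _ in range(gens):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[: pop_size // 2]              # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                child = (a + b) / 2 + rng.gauss(0, 5)      # crossover + mutation
                children.append(min(255.0, max(0.0, child)))
            pop = parents + children                       # elitist replacement
        return max(pop, key=fitness)
    ```

    Because the best individuals always survive, the best fitness is non-decreasing across generations, which is why elitist GAs are a safe default for this kind of parameter tuning.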

  2. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2017-01-01

    Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. Saliency detector is employed to automatically identify regions of interest like tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features termed as neural codes from different CNN layers are comprehensively studied to identify most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are found to be the most suitable for representing medical images. The neural codes extracted from the entire image and salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied on the SiNC descriptor to acquire short binary codes for allowing efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches. PMID:28771497
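    The final hashing step names locality-sensitive hashing; random-hyperplane LSH is one standard instance of it (our sketch under that assumption; the paper may use a different variant, and the function names are illustrative):

    ```python
    import numpy as np

    def lsh_codes(features, n_bits=16, seed=0):
        """Random-hyperplane LSH: project each descriptor (e.g. a SiNC-style
        vector) onto random directions and keep the signs as a short binary
        code. Nearby descriptors share most bits, enabling fast retrieval."""
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((features.shape[1], n_bits))
        return (features @ planes > 0).astype(np.uint8)

    def hamming(a, b):
        """Hamming distance between two binary codes."""
        return int(np.count_nonzero(a != b))
    ```

    Retrieval then ranks the collection by Hamming distance to the query code, which is far cheaper than comparing full floating-point descriptors at scale.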

  3. LCC: Light Curves Classifier

    NASA Astrophysics Data System (ADS)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curves Classifier can also be used for simple downloading of light curves and all available information on queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command-line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. The natural separation of the data can be visualized by unsupervised clustering.

  4. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
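
    A minimal sketch of self-adaptive differential evolution in the jDE style, where each individual carries its own mutation factor F and crossover rate CR instead of preset fine-tuned values; the SAMODE algorithm itself is more elaborate (and multiobjective), so this single-objective sphere-function test is only illustrative.

```python
import random

def self_adaptive_de(obj, bounds, pop_size=20, gens=100, seed=1):
    """jDE-style self-adaptive DE: F and CR are attached to individuals,
    occasionally resampled, and inherited only when they produce a
    successful trial, removing the need for manual parameter tuning."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    F = [rng.uniform(0.1, 0.9) for _ in range(pop_size)]
    CR = [rng.random() for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Self-adaptation: occasionally resample this individual's F and CR.
            Fi = rng.uniform(0.1, 0.9) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [pop[a][k] + Fi * (pop[b][k] - pop[c][k])
                     if (rng.random() < CRi or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            f_trial = obj(trial)
            if f_trial <= fit[i]:   # successful control parameters survive
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Sphere function as a stand-in for the network cost/resilience objectives.
x_best, f_best = self_adaptive_de(lambda x: sum(v * v for v in x),
                                  bounds=[(-5.0, 5.0)] * 3)
```

    The key design choice, inheriting control parameters only on successful replacements, is what lets the algorithm evolve good settings rather than requiring the presetting of fine-tuned values.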

  5. Modifying the Genetic Regulation of Bone and Cartilage Cells and Associated Tissue by EMF Stimulation Fields and Uses Thereof

    NASA Technical Reports Server (NTRS)

    Goodwin, Thomas J. (Inventor); Shackelford, Linda C. (Inventor)

    2014-01-01

    An apparatus and method to modify the genetic regulation of mammalian tissue, bone, or any combination. The method may be comprised of the steps of tuning at least one predetermined profile associated with at least one time-varying stimulation field thereby resulting in at least one tuned time-varying stimulation field comprised of at least one tuned predetermined profile, wherein said at least one tuned predetermined profile is comprised of a plurality of tuned predetermined figures of merit and is controllable through at least one of said plurality of tuned predetermined figures of merit, wherein said plurality of predetermined tuned figures of merit is comprised of a tuned B-Field magnitude, tuned rising slew rate, tuned rise time, tuned falling slew rate, tuned fall time, tuned frequency, tuned wavelength, and tuned duty cycle; and exposing mammalian chondrocytes, osteoblasts, osteocytes, osteoclasts, nucleus pulposus, associated tissue, or any combination to said at least one tuned time-varying stimulation field comprised of said at least one tuned predetermined profile for a predetermined tuned exposure time or plurality of tuned exposure time sequences.

  6. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

    The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. 
The component steps are described in the RWDMES user's guide and include: converting time domain data to waterfall PSDs (power spectral densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, the Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
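
    The harmonic models "defined as specified functions of frequency, typically speed-squared" admit a closed-form least-squares fit when a single coefficient is sought; the wheel data below are synthetic, and the single-coefficient law is a simplification of the RWDMES harmonic models.

```python
def fit_speed_squared(speeds_hz, amplitudes):
    """Closed-form least-squares fit of amplitude ~ C * speed**2,
    the typical speed-squared harmonic law for an imbalance harmonic."""
    num = sum(a * s * s for s, a in zip(speeds_hz, amplitudes))
    den = sum((s * s) ** 2 for s in speeds_hz)
    return num / den

# Synthetic wheel data with an assumed imbalance coefficient C = 2.0e-4.
speeds = [10.0, 20.0, 30.0, 40.0]
forces = [2.0e-4 * s * s for s in speeds]
C = fit_speed_squared(speeds, forces)
```

    In the real tool the harmonic amplitudes come from order analysis of waterfall PSDs, and the combined harmonic/structural model is tuned with nonlinear least squares rather than this one-parameter ratio.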

  7. WE-AB-209-05: Development of an Ultra-Fast High Quality Whole Breast Radiotherapy Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Y; Li, T; Yoo, S

    2016-06-15

    Purpose: To enable near-real-time (<20 sec) and interactive planning without compromising quality for whole breast RT treatment planning using tangential fields. Methods: Whole breast RT plans from 20 patients treated with single energy (SE, 6MV, 10 patients) or mixed energy (ME, 6/15MV, 10 patients) were randomly selected for model training. An additional 20 cases were used as the validation cohort. The planning process for a new case consists of three fully automated steps: 1. Energy Selection. A classification model automatically selects the energy level. To build the energy selection model, principal component analysis (PCA) was applied to the digitally reconstructed radiographs (DRRs) of training cases to extract the anatomy-energy relationship. 2. Fluence Estimation. Once the energy is selected, a random forest (RF) model generates the initial fluence. This model summarizes the relationship between shape-based features of the patient anatomy and the output fluence. 3. Fluence Fine-tuning. This step balances the overall dose contribution throughout the whole breast tissue by automatically selecting reference points and applying centrality correction. Fine-tuning works at the beamlet level until the dose distribution meets clinical objectives. Prior to finalization, physicians can also make patient-specific trade-offs between target coverage and high-dose volumes. The proposed method was validated by comparing auto-plans with manually generated clinical plans using the Wilcoxon signed-rank test. Results: In 19/20 cases the model suggested the same energy combination as the clinical plans. The target volume coverage V100% was 78.1±4.7% for auto-plans and 79.3±4.8% for clinical plans (p=0.12). Volumes receiving 105% of the prescription dose were 69.2±78.0 cc for auto-plans compared to 83.9±87.2 cc for clinical plans (p=0.13). The mean V10Gy and V20Gy of the ipsilateral lung were 24.4±6.7% and 18.6±6.0% for auto-plans and 24.6±6.7% and 18.9±6.1% for clinical plans (p=0.04, <0.001).
Total computational time for auto-plans was <20 s. Conclusion: We developed an automated method that generates breast radiotherapy plans with accurate energy selection, similar target volume coverage, reduced hotspot volumes, and a significant reduction in planning time, allowing for near-real-time planning.
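
    A toy sketch of the energy-selection idea: project DRR-like feature vectors onto the first principal component (found here by power iteration) and pick the energy of the nearer class centroid. The feature values and the nearest-centroid rule are assumptions for illustration, not the paper's actual classifier.

```python
import math

def power_iteration_pc(rows, iters=100):
    """First principal component via power iteration on X^T X,
    a stand-in for the PCA step applied to the training DRRs."""
    dim = len(rows[0])
    mean = [sum(r[k] for r in rows) / len(rows) for k in range(dim)]
    centered = [[r[k] - mean[k] for k in range(dim)] for r in rows]
    v = [1.0] * dim
    for _ in range(iters):
        # w = (X^T X) v, computed as X^T (X v) without forming the covariance
        xv = [sum(c[k] * v[k] for k in range(dim)) for c in centered]
        w = [sum(xv[i] * centered[i][k] for i in range(len(rows)))
             for k in range(dim)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return mean, v

def project(x, mean, v):
    return sum((xi - mi) * vi for xi, mi, vi in zip(x, mean, v))

# Toy "DRR" feature rows: larger values (thicker anatomy) -> mixed energy.
single = [[1.0, 1.1, 0.9], [1.2, 1.0, 1.1]]
mixed = [[3.0, 3.2, 2.9], [3.1, 2.9, 3.0]]
mean, pc = power_iteration_pc(single + mixed)
cen_s = sum(project(x, mean, pc) for x in single) / len(single)
cen_m = sum(project(x, mean, pc) for x in mixed) / len(mixed)

def select_energy(x):
    p = project(x, mean, pc)
    return "6MV" if abs(p - cen_s) < abs(p - cen_m) else "6/15MV"
```

    The point of the PCA step is dimensionality reduction: the classifier then only has to separate cases along a handful of anatomy-driven directions rather than in raw DRR pixel space.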

  8. A Smart Toy to Enhance the Decision-Making Process at Children’s Psychomotor Delay Screenings: A Pilot Study

    PubMed Central

    2017-01-01

    Background EDUCERE (“Ubiquitous Detection Ecosystem to Care and Early Stimulation for Children with Developmental Disorders”) is an ecosystem for ubiquitous detection, care, and early stimulation of children with developmental disorders. The objectives of this Spanish government-funded research and development project are to investigate, develop, and evaluate innovative solutions to detect changes in psychomotor development through the natural interaction of children with toys and everyday objects, and perform stimulation and early attention activities in real environments such as home and school. Thirty multidisciplinary professionals and three nursery schools worked on the EDUCERE project between 2014 and 2017 and obtained satisfactory results. Related to EDUCERE, we found studies based on networks of connected smart objects and on the interaction between toys and social networks. Objective This research includes the design, implementation, and validation of an EDUCERE smart toy aimed at automatically detecting delays in psychomotor development. The results from initial tests led to enhancing the effectiveness of the original design and deployment. The smart toy, based on stackable cubes, has a data collector module and a smart system for detection of developmental delays, called the EDUCERE developmental delay screening system (DDSS). Methods The pilot study involved 65 toddlers aged between 23 and 37 months (mean=29.02, SD 3.81) who built a tower with five stackable cubes, designed by following the EDUCERE smart toy model. As toddlers made the tower, sensors in the cubes sent data to a collector module through a wireless connection. All trials were video-recorded for further analysis by child development experts. After watching the videos, experts scored the performance of the trials to compare and fine-tune the interpretation of the data automatically gathered by the toy-embedded sensors.
Results Judges were highly reliable in an interrater agreement analysis (intraclass correlation 0.961, 95% CI 0.937-0.967), suggesting that the process was successful to separate different levels of performance. A factor analysis of collected data showed that three factors, trembling, speed, and accuracy, accounted for 76.79% of the total variance, but only two of them were predictors of performance in a regression analysis: accuracy (P=.001) and speed (P=.002). The other factor, trembling (P=.79), did not have a significant effect on this dependent variable. Conclusions The EDUCERE DDSS is ready to use the regression equation obtained for the dependent variable “performance” as an algorithm for the automatic detection of psychomotor developmental delays. The results of the factor analysis are valuable to simplify the design of the smart toy by taking into account only the significant variables in the collector module. The fine-tuning of the toy process module will be carried out by following the specifications resulting from the analysis of the data to improve the efficiency and effectiveness of the product. PMID:28526666

  9. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. 
Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273
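
    The systematic-error hypothesis can be caricatured in a few lines: learn, per position, how often the host segmentation disagrees with the manual reference across training subjects, and flip positions that are wrong more often than not. The real wrapper method learns intensity, spatial, and contextual patterns rather than the per-voxel error rates assumed here.

```python
def learn_systematic_errors(host_segs, manual_segs, threshold=0.5):
    """Mark positions where the host segmentation disagrees with the
    manual reference in more than `threshold` of training subjects;
    a drastically simplified stand-in for the wrapper's error classifier."""
    n = len(host_segs)
    length = len(host_segs[0])
    flip = []
    for k in range(length):
        errs = sum(h[k] != m[k] for h, m in zip(host_segs, manual_segs))
        flip.append(errs / n > threshold)
    return flip

def correct(seg, flip):
    """Apply the learned correction to a new host segmentation."""
    return [1 - v if f else v for v, f in zip(seg, flip)]

# Three training subjects; the host method systematically mislabels voxel 2.
host = [[1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 0, 1]]
manual = [[1, 0, 1, 1], [1, 1, 1, 0], [0, 0, 1, 1]]
flip = learn_systematic_errors(host, manual)
fixed = correct([1, 0, 0, 0], flip)
```

    Errors that are random rather than systematic average out below the threshold and are left alone, which is why the approach specifically targets the consistent, subject-to-subject failure modes of the host method.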

  10. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs.

    PubMed

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-21

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tuning planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures.
The re-optimization process takes about 30 s using our in-house optimization engine.
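
    The two nested loops can be sketched on a toy two-voxel, two-beamlet problem: the inner loop is a weighted quadratic fluence optimization (here plain projected gradient descent rather than the authors' GPU engine), and the outer loop inflates the weights of voxels whose dose still deviates from the reference plan. The dose matrix, weight-update rule, and step sizes are all illustrative assumptions.

```python
def inner_fluence_opt(A, target, w, iters=2000, lr=0.01):
    """Inner loop: projected gradient descent on the weighted quadratic
    objective sum_i w_i * (dose_i - target_i)^2 over fluences x >= 0."""
    n_vox, n_b = len(A), len(A[0])
    x = [0.0] * n_b
    for _ in range(iters):
        dose = [sum(A[i][j] * x[j] for j in range(n_b)) for i in range(n_vox)]
        grad = [2.0 * sum(w[i] * (dose[i] - target[i]) * A[i][j]
                          for i in range(n_vox)) for j in range(n_b)]
        x = [max(0.0, xj - lr * g) for xj, g in zip(x, grad)]
    return x

def reoptimize(A, target, ref_dose, outer_iters=5):
    """Outer loop: voxels whose dose still deviates from the reference-plan
    dose get their weights increased, then the inner loop is re-run."""
    w = [1.0] * len(A)
    x = inner_fluence_opt(A, target, w)
    for _ in range(outer_iters):
        dose = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        w = [wi * (1.0 + abs(d - r)) for wi, d, r in zip(w, dose, ref_dose)]
        x = inner_fluence_opt(A, target, w)
    return x

A = [[1.0, 0.2], [0.1, 1.0]]   # toy 2-voxel, 2-beamlet dose matrix
x = reoptimize(A, target=[60.0, 20.0], ref_dose=[60.0, 20.0])
dose = [sum(a * xj for a, xj in zip(row, x)) for row in A]
```

    In the real algorithm the outer loop compares DVH curves rather than per-voxel doses, but the mechanism is the same: reweighting steers the fixed-weight inner optimization toward the reference plan.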

  11. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, N; Shen, C; Tian, Z

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. The mean energy, energy spread, and spatial spread are model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions for a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies of 83, 147, and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20%, and 0.25%.
Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
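
    A simplified sketch of the superposition-and-fit idea: a "measured" depth dose is synthesized as a Gaussian-weighted sum of toy mono-energetic curves, and a grid search (standing in for the conjugate-gradient/fitting algorithm) recovers the mean energy and energy spread. The range-energy relation and all numbers are invented for illustration.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def ideal_depth_dose(energy, depths):
    """Toy mono-energetic depth dose: a Bragg-like peak whose depth grows
    with energy; stands in for the precomputed MC ideal-beam doses."""
    peak = 0.2 * energy   # hypothetical range-energy relation
    return [gauss(d, peak, 1.0) for d in depths]

def beam_dose(mu_E, sigma_E, energies, depths):
    """Real-beam dose as a Gaussian-weighted superposition of ideal beams."""
    ws = [gauss(E, mu_E, sigma_E) for E in energies]
    total = sum(ws) or 1.0
    ideal = [ideal_depth_dose(E, depths) for E in energies]
    return [sum(w * ide[i] for w, ide in zip(ws, ideal)) / total
            for i in range(len(depths))]

def commission(measured, energies, depths):
    """Grid-search stand-in for the optimization: pick the (mean energy,
    energy spread) whose superposed dose best matches the measurement."""
    best, best_err = None, float("inf")
    for mu in range(140, 156):
        for sig10 in range(5, 31):
            sig = sig10 / 10.0
            calc = beam_dose(float(mu), sig, energies, depths)
            err = sum((c - m) ** 2 for c, m in zip(calc, measured))
            if err < best_err:
                best, best_err = (float(mu), sig), err
    return best

depths = [i * 0.5 for i in range(80)]              # 0 .. 39.5 cm
energies = [float(E) for E in range(130, 171)]
measured = beam_dose(147.0, 1.5, energies, depths)  # "ground truth" beam
mu_fit, sigma_fit = commission(measured, energies, depths)
```

    The structural point survives the simplification: once ideal-beam doses are precomputed by MC, commissioning reduces to fitting the Gaussian weighting parameters, with no further MC runs in the loop.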

  12. User-customized brain computer interfaces using Bayesian optimization

    NASA Astrophysics Data System (ADS)

    Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali

    2016-04-01

    Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject’s brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
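
    The final aggregation step, combining classifiers trained on different Bayesian-proposed hyper-parameter sets, can be sketched as a simple majority vote; the class labels and classifier outputs below are hypothetical.

```python
from collections import Counter

def aggregate_predictions(classifier_outputs):
    """Majority vote over classifiers trained with different
    hyper-parameter settings, one prediction list per classifier."""
    n_samples = len(classifier_outputs[0])
    final = []
    for i in range(n_samples):
        votes = Counter(out[i] for out in classifier_outputs)
        final.append(votes.most_common(1)[0][0])
    return final

# Three hypothetical classifiers built from three Bayesian-proposed
# hyper-parameter sets, predicting left/right motor imagery for 4 trials.
outs = [["L", "R", "R", "L"],
        ["L", "L", "R", "L"],
        ["R", "R", "R", "L"]]
final = aggregate_predictions(outs)
```

    Aggregating over several good hyper-parameter settings hedges against the optimizer committing to a single setting that happens to overfit one subject's calibration data.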

  13. Sea slicks classification by synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Trivero, P.; Biamino, W.; Borasi, M.; Cavagnero, M.; Di Matteo, L.; Loreggia, D.

    2014-10-01

    An automatic system called OSAD (Oil Spill Automatic Detector), able to discriminate oil spills (OS) from similar features (look-alikes, LA) in SAR images, was developed some years ago. Slick detection is based on a probabilistic method (tuned with a training dataset defined by an expert photointerpreter) evaluating radiometric and geometric characteristics of the areas of interest. OSAD also provides the wind field by analyzing SAR images. With the aim of completely classifying sea slicks, a new procedure has recently been added. Dark areas are identified on the image and the wind is computed inside and outside each area: if the outside wind value is less than a threshold of 2 m/s, it is impossible to evaluate whether the damping is due to a slick. On the other hand, if the outside wind is higher than the threshold and the difference between inside and outside the dark area is lower than 1 m/s, we consider this reduction a wind fluctuation. A wind difference higher than 1 m/s is interpreted as a damping effect due to a slick; the remaining dark spots are therefore split into OS and LA by OSAD. LA are then analyzed and separated into "biogenic" or "anthropogenic" slicks following an analogous procedure. The system's performance has been tested on C-band SAR images, in particular on images with spatial resolution high enough to examine details near the coastline; the results confirm the effectiveness of the algorithm in classifying the four types of signatures usually found on the sea surface.
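
    The wind-based decision rules quoted above translate directly into code; the function name and return labels below are illustrative, and the final OS/LA split is performed by OSAD's probabilistic classifier rather than these thresholds.

```python
def classify_dark_area(wind_outside, wind_inside):
    """Decision rules from the text: below 2 m/s outside wind nothing can
    be concluded; an inside/outside difference under 1 m/s is treated as
    wind fluctuation; a larger difference is damping by a slick (to be
    further split into oil spill / look-alike by OSAD)."""
    WIND_THRESHOLD = 2.0      # m/s, minimum outside wind for a decision
    DAMPING_THRESHOLD = 1.0   # m/s, minimum wind reduction for a slick
    if wind_outside < WIND_THRESHOLD:
        return "undetermined"
    if wind_outside - wind_inside < DAMPING_THRESHOLD:
        return "wind fluctuation"
    return "slick candidate"

print(classify_dark_area(1.5, 1.0))   # → undetermined
```

    The low-wind guard matters because at very low wind speeds the whole sea surface appears dark in SAR, so damping by a surface film cannot be distinguished from the calm background.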

  15. Clustering-based Feature Learning on Variable Stars

    NASA Astrophysics Data System (ADS)

    Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos

    2016-04-01

    The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.
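
    A compact sketch of the pipeline: slide a window over each lightcurve, cluster the pooled subsequences (here with a deterministic farthest-point-initialized k-means), and re-represent each lightcurve by its distance to each learned pattern. The window width, toy lightcurves, and minimum-distance encoding are assumptions standing in for the paper's clustering and transformation details.

```python
import math

def subsequences(series, width):
    """All sliding-window subsequences of a time series."""
    return [series[i:i + width] for i in range(len(series) - width + 1)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    # Deterministic farthest-point initialization, then Lloyd iterations.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist(p, c) for c in centers)))
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda j: dist(p, centers[j]))].append(p)
        centers = [[sum(col) / len(b) for col in zip(*b)] if b else centers[j]
                   for j, b in enumerate(buckets)]
    return centers

def transform(series, centers, width):
    """New representation: minimum distance of the lightcurve's
    subsequences to each learned local pattern."""
    subs = subsequences(series, width)
    return [min(dist(s, c) for s in subs) for c in centers]

# Unlabeled lightcurves: a flat one and one with a transit-like dip.
flat = [1.0] * 12
dip = [1.0] * 4 + [0.2, 0.1, 0.2] + [1.0] * 5
pool = subsequences(flat, 3) + subsequences(dip, 3)
centers = kmeans(pool, k=2)
features_flat = transform(flat, centers, 3)
features_dip = transform(dip, centers, 3)
```

    Because clustering needs no labels, the learned patterns can come from the full survey, which is exactly how the method sidesteps the bias of using only labeled lightcurves.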

  16. Landslide Inventory Mapping from Bitemporal 10 m SENTINEL-2 Images Using Change Detection Based Markov Random Field

    NASA Astrophysics Data System (ADS)

    Qin, Y.; Lu, P.; Li, Z.

    2018-04-01

    Landslide inventory mapping is essential for hazard assessment and mitigation. In most previous studies, landslide mapping was achieved by visual interpretation of aerial photos and remote sensing images. However, such methods are labor-intensive and time-consuming, especially over large areas. Although a number of semi-automatic landslide mapping methods have been proposed over the past few years, limitations remain in terms of their applicability to different study areas and data, and there is considerable room for improvement in accuracy and degree of automation. For these reasons, we developed a change detection-based Markov Random Field (CDMRF) method for landslide inventory mapping. The proposed method mainly includes two steps: 1) change detection-based multi-thresholding for training sample generation and 2) MRF for landslide inventory mapping. Compared with previous methods, the proposed method has three advantages: 1) it combines multiple image difference techniques with a multi-threshold method to generate reliable training samples; 2) it takes the spectral characteristics of landslides into account; and 3) it is highly automatic, with little parameter tuning. The proposed method was applied to regional landslide mapping from 10 m Sentinel-2 images in Western China. Results corroborated the effectiveness and applicability of the proposed method, especially its capability for rapid landslide mapping. Some directions for future research are offered. This study is, to our knowledge, the first attempt to map landslides from free, medium-resolution satellite (i.e., Sentinel-2) images in China.
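
    The multi-threshold training-sample step can be sketched with simple statistics on a bitemporal difference image; the two threshold multipliers are invented, and the real method combines several image-differencing techniques before the MRF stage resolves the remaining pixels.

```python
import statistics

def training_samples_from_change(diff, k_pos=1.5, k_neg=0.5):
    """Multi-threshold sample generation: pixels whose bitemporal
    difference exceeds mean + k_pos*std become landslide samples, pixels
    below mean + k_neg*std become background, and the rest stay unlabeled
    (to be resolved by the MRF step). Threshold multipliers are
    illustrative, not the paper's values."""
    mu = statistics.mean(diff)
    sd = statistics.pstdev(diff)
    hi, lo = mu + k_pos * sd, mu + k_neg * sd
    return ["landslide" if d > hi else "background" if d < lo else "unlabeled"
            for d in diff]

# Bitemporal brightness differences for nine pixels; landslides brighten.
diff = [0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 2.5, 2.4, 0.2]
labels = training_samples_from_change(diff)
```

    Keeping a conservative gap between the two thresholds is what makes the automatically generated training samples reliable: only confidently changed and confidently unchanged pixels seed the MRF.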

  17. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.

    PubMed

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  18. Using New Media to Reach Broad Audiences

    NASA Astrophysics Data System (ADS)

    Gay, P. L.

    2008-06-01

    The International Year of Astronomy New Media Working Group (IYA NMWG) has a singular mission: To flood the Internet with ways to learn about astronomy, interact with astronomers and astronomy content, and socially network with astronomy. Within each of these areas, we seek to build lasting programs and partnerships that will continue beyond 2009. Our weapon of choice is New Media. It is often easiest to define New Media by what it is not. Television, radio, print and their online redistribution of content are not New Media. Many forms of New Media start as user-provided content and content infrastructures that answer an individual's creative whim in a way that is adopted by a broader audience. Classic examples include blogs and podcasts. This media is typically distributed through content-specific websites and RSS feeds, which allow syndication. RSS aggregators (iTunes has audio and video aggregation abilities) allow subscribers to have content delivered to their computers automatically when they connect to the Internet. RSS technology is also being used in such creative ways as allowing automatically updating Google maps that show the location of someone with an intelligent GPS system, and in sharing 100-word microblogs from anyone (Twitters) through a single feed. In this poster, we outline how the IYA NMWG plans to use New Media to reach target primary audiences of astronomy enthusiasts, image lovers, and amateur astronomers, as well as secondary audiences, including: science fiction fans, online gamers, and skeptics.

  19. Intelligent Machine Learning Approaches for Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Sathyan, Anoop

    Machine Learning is a type of artificial intelligence that provides machines or networks with the ability to learn from data without the need to explicitly program them. There are different kinds of machine learning techniques. This thesis discusses the applications of two of these approaches: Genetic Fuzzy Logic and Convolutional Neural Networks (CNN). Fuzzy Logic System (FLS) is a powerful tool that can be used for a wide variety of applications. FLS is a universal approximator that reduces the need for complex mathematics and replaces it with expert knowledge of the system to produce an input-output mapping using If-Then rules. The expert knowledge of a system can help in obtaining the parameters for small-scale FLSs, but for larger networks we will need to use sophisticated approaches that can automatically train the network to meet the design requirements. This is where Genetic Algorithms (GA) and EVE come into the picture. Both GA and EVE can tune the FLS parameters to minimize a cost function that is designed to meet the requirements of the specific problem. EVE is an artificial intelligence developed by Psibernetix that is trained to tune large-scale FLSs. The parameters of an FLS can include the membership functions and rulebase of the inherent Fuzzy Inference Systems (FISs). The main issue with using a genetic fuzzy system (GFS) is that the number of parameters in an FIS increases exponentially with the number of inputs, making the parameters increasingly harder to tune. To mitigate this issue, the FLSs discussed in this thesis consist of 2-input-1-output FISs in cascade (Chapter 4) or as a layer of parallel FISs (Chapter 7). We have obtained extremely good results using GFS for different applications at a reduced computational cost compared to other algorithms that are commonly used to solve the corresponding problems.
In this thesis, GFSs have been designed for controlling an inverted double pendulum, a task allocation problem of clustering targets amongst a set of UAVs, a fire detection problem, and the aircraft conflict resolution problem. During the last decade, CNNs have become increasingly popular in the domain of image and speech processing. CNNs have far more parameters than GFSs and are tuned using the back-propagation algorithm. CNNs typically have hundreds of thousands or even millions of parameters that are tuned using common cost functions such as integral squared error and softmax loss. Chapter 5 discusses a classification problem of labeling images as containing humans or not, and Chapter 6 discusses a regression task using a CNN to produce an approximate near-optimal route for the Traveling Salesman Problem (TSP), which is regarded as one of the most complicated decision-making problems. Both GFSs and CNNs are used to develop intelligent systems specific to the application, providing computational efficiency, robustness in the face of uncertainties, and scalability.
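The cascade of 2-input-1-output FISs can be sketched in a zero-order Takagi-Sugeno form with Gaussian membership functions. The membership centers and rule consequents below are hypothetical stand-ins for values that the thesis tunes with GA/EVE; the point of the sketch is the rule-count arithmetic.

```python
import numpy as np

def fis2(x1, x2, centers, consequents, sigma=0.5):
    """Zero-order Takagi-Sugeno 2-input-1-output FIS with Gaussian MFs."""
    m1 = np.exp(-0.5 * ((x1 - centers) / sigma) ** 2)   # memberships for input 1
    m2 = np.exp(-0.5 * ((x2 - centers) / sigma) ** 2)   # memberships for input 2
    w = np.outer(m1, m2).ravel()                        # product-rule firing strengths
    return float(w @ consequents / w.sum())             # weighted-average defuzzification

centers = np.array([0.0, 0.5, 1.0])       # 3 MFs per input (hypothetical values)
rng = np.random.default_rng(0)
c_a = rng.uniform(0.0, 1.0, 9)            # rule consequents, normally tuned by GA/EVE
c_b = rng.uniform(0.0, 1.0, 9)

# Cascade for 3 inputs: 2 FISs x 9 rules = 18 consequents,
# versus 3**3 = 27 rules for a single monolithic 3-input FIS.
y1 = fis2(0.2, 0.8, centers, c_a)
y = fis2(y1, 0.4, centers, c_b)
```

With more inputs the gap widens quickly: n inputs with m MFs each need m**n rules monolithically but only (n - 1) * m**2 in a cascade, which is the scaling argument the thesis relies on.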

  20. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Summary Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
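The maximum a posteriori simplex step can be sketched on a toy two-parameter model. The forward model, targets, prior, and uncertainties below are invented for illustration and are not the paper's LPN; scipy's Nelder-Mead stands in for the simplex optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-parameter forward model standing in for an LPN circuit:
# a "resistance" R and "compliance" C map to two clinical targets.
def forward(p):
    R, C = p
    return np.array([5.0 * R, R * C])     # e.g. mean pressure at unit flow, time constant

targets = np.array([6.0, 1.8])            # synthetic "clinical" measurements
sigma = np.array([0.1, 0.1])              # measurement uncertainties
prior_mean = np.array([1.0, 1.0])         # hypothetical population prior
prior_sd = np.array([0.5, 0.5])

def neg_log_posterior(p):
    if np.any(p <= 0.0):                  # physical parameters must stay positive
        return np.inf
    r = (forward(p) - targets) / sigma    # data misfit (Gaussian likelihood)
    q = (p - prior_mean) / prior_sd       # Gaussian prior
    return 0.5 * (r @ r + q @ q)

res = minimize(neg_log_posterior, prior_mean, method="Nelder-Mead")
R_map, C_map = res.x                      # MAP point estimate
```

With data this informative the MAP estimate lands near the values that reproduce the targets (R about 1.2, C about 1.5), with a small pull toward the prior mean; the paper's multi-level idea then feeds such sub-model posteriors back in as updated priors.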

  1. Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.

    2014-07-01

    The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase of the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
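A one-step sketch of parameter calibration with an ensemble Kalman filter, on a toy drag-like parameter; the forward model and all numbers are illustrative, not NEMO-LIM3.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the sea ice model: drift speed = forcing / Cw,
# with Cw playing the role of the ocean-sea ice drag parameter.
Cw_true = 5.0
forcing = np.array([2.0, 4.0, 6.0])
obs_err = 0.05
y_obs = forcing / Cw_true + rng.normal(0.0, obs_err, forcing.size)

N = 200                                               # ensemble size
Cw = np.clip(rng.normal(3.5, 1.0, N), 1.0, None)      # prior ensemble, kept physical

# One stochastic EnKF analysis step on the parameter.
H = forcing[:, None] / Cw[None, :]                    # predicted observations, (3, N)
Hm = H - H.mean(axis=1, keepdims=True)
Cm = Cw - Cw.mean()
P_xy = (Cm[None, :] * Hm).sum(axis=1) / (N - 1)       # cov(parameter, predictions)
P_yy = Hm @ Hm.T / (N - 1) + obs_err**2 * np.eye(3)   # prediction cov + obs error
K = np.linalg.solve(P_yy, P_xy)                       # Kalman gain
perturbed = y_obs[:, None] + rng.normal(0.0, obs_err, (3, N))
Cw_post = Cw + K @ (perturbed - H)                    # updated parameter ensemble
```

The posterior ensemble mean moves toward the true drag value, which is the mechanism the paper applies, cycled over years of model state, to P*, Cw and Ca.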

  2. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amarasinghe, Saman

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
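The ensemble-of-techniques idea can be sketched without the OpenTuner API (the code below is a toy, not OpenTuner): two search techniques share a test budget, and the one that keeps finding improvements is allocated a larger share of tests.

```python
import random

random.seed(0)

def cost(cfg):
    """Toy objective: pretend runtime of a program for (block_size, unroll).
    The optimum (37, 3) and the whole space are invented for illustration."""
    b, u = cfg
    return (b - 37) ** 2 + 4 * (u - 3) ** 2

space = [(b, u) for b in range(1, 65) for u in range(1, 9)]

def random_search(_best):
    return random.choice(space)                 # global exploration

def hill_climb(best):
    b, u = best                                 # local perturbation of the incumbent
    return (max(1, min(64, b + random.choice([-2, -1, 1, 2]))),
            max(1, min(8, u + random.choice([-1, 1]))))

techniques = [random_search, hill_climb]
wins = [1.0, 1.0]                               # credit for techniques that improve
best = random.choice(space)
best_cost = cost(best)
for _ in range(300):
    i = random.choices([0, 1], weights=wins)[0]  # allocate tests by past success
    cand = techniques[i](best)
    c = cost(cand)
    if c < best_cost:
        best, best_cost = cand, c
        wins[i] += 1.0
```

Real autotuners replace `cost` with an actual compile-and-run measurement and use more principled bandit allocation, but the feedback loop is the same.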

  3. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low positron production probability and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count rates with high random fractions, corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.

  4. Fuzzy self-learning control for magnetic servo system

    NASA Technical Reports Server (NTRS)

    Tarn, J. H.; Kuo, L. T.; Juang, K. Y.; Lin, C. E.

    1994-01-01

    It is known that an effective control system is the key condition for successful implementation of high-performance magnetic servo systems. Major issues to design such control systems are nonlinearity; unmodeled dynamics, such as secondary effects for copper resistance, stray fields, and saturation; and that disturbance rejection for the load effect reacts directly on the servo system without transmission elements. One typical approach to design control systems under these conditions is a special type of nonlinear feedback called gain scheduling. It accommodates linear regulators whose parameters are changed as a function of operating conditions in a preprogrammed way. In this paper, an on-line learning fuzzy control strategy is proposed. To inherit the wealth of linear control design, the relations between linear feedback and fuzzy logic controllers have been established. The exercise of engineering axioms of linear control design is thus transformed into tuning of appropriate fuzzy parameters. Furthermore, fuzzy logic control brings the domain of candidate control laws from linear into nonlinear, and brings new prospects into design of the local controllers. On the other hand, a self-learning scheme is utilized to automatically tune the fuzzy rule base. It is based on network learning infrastructure; statistical approximation to assign credit; animal learning method to update the reinforcement map with a fast learning rate; and temporal difference predictive scheme to optimize the control laws. Different from supervised and statistical unsupervised learning schemes, the proposed method learns on-line from past experience and information from the process and forms a rule base of an FLC system from randomly assigned initial control rules.

  5. Genomic data assimilation for estimating hybrid functional Petri net from time-course gene expression data.

    PubMed

    Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki

    2006-01-01

    We propose an automatic construction method of the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how to choose the values of parameters and how to set the network structure. Usually, these unknown factors are tuned empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach is limited by the size of the network of interest. To extend the capability of the simulation model, we propose the use of a data assimilation approach that was originally established in the field of geophysical simulation science. We provide a genomic data assimilation framework that establishes a link between our simulation model and observed data, such as microarray gene expression data, by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in the simulation model are recast as parameters of the state space model, and the estimates are obtained as maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both the model construction and the parameter tuning within a framework of Bayesian statistical inference. In particular, the Bayesian approach provides a way of controlling overfitting during parameter estimation, which is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well, and the network structure is suitably selected.

  6. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment

    PubMed Central

    Koroušić Seljak, Barbara

    2017-01-01

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients. PMID:28653995

  7. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    PubMed

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds rapidly and more precisely a global optimal solution. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often better predicted than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. 
We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM classification algorithms as well as fixed grid and interval search for finding appropriate tuning parameters were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets.
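The SCAD and Elastic SCAD penalties themselves are easy to state in code. The lambda and ridge weights below are arbitrary; a = 3.7 is the conventional choice from Fan and Li's original SCAD proposal.

```python
import numpy as np

def scad(beta, lam, a=3.7):
    """SCAD penalty, applied elementwise: linear near zero,
    quadratic taper, then constant for large coefficients."""
    b = np.abs(np.asarray(beta, dtype=float))
    return np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
            (a + 1) * lam**2 / 2,
        ),
    )

def elastic_scad(beta, lam1, lam2, a=3.7):
    """Elastic SCAD: the SCAD penalty plus a ridge term."""
    beta = np.asarray(beta, dtype=float)
    return scad(beta, lam1, a).sum() + lam2 * (beta**2).sum()
```

Because SCAD flattens out for large coefficients, big effects are not shrunk the way LASSO shrinks them, while the added ridge term keeps the penalty useful when the true model is not sparse.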

  8. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data

    PubMed Central

    2011-01-01

    Background Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds rapidly and more precisely a global optimal solution. Results Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often better predicted than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. Conclusions The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. 
We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM classification algorithms as well as fixed grid and interval search for finding appropriate tuning parameters were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets. PMID:21554689

  9. Automatic picker of P & S first arrivals and robust event locator

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Polozov, A.; Hofstetter, A.

    2003-12-01

    We report on further development of an automatic all-distances location procedure designed for a regional network. The procedure generalizes the previous "local" (R < 500 km) and "regional" (500 < R < 2000 km) routines and comprises: a) preliminary data processing (filtering and de-spiking), b) phase identification, c) P, S first-arrival picking, d) preliminary location and e) a robust grid-search optimization procedure. Innovations concern phase identification, automatic picking and teleseismic location. A platform-free, flexible Java interface was recently created, allowing easy parameter tuning and on/off switching to the full-scale manual picking mode. Identification of the regional P and S phases is provided by choosing between the two largest peaks in the envelope curve. For automatic on-time estimation we now utilize the ratio of two STAs, calculated in two consecutive and equal time windows (instead of the previously used Akaike Information Criterion). "Teleseismic" location is split in two stages: a preliminary and a final one. The preliminary part estimates azimuth and apparent velocity by fitting a plane wave to the automatic P pickings. The apparent velocity criterion is used to decide on the strategy of the following computations: teleseismic or regional. The preliminary estimates of azimuth and apparent velocity provide starting values for the final teleseismic and regional location. Apparent velocity is used to get a first-approximation distance to the source on the basis of the P, Pn, Pg travel-time tables. The distance estimate, together with the preliminary azimuth estimate, provides first approximations of the source latitude and longitude via the sine and cosine theorems formulated for the spherical triangle. Final location is based on a robust grid-search optimization procedure, weighting the number of pickings that simultaneously fit the model travel times. The grid covers the initial location and becomes finer while approaching the true hypocenter. 
The target function is a sum of bell-shaped characteristic functions, used to emphasize true pickings and eliminate outliers. The final solution is the grid point that maximizes the target function. The procedure was applied to a list of ML > 4 earthquakes recorded by the Israel Seismic Network (ISN) in the 1999-2002 time period. Most of them are poorly constrained relative to the network. Nevertheless, locations with an average normalized error relative to bulletin solutions, e = dr/R, of 5% were obtained in each of the distance ranges. The first version of the procedure was incorporated in the national Early Warning System in 2001. Recently, we started to send automatic Early Warning reports to the EMSC Real Time Bulletin. Some initially reported teleseismic location discrepancies have been eliminated by the introduction of station corrections.
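The on-time estimator described above (ratio of two STAs in consecutive, equal windows) can be sketched as follows; the window length, threshold, and synthetic trace are illustrative, not the procedure's operational settings.

```python
import numpy as np

def sta_ratio_pick(x, win, threshold):
    """Return the approximate onset sample: the first point where the
    short-term average (STA) in the current window exceeds the STA of
    the immediately preceding, equal-length window by `threshold`."""
    e = np.abs(x)                        # simple amplitude envelope
    for n in range(2 * win, len(x)):
        sta_now = e[n - win:n].mean()
        sta_prev = e[n - 2 * win:n - win].mean()
        if sta_prev > 0 and sta_now / sta_prev > threshold:
            return n - win               # onset lies near the start of this window
    return None

rng = np.random.default_rng(1)
fs = 100                                 # samples per second (illustrative)
trace = 0.1 * rng.standard_normal(1000)  # pre-event noise
trace[500:] += 3.0 * np.sin(2 * np.pi * 10 * np.arange(500) / fs)   # onset at 500
pick = sta_ratio_pick(trace, win=20, threshold=4.0)
```

Using the ratio of two adjacent short windows makes the trigger sensitive to a sudden amplitude step while staying insensitive to slow background-level changes.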

  10. Scaling analysis of the non-Abelian quasiparticle tunneling in Z_k FQH states

    NASA Astrophysics Data System (ADS)

    Li, Qi; Jiang, Na; Wan, Xin; Hu, Zi-Xiang

    2018-06-01

    Quasiparticle tunneling between two counter-propagating edges through point contacts could provide information on quasiparticle statistics. Previous studies of short-distance tunneling display a scaling behavior, especially in the conformal limit of zero tunneling distance. The scaling exponents for non-Abelian quasiparticle tunneling exhibit some non-trivial behaviors. In this work, we revisit the quasiparticle tunneling amplitudes and their scaling behavior over the full range of the tunneling distance by putting the electrons on the surface of a cylinder. The edge–edge distance can be smoothly tuned by varying the aspect ratio of a finite-size cylinder. We analyze the scaling behavior of the quasiparticles for the Read–Rezayi states for k = 3 and 4, both in the short and the long tunneling distance regions. The finite-size scaling analysis automatically gives us a critical length scale at which the anomalous correction appears. We demonstrate that this length scale is related to the size of the quasiparticle, at which the backscattering between two counter-propagating edges starts to be significant.

  11. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  12. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  13. Reconfigurable all-dielectric metasurface based on tunable chemical systems in aqueous solution.

    PubMed

    Yang, Xiaoqing; Zhang, Di; Wu, Shiyue; Yin, Yang; Li, Lanshuo; Cao, Kaiyuan; Huang, Kama

    2017-06-09

    Dynamic control of the transmission and polarization properties of electromagnetic (EM) wave propagation is investigated using a chemically reconfigurable all-dielectric metasurface. The metasurface is composed of cross-shaped periodic Teflon tubes filled with chemical systems (i.e., mixtures and chemical reactions) in aqueous solution. By tuning the complex permittivity of the chemical systems, the reconfigurable metasurface can be easily achieved. The transmission properties of different incident polarized waves (i.e., linear and circular polarization) were simulated and experimentally measured for static ethanol solutions as the volume ratio changed. Both results indicated that this metasurface can serve as either a tunable FSS (Frequency Selective Surface) or a tunable linear-to-circular/cross polarization converter in the required frequency range. Based on the reconfigurable laws obtained from static solutions, we developed a dynamic dielectric system and experimentally studied a typical chemical reaction with time-varying permittivity filled in the tubes. It provides new ways of realizing automatic reconfiguration of a metasurface by a chemical reaction system with given variation laws of permittivity.

  14. Image contrast enhancement of Ni/YSZ anode during the slice-and-view process in FIB-SEM.

    PubMed

    Liu, Shu-Sheng; Takayama, Akiko; Matsumura, Syo; Koyama, Michihisa

    2016-03-01

    Focused ion beam-scanning electron microscopy (FIB-SEM) is a widely used and easily operated instrument for three-dimensional reconstruction with a flexible analysis volume. It has been used successfully and increasingly in the field of solid oxide fuel cells. However, the phase contrast of the SEM images is indistinct in many cases, which brings difficulties to image processing. Herein, the phase contrast of a conventional Ni/yttria-stabilized zirconia anode is tuned in an FIB-SEM with In-Lens secondary electron (SE) and backscattered electron detectors. Two accessories, a tungsten probe and a carbon nozzle, are inserted during the observation. The former has no influence on the contrast. When the carbon nozzle is inserted, the best and most distinct contrast is obtained with the In-Lens SE detector. This method is novel for contrast enhancement. Phase segmentation of the image can then be performed automatically. The related mechanism for the different images is discussed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
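One common way to perform automatic phase segmentation of a well-contrasted image is Otsu thresholding; the paper does not specify its segmentation method, so this is a generic stand-in, sketched on synthetic two-phase data.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    w0 = np.cumsum(w)                     # cumulative class-0 weight per split
    w1 = 1.0 - w0
    mu = np.cumsum(w * mids)              # cumulative class-0 mean * weight
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return mids[np.argmax(var_between)]

rng = np.random.default_rng(0)
# Synthetic two-phase histogram: a dark phase and a bright phase.
img = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.8, 0.05, 5000)])
t = otsu_threshold(img)
mask = img > t                            # bright-phase mask
```

With good contrast the gray-level histogram is bimodal and the threshold falls in the valley between phases, which is exactly why the contrast tuning above matters for automated processing.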

  15. Genetic circuit design automation.

    PubMed

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

    Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. Copyright © 2016, American Association for the Advancement of Science.
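Circuit prediction by composing gate response functions can be sketched with a hypothetical repressor Hill function (all parameter values invented, in arbitrary RPU-like units), here wiring two NOR gates into an OR; this illustrates the composition idea, not Cello's actual gate library.

```python
def nor_gate(x, ymin=0.05, ymax=4.0, K=0.8, n=2.5):
    """Repressor-based NOR response: high combined input x represses
    the output promoter. Parameters are hypothetical."""
    return ymin + (ymax - ymin) * K**n / (K**n + x**n)

LOW, HIGH = 0.02, 3.0            # illustrative input promoter activities

def or_circuit(a, b):
    # OR built from a NOR gate followed by a NOT (a NOT is a one-input NOR).
    return nor_gate(nor_gate(a + b))

for a in (LOW, HIGH):
    for b in (LOW, HIGH):
        print(f"{a > 1} OR {b > 1} -> {or_circuit(a, b):.2f}")
```

Predicting a circuit amounts to threading each gate's output activity into the next gate's input, which only works reliably when each gate's response curve is insulated from its genetic context, the requirement the abstract highlights.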

  16. An investigation of implicit turbulence modeling for laminar-turbulent transition in natural convection

    NASA Astrophysics Data System (ADS)

    Li, Chunggang; Tsubokura, Makoto; Wang, Weihsiang

    2017-11-01

    The automatic dissipation adjustment (ADA) model based on truncated Navier-Stokes equations is utilized to investigate the feasibility of implicit large eddy simulation (ILES) with the ADA model for the laminar-turbulent transition in natural convection. Because the large temperature difference (300 K) produces a high Rayleigh number, a Roe scheme modified for low Mach numbers is used together with the ADA model to resolve the complicated flow field. Based on the qualitative agreement with DNS and experimental results, and on the capability of numerically predicting a -3 decay law for the temporal power spectrum of the temperature fluctuation, this study validates the feasibility of ILES with the ADA model for turbulent natural convection. With the advantages of easy implementation (no explicit modeling terms are needed) and of being nearly free of tuning parameters, the ADA model promises to become a useful tool for turbulent thermal convection. Part of the results was obtained using the K computer at the RIKEN Advanced Institute for Computational Science (proposal number hp160232).
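The reported -3 decay law can be checked by fitting a straight line to the power spectrum in log-log coordinates; the slope is the decay exponent. A minimal least-squares sketch (illustrative only; the paper does not specify its fitting procedure):

```python
import math

def loglog_slope(freqs, power):
    """Least-squares slope of log(power) vs. log(frequency): the decay
    exponent of a spectrum P(f) ~ f**alpha (here alpha = -3 is expected
    for the temperature-fluctuation spectrum)."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```

A spectrum sampled exactly on P(f) = f**-3 returns a slope of -3 to floating-point accuracy.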

  17. Age and gender classification in the wild with unsupervised feature learning

    NASA Astrophysics Data System (ADS)

    Wan, Lihong; Huo, Hong; Fang, Tao

    2017-03-01

    Inspired by unsupervised feature learning (UFL) within the self-taught learning framework, we propose a method based on UFL, convolution representation, and part-based dimensionality reduction to handle facial age and gender classification, which are two challenging problems under unconstrained circumstances. First, UFL is introduced to learn selective receptive fields (filters) automatically by applying whitening transformation and spherical k-means on random patches collected from unlabeled data. The learning process is fast and has no hyperparameters to tune. Then, the input image is convolved with these filters to obtain filtering responses on which local contrast normalization is applied. Average pooling and feature concatenation are then used to form global face representation. Finally, linear discriminant analysis with part-based strategy is presented to reduce the dimensions of the global representation and to improve classification performances further. Experiments on three challenging databases, namely, Labeled faces in the wild, Gallagher group photos, and Adience, demonstrate the effectiveness of the proposed method relative to that of state-of-the-art approaches.
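The filter-learning step described above, spherical k-means on whitened random patches, can be sketched as follows. This toy version omits the whitening transform and seeds the centroids from the first k patches for determinism; the data values are illustrative:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def spherical_kmeans(patches, k, iters=20):
    """Toy spherical k-means: learn k unit-norm 'filters' from patches.
    Patches live on the unit sphere; assignment maximizes the dot product
    (cosine similarity). The first k patches seed the centroids."""
    data = [normalize(p) for p in patches]
    centroids = [list(data[i]) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in data:
            sims = [sum(a * b for a, b in zip(v, c)) for c in centroids]
            groups[sims.index(max(sims))].append(v)
        for j, g in enumerate(groups):
            if g:  # an empty cluster keeps its previous centroid
                mean = [sum(col) / len(g) for col in zip(*g)]
                centroids[j] = normalize(mean)
    return centroids
```

Run on patches drawn from two directions, the learned filters align with those directions, which is why the procedure needs no hyperparameter tuning beyond k itself.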

  18. Flight deck benefits of integrated data link communication

    NASA Technical Reports Server (NTRS)

    Waller, Marvin C.

    1992-01-01

    A fixed-base, piloted simulation study was conducted to determine the operational benefits that result when air traffic control (ATC) instructions are transmitted to the deck of a transport aircraft over a digital data link. The ATC instructions include altitude, airspeed, heading, radio frequency, and route assignment data. The interface between the flight deck and the data link was integrated with other subsystems of the airplane to facilitate data management. Data from the ATC instructions were distributed to the flight guidance and control system, the navigation system, and an automatically tuned communication radio. The co-pilot initiated the automation-assisted data distribution process. Digital communications and automated data distribution were compared with conventional voice radio communication and manual input of data into other subsystems of the simulated aircraft. Less time was required in the combined communication and data management process when data link ATC communication was integrated with the other subsystems. The test subjects, commercial airline pilots, provided favorable evaluations of both the digital communication and data management processes.

  19. X-ray phase contrast tomography from whole organ down to single cells

    NASA Astrophysics Data System (ADS)

    Krenkel, Martin; Töpperwien, Mareike; Bartels, Matthias; Lingor, Paul; Schild, Detlev; Salditt, Tim

    2014-09-01

    We use propagation-based hard x-ray phase contrast tomography to explore the three-dimensional structure of neuronal tissue from the organ down to the sub-cellular level, based on combinations of synchrotron radiation and laboratory sources. To this end, a laboratory-based microfocus tomography setup has been built in which the geometry was optimized for phase contrast imaging and tomography. By utilizing phase retrieval algorithms, quantitative reconstructions can be obtained that enable automatic renderings without edge artifacts. A high-brightness liquid-metal microfocus x-ray source in combination with a high-resolution detector yields a resolution down to 1.5 μm. To extend the method to nanoscale resolution, we use a divergent x-ray waveguide beam geometry at the synchrotron; the magnification can thus be easily tuned by placing the sample at different defocus distances. Due to the small Fresnel numbers in this geometry, the measured images are of holographic nature, which poses a challenge for phase retrieval.
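In the divergent-beam geometry described, the magnification obtained by moving the sample follows from similar triangles. A small sketch of the standard cone-beam relations (the specific distances below are illustrative, not from the paper):

```python
def geometric_magnification(z1, z2):
    """Cone-beam magnification M = (z1 + z2) / z1, where z1 is the
    source(or waveguide focus)-to-sample distance and z2 the
    sample-to-detector distance. Moving the sample closer to the
    focus (smaller z1) increases M."""
    return (z1 + z2) / z1

def effective_distance(z1, z2):
    """Parallel-beam-equivalent propagation distance commonly used in
    holographic phase retrieval: z_eff = z1 * z2 / (z1 + z2)."""
    return z1 * z2 / (z1 + z2)
```

For example, z1 = 1 mm and z2 = 99 mm give M = 100, while the effective propagation distance is only 0.99 mm, which is what drives the Fresnel number into the holographic regime.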

  20. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune, and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building, with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
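The two ASHRAE Guideline 14 metrics mentioned above can be computed directly from paired measured/simulated series; a sketch, assuming the common degrees-of-freedom adjustment p = 1:

```python
import math

def nmbe(measured, simulated, p=1):
    """Normalized mean bias error, in percent:
    100 * sum(m_i - s_i) / ((n - p) * mean(measured))."""
    n = len(measured)
    mbar = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / ((n - p) * mbar)

def cv_rmse(measured, simulated, p=1):
    """Coefficient of variation of the RMSE, in percent:
    100 * sqrt(sum((m_i - s_i)**2) / (n - p)) / mean(measured)."""
    n = len(measured)
    mbar = sum(measured) / n
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / (n - p))
    return 100.0 * rmse / mbar
```

Note that NMBE can be zero while CV(RMSE) is large: over- and under-predictions cancel in the bias term but not in the squared-error term, which is why Guideline 14 requires both.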

  1. Integration of minisolenoids in microfluidic device for magnetic bead-based immunoassays

    NASA Astrophysics Data System (ADS)

    Liu, Yan-Jun; Guo, Shi-Shang; Zhang, Zhi-Ling; Huang, Wei-Hua; Baigl, Damien; Chen, Yong; Pang, Dai-Wen

    2007-10-01

    Microfluidic devices with integrated minisolenoids, microvalves, and channels have been fabricated for fast and low-volume immunoassay using superparamagnetic beads and well-known surface bioengineering protocols. A magnetic reaction area can be formed in the microchannel, featuring a high surface-to-volume ratio and low diffusion distances for the reagents to the bead surface. Such a method has the obvious advantage of easy implementation at low cost. Moreover, the minisolenoids can be switched on or off and the magnetic field intensity can be tuned on demand. Fluids can be manipulated by controlling the integrated air-pressure-actuated microvalves. Accordingly, magnetic bead-based immunoassay, as a typical example of biochemical detection and analysis, has been successfully performed on the integrated microfluidic device automatically in longitudinal mode. With a sample consumption of 0.5μl and a total assay time of less than 15min, goat immunoglobulin G was detected and the method exhibited a detection limit of 4.7ng/ml.

  2. Applying deep neural networks to HEP job classification

    NASA Astrophysics Data System (ADS)

    Wang, L.; Shi, J.; Yan, X.

    2015-12-01

    The cluster of the IHEP computing center is a middle-sized computing system that provides 10,000 CPU cores, 5 PB of disk storage, and 40 GB/s of IO throughput. Its 1000+ users come from a variety of HEP experiments. In such a system, job classification is an indispensable task. Although an experienced administrator can classify a HEP job by its IO pattern, it is impractical to classify millions of jobs manually. We present how to solve this problem with deep neural networks in a supervised learning way. First, we built a training data set of 320K samples with an IO pattern collection agent and a semi-automatic process of sample labelling. Then we implemented and trained DNN models with Torch. During model training, several meta-parameters were tuned with cross-validation. Test results show that a 5-hidden-layer DNN model achieves 96% precision on the classification task. By comparison, it outperforms a linear model by 8% in precision.
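The meta-parameter tuning by cross-validation described above can be sketched generically: split the sample indices into k folds, score each candidate parameter on every held-out fold, and keep the best mean score. Here `evaluate` is a hypothetical user-supplied train-and-score callback, not part of the paper's code:

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(params, evaluate, n, k=5):
    """Return the meta-parameter with the best mean held-out score.
    evaluate(param, train_idx, test_idx) trains on train_idx and
    returns a score (higher is better) on test_idx."""
    best, best_score = None, float("-inf")
    for p in params:
        scores = []
        for test in kfold_indices(n, k):
            held = set(test)
            train = [i for i in range(n) if i not in held]
            scores.append(evaluate(p, train, test))
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best, best_score = p, mean
    return best, best_score
```

In practice `evaluate` would train a Torch model with the candidate depth, learning rate, etc., and return held-out precision.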

  3. Electrical capacitance clearanceometer

    NASA Technical Reports Server (NTRS)

    Hester, Norbert J. (Inventor); Hornbeck, Charles E. (Inventor); Young, Joseph C. (Inventor)

    1992-01-01

    A hot gas turbine engine capacitive probe clearanceometer is employed to measure the clearance gap, or distance, between blade tips on a rotor wheel and its confining casing under operating conditions. A braze-sealed tip of the probe carries a capacitor electrode which is electrically connected to an electrical inductor within the probe; the probe is inserted into a turbine casing to position its electrode at the inner surface of the casing. Electrical power is supplied through a voltage-controlled variable-frequency oscillator having a tuned circuit in which the probe is a component. The oscillator signal is modulated by a change in electrical capacitance between the probe electrode and a passing blade tip surface, while an automatic feedback correction circuit corrects oscillator signal drift. A change in distance between a blade tip and the probe electrode is a change in capacitance between them, which frequency-modulates the oscillator signal. The modulated oscillator signal is then processed through a phase detector and related circuitry to provide an electrical signal proportional to the clearance gap.
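The sensing principle rests on the resonant frequency of the tuned circuit shifting with probe capacitance, f = 1/(2π√(LC)): a smaller gap means a larger capacitance and thus a lower frequency. A small numeric sketch (the component values are illustrative, not from the patent):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tuned circuit: f = 1 / (2*pi*sqrt(L*C)).
    L in henries, C in farads, result in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))
```

For example, with L fixed, quadrupling C (a much smaller blade-tip gap) halves the resonant frequency; the phase detector converts this frequency shift back into a gap-proportional signal.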

  4. Meteo-marine parameters for highly variable environment in coastal regions from satellite radar images

    NASA Astrophysics Data System (ADS)

    Pleskachevsky, A. L.; Rosenthal, W.; Lehner, S.

    2016-09-01

    The German Bight of the North Sea is an area with highly variable sea state conditions, intensive ship traffic, and a high density of offshore installations, e.g. wind farms in use and under construction. Ship navigation and docking on offshore constructions are impeded by significant wave heights HS > 1.3 m. For these reasons, improvements are required in the recognition and forecasting of sea state HS in the range 0-3 m, which necessitates the development of new methods to determine the distribution of meteo-marine parameters from remote sensing data with decimetre accuracy for HS. The operationalization of these methods then allows robust automatic processing in near real time (NRT) to support forecast agencies by providing validations for model results. A new empirical algorithm, XWAVE_C (C = coastal), for estimation of significant wave height from X-band satellite-borne Synthetic Aperture Radar (SAR) data has been developed, adapted for coastal applications using TerraSAR-X (TS-X) and TanDEM-X (TD-X) satellites in the German Bight, and implemented into the Sea State Processor (SSP) for fully automatic processing for NRT services. The algorithm is based on the spectral analysis of subscenes, and the model function uses integrated image spectra parameters as well as local wind information from the analyzed subscene. The algorithm is able to recognize and remove the influence of signals not produced by sea state in Wadden Sea areas, such as dry sandbars, as well as nonlinear SAR image distortions produced by e.g. short wind waves and breaking waves. Parameters of very short waves, which are not visible in SAR images and produce only unsystematic clutter, can also be accurately estimated. The SSP includes XWAVE_C, a pre-filtering procedure for removing artefacts such as ships, seamarks, buoys, offshore constructions and slicks, and an additional procedure that checks results against the statistics of the whole scene.
The SSP allows automatic processing of TS-X images with an error RMSE = 25 cm and scatter index SI = 20% for total significant wave height HS from sequences of TS-X StripMap images with a coverage of ∼30 km × 300 km across the German Bight. The SSP was tuned spatially with model data of the German Weather Service's (DWD) CWAM (Coastal WAve Model) with 900 m horizontal resolution, and tuned in situ with 6 buoys located in the DWD model domain in the German Bight. The collected, processed and analyzed database for the German Bight consists of more than 60 TS-X StripMap scenes/overflights with more than 200 images since 2013, with sea state acquired in the domain HS = 0-7 m and a mean value of 1.25 m over all available scenes at buoy locations. The paper addresses the development and implementation of XWAVE_C, and presents the possibilities of the SSP for delivering sea state fields by reproducing local HS variations connected with local wind gusts, variable bathymetry and moving wind fronts under different weather conditions.
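The two error measures quoted (RMSE and scatter index SI) can be computed as follows; this sketch assumes the common definition of SI as the RMSE normalized by the mean observed value, expressed in percent:

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def scatter_index(obs, pred):
    """Scatter index in percent: RMSE divided by the mean observation.
    For HS validation, obs would be buoy (or model) wave heights and
    pred the SAR-derived estimates."""
    return 100.0 * rmse(obs, pred) / (sum(obs) / len(obs))
```

With a mean HS of 1.25 m, the reported RMSE of 25 cm corresponds to an SI of 20%, consistent with the figures in the abstract.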

  5. A Smart Toy to Enhance the Decision-Making Process at Children's Psychomotor Delay Screenings: A Pilot Study.

    PubMed

    Gutiérrez García, María Angeles; Martín Ruiz, María Luisa; Rivera, Diego; Vadillo, Laura; Valero Duboy, Miguel Angel

    2017-05-19

    EDUCERE ("Ubiquitous Detection Ecosystem to Care and Early Stimulation for Children with Developmental Disorders") is an ecosystem for ubiquitous detection, care, and early stimulation of children with developmental disorders. The objectives of this Spanish government-funded research and development project are to investigate, develop, and evaluate innovative solutions to detect changes in psychomotor development through the natural interaction of children with toys and everyday objects, and to perform stimulation and early attention activities in real environments such as home and school. Thirty multidisciplinary professionals and three nursery schools worked on the EDUCERE project between 2014 and 2017 and obtained satisfactory results. Related to EDUCERE, we found studies based on providing networks of connected smart objects and on the interaction between toys and social networks. This research includes the design, implementation, and validation of an EDUCERE smart toy aimed at automatically detecting delays in psychomotor development. The results from initial tests led to enhancements of the effectiveness of the original design and deployment. The smart toy, based on stackable cubes, has a data collector module and a smart system for detection of developmental delays, called the EDUCERE developmental delay screening system (DDSS). The pilot study involved 65 toddlers aged between 23 and 37 months (mean=29.02, SD 3.81) who built a tower with five stackable cubes, designed by following the EDUCERE smart toy model. As toddlers built the tower, sensors in the cubes sent data to a collector module through a wireless connection. All trials were video-recorded for further analysis by child development experts. After watching the videos, experts scored the performance of the trials to compare and fine-tune the interpretation of the data automatically gathered by the toy-embedded sensors. 
Judges were highly reliable in an interrater agreement analysis (intraclass correlation 0.961, 95% CI 0.937-0.967), suggesting that the process successfully separated different levels of performance. A factor analysis of the collected data showed that three factors, trembling, speed, and accuracy, accounted for 76.79% of the total variance, but only two of them were predictors of performance in a regression analysis: accuracy (P=.001) and speed (P=.002). The other factor, trembling (P=.79), did not have a significant effect on this dependent variable. The EDUCERE DDSS is ready to use the regression equation obtained for the dependent variable "performance" as an algorithm for the automatic detection of psychomotor developmental delays. The results of the factor analysis are valuable for simplifying the design of the smart toy by taking into account only the significant variables in the collector module. The fine-tuning of the toy process module will be carried out by following the specifications resulting from the analysis of the data, to improve the efficiency and effectiveness of the product. ©María Angeles Gutiérrez García, María Luisa Martín Ruiz, Diego Rivera, Laura Vadillo, Miguel Angel Valero Duboy. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 19.05.2017.

  6. Evolution of the ATLAS Nightly Build System

    NASA Astrophysics Data System (ADS)

    Undrus, A.

    2012-12-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code, which currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python written by about 1000 developers. Recent development has focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides a fully automated framework for release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  7. Simultaneous binary hash and feature learning for image retrieval

    NASA Astrophysics Data System (ADS)

    Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.

    2016-05-01

    Content-based image retrieval systems have many applications in the modern world, the most important being image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique, which is the main reason this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval remains a challenging task; the main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to a hash-value space while trying to preserve as much of the semantic image content as possible. We use a deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework presented in the paper for data-dependent image hashing is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results compared to other state-of-the-art methods.

  8. Adaptive road crack detection system by pavement classification.

    PubMed

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination, and acquisition hardware and software is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks, and white paint, which usually generate false positive crack detections. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves the use of several parameters, and a correct setting is essential to obtain optimal results without manual intervention. A fully automatic approach is proposed, by means of a linear SVM-based classifier ensemble able to distinguish between up to 10 different types of pavement that appear on Spanish roads. The optimal feature vector includes different texture-based features. The parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement.

  9. Adaptive Road Crack Detection System by Pavement Classification

    PubMed Central

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination, and acquisition hardware and software is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks, and white paint, which usually generate false positive crack detections. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves the use of several parameters, and a correct setting is essential to obtain optimal results without manual intervention. A fully automatic approach is proposed, by means of a linear SVM-based classifier ensemble able to distinguish between up to 10 different types of pavement that appear on Spanish roads. The optimal feature vector includes different texture-based features. The parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement. PMID:22163717

  10. The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)

    1997-01-01

    Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor', or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among and within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor, which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library, which collects performance data; and a visualization tool-set, which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS is illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.

  11. Semantic priming of familiar songs.

    PubMed

    Johnson, Sarah K; Halpern, Andrea R

    2012-05-01

    We explored the functional organization of semantic memory for music by comparing priming across familiar songs both within modalities (Experiment 1, tune to tune; Experiment 3, category label to lyrics) and across modalities (Experiment 2, category label to tune; Experiment 4, tune to lyrics). Participants judged whether or not the target tune or lyrics were real (akin to lexical decision tasks). We found significant priming, analogous to linguistic associative-priming effects, in reaction times for related primes as compared to unrelated primes, but primarily for within-modality comparisons. Reaction times to tunes (e.g., "Silent Night") were faster following related tunes ("Deck the Hall") than following unrelated tunes ("God Bless America"). However, a category label (e.g., Christmas) did not prime tunes from within that category. Lyrics were primed by a related category label, but not by a related tune. These results support the conceptual organization of music in semantic memory, but with potentially weaker associations across modalities.

  12. Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness

    PubMed Central

    Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing

    2014-01-01

    We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing to cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Compared to these other nonparametric measures tested, skewness is best used for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maximum) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure and derive and show that skewness is correlated with this definition. PMID:24555451
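The proposed measure is the ordinary Fisher-Pearson sample skewness applied to the distribution of a tuning curve's mean firing rates; a minimal sketch (the example tuning curves below are illustrative):

```python
def sample_skewness(rates):
    """Fisher-Pearson sample skewness g1 = m3 / m2**1.5 of the mean firing
    rates along a tuning curve. A narrow peak over broad low-rate valleys
    gives large positive skew (sharp tuning); a broad or symmetric curve
    gives skew near zero. Invariant to the curve's mean and variance."""
    n = len(rates)
    mean = sum(rates) / n
    m2 = sum((r - mean) ** 2 for r in rates) / n  # second central moment
    m3 = sum((r - mean) ** 3 for r in rates) / n  # third central moment
    return m3 / m2 ** 1.5 if m2 > 0 else 0.0
```

Because no parametric model of the tuning curve is assumed, the same function applies unchanged to orientation, disparity, or object tuning curves.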

  13. Individual and Joint Expert Judgments as Reference Standards in Artifact Detection

    PubMed Central

    Verduijn, Marion; Peek, Niels; de Keizer, Nicolette F.; van Lieshout, Erik-Jan; de Pont, Anne-Cornelie J.M.; Schultz, Marcus J.; de Jonge, Evert; de Mol, Bas A.J.M.

    2008-01-01

    Objective: To investigate the agreement among clinical experts in their judgments of monitoring data with respect to artifacts, and to examine the effect of reference standards that consist of individual and joint expert judgments on the performance of artifact filters. Design: Individual judgments of four physicians, a majority vote judgment, and a consensus judgment were obtained for 30 time series of three monitoring variables: mean arterial blood pressure (ABPm), central venous pressure (CVP), and heart rate (HR). The individual and joint judgments were used to tune three existing automated filtering methods and to evaluate the performance of the resulting filters. Measurements: The interrater agreement was calculated in terms of positive specific agreement (PSA). The performance of the artifact filters was quantified in terms of sensitivity and positive predictive value (PPV). Results: PSA values between 0.33 and 0.85 were observed among clinical experts in their selection of artifacts, with relatively high values for CVP data. Artifact filters developed using judgments of individual experts were found to moderately generalize to new time series and other experts; sensitivity values ranged from 0.40 to 0.60 for ABPm and HR filters (PPV: 0.57–0.84), and from 0.63 to 0.80 for CVP filters (PPV: 0.71–0.86). A higher performance value for the filters was found for the three variable types when joint judgments were used for tuning the filtering methods. Conclusion: Given the disagreement among experts in their individual judgment of monitoring data with respect to artifacts, the use of joint reference standards obtained from multiple experts is recommended for development of automatic artifact filters. PMID:18096912
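Positive specific agreement for a pair of raters is computed from the jointly-positive and discordant label counts, ignoring the (typically huge) both-negative count; a sketch assuming binary artifact labels per time point:

```python
def positive_specific_agreement(r1, r2):
    """PSA = 2a / (2a + b + c), where a = both raters mark an artifact and
    b, c = the two discordant counts. r1, r2 are equal-length sequences of
    0/1 artifact labels; both-negative agreements do not enter the score."""
    a = sum(1 for x, y in zip(r1, r2) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(r1, r2) if x == 1 and y == 0)
    c = sum(1 for x, y in zip(r1, r2) if x == 0 and y == 1)
    denom = 2 * a + b + c
    return 2 * a / denom if denom else 1.0
```

PSA is preferred over raw percent agreement here because artifacts are rare: two raters who never mark anything would agree almost everywhere yet share no positive judgments.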

  14. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared with the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
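    An illustrative subset of the "basic image statistics" category can be sketched in plain Python (the paper extracts 83 attributes in total, including GLCM, GLRLM and Tamura features, which are not shown here):

```python
import math

# Illustrative subset of "basic image statistics" computed from a flat
# list of grey-level values: mean, variance, skewness, and Shannon
# entropy of the grey-level histogram.
def basic_image_statistics(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = (sum((p - mean) ** 3 for p in pixels) / n) / std ** 3 if std else 0.0
    hist = {}
    for p in pixels:                       # grey-level histogram
        hist[p] = hist.get(p, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}
```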

  15. PID-controller with predictor and auto-tuning algorithm: study of efficiency for thermal plants

    NASA Astrophysics Data System (ADS)

    Kuzishchin, V. F.; Merzlikina, E. I.; Hoang, Van Va

    2017-09-01

    The problem of estimating the efficiency of an automatic control system (ACS) with a Smith predictor and a PID algorithm for thermal plants is considered. In order to use the predictor, it is proposed to include an auto-tuning module (ATC) in the controller; the module calculates parameters for a second-order plant model with time delay. The study was conducted using programmable logic controllers (PLC), one of which performed the control, ATC, and predictor functions. A simulation model was used as the controlled plant, in two variants: one built on a separate PLC, and the other a physical model of a thermal plant in the form of an electrical heater. The efficiency of the ACS with the predictor was analyzed for several variants of the second-order plant model with time delay, by comparing transient processes in the system after a set-point change and after a disturbance acting on the plant. Recommendations are given for correcting the PID-algorithm parameters with a correction coefficient k when the predictor is used. It is shown that, when the set point is changed, the use of the predictor is effective provided the parameters are corrected with k = 2. When disturbances act on the plant, the benefit of the predictor is doubtful, because the transient process is too long. The reason is that, in the neighborhood of zero frequency, the amplitude-frequency characteristic (AFC) of the system with the predictor rises above the AFC of the system without it.
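    The control side of such a loop can be illustrated with a toy discrete-time simulation of PID on a first-order plant with transport delay; the plant parameters and step sizes below are arbitrary assumptions, not the paper's PLC or thermal-plant setup:

```python
# Toy discrete-time sketch: PID control of a first-order lag (time
# constant tau) with a transport delay of delay_steps * dt seconds.
# All numbers are illustrative, not the paper's setup.
def simulate_pid(kp, ki, kd, delay_steps=5, steps=300, dt=0.1, tau=2.0, gain=1.0):
    y = 0.0                      # plant output
    integral = 0.0
    prev_err = None
    buf = [0.0] * delay_steps    # transport delay acting on the control signal
    set_point = 1.0
    for _ in range(steps):
        err = set_point - y
        integral += err * dt
        deriv = 0.0 if prev_err is None else (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        buf.append(u)
        u_delayed = buf.pop(0)
        y += dt * (gain * u_delayed - y) / tau   # Euler step of the lag
    return y
```

    With moderate gains the integral term drives the output to the set point despite the delay; a Smith predictor becomes attractive when the delay is large relative to tau.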

  16. Transdimensional Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Bodin, T.; Sambridge, M.

    2009-12-01

    In seismic imaging, the degree of model complexity is usually determined by manually tuning damping parameters within a fixed parameterization chosen in advance. Here we present an alternative methodology for seismic travel time tomography where the model complexity is controlled automatically by the data. In particular, we use a variable parameterization consisting of Voronoi cells whose geometry, shape and number are mobile, all treated as unknowns in the inversion. The reversible jump algorithm is used to sample the transdimensional model space within a Bayesian framework, which avoids global damping procedures and the need to tune regularisation parameters. The method is an ensemble inference approach, as many potential solutions are generated with variable numbers of cells. Information is extracted from the ensemble as a whole by performing Monte Carlo integration to produce the expected Earth model. The ensemble of models can also be used to produce velocity uncertainty estimates, and experiments with synthetic data suggest they represent actual uncertainty surprisingly well. In a transdimensional approach, the level of data uncertainty directly determines the model complexity needed to satisfy the data. Intriguingly, the Bayesian formulation can be extended to the case where the data uncertainty is itself uncertain. Experiments show that it is possible to recover the data noise estimate while at the same time controlling model complexity in an automated fashion. The method is tested on synthetic data in a 2-D application and compared with a more standard matrix-based inversion scheme. The method has also been applied to real data obtained from cross correlation of ambient noise, where little is known about the size of the errors associated with the travel times. As an example, a tomographic image of Rayleigh wave group velocity for the Australian continent is constructed for 5 s data, together with uncertainty estimates.
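    Under the Voronoi parameterization, a velocity model is just a list of mobile nuclei, and evaluating the model at a point is a nearest-nucleus lookup. A minimal sketch (the tuple-based interface is illustrative; the reversible jump birth/death moves that change the number of cells are not shown):

```python
# A velocity model is a list of mobile Voronoi nuclei (cx, cy, v); the
# velocity at any point is that of the nearest nucleus. The number,
# positions and velocities of nuclei are unknowns of the inversion.
def voronoi_velocity(cells, x, y):
    _, v = min(((cx - x) ** 2 + (cy - y) ** 2, v) for cx, cy, v in cells)
    return v
```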

  17. An automated approach towards detecting complex behaviours in deep brain oscillations.

    PubMed

    Mace, Michael; Yousif, Nada; Naushahi, Mohammad; Abdullah-Al-Mamun, Khondaker; Wang, Shouyan; Nandi, Dipankar; Vaidyanathan, Ravi

    2014-03-15

    Extracting event-related potentials (ERPs) from neurological rhythms is of fundamental importance in neuroscience research. Standard ERP techniques typically require the associated ERP waveform to have low variance and be shape- and latency-invariant, and they require many repeated trials. Additionally, the non-ERP part of the signal needs to be sampled from an uncorrelated Gaussian process. This limits such methods to quantifying simple behaviours and movements, and only when multi-trial datasets are available. We introduce a method for automatically detecting events associated with complex or large-scale behaviours, where the ERP need not conform to the aforementioned requirements. The algorithm is based on the calculation of a detection contour and an adaptive threshold. These are combined using logical operations to produce a binary signal indicating the presence (or absence) of an event, with the associated detection parameters tuned using a multi-objective genetic algorithm. To validate the proposed methodology, deep brain signals were recorded from implanted electrodes in patients with Parkinson's disease as they participated in a large movement-based behavioural paradigm. The experiment involved bilateral recordings of local field potentials from the sub-thalamic nucleus (STN) and pedunculopontine nucleus (PPN) during an orientation task. After tuning, the algorithm is able to extract events, achieving training set sensitivities and specificities of [87.5 ± 6.5, 76.7 ± 12.8, 90.0 ± 4.1] and [92.6 ± 6.3, 86.0 ± 9.0, 29.8 ± 12.3] (mean ± 1 std) for the three subjects, averaged across the four neural sites. Furthermore, the methodology has potential for real-time applications, as only a single-trial ERP is required. Copyright © 2013 Elsevier B.V. All rights reserved.
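    The contour-versus-adaptive-threshold idea can be sketched as a running mean-plus-k-sigma test that yields a binary event signal; the window length and factor k below stand in for the parameters the genetic algorithm would tune, and this is not the authors' exact detection contour:

```python
import statistics

# Sketch: binary event signal from a "detection contour" (here simply
# the raw samples) crossing an adaptive threshold, defined as the
# running mean plus k running standard deviations. `win` and `k` stand
# in for the parameters tuned by the multi-objective genetic algorithm.
def detect_events(signal, win=5, k=2.0):
    events = []
    for i in range(len(signal)):
        window = signal[max(0, i - win):i + 1]
        mu = statistics.fmean(window)
        sd = statistics.pstdev(window)
        events.append(1 if sd > 0 and signal[i] > mu + k * sd else 0)
    return events
```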

  18. Design and implementation of adaptive PI control schemes for web tension control in roll-to-roll (R2R) manufacturing.

    PubMed

    Raul, Pramod R; Pagilla, Prabhakar R

    2015-05-01

    In this paper, two adaptive Proportional-Integral (PI) control schemes are designed and discussed for control of web tension in Roll-to-Roll (R2R) manufacturing systems. R2R systems transport continuous materials (called webs) on rollers from an unwind roll to a rewind roll. Maintaining web tension at the desired value is critical to many R2R processes such as printing, coating, and lamination. Fixed-gain PI tension control schemes currently used in industrial practice require extensive tuning and do not provide the desired performance under changing operating conditions and material properties. The first adaptive PI scheme utilizes the model reference approach, in which the controller gains are estimated by matching the actual closed-loop tension control system with an appropriately chosen reference model. The second adaptive PI scheme utilizes the indirect adaptive control approach together with a relay feedback technique to automatically initialize the adaptive PI gains. These adaptive tension control schemes can be implemented on any R2R manufacturing system. The key features of the two adaptive schemes are that their designs are simple for practicing engineers, easy to implement in real time, and automate the tuning process. Extensive experiments were conducted on a large experimental R2R machine that mimics many features of an industrial R2R machine, including trials with two different polymer webs and a variety of operating conditions. Implementation guidelines are provided for both adaptive schemes. Experimental results comparing the two adaptive schemes and a fixed-gain PI tension control scheme used in industrial practice are provided and discussed. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
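    Relay-feedback initialization of PI gains is commonly based on the Åström–Hägglund ultimate-gain estimate followed by a Ziegler–Nichols-style rule; a sketch under that assumption (the paper's exact initialization rules may differ):

```python
import math

# Relay-feedback sketch: the relay experiment yields a sustained
# oscillation of amplitude a and period Tu; the ultimate gain is
# estimated as Ku = 4*d / (pi*a) for relay amplitude d, then a
# Ziegler-Nichols-style PI rule is applied. Illustrative only.
def relay_pi_gains(relay_amplitude, osc_amplitude, osc_period):
    ku = 4.0 * relay_amplitude / (math.pi * osc_amplitude)
    kp = 0.45 * ku               # ZN PI: Kp = 0.45 Ku
    ti = osc_period / 1.2        # ZN PI: Ti = Tu / 1.2
    return kp, kp / ti           # proportional and integral gains
```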

  19. Automatic machine-learning based identification of jogging periods from accelerometer measurements of adolescents under field conditions.

    PubMed

    Zdravevski, Eftim; Risteska Stojkoska, Biljana; Standl, Marie; Schulz, Holger

    2017-01-01

    Assessment of the health benefits associated with physical activity depends on the activity's duration, intensity and frequency; their correct identification is therefore very valuable in epidemiological and clinical studies. The aims of this study are: to develop an algorithm for automatic identification of intended jogging periods; and to assess whether identification performance improves when using two accelerometers, at the hip and ankle, compared with only one at either position. The study used diarized jogging periods and the corresponding accelerometer data from thirty-nine 15-year-old adolescents, collected under field conditions as part of the GINIplus study. The data were obtained from two accelerometers placed at the hip and ankle. An automated feature engineering technique was applied to extract features from the raw accelerometer readings and to select a subset of the most significant features. Four machine learning algorithms were used for classification: logistic regression, Support Vector Machines, Random Forest and Extremely Randomized Trees. Classification was performed using data from the hip accelerometer only, from the ankle accelerometer only, and from both accelerometers. The reported jogging periods were verified by visual inspection and used as the gold standard. After feature selection and tuning of the classification algorithms, all options provided a classification accuracy of at least 0.99, independent of the applied segmentation strategy with sliding windows of either 60 s or 180 s. The best matching ratio, i.e. the length of correctly identified jogging periods relative to the total time including missed periods, was up to 0.875. It could be further improved, up to 0.967, by applying post-classification rules that considered the duration of breaks and jogging periods. There was no obvious benefit of using two accelerometers; rather, almost the same performance could be achieved from either accelerometer position. Machine learning techniques can be used for automatic activity recognition, as they provide very accurate recognition, significantly more accurate than keeping a diary. Identification of jogging periods in adolescents can be performed using only one accelerometer; performance-wise, there is no significant benefit from using accelerometers at both locations.
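    The segmentation-plus-feature step can be sketched as a sliding window over the acceleration trace with a few simple per-window statistics; the window/step sizes and features below are generic illustrations, not the study's automatically engineered feature set:

```python
import statistics

# Sliding-window segmentation with generic per-window features
# (mean, standard deviation, mean-crossing count). Window and step
# sizes are in samples; all choices here are illustrative.
def window_features(acc, window=60, step=60):
    feats = []
    for start in range(0, len(acc) - window + 1, step):
        w = acc[start:start + window]
        mu = statistics.fmean(w)
        sd = statistics.pstdev(w)
        crossings = sum(1 for a, b in zip(w, w[1:])
                        if (a - mu) * (b - mu) < 0)
        feats.append((mu, sd, crossings))
    return feats
```

    The resulting feature vectors would then be fed to any of the four classifiers named above.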

  20. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanam, A; Min, Y; Beron, P

    Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during radiotherapy setup and alert the therapist before treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each camera was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real time. Changes in an object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was able to recognize wrong patient setups and wrong patient accessories effectively. The combined use of camera color and depth information enabled a topology-preserving approach to verify patient safety hazards automatically, even in scenarios where the depth information is only partially available. 
    Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
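    The PCA step used for feature-to-object correlation reduces each segmented point cloud to its principal axes. A dependency-free sketch that finds the dominant axis by power iteration (illustrative only; the actual system combines this with graph cuts, CamShift tracking and a convolutional network):

```python
# Dominant principal axis of a 3D point cloud via power iteration on
# its covariance matrix -- the kind of shape feature a PCA-based
# feature-to-object correlation can use. Dependency-free sketch.
def principal_axis(points, iters=100):
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]          # centroid
    cov = [[sum((p[i] - c[i]) * (p[j] - c[j]) for p in points) / n
            for j in range(3)] for i in range(3)]                  # 3x3 covariance
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```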

  1. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-01

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. In the re-planning process, a manual trial-and-error approach to fine-tuning planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information from the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures.
The re-optimization process takes about 30 s using our in-house optimization engine. This work was originally presented at the 54th AAPM annual meeting in Charlotte, NC, July 29-August 2, 2012.
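    The two-loop structure can be illustrated with a toy weighted least-squares version: the inner loop fits fluence to prescribed doses under fixed voxel weights, and the outer loop inflates the weights of voxels whose dose deviates from the reference plan. This is a structural sketch only; the paper adjusts weights from DVH-curve deviations and runs on a GPU:

```python
# Structural sketch of the two-loop re-optimization. A is the toy
# dose-deposition matrix (dose = A x fluence); names and sizes are
# illustrative assumptions.
def replan(A, prescribed, ref_dose, outer=10, inner=200, lr=0.1):
    n, m = len(prescribed), len(A[0])
    w = [1.0] * n                # voxel weighting factors
    x = [0.0] * m                # fluence variables
    for _ in range(outer):
        for _ in range(inner):   # inner loop: weighted quadratic objective
            d = [sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]
            for j in range(m):
                g = 2.0 * sum(w[i] * A[i][j] * (d[i] - prescribed[i])
                              for i in range(n))
                x[j] -= lr * g
            x = [max(0.0, xj) for xj in x]   # fluence stays non-negative
        d = [sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]
        # outer loop: inflate weights where dose deviates from the reference
        w = [w[i] * (1.0 + abs(d[i] - ref_dose[i])) for i in range(n)]
    return d
```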

  2. Determinants of Intention to Use Mobile Phone Caller Tunes to Promote Voluntary Blood Donation: Cross-Sectional Study

    PubMed Central

    Appiah, Bernard; Burdine, James N; Aftab, Ammar; Asamoah-Akuoko, Lucy; Anum, David A; Kretchy, Irene A; Samman, Elfreda W; Appiah, Patience B; Bates, Imelda

    2018-01-01

    Background Voluntary blood donation rates are low in sub-Saharan Africa. Sociobehavioral factors such as a belief that donated blood would be used for performing rituals deter people from donating blood. There is a need for culturally appropriate communication interventions to encourage individuals to donate blood. Health care interventions that use mobile phones have increased in developing countries, although many of them focus on SMS text messaging (short message service, SMS). A unique feature of mobile phones that has so far not been used for aiding blood donation is caller tunes. Caller tunes replace the ringing sound heard by a caller to a mobile phone before the called party answers the call. In African countries such as Ghana, instead of the typical ringing sound, a caller may hear a message or song. Despite the popularity of such caller tunes, there is a lack of empirical studies on their potential use for promoting blood donation. Objective The aim of this study was to use the technology acceptance model to explore the influence of the factors—perceived ease of use, perceived usefulness, attitude, and free of cost—on intentions of blood or nonblood donors to download blood donation-themed caller tunes to promote blood donation, if available. Methods A total of 478 blood donors and 477 nonblood donors were purposively sampled for an interviewer-administered questionnaire survey at blood donation sites in Accra, Ghana. Data were analyzed using descriptive statistics, exploratory factor analysis, and confirmatory factor analysis or structural equation modeling, leading to hypothesis testing to examine factors that determine intention to use caller tunes for blood donation among blood or nonblood donors who use or do not use mobile phone caller tunes.
    Results Perceived usefulness had a significant effect on intention to use caller tunes among blood donors with caller tunes (beta=.293, P<.001), blood donors without caller tunes (beta=.165, P=.02), nonblood donors with caller tunes (beta=.278, P<.001), and nonblood donors without caller tunes (beta=.164, P=.01). Attitude had a significant effect on intention to use caller tunes among blood donors without caller tunes (beta=.351, P<.001), nonblood donors with caller tunes (beta=.384, P<.001), and nonblood donors without caller tunes (beta=.539, P<.001), but not among blood donors with caller tunes (beta=.056, P=.44). The effect of free-of-cost caller tunes on the intention to use them for blood donation was statistically significant (beta=.169, P<.001) only in the case of nonblood donors without caller tunes, whereas this path was not statistically significant in the other models. Conclusions Our results provide empirical evidence for designing caller tunes to promote blood donation in Ghana. The study found that making caller tunes free is particularly relevant for nonblood donors with no caller tunes. PMID:29728343

  3. Determinants of Intention to Use Mobile Phone Caller Tunes to Promote Voluntary Blood Donation: Cross-Sectional Study.

    PubMed

    Appiah, Bernard; Burdine, James N; Aftab, Ammar; Asamoah-Akuoko, Lucy; Anum, David A; Kretchy, Irene A; Samman, Elfreda W; Appiah, Patience B; Bates, Imelda

    2018-05-04

    Voluntary blood donation rates are low in sub-Saharan Africa. Sociobehavioral factors such as a belief that donated blood would be used for performing rituals deter people from donating blood. There is a need for culturally appropriate communication interventions to encourage individuals to donate blood. Health care interventions that use mobile phones have increased in developing countries, although many of them focus on SMS text messaging (short message service, SMS). A unique feature of mobile phones that has so far not been used for aiding blood donation is caller tunes. Caller tunes replace the ringing sound heard by a caller to a mobile phone before the called party answers the call. In African countries such as Ghana, instead of the typical ringing sound, a caller may hear a message or song. Despite the popularity of such caller tunes, there is a lack of empirical studies on their potential use for promoting blood donation. The aim of this study was to use the technology acceptance model to explore the influence of the factors (perceived ease of use, perceived usefulness, attitude, and free of cost) on intentions of blood or nonblood donors to download blood donation-themed caller tunes to promote blood donation, if available. A total of 478 blood donors and 477 nonblood donors were purposively sampled for an interviewer-administered questionnaire survey at blood donation sites in Accra, Ghana. Data were analyzed using descriptive statistics, exploratory factor analysis, and confirmatory factor analysis or structural equation modeling, leading to hypothesis testing to examine factors that determine intention to use caller tunes for blood donation among blood or nonblood donors who use or do not use mobile phone caller tunes.
    Perceived usefulness had a significant effect on intention to use caller tunes among blood donors with caller tunes (beta=.293, P<.001), blood donors without caller tunes (beta=.165, P=.02), nonblood donors with caller tunes (beta=.278, P<.001), and nonblood donors without caller tunes (beta=.164, P=.01). Attitude had a significant effect on intention to use caller tunes among blood donors without caller tunes (beta=.351, P<.001), nonblood donors with caller tunes (beta=.384, P<.001), and nonblood donors without caller tunes (beta=.539, P<.001), but not among blood donors with caller tunes (beta=.056, P=.44). The effect of free-of-cost caller tunes on the intention to use them for blood donation was statistically significant (beta=.169, P<.001) only in the case of nonblood donors without caller tunes, whereas this path was not statistically significant in the other models. Our results provide empirical evidence for designing caller tunes to promote blood donation in Ghana. The study found that making caller tunes free is particularly relevant for nonblood donors with no caller tunes. ©Bernard Appiah, James N Burdine, Ammar Aftab, Lucy Asamoah-Akuoko, David A Anum, Irene A Kretchy, Elfreda W Samman, Patience B Appiah, Imelda Bates. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.05.2018.

  4. Multistation alarm system for eruptive activity based on the automatic classification of volcanic tremor: specifications and performance

    NASA Astrophysics Data System (ADS)

    Langer, Horst; Falsaperla, Susanna; Messina, Alfio; Spampinato, Salvatore

    2015-04-01

    With over fifty eruptive episodes (Strombolian activity, lava fountains, and lava flows) between 2006 and 2013, Mt Etna, Italy, underscored its role as the most active volcano in Europe. Seven paroxysmal lava fountains occurred at the South East Crater in 2007-2008, and 46 at the New South East Crater between 2011 and 2013. Month-long lava emissions affected the upper eastern flank of the volcano in 2006 and 2008-2009. Against this background, effective monitoring and forecasting of volcanic phenomena are a first-order issue, given their potential socio-economic impact in a densely populated region like the town of Catania and its surroundings. For example, explosive activity has often formed thick ash clouds with widespread tephra fall, able to disrupt air traffic as well as to cause severe problems at infrastructures such as highways and roads. For timely information on changes in the state of the volcano and the possible onset of dangerous eruptive phenomena, analysis of the continuous background seismic signal, the so-called volcanic tremor, turned out to be of paramount importance. Changes in the state of the volcano, as well as in its eruptive style, are usually concurrent with variations in the spectral characteristics (amplitude and frequency content) of tremor. The huge amount of digital data continuously acquired by INGV's broadband seismic stations every day makes manual analysis difficult, so techniques for automatic classification of the tremor signal are applied. The application of unsupervised classification techniques to the tremor data revealed significant changes well before the onset of the eruptive episodes. This evidence led to the development of specific software packages for real-time processing of the tremor data. The operational characteristics of these tools - fail-safe operation, robustness with respect to noise and data outages, and computational efficiency - allowed the identification of criteria for automatic alarm flagging.
    The system is hitherto one of the main automatic alerting tools for identifying impending eruptive events at Etna. The currently operating software, named KKAnalysis, is applied to the data stream continuously recorded at two seismic stations, and the data are merged with reference datasets of past eruptive episodes. In doing so, the results of pattern classification can be immediately compared with previous eruptive scenarios. Given the rich material collected in recent years, here we propose the application of the alert system to a wider set of stations (up to eleven in total) at different elevations (1200-3050 m) and distances (1-8 km) from the summit craters. Critical alert parameters were empirically defined to obtain an optimal tuning of the alert system for each station. To verify the robustness of this new multistation alert system, a dataset encompassing about eight years of continuous seismic records (since 2006) was processed offline using KKAnalysis and collateral software. We then analyzed the performance of the classifier in terms of timing and the spatial distribution of the stations.
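    The unsupervised classification of tremor patterns can be illustrated with a bare-bones 2-cluster k-means on (amplitude, frequency) feature pairs; this is a generic stand-in, not the actual KKAnalysis classifier, whose internals are not described in the abstract:

```python
# Bare-bones 2-cluster k-means on (amplitude, frequency) feature pairs,
# a stand-in for unsupervised tremor-pattern classification.
def kmeans2(points, iters=20):
    c0, c1 = points[0], points[-1]          # naive initialization
    for _ in range(iters):
        g0, g1 = [], []
        for p in points:
            d0 = sum((a - b) ** 2 for a, b in zip(p, c0))
            d1 = sum((a - b) ** 2 for a, b in zip(p, c1))
            (g0 if d0 <= d1 else g1).append(p)
        if g0:
            c0 = tuple(sum(vals) / len(g0) for vals in zip(*g0))
        if g1:
            c1 = tuple(sum(vals) / len(g1) for vals in zip(*g1))
    return c0, c1
```

    An alarm rule could then flag windows whose features fall in the cluster associated with past eruptive episodes.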

  5. AUTOMOTIVE DIESEL MAINTENANCE 1. UNIT VII, ENGINE TUNE-UP--DETROIT DIESEL ENGINE.

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 30-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF TUNE-UP PROCEDURES FOR DIESEL ENGINES. TOPICS ARE SCHEDULING TUNE-UPS, AND TUNE-UP PROCEDURES. THE MODULE CONSISTS OF A SELF-INSTRUCTIONAL BRANCH PROGRAMED TRAINING FILM "ENGINE TUNE-UP--DETROIT DIESEL ENGINE" AND OTHER MATERIALS. SEE VT 005 655 FOR FURTHER INFORMATION.…

  6. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    PubMed

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications, and because of this the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases; since MPCs can differ significantly, these tuning methods become inapplicable elsewhere, and a trial-and-error tuning approach must be used, which can be quite time-consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. The approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters of a given MPC. In addition, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, and can use multiple inputs to determine the tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases, where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, showing that the tuning parameters vary in order to meet each definition of optimum control, and thus that the generalized automated tuning approach for MPCs is feasible.
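    A bare-bones real-coded genetic algorithm of the kind described, minimizing a generic fitness score that stands in for the multi-objective fuzzy evaluation of closed-loop behavior (the operator choices below are illustrative, not the authors'):

```python
import random

# Bare-bones real-coded GA: elitist truncation selection, midpoint
# crossover, Gaussian mutation, clipping to bounds. `fitness` is
# minimized and stands in for the fuzzy-aggregated control score.
def genetic_tune(fitness, bounds, pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 + rng.gauss(0.0, 0.1 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append([min(hi, max(lo, v))
                             for v, (lo, hi) in zip(child, bounds)])
        pop = elite + children
    return min(pop, key=fitness)
```

    Only `bounds` (the number and ranges of tuning parameters) changes between MPC structures, which is the generality the abstract emphasizes.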

  7. Auto-tuning for NMR probe using LabVIEW

    NASA Astrophysics Data System (ADS)

    Quen, Carmen; Pham, Stephanie; Bernal, Oscar

    2014-03-01

    The typical manual NMR-tuning method is not suitable for broadband spectra spanning several megahertz linewidths. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission line reflections, to name a few. We present the design of an auto-tuning system, built in the graphical programming language LabVIEW, that minimizes these problems. The program uses a simplified model of the NMR probe conditions near perfect tuning to mimic the tuning process and predict the capacitor shaft positions needed to achieve the desired impedance. The tuning capacitors of the probe are controlled by stepper motors through a LabVIEW/computer interface. Our program calculates the effective capacitance needed to tune the probe and provides control parameters to advance the motors in the right direction. The impedance reading of a network analyzer can be used to correct the model parameters in real time for feedback control.
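    Matching the probe amounts to driving the reflection coefficient toward zero; a sketch of the quantity such a motor controller would minimize, assuming a standard 50 Ω reference impedance (the interface is illustrative, not the LabVIEW implementation):

```python
# |Gamma| = |(Z - Z0)/(Z + Z0)|: the mismatch the stepper-motor
# controller drives toward zero. Z0 = 50 ohm is an assumed reference.
def reflection_coefficient(z_real, z_imag, z0=50.0):
    z = complex(z_real, z_imag)
    return abs((z - z0) / (z + z0))
```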

  8. Fine-grained leukocyte classification with deep residual learning for microscopic images.

    PubMed

    Qin, Feiwei; Gao, Nannan; Peng, Yong; Wu, Zizhao; Shen, Shuying; Grudtsin, Artur

    2018-08-01

    Leukocyte classification and cytometry have wide applications in the medical domain, and previous research has usually exploited machine learning techniques to classify leukocytes automatically. However, these methods are constrained by the past state of machine learning: extracting distinctive features from raw microscopic images is difficult, and the widely used SVM classifier has relatively few parameters to tune, so they cannot efficiently handle fine-grained classification cases in which the white blood cells have up to 40 categories. Based on deep learning theory, a systematic study of finer leukocyte classification is conducted in this paper. A deep residual neural network based leukocyte classifier is constructed first; it imitates the domain expert's cell recognition process and extracts salient features robustly and automatically. The deep neural network classifier's topology is then adjusted according to prior knowledge of the white blood cell test. After that, a microscopic image dataset with almost one hundred thousand labeled leukocytes belonging to 40 categories was built, and combined training strategies were adopted to give the designed classifier good generalization ability. The proposed deep residual neural network based classifier was tested on this microscopic image dataset with 40 leukocyte categories. It achieves top-1 accuracy of 77.80% and top-5 accuracy of 98.75% during the training procedure; the average accuracy on the test set is nearly 76.84%. This paper presents a fine-grained leukocyte classification method for microscopic images based on deep residual learning theory and medical domain knowledge. Experimental results validate the feasibility and effectiveness of the approach, and extended experiments support that the fine-grained leukocyte classifier could be used in real medical applications to assist doctors in diagnosing diseases and significantly reduce manual effort. Copyright © 2018 Elsevier B.V. All rights reserved.
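    The core of residual learning is the identity skip connection. A dependency-free sketch of one bias-free fully-connected residual block's forward pass (real classifiers like the one above use convolutional blocks with batch normalization):

```python
# Forward pass of one bias-free fully-connected residual block:
# y = relu(x + W2 . relu(W1 . x)). The skip connection (adding x back)
# is what lets very deep networks train well.
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, w1, w2):
    h = relu([sum(wij * xj for wij, xj in zip(row, x)) for row in w1])
    f = [sum(wij * hj for wij, hj in zip(row, h)) for row in w2]
    return relu([xi + fi for xi, fi in zip(x, f)])
```

    With the residual weights at zero the block is exactly the identity, which is why adding more blocks cannot easily hurt training.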

  9. Wave Power Demonstration Project at Reedsport, Oregon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mekhiche, Mike; Downie, Bruce

    2013-10-21

    Ocean wave power can be a significant source of large‐scale, renewable energy for the US electrical grid. The Electric Power Research Institute (EPRI) conservatively estimated that 20% of all US electricity could be generated by wave energy. Ocean Power Technologies, Inc. (OPT), with funding from private sources and the US Navy, developed the PowerBuoy to generate renewable energy from the readily available power in ocean waves. OPT's PowerBuoy converts the energy in ocean waves to electricity using the rise and fall of waves to move the buoy up and down (mechanical stroking) which drives an electric generator. This electricity is then conditioned and transmitted ashore as high‐voltage power via underwater cable. OPT's wave power generation system includes sophisticated techniques to automatically tune the system for efficient conversion of random wave energy into low cost green electricity, for disconnecting the system in large waves for hardware safety and protection, and for automatically restoring operation when wave conditions normalize. As the first utility scale wave power project in the US, the Wave Power Demonstration Project at Reedsport, OR, will consist of 10 PowerBuoys located 2.5 miles off the coast. This U.S. Department of Energy Grant funding along with funding from PNGC Power, an Oregon‐based electric power cooperative, was utilized for the design completion, fabrication, assembly and factory testing of the first PowerBuoy for the Reedsport project. At this time, the design and fabrication of this first PowerBuoy and factory testing of the power take‐off subsystem are complete; additionally the power take‐off subsystem has been successfully integrated into the spar.

  10. Estimating ice albedo from fine debris cover quantified by a semi-automatic method: the case study of Forni Glacier, Italian Alps

    NASA Astrophysics Data System (ADS)

    Azzoni, Roberto Sergio; Senese, Antonella; Zerboni, Andrea; Maugeri, Maurizio; Smiraglia, Claudio; Diolaiuti, Guglielmina Adele

    2016-03-01

    In spite of the quite abundant literature focusing on fine debris deposition over glacier accumulation areas, less attention has been paid to the glacier melting surface. Accordingly, we proposed a novel method based on semi-automatic image analysis to estimate ice albedo from fine debris coverage (d). Our procedure was tested on the surface of a wide Alpine valley glacier (the Forni Glacier, Italy) in the summers of 2011, 2012 and 2013, acquiring parallel data sets of in situ measurements of ice albedo and high-resolution surface images. Analysis of 51 images yielded d values ranging from 0.01 to 0.63 and albedo was found to vary from 0.06 to 0.32. The estimated d values are in a linear relation with the natural logarithm of measured ice albedo (R = -0.84). The robustness of our approach in evaluating d was analyzed through five sensitivity tests, and we found that it is largely replicable. On the Forni Glacier, we also quantified a mean debris coverage rate (Cr) equal to 6 g m-2 per day during the ablation season of 2013, thus supporting previous studies that describe ongoing darkening phenomena at the surface of Alpine debris-free glaciers. In addition to debris coverage, we also considered the impact of water (both from melt and rainfall) as a factor that tunes albedo: meltwater occurs during the central hours of the day, decreasing the albedo due to its lower reflectivity; instead, rainfall causes a subsequent mean daily albedo increase slightly higher than 20 %, although it is short-lasting (from 1 to 4 days).
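
The reported relation, with estimated d linear in the natural logarithm of measured albedo (R = -0.84), amounts to a one-variable log-linear regression. A sketch on synthetic data; the coefficients and noise level below are invented for illustration and are not the paper's fit:

```python
import numpy as np

# Hypothetical paired observations: debris cover fraction d and measured
# ice albedo, mimicking the reported ranges (d: 0.01-0.63, albedo:
# 0.06-0.32) under an assumed log-linear relation plus measurement noise.
rng = np.random.default_rng(1)
d = rng.uniform(0.01, 0.63, size=51)
true_a, true_b = -1.2, -2.5          # assumed coefficients, not the paper's
ln_albedo = true_a + true_b * d + rng.normal(scale=0.05, size=d.size)

# Least-squares fit of ln(albedo) = a + b*d, as in the paper's regression.
b, a = np.polyfit(d, ln_albedo, 1)
r = np.corrcoef(d, ln_albedo)[0, 1]  # Pearson correlation (negative slope)
```

Exponentiating the fitted line then maps any semi-automatically estimated d back to a predicted albedo.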

  11. Automated clustering-based workload characterization

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Menasce, Daniel A.; Yesha, Yelena

    1996-01-01

    The demands placed on the mass storage systems at various federal agencies and national laboratories are continuously increasing in intensity. This forces system managers to constantly monitor the system, evaluate the demand placed on it, and tune it appropriately using either heuristics based on experience or analytic models. Performance models require an accurate workload characterization. This can be a laborious and time-consuming process. It became evident from our experience that a tool is necessary to automate the workload characterization process. This paper presents the design and discusses the implementation of a tool for workload characterization of mass storage systems. The main features of the tool discussed here are: (1) Automatic support for peak-period determination. Histograms of system activity are generated and presented to the user for peak-period determination; (2) Automatic clustering analysis. The data collected from the mass storage system logs is clustered using clustering algorithms and tightness measures to limit the number of generated clusters; (3) Reporting of varied file statistics. The tool computes several statistics on file sizes such as average, standard deviation, minimum, maximum, frequency, as well as average transfer time. These statistics are given on a per cluster basis; (4) Portability. The tool can easily be used to characterize the workload in mass storage systems of different vendors. The user needs to specify through a simple log description language how a specific log should be interpreted. The rest of this paper is organized as follows. Section two presents basic concepts in workload characterization as they apply to mass storage systems. Section three describes clustering algorithms and tightness measures. The following section presents the architecture of the tool. Section five presents some results of workload characterization using the tool. Finally, section six presents some concluding remarks.
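
The clustering analysis in feature (2) can be sketched with a plain k-means over log-scaled file sizes; the file sizes, the choice k=2, and the use of base-10 logs are all made up for illustration, not the tool's actual algorithm or tightness measures:

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    # Plain k-means on 1-D data, standing in for the tool's clustering
    # of records extracted from mass-storage system logs.
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

# Hypothetical transfer sizes in bytes: small metadata files and large
# archive files separate cleanly on a log scale.
sizes = np.array([2e3, 3e3, 1e3, 4e9, 5e9, 3e9, 2.5e3, 6e9])
centers, labels = kmeans_1d(np.log10(sizes), k=2)
```

Per-cluster statistics (mean, standard deviation, extremes) would then be computed over each label group, as in feature (3).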

  12. Automatic exposure control in CT: the effect of patient size, anatomical region and prescribed modulation strength on tube current and image quality.

    PubMed

    Papadakis, Antonios E; Perisinakis, Kostas; Damilakis, John

    2014-10-01

    To study the effect of patient size, body region and modulation strength on tube current and image quality in CT examinations that use automatic tube current modulation (ATCM). Ten physical anthropomorphic phantoms that simulate an individual as neonate, 1-, 5-, 10-year-old and adult at various body habitus were employed. CT acquisition of head, neck, thorax and abdomen/pelvis was performed with ATCM activated at weak, average and strong modulation strength. The mean modulated mAs (mAsmod) values were recorded. Image noise was measured at selected anatomical sites. From the neonate to the 10-year-old phantom, the recorded mAsmod increased by 30 %, 14 %, 6 % and 53 % for head, neck, thorax and abdomen/pelvis, respectively (P < 0.05). The mAsmod was lower than the preselected mAs with the exception of the 10-year-old phantom. In paediatric and adult phantoms, the mAsmod ranged from 44 and 53 for weak to 117 and 93 for strong modulation strength, respectively. At the same exposure parameters image noise increased with body size (P < 0.05). The ATCM system studied here may affect dose differently for different patient habitus. Dose may decrease for overweight adults but increase for children older than 5 years old. Care should be taken when implementing ATCM protocols to ensure that image quality is maintained. • ATCM efficiency is related to the size of the patient's body. • ATCM should be activated with caution in overweight adult individuals. • ATCM may increase radiation dose in children older than 5 years old. • ATCM efficiency depends on the protocol selected for a specific anatomical region. • Modulation strength may be appropriately tuned to enhance ATCM efficiency.

  13. Graph Theory-Based Brain Connectivity for Automatic Classification of Multiple Sclerosis Clinical Courses.

    PubMed

    Kocevar, Gabriel; Stamile, Claudio; Hannoun, Salem; Cotton, François; Vukusic, Sandra; Durand-Dubief, Françoise; Sappey-Marinier, Dominique

    2016-01-01

    Purpose: In this work, we introduce a method to classify Multiple Sclerosis (MS) patients into four clinical profiles using structural connectivity information. For the first time, we try to solve this question in a fully automated way using a computer-based method. The main goal is to show how the combination of graph-derived metrics with machine learning techniques constitutes a powerful tool for a better characterization and classification of MS clinical profiles. Materials and Methods: Sixty-four MS patients [12 Clinical Isolated Syndrome (CIS), 24 Relapsing Remitting (RR), 24 Secondary Progressive (SP), and 17 Primary Progressive (PP)] along with 26 healthy controls (HC) underwent MR examination. T1 and diffusion tensor imaging (DTI) were used to obtain structural connectivity matrices for each subject. Global graph metrics, such as density and modularity, were estimated and compared between subjects' groups. These metrics were further used to classify patients using a tuned Support Vector Machine (SVM) combined with a Radial Basis Function (RBF) kernel. Results: When comparing MS patients to HC subjects, a greater assortativity, transitivity, and characteristic path length as well as a lower global efficiency were found. Using all graph metrics, the best F-measures (91.8, 91.8, 75.6, and 70.6%) were obtained for the binary (HC-CIS, CIS-RR, RR-PP) and multi-class (CIS-RR-SP) classification tasks, respectively. When using only one graph metric, the best F-measures (83.6, 88.9, and 70.7%) were achieved for modularity with the previous binary classification tasks. Conclusion: Based on a simple DTI acquisition associated with structural brain connectivity analysis, this automatic method allowed an accurate classification of different MS patients' clinical profiles.
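
The classification step, a cross-validation-tuned RBF-kernel SVM over graph-derived feature vectors, can be sketched as follows. The synthetic features stand in for metrics such as density and modularity, and the hyperparameter grid is illustrative, not the study's search space:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-subject graph-metric vectors with three
# classes playing the role of clinical profiles.
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Tune the RBF-kernel SVM over C and gamma by cross-validation, as the
# paper does for its clinical-profile classifier.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]},
                    cv=3)
grid.fit(Xtr, ytr)
acc = grid.score(Xte, yte)
```

`grid.best_params_` then reports the tuned (C, gamma) pair actually used on held-out subjects.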

  14. An Observation Knowledgebase for Hinode Data

    NASA Astrophysics Data System (ADS)

    Hurlburt, Neal E.; Freeland, S.; Green, S.; Schiff, D.; Seguin, R.; Slater, G.; Cirtain, J.

    2007-05-01

    We have developed a standards-based system for the Solar Optical and X-Ray Telescopes on the Hinode orbiting solar observatory which can serve as part of a developing Heliophysics informatics system. Our goal is to make the scientific data acquired by Hinode more accessible and useful to scientists by allowing them to do reasoning and flexible searches on observation metadata and to ask higher-level questions of the system than previously allowed. The Hinode Observation Knowledgebase relates the intentions and goals of the observation planners (as-planned metadata) with actual observational data (as-run metadata), along with connections to related models, data products and identified features (follow-up metadata) through a citation system. Summaries of the data (both as image thumbnails and short "film strips") serve to guide researchers to the observations appropriate for their research, and these are linked directly to the data catalog for easy extraction and delivery. The semantic information of the observation (field of view, wavelength, type of observable, average cadence, etc.) is captured through simple user interfaces and encoded using the VOEvent XML standard (with the addition of some solar-related extensions). These interfaces merge metadata acquired automatically during both the mission planning and data analysis (see Seguin et al. 2007 at this meeting) phases with that obtained directly from the planner/analyst and send them to be incorporated into the knowledgebase. The resulting information is automatically rendered into standard categories based on planned and recent observations, as well as by popularity and recommendations from the science team. These are also directly searchable through both web-based searches and direct calls to the API. Observation details can also be rendered as RSS, iTunes and Google Earth interfaces.
The resulting system provides a useful tool to researchers and can act as a demonstration for larger, more complex systems.

  15. Automatization Project for the Carl-Zeiss-Jena Coudé Telescope of the Simón Bolívar Planetarium I. The Electro-Mechanic System

    NASA Astrophysics Data System (ADS)

    Núñez, A.; Maharaj, A.; Muñoz, A. G.

    2009-05-01

    The ``Complejo Científico, Cultural y Turístico Simón Bolívar'' (CCCTSB), located in Maracaibo, Venezuela, houses the Simón Bolívar Planetarium and a 150 mm aperture, 2250 mm focal length Carl-Zeiss-Jena Coudé refractor telescope. In this work we discuss the schematics for the automatization project of this telescope: the planned improvements, methodology, motors, micro-controllers, interfaces, and the current status of the project. The project addresses the first two levels of the automation pyramid, the sensor-actuator level and the control (plant floor) level; the process-control level corresponds to the software-related part. This means the project deals directly with the electrical, electronic, and mechanical components, and with assembler micro-controller code. The PC-related components, such as graphical user interfaces (GUIs), remote control, and the grid database, correspond to the next two levels of the automation pyramid. The goal is that little human intervention will be required to operate the telescope beyond supplying a pair of coordinates to locate and follow an object in the sky. A set of three servomotors, coupled to the telescope through gear boxes, will drive the right ascension, declination, and focus movements. A three-phase induction motor will rotate the dome, and a DC motor powered by solar panels is suggested for dome aperture and closure. All these actuators are controlled by an 8-bit micro-controller, which receives the coordinate input and the signals from the position sensors and implements the PID control algorithm. This algorithm is tuned based on a mathematical model of the telescope's electro-mechanical instrumentation.
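
The PID control algorithm at the heart of such a pointing loop can be sketched in a few lines. The gains and the first-order motor model below are illustrative placeholders, not the values tuned from the telescope's electro-mechanical model:

```python
def pid_step(err, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    # One discrete PID update: integrate the error, differentiate it
    # against the previous sample, and return (control output, new state).
    integ, prev_err = state
    integ += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integ + kd * deriv, (integ, err)

# Drive a toy axis (force input with viscous drag) toward a target
# coordinate; the micro-controller would run the same loop per axis.
target, pos, vel, dt = 10.0, 0.0, 0.0, 0.01
state = (0.0, target - pos)
for _ in range(5000):
    u, state = pid_step(target - pos, state)
    vel += (u - 0.5 * vel) * dt
    pos += vel * dt
```

Tuning then means choosing kp, ki, kd from the plant model so the axis settles quickly without excessive overshoot.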

  16. Development and Training of a Neural Controller for Hind Leg Walking in a Dog Robot

    PubMed Central

    Hunt, Alexander; Szczecinski, Nicholas; Quinn, Roger

    2017-01-01

    Animals dynamically adapt to varying terrain and small perturbations with remarkable ease. These adaptations arise from complex interactions between the environment and biomechanical and neural components of the animal's body and nervous system. Research into mammalian locomotion has resulted in several neural and neuro-mechanical models, some of which have been tested in simulation, but few “synthetic nervous systems” have been implemented in physical hardware models of animal systems. One reason is that the implementation into a physical system is not straightforward. For example, it is difficult to make robotic actuators and sensors that model those in the animal. Therefore, even if the sensorimotor circuits were known in great detail, those parameters would not be applicable and new parameter values must be found for the network in the robotic model of the animal. This manuscript demonstrates an automatic method for setting parameter values in a synthetic nervous system composed of non-spiking leaky integrator neuron models. This method works by first using a model of the system to determine required motor neuron activations to produce stable walking. Parameters in the neural system are then tuned systematically such that it produces similar activations to the desired pattern determined using expected sensory feedback. We demonstrate that the developed method successfully produces adaptive locomotion in the rear legs of a dog-like robot actuated by artificial muscles. Furthermore, the results support the validity of current models of mammalian locomotion. This research will serve as a basis for testing more complex locomotion controllers and for testing specific sensory pathways and biomechanical designs. Additionally, the developed method can be used to automatically adapt the neural controller for different mechanical designs such that it could be used to control different robotic systems. PMID:28420977
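
A non-spiking leaky integrator neuron of the kind used in such synthetic nervous systems can be sketched directly from its membrane equation; the constants here are illustrative, not the tuned network parameters:

```python
def leaky_integrator_step(v, i_syn, dt=1.0, c_m=5.0, g_leak=1.0,
                          e_rest=0.0):
    # C_m * dV/dt = G_leak * (E_rest - V) + I_syn : the membrane voltage
    # relaxes toward E_rest + I_syn / G_leak with time constant C_m/G_leak.
    return v + (dt / c_m) * (g_leak * (e_rest - v) + i_syn)

# Under constant synaptic drive the voltage settles at I_syn / G_leak.
v = 0.0
for _ in range(200):
    v = leaky_integrator_step(v, i_syn=2.0)
```

Parameter tuning in the sense of the paper then amounts to choosing gains and synaptic weights so that motor-neuron voltages like v reproduce the activation patterns the model predicts from expected sensory feedback.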

  17. Stress-tuned conductor-polymer composite for use in sensors

    DOEpatents

    Martin, James E; Read, Douglas H

    2013-10-22

    A method for making a composite polymeric material with electrical conductivity determined by stress-tuning of the conductor-polymer composite, and sensors made with the stress-tuned conductor-polymer composite made by this method. Stress tuning is achieved by mixing a miscible liquid into the polymer precursor solution or by absorbing into the precursor solution a soluble compound from vapor in contact with the polymer precursor solution. The conductor may or may not be ordered by application of a magnetic field. The composite is formed by polymerization with the stress-tuning agent in the polymer matrix. The stress-tuning agent is removed following polymerization to produce a conductor-polymer composite with a stress field that depends on the amount of stress-tuning agent employed.

  18. Scaling analysis of the non-Abelian quasiparticle tunneling in [Formula: see text] FQH states.

    PubMed

    Li, Qi; Jiang, Na; Wan, Xin; Hu, Zi-Xiang

    2018-06-27

    Quasiparticle tunneling between two counter-propagating edges through point contacts could provide information on quasiparticle statistics. Previous studies of short-distance tunneling display a scaling behavior, especially in the conformal limit of zero tunneling distance. The scaling exponents for non-Abelian quasiparticle tunneling exhibit some non-trivial behaviors. In this work, we revisit the quasiparticle tunneling amplitudes and their scaling behavior over the full range of tunneling distance by putting the electrons on the surface of a cylinder. The edge-edge distance can be smoothly tuned by varying the aspect ratio of a finite-size cylinder. We analyze the scaling behavior of the quasiparticles for the Read-Rezayi [Formula: see text] states for [Formula: see text] and 4, both in the short and the long tunneling distance regions. The finite-size scaling analysis automatically gives us a critical length scale at which the anomalous correction appears. We demonstrate that this length scale is related to the size of the quasiparticle, at which the backscattering between two counter-propagating edges starts to be significant.

  19. Human-in-the-loop Bayesian optimization of wearable device parameters

    PubMed Central

    Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christoper J.; Walsh, Conor J.; Kuindersma, Scott

    2017-01-01

    The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
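
A minimal sketch of the idea, using a Gaussian-process surrogate and a lower-confidence-bound acquisition over a one-dimensional step-frequency grid. The cost function, seed points, kernel, and acquisition rule are all invented for illustration and are not the paper's protocol:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-in for metabolic cost vs. step frequency: a noisy quadratic
# with its minimum at 1.8 Hz (numbers chosen only for illustration).
rng = np.random.default_rng(0)
def cost(freq):
    return (freq - 1.8) ** 2 + rng.normal(scale=0.02)

grid = np.linspace(1.0, 2.6, 81).reshape(-1, 1)
X = [[1.0], [1.6], [2.0], [2.6]]          # seed measurements
y = [cost(x[0]) for x in X]

for _ in range(10):                       # Bayesian optimization loop
    gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-3),
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    nxt = float(grid[np.argmin(mu - sd)][0])   # lower-confidence bound
    X.append([nxt])
    y.append(cost(nxt))

best = X[int(np.argmin(y))][0]
```

Each loop iteration plays the role of one noisy physiological measurement, so sample efficiency directly shortens experiment time.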

  20. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
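
The graph-based semi-supervised step can be sketched with a combined geometry/orientation distance and simple label propagation. Here the blending weight alpha is fixed by hand, whereas the paper learns the kernel metric jointly with the labels; all data below are synthetic:

```python
import numpy as np

# Two synthetic "voxel" groups, each with a geometry feature and an
# orientation feature; alpha blends the two squared distances.
rng = np.random.default_rng(0)
geom = np.concatenate([rng.normal(0.0, 0.3, 10), rng.normal(3.0, 0.3, 10)])
orient = np.concatenate([rng.normal(0.0, 0.3, 10), rng.normal(2.0, 0.3, 10)])
alpha = 0.5
d2 = (alpha * (geom[:, None] - geom[None, :]) ** 2
      + (1 - alpha) * (orient[:, None] - orient[None, :]) ** 2)
W = np.exp(-d2)                          # RBF affinity over the voxel graph

# Semi-supervised labeling: clamp one seed per class and iterate
# F <- D^-1 W F until the labels spread over the graph.
F = np.zeros((20, 2))
F[0, 0] = F[10, 1] = 1.0
for _ in range(50):
    F = W @ F / W.sum(axis=1, keepdims=True)
    F[0], F[10] = [1.0, 0.0], [0.0, 1.0]
pred = F.argmax(axis=1)
```

Learning the metric, in the paper's sense, would replace the fixed alpha with parameters optimized by gradient descent alongside the propagated labels.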

  1. Prefrontal Cortex and Impulsive Decision Making

    PubMed Central

    Kim, Soyoun; Lee, Daeyeol

    2010-01-01

    Impulsivity refers to a set of heterogeneous behaviors that are tuned suboptimally along certain temporal dimensions. Impulsive inter-temporal choice refers to the tendency to forego a large but delayed reward and to seek an inferior but more immediate reward, whereas impulsive motor responses also result when the subjects fail to suppress inappropriate automatic behaviors. In addition, impulsive actions can be produced when too much emphasis is placed on speed rather than accuracy in a wide range of behaviors, including perceptual decision making. Despite this heterogeneous nature, the prefrontal cortex and its connected areas, such as the basal ganglia, play an important role in gating impulsive actions in a variety of behavioral tasks. Here, we describe key features of computations necessary for optimal decision making, and how their failures can lead to impulsive behaviors. We also review the recent findings from neuroimaging and single-neuron recording studies on the neural mechanisms related to impulsive behaviors. Converging approaches in economics, psychology, and neuroscience provide a unique vista for better understanding the nature of behavioral impairments associated with impulsivity. PMID:20728878

  2. Feature-based attention: it is all bottom-up priming.

    PubMed

    Theeuwes, Jan

    2013-10-19

    Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming.

  3. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  4. Closed-loop wavelength stabilization of an optical parametric oscillator as a front end of a high-power iodine laser chain.

    PubMed

    Kral, L

    2007-05-01

    We present a complex stabilization and control system for a commercially available optical parametric oscillator. The system is able to stabilize the oscillator's output wavelength at a narrow spectral line of atomic iodine with subpicometer precision, allowing utilization of this solid-state parametric oscillator as the front end of a high-power photodissociation laser chain formed by iodine gas amplifiers. In such a setup, precise wavelength matching between the front end and the amplifier chain is necessary due to the extremely narrow spectral lines of the gaseous iodine (approximately 20 pm). The system is based on a personal computer, a heated iodine cell, and a few other low-cost components. It automatically identifies the proper peak within the iodine absorption spectrum, and then keeps the oscillator tuned to this peak with high precision and reliability. The use of the solid-state oscillator as the front end allows us to use the whole iodine laser system as a pump laser for optical parametric chirped pulse amplification, as it enables precise time synchronization with a signal Ti:sapphire laser.

  5. Eavesdropping on elephants

    NASA Astrophysics Data System (ADS)

    Payne, Katy

    2004-05-01

    The Elephant Listening Project is creating an acoustic monitoring program for African forest elephants, an endangered species that lives in dense forests where visual censusing is impossible. In 2002, a 2½-month continuous recording was made on an array of autonomous recording units (ARUs) surrounding a forest clearing in the Central African Republic. Each day between 10 and 160 forest elephants (Loxodonta cyclotis), the subjects of Andrea Turkalo's 13-year demographic study, were present on the clearing. Thousands of vocalizations were recorded, most of which contained infrasonic energy. The calls were located in space using software developed in the Bioacoustics Research Program. During daylight hours simultaneous video recordings were made. GPS time-synchronization of video cameras and the ARUs made it possible to identify the elephants responsible for many calls and to examine associated circumstances and behaviors. Recordings were also made on a second acoustic array, permitting a preliminary estimate of propagation and an indication of source level for selected elephant calls. Automatic detection of elephant calls is increasing the feasibility of analyzing long acoustic recordings, and paving the way for finer-tuned analyses, with an ultimate goal of describing forest elephants' acoustic repertoire.

  6. Adaptive Impact-Driven Detection of Silent Data Corruption for HPC Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    For exascale HPC applications, silent data corruption (SDC) is one of the most dangerous problems because there is no indication that there are errors during the execution. We propose an adaptive impact-driven method that can detect SDCs dynamically. The key contributions are threefold. (1) We carefully characterize 18 real-world HPC applications and discuss the runtime data features, as well as the impact of the SDCs on their execution results. (2) We propose an impact-driven detection model that does not blindly improve the prediction accuracy, but instead detects only influential SDCs to guarantee user-acceptable execution results. (3) Our solution can adapt to dynamic prediction errors based on local runtime data and can automatically tune detection ranges for guaranteeing low false alarms. Experiments show that our detector can detect 80-99.99% of SDCs with a false alarm rate of less than 1% of iterations in most cases. The memory cost and detection overhead are reduced to 15% and 6.3%, respectively, for a large majority of applications.
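
The adaptive detection-range idea can be sketched as prediction plus an error-derived bound; the extrapolation order, window, and threshold factor below are illustrative choices, not the paper's tuned scheme:

```python
# Predict each new sample by linear extrapolation from the previous two
# and flag it only when its error exceeds an adaptive bound derived from
# recent prediction errors on clean data.
def detect_sdc(series, k=4.0, warmup=3):
    flagged, errs = [], []
    for t in range(2, len(series)):
        pred = 2 * series[t - 1] - series[t - 2]    # linear extrapolation
        err = abs(series[t] - pred)
        if len(errs) >= warmup and err > k * max(errs[-8:]):
            flagged.append(t)                       # likely corruption
        else:
            errs.append(err)                        # clean data adapts bound
    return flagged

data = [0.1 * t * t for t in range(50)]             # smooth runtime data
data[30] += 500.0                                   # injected bit-flip-like SDC
flagged = detect_sdc(data)
```

Because flagged samples never update the error window, a corruption cannot widen the detection range and mask later errors.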

  7. Improving automatic peptide mass fingerprint protein identification by combining many peak sets.

    PubMed

    Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim

    2004-08-05

    An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry standard automated peak picking on a set of mass spectra obtained after tryptic in gel digestion of 2D-gel samples from human fetal fibroblasts. The set of spectra contain samples ranging from strong to weak spectra, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry standard method and a human operator, and equal in performance to these on strong and medium strong spectra. It is also demonstrated that peak sets selected by a human operator display a considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
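
The combine-many-peak-sets idea can be sketched as voting across detection thresholds: pick peaks at several signal-to-noise levels and keep positions that recur. The detector, thresholds, and vote rule below are illustrative, not the paper's method:

```python
import numpy as np

def peaks_at(spectrum, thresh):
    # Local maxima above a given intensity threshold.
    s = spectrum
    return {i for i in range(1, len(s) - 1)
            if s[i] > thresh and s[i] >= s[i - 1] and s[i] >= s[i + 1]}

def consensus_peaks(spectrum, thresholds, min_votes=2):
    # Keep positions that survive at least min_votes threshold levels,
    # suppressing picks that exist only at the most permissive setting.
    votes = {}
    for t in thresholds:
        for i in peaks_at(spectrum, t):
            votes[i] = votes.get(i, 0) + 1
    return sorted(i for i, v in votes.items() if v >= min_votes)

rng = np.random.default_rng(0)
spec = rng.normal(scale=0.2, size=100)
spec[20] += 5.0
spec[60] += 3.0                        # two true peaks above the noise
found = consensus_peaks(spec, thresholds=[0.5, 1.0, 2.0])
```

The consensus set then feeds the database search, replacing any single hand-tuned signal-to-noise cutoff.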

  8. Corrugator activity confirms immediate negative affect in surprise

    PubMed Central

    Topolinski, Sascha; Strack, Fritz

    2015-01-01

    The emotion of surprise entails a complex of immediate responses, such as cognitive interruption, attention allocation to, and more systematic processing of the surprising stimulus. All these processes serve the ultimate function to increase processing depth and thus cognitively master the surprising stimulus. The present account introduces phasic negative affect as the underlying mechanism responsible for this switch in operating mode. Surprising stimuli are schema-discrepant and thus entail cognitive disfluency, which elicits immediate negative affect. This affect in turn works like a phasic cognitive tuning switching the current processing mode from more automatic and heuristic to more systematic and reflective processing. Directly testing the initial elicitation of negative affect by surprising events, the present experiment presented high and low surprising neutral trivia statements to N = 28 participants while assessing their spontaneous facial expressions via facial electromyography. High compared to low surprising trivia elicited higher corrugator activity, indicative of negative affect and mental effort, while leaving zygomaticus (positive affect) and frontalis (cultural surprise expression) activity unaffected. Future research shall investigate the mediating role of negative affect in eliciting surprise-related outcomes. PMID:25762956

  9. Corrugator activity confirms immediate negative affect in surprise.

    PubMed

    Topolinski, Sascha; Strack, Fritz

    2015-01-01

    The emotion of surprise entails a complex of immediate responses, such as cognitive interruption, attention allocation to, and more systematic processing of the surprising stimulus. All these processes serve the ultimate function to increase processing depth and thus cognitively master the surprising stimulus. The present account introduces phasic negative affect as the underlying mechanism responsible for this switch in operating mode. Surprising stimuli are schema-discrepant and thus entail cognitive disfluency, which elicits immediate negative affect. This affect in turn works like a phasic cognitive tuning switching the current processing mode from more automatic and heuristic to more systematic and reflective processing. Directly testing the initial elicitation of negative affect by surprising events, the present experiment presented high and low surprising neutral trivia statements to N = 28 participants while assessing their spontaneous facial expressions via facial electromyography. High compared to low surprising trivia elicited higher corrugator activity, indicative of negative affect and mental effort, while leaving zygomaticus (positive affect) and frontalis (cultural surprise expression) activity unaffected. Future research shall investigate the mediating role of negative affect in eliciting surprise-related outcomes.

  10. Stable adaptive PI control for permanent magnet synchronous motor drive based on improved JITL technique.

    PubMed

    Zheng, Shiqi; Tang, Xiaoqi; Song, Bao; Lu, Shaowu; Ye, Bosheng

    2013-07-01

    In this paper, a stable adaptive PI control strategy based on the improved just-in-time learning (IJITL) technique is proposed for the permanent magnet synchronous motor (PMSM) drive. First, the traditional JITL technique is improved: the new IJITL technique carries a lower computational burden and is therefore better suited to online identification of the PMSM drive system, which operates under tight real-time constraints. The PMSM drive system is identified by the IJITL technique, which supplies information to an adaptive PI controller. Second, the adaptive PI controller is designed in the discrete-time domain and is composed of a PI controller and a supervisory controller. The PI controller automatically tunes its control gains online using the gradient descent method, while the supervisory controller eliminates the effect of the approximation error introduced by the PI controller on system stability in the Lyapunov sense. Finally, experimental results on the PMSM drive system show accurate identification and favorable tracking performance. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
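
A hedged sketch of the gradient-descent gain-tuning idea described above. This is NOT the authors' IJITL-based method on a PMSM: instead, a PI loop on a made-up first-order plant y[k+1] = a*y[k] + b*u[k] is tuned by finite-difference gradient descent on the closed-loop squared-error cost. All plant constants, step sizes, and function names are illustrative assumptions.

```python
def episode_cost(kp, ki, a=0.9, b=0.5, r=1.0, steps=100):
    """Sum of squared tracking errors for one closed-loop episode of a
    PI controller on the toy first-order plant y[k+1] = a*y[k] + b*u[k]."""
    y = integ = cost = 0.0
    for _ in range(steps):
        e = r - y
        integ += e
        u = kp * e + ki * integ      # PI control law
        y = a * y + b * u            # first-order plant update
        cost += e * e
    return cost

def tune(kp=0.1, ki=0.0, eta=1e-5, h=1e-4, iters=200):
    """Descend the episode cost with central finite-difference gradients,
    a crude offline stand-in for the paper's online adaptation."""
    for _ in range(iters):
        g_kp = (episode_cost(kp + h, ki) - episode_cost(kp - h, ki)) / (2 * h)
        g_ki = (episode_cost(kp, ki + h) - episode_cost(kp, ki - h)) / (2 * h)
        kp -= eta * g_kp
        ki -= eta * g_ki
    return kp, ki
```

Running `tune()` drives the gains toward a lower closed-loop cost; the real controller performs this adaptation online and adds a supervisory term to guarantee Lyapunov stability, which this sketch omits.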

  11. Communication Range Dynamics and Performance Analysis for a Self-Adaptive Transmission Power Controller.

    PubMed

    Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; Del Toro Matamoros, Raúl M

    2016-05-12

    The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, such as interference, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution for reducing energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves over the iterations of an energy-saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to changes in its surroundings while keeping good connectivity. The results obtained in this paper show that the best-performing parameters keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value.
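
A minimal sketch, in the spirit of the abstract above, of a degree-based transmission power controller (the paper's controller is more elaborate): each iteration, a node raises its power register if it sees fewer neighbours than the desired node degree minus the tolerance, and lowers it if it sees more than the degree plus the tolerance. The register range 0..31 is an assumption, typical of low-power radio chips, and the function name is ours.

```python
def adapt_power(power, degree, target=4, tol=1, step=1, pmin=0, pmax=31):
    """One iteration of a toy degree-based power controller."""
    if degree < target - tol:
        return min(pmax, power + step)   # too few neighbours: power up
    if degree > target + tol:
        return max(pmin, power - step)   # too many neighbours: power down
    return power                         # inside the k +/- tol band: hold
```

Iterating this rule on every node nudges the network toward the k-connectivity band described in the abstract.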

  12. Speaker gender identification based on majority vote classifiers

    NASA Astrophysics Data System (ADS)

    Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri

    2017-03-01

    Speaker gender identification is considered among the most important tools in several multimedia applications, namely automatic speech recognition, interactive voice response systems and audio browsing systems. Gender identification performance is closely linked to the selected feature set and the employed classification model. Typical techniques are based on selecting the best-performing classification method or searching for the optimum tuning of one classifier's parameters through experimentation. In this paper, we consider a relevant and rich set of features involving pitch, MFCCs, as well as other temporal and frequency-domain descriptors. Five classification models, including decision tree, discriminant analysis, naïve Bayes, support vector machine and k-nearest neighbor, were evaluated. The three best-performing of the five classifiers contribute by majority voting over their scores. Experiments were performed on three different datasets spoken in three languages (English, German and Arabic) in order to validate the language independence of the proposed scheme. Results confirm that the presented system reaches a satisfying accuracy rate and promising classification performance, thanks to the discriminating abilities and diversity of the used features combined with mid-level statistics.
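
A minimal sketch of the fusion step described above: the three best-performing classifiers each emit a gender label per utterance, and the final decision is the majority label. The label values and per-classifier outputs below are invented for illustration.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of labels per classifier, all equal length.
    Returns the per-position majority label."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

clf_a = ["F", "M", "M", "F"]   # e.g. decision tree outputs
clf_b = ["F", "F", "M", "F"]   # e.g. SVM outputs
clf_c = ["M", "M", "M", "F"]   # e.g. k-NN outputs
fused = majority_vote([clf_a, clf_b, clf_c])   # -> ["F", "M", "M", "F"]
```

With an odd number of voters and two classes, ties cannot occur, which is one practical reason to fuse exactly three classifiers.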

  13. Statistical Mechanical Analysis of Online Learning with Weight Normalization in Single Layer Perceptron

    NASA Astrophysics Data System (ADS)

    Yoshida, Yuki; Karakida, Ryo; Okada, Masato; Amari, Shun-ichi

    2017-04-01

    Weight normalization, an optimization method for neural networks recently proposed by Salimans and Kingma (2016), decomposes the weight vector of a neural network into a radial length and a direction vector, and the decomposed parameters follow their steepest descent update. They reported that learning with weight normalization achieves faster convergence than learning with the conventional parameterization in several tasks, including image recognition and reinforcement learning. However, it has remained theoretically unexplained how weight normalization improves convergence speed. In this study, we applied a statistical mechanical technique to analyze online learning in single-layer linear and nonlinear perceptrons with weight normalization. By deriving order parameters of the learning dynamics, we confirmed quantitatively that weight normalization achieves fast convergence by automatically tuning the effective learning rate, regardless of the nonlinearity of the neural network. This property is realized when the initial value of the radial length is near the global minimum; therefore, our theory suggests that it is important to choose the initial value of the radial length appropriately when using weight normalization.
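
A hedged sketch of the weight normalization reparameterization for a single-layer linear perceptron, as analyzed above: w = g * v / ||v||, with both the radial length g and the direction parameters v following steepest descent. The teacher vector, data, dimensions, and step size are toy assumptions, not the paper's statistical-mechanics setup.

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def train(n=200, eta=0.05, seed=0):
    """One online pass of weight-normalized SGD on a toy teacher-student
    regression task; returns (initial loss, final loss)."""
    rng = random.Random(seed)
    teacher = [1.0, -2.0, 0.5, 0.0, 3.0]            # target weight vector
    X = [[rng.gauss(0, 1) for _ in teacher] for _ in range(n)]
    Y = [dot(teacher, x) for x in X]

    v = [rng.gauss(0, 1) for _ in teacher]          # direction parameters
    g = 1.0                                         # radial length

    def loss():
        s = math.sqrt(dot(v, v))
        w = [g * vi / s for vi in v]
        return sum((dot(w, x) - y) ** 2 for x, y in zip(X, Y)) / n

    first = loss()
    for x, y in zip(X, Y):                          # one online pass
        s = math.sqrt(dot(v, v))
        w = [g * vi / s for vi in v]
        err = dot(w, x) - y
        grad_w = [err * xi for xi in x]             # gradient w.r.t. w
        grad_g = dot(grad_w, v) / s                 # chain rule onto g
        grad_v = [(g / s) * (gw - grad_g * vi / s)
                  for gw, vi in zip(grad_w, v)]     # chain rule onto v
        g -= eta * grad_g
        v = [vi - eta * gv for gv, vi in zip(grad_v, v)]
    return first, loss()
```

Note that grad_v is orthogonal to v, so ||v|| only grows during training; this is the mechanism behind the automatic effective-learning-rate tuning discussed in the abstract.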

  14. Evolution of the Generic Lock System at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Bevins; Yves Roblin

    2003-10-13

    The Generic Lock system is a software framework that allows highly flexible feedback control of large distributed systems. It allows system operators to implement new feedback loops between arbitrary process variables quickly and with no disturbance to the underlying control system. Several different types of feedback loops are provided and more are being added. This paper describes the further evolution of the system since it was first presented at ICALEPCS 2001 and reports on two years of successful use in accelerator operations. The framework has been enhanced in several key ways. Multiple-input, multiple-output (MIMO) lock types have been added for accelerator orbit and energy stabilization. The general purpose Proportional-Integral-Derivative (PID) locks can now be tuned automatically. The generic lock server now makes use of the Proxy IOC (PIOC) developed at Jefferson Lab to allow the locks to be monitored from any EPICS Channel Access aware client. (Previously clients had to be Cdev aware.) The dependency on the Qt XML parser has been replaced with the freely available Xerces DOM parser from the Apache project.

  15. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    PubMed

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
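
A toy illustration (our own, not the authors' Gibbs sampler) of the sparsity-inducing prior described above: each pixel is exactly zero with probability w, and otherwise drawn from a positive exponential distribution. The hyperparameter values below are made up; in the paper they are tuned automatically by marginalization over the hierarchical model.

```python
import random

def sample_prior(n, w=0.8, lam=1.0, seed=1):
    """Draw n pixels from the mixture prior: a point mass at zero with
    weight w, plus an Exp(lam) distribution on the positive reals."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < w else rng.expovariate(lam)
            for _ in range(n)]

pixels = sample_prior(10_000)
sparsity = sum(1 for x in pixels if x == 0.0) / len(pixels)
# roughly w = 80% of pixels are exactly zero; the rest are positive,
# matching the sparsity and positivity constraints of the image model
```

Images drawn from this prior are sparse and nonnegative by construction, which is why it suits naturally sparse applications such as MRFM.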

  16. The upside of noise: engineered dissipation as a resource in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2017-09-01

    Historically, noise in superconducting circuits has been considered an obstacle to be removed. A large fraction of the research effort in designing superconducting circuits has focused on noise reduction, with great success, as coherence times have increased by four orders of magnitude in the past two decades. However, noise and dissipation can never be fully eliminated, and further, a rapidly growing body of theoretical and experimental work has shown that carefully tuned noise, in the form of engineered dissipation, can be a profoundly useful tool in designing and operating quantum circuits. In this article, I review important applications of engineered dissipation, including state generation, state stabilization, and autonomous quantum error correction, where engineered dissipation can mitigate the effect of intrinsic noise, reducing logical error rates in quantum information processing. Further, I provide a pedagogical review of the basic noise processes in superconducting qubits (photon loss and phase noise), and argue that any dissipative mechanism which can correct photon loss errors is very likely to automatically suppress dephasing. I also discuss applications for quantum simulation, and possible future research directions.

  17. Coherent infrared radiation from the ALS generated via femtosecond laser modulation of the electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrd, J.M.; Hao, Z.; Martin, M.C.

    2004-07-01

    Interaction of an electron beam with a femtosecond laser pulse co-propagating through a wiggler at the ALS produces large modulation of the electron energies within a short ~100 fs slice of the electron bunch. Propagating around the storage ring, this bunch develops a longitudinal density perturbation due to the dispersion of electron trajectories. The length of the perturbation evolves with distance from the wiggler but is much shorter than the electron bunch length. This perturbation causes the electron bunch to emit short pulses of temporally and spatially coherent infrared light which are automatically synchronized to the modulating laser. The intensity and spectra of the infrared light were measured in two storage ring locations for a nominal ALS lattice and for an experimental lattice with a higher momentum compaction factor. The onset of an instability stimulated by the laser/e-beam interaction was also discovered. The infrared signal is now routinely used as a sensitive monitor for fine tuning of the laser beam alignment during data accumulation in the experiments with femtosecond x-ray pulses.

  18. Assessing paedophilia based on the haemodynamic brain response to face images.

    PubMed

    Ponseti, Jorge; Granert, Oliver; Van Eimeren, Thilo; Jansen, Olav; Wolff, Stephan; Beier, Klaus; Deuschl, Günther; Huchzermeier, Christian; Stirn, Aglaja; Bosinski, Hartmut; Roman Siebner, Hartwig

    2016-01-01

    Objective assessment of sexual preferences may be of relevance in the treatment and prognosis of child sexual offenders. Previous research has indicated that this can be achieved by pattern classification of brain responses to sexual child and adult images. Our recent research showed that human face processing is tuned to sexual age preferences. This observation prompted us to test whether paedophilia can be inferred based on the haemodynamic brain responses to adult and child faces. Twenty-four men sexually attracted to prepubescent boys or girls (paedophiles) and 32 men sexually attracted to men or women (teleiophiles) were exposed to images of child and adult, male and female faces during a functional magnetic resonance imaging (fMRI) session. A cross-validated, automatic pattern classification algorithm of brain responses to facial stimuli yielded four misclassified participants (three false positives), corresponding to a specificity of 91% and a sensitivity of 95%. These results indicate that the functional response to facial stimuli can be reliably used for fMRI-based classification of paedophilia, bypassing the problem of showing child sexual stimuli to paedophiles.

  19. Tool independence for the Web Accessibility Quantitative Metric.

    PubMed

    Vigo, Markel; Brajnik, Giorgio; Arrue, Myriam; Abascal, Julio

    2009-07-01

    The Web Accessibility Quantitative Metric (WAQM) aims at accurately measuring the accessibility of web pages. One of the main features of WAQM is that it is evaluation-tool independent for ranking and accessibility monitoring scenarios. This article proposes a method to attain evaluation-tool independence for all foreseeable scenarios. After demonstrating that homepages have a more similar error profile than any other web page in a given web site, 15 homepages were measured with 10,000 different values of WAQM parameters using EvalAccess and LIFT, two automatic evaluation tools for accessibility. A similar procedure was followed with random pages and with several test files, obtaining several tuples that minimise the difference between both tools. One thousand four hundred forty-nine web pages from 15 web sites were measured with these tuples and those values that minimised the difference between the tools were selected. Once the WAQM was tuned, the accessibility of 15 web sites was measured with two metrics for web sites, concluding that even if similar values can be produced, obtaining the same scores is undesirable since evaluation tools behave in different ways.

  20. Feature-based attention: it is all bottom-up priming

    PubMed Central

    Theeuwes, Jan

    2013-01-01

    Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming. PMID:24018717

  1. Communication Range Dynamics and Performance Analysis for a Self-Adaptive Transmission Power Controller †

    PubMed Central

    Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; del Toro Matamoros, Raúl M.

    2016-01-01

    The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, such as interference, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution for reducing energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves over the iterations of an energy-saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to changes in its surroundings while keeping good connectivity. The results obtained in this paper show that the best-performing parameters keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value. PMID:27187397

  2. Optimizing the feedback control of Galvo scanners for laser manufacturing systems

    NASA Astrophysics Data System (ADS)

    Mirtchev, Theodore; Weeks, Robert; Minko, Sergey

    2010-06-01

    This paper summarizes the factors that limit the performance of moving-magnet galvo scanners driven by closed-loop digital servo amplifiers: torsional resonances, drifts, nonlinearities, feedback noise and friction. It then describes a detailed Simulink® simulator that takes these factors into account and can be used to automatically tune the controller for best results with a given galvo type and trajectory pattern. It allows for rapid testing of different control schemes, for instance combined position/velocity PID loops, and displays the corresponding output in terms of torque, angular position and feedback sensor signal. The tool is configurable and can either use a dynamical state-space model of the galvo's open-loop response, or can import the experimentally measured frequency-domain transfer function. Next, a digital pre-filtering technique for the drive signal is discussed. By performing a real-time Fourier analysis of the raw command signal, it can be pre-warped to minimize all harmonics around the torsional resonances while boosting other non-resonant high frequencies. The optimized waveform results in much smaller overshoot and better settling time. A similar performance gain cannot be extracted from the servo controller alone.

  3. Working memory load eliminates the survival processing effect.

    PubMed

    Kroneisen, Meike; Rummel, Jan; Erdfelder, Edgar

    2014-01-01

    In a series of experiments, Nairne, Thompson, and Pandeirada (2007) demonstrated that words judged for their relevance to a survival scenario are remembered better than words judged for a scenario not relevant on a survival dimension. They explained this survival-processing effect by arguing that nature "tuned" our memory systems to process and remember fitness-relevant information. Kroneisen and Erdfelder (2011) proposed that it may not be survival processing per se that facilitates recall but the richness and distinctiveness with which information is encoded. To further test this account, we investigated how the survival processing effect is affected by cognitive load. If the survival processing effect is due to automatic processes or, alternatively, if survival processing is routinely prioritized in dual-task contexts, we would expect this effect to persist under cognitive load conditions. If the effect relies on cognitively demanding processes like richness and distinctiveness of encoding, however, the survival processing benefit should be hampered by increased cognitive load during encoding. Results were in line with the latter prediction, that is, the survival processing effect vanished under dual-task conditions.

  4. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We will discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  5. Position surveillance using one active ranging satellite and time-of-arrival of a signal from an independent satellite

    NASA Technical Reports Server (NTRS)

    Anderson, R. E.; Frey, R. L.; Lewis, J. R.

    1980-01-01

    Position surveillance using one active ranging/communication satellite and the time-of-arrival of signals from an independent satellite was shown to be feasible and practical. A towboat on the Mississippi River was equipped with a tone-code ranging transponder and a receiver tuned to the timing signals of the GOES satellite. A similar transponder was located at the office of the towing company. Tone-code ranging interrogations were transmitted from the General Electric Earth Station Laboratory through ATS-6 to the towboat and to the ground truth transponder office. Their automatic responses included digital transmissions of time-of-arrival measurements derived from the GOES signals. The Earth Station Laboratory determined ranges from the satellites to the towboat and computed position fixes. The ATS-6 lines-of-position were more precise than 0.1 NMi, 1 sigma, and the GOES lines-of-position were more precise than 1.6 NMi, 1 sigma. High quality voice communications were accomplished with the transponders using a nondirectional antenna on the towboat. The simple and effective surveillance technique merits further evaluation using operational maritime satellites.

  6. Subcortical encoding of sound is enhanced in bilinguals and relates to executive function advantages

    PubMed Central

    Krizman, Jennifer; Marian, Viorica; Shook, Anthony; Skoe, Erika; Kraus, Nina

    2012-01-01

    Bilingualism profoundly affects the brain, yielding functional and structural changes in cortical regions dedicated to language processing and executive function [Crinion J, et al. (2006) Science 312:1537–1540; Kim KHS, et al. (1997) Nature 388:171–174]. Comparatively, musical training, another type of sensory enrichment, translates to expertise in cognitive processing and refined biological processing of sound in both cortical and subcortical structures. Therefore, we asked whether bilingualism can also promote experience-dependent plasticity in subcortical auditory processing. We found that adolescent bilinguals, listening to the speech syllable [da], encoded the stimulus more robustly than age-matched monolinguals. Specifically, bilinguals showed enhanced encoding of the fundamental frequency, a feature known to underlie pitch perception and grouping of auditory objects. This enhancement was associated with executive function advantages. Thus, through experience-related tuning of attention, the bilingual auditory system becomes highly efficient in automatically processing sound. This study provides biological evidence for system-wide neural plasticity in auditory experts that facilitates a tight coupling of sensory and cognitive functions. PMID:22547804

  7. Using clustering and a modified classification algorithm for automatic text summarization

    NASA Astrophysics Data System (ADS)

    Aries, Abdelkrime; Oufaida, Houda; Nouali, Omar

    2013-01-01

    In this paper we describe a modified classification method intended for extractive summarization. The classification in this method does not need a learning corpus; it uses the input text itself as training data. First, we cluster the document sentences to exploit the diversity of topics, then we apply a learning algorithm (here, naive Bayes) to each cluster, treating it as a class. After obtaining the classification model, we calculate the score of each sentence in each class using a scoring model derived from the classification algorithm. These scores are then used to reorder the sentences and extract the top-ranked ones as the output summary. We conducted experiments on a corpus of scientific papers and compared our results to another summarization system called UNIS. We also examined the impact of tuning the clustering threshold on the resulting summary, as well as the impact of adding more features to the classifier. We found that the method performs well and that adding new features (which is simple with this method) can improve the summary's accuracy.
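
A toy version (our own simplification, not the authors' exact scoring model) of the cluster-then-classify idea above: each sentence cluster is treated as a class, a unigram model is built per class, and each sentence is scored by its smoothed, length-normalised log-likelihood under its own class. High-scoring sentences would be extracted first.

```python
import math
from collections import Counter

def score_sentences(clusters):
    """clusters: dict mapping cluster id -> list of sentence strings.
    Returns a dict mapping each sentence to its representativeness score."""
    scores = {}
    for cid, sentences in clusters.items():
        # unigram counts for this cluster, treated as a class model
        counts = Counter(w for s in sentences for w in s.split())
        total, vocab = sum(counts.values()), len(counts)
        for s in sentences:
            words = s.split()
            # add-one smoothed log-likelihood, normalised by length
            ll = sum(math.log((counts[w] + 1) / (total + vocab))
                     for w in words) / max(len(words), 1)
            scores[s] = ll
    return scores
```

Sentences that share their cluster's dominant vocabulary score higher than outliers, so sorting by score and taking the top sentences yields the extractive summary.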

  8. Testing inferior colliculus neurons for selectivity to the rate or duration of frequency modulated sweeps

    NASA Astrophysics Data System (ADS)

    Faure, Paul A.; Morrison, James A.; Valdizón-Rodríguez, Roberto

    2018-05-01

    Here we propose a method for testing whether the responses of so-called "FM duration-tuned neurons (DTNs)" encode the rate or the duration of frequency modulated (FM) sweeps. Based on previous studies it was unclear whether the responses of FM DTNs were tuned to signal duration, like pure-tone DTNs, or to FM sweep rate. We tested this using single-unit extracellular recording in the inferior colliculus (IC) of the big brown bat (Eptesicus fuscus). We presented IC cells with linear FM sweeps varied in FM center frequency (CEF) and spectral bandwidth (BW) to measure a cell's FM rate tuning responses. We also varied FM signal duration to measure a cell's best duration (BD) and the temporal bandwidth of its duration tuning. We then doubled (and halved) the best FM BW, keeping the CEF constant, and remeasured the BD and temporal bandwidth of duration tuning with the bandwidth-manipulated signals. We reasoned that the range of excitatory signal durations should not change in a true FM DTN whose responses are tuned to signal duration, whereas in an FM rate-tuned cell the range of excitatory signal durations should vary predictably with bandwidth-manipulated FM sounds. Preliminary data indicate that our stimulus paradigm can disambiguate whether the evoked responses of an IC neuron are FM sweep rate tuned or FM duration tuned.

  9. Measurement of Beam Tunes in the Tevatron Using the BBQ System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edstrom, Dean R.; /Indiana U.

    Measuring the betatron tunes in any synchrotron is of critical importance to ensuring the stability of beam in the synchrotron. The Base Band Tune, or BBQ, measurement system was developed by Marek Gasior of CERN and has been installed at Brookhaven and Fermilab as a part of the LHC Accelerator Research Program, or LARP. The BBQ was installed in the Tevatron to evaluate its effectiveness at reading proton and antiproton tunes at its flattop energy of 980 GeV. The primary objectives of this thesis are to examine the methods used to measure the tune using the BBQ tune measurement system, to incorporate the system into the Fermilab accelerator controls system, ACNET, and to compare the BBQ to existing tune measurement systems in the Tevatron.

  10. Linear frequency tuning in an LC-resonant system using a C-V response controllable MEMS varactor

    NASA Astrophysics Data System (ADS)

    Han, Chang-Hoon; Yoon, Yong-Hoon; Ko, Seung-Deok; Seo, Min-Ho; Yoon, Jun-Bo

    2017-12-01

    This paper proposes a device-level solution to achieve linear frequency tuning with respect to a tuning voltage (V_tune) sweep in an inductor (L)-capacitor (C) resonant system. Since the linearity of the resonant frequency vs. tuning voltage (f-V) relationship in an LC-resonant system is closely related to the C-V response characteristic of the varactor, we propose a C-V response tunable varactor to realize the linear frequency tuning. The proposed varactor was fabricated using microelectromechanical system (MEMS) surface micromachining. The fabricated MEMS varactor has the ability to dynamically change the C-V response characteristic according to a curve control voltage (V_curve-control). When V_curve-control was increased from zero to 9 V, the C-V response curve changed from a linear to a concave form (i.e., the capacitance decreased quickly in the low tuning voltage region and slowly in the high tuning voltage region). This change in the C-V response characteristic resulted in a change in the f-V relationship, and we successfully demonstrated almost perfectly linear frequency tuning in the LC-resonant system, with a linearity factor of 99.95%.
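
A short worked sketch of why a concave C-V curve yields a linear f-V curve: the tank resonates at f = 1/(2π√(LC)), so for f to rise linearly with the tuning voltage the capacitance must fall as 1/(f0 + k·V)², i.e. fast at low voltages and slowly at high voltages, exactly the concave response the varactor is driven to. All component values below are illustrative assumptions, not the paper's.

```python
import math

L = 1e-9                                   # assumed 1 nH tank inductance

def resonant_freq(C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

def concave_C(v, f0=1e9, k=1e8):
    """Capacitance shaped so that resonant_freq(concave_C(v)) = f0 + k*v,
    i.e. C falls quickly at low v and slowly at high v (concave C-V)."""
    return 1.0 / (L * (2 * math.pi * (f0 + k * v)) ** 2)

freqs = [resonant_freq(concave_C(v)) for v in range(10)]
# successive frequency steps are all k = 100 MHz: linear f-V tuning
```

A linear C-V curve, by contrast, would make f scale as 1/√C and bend the f-V curve, which is what the tunable varactor is designed to correct.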

  11. Tuning History: The French Experience

    ERIC Educational Resources Information Center

    Lamboley, Jean-Luc

    2017-01-01

    The paper shows that the Tuning Project has generated indifference more than resistance within the French academic community. It proposes an analysis of the reasons for this situation: difficulties arising from Tuning itself, the resistance of the French academic tradition, and institutional inhibitors and facilitators. The impact of Tuning on French…

  12. The Art and Science of Climate Model Tuning

    DOE PAGES

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew; ...

    2017-03-31

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning. Here, we discuss how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  13. The Art and Science of Climate Model Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning, and how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  14. Simultaneous gains tuning in boiler/turbine PID-based controller clusters using iterative feedback tuning methodology.

    PubMed

    Zhang, Shu; Taft, Cyrus W; Bentsman, Joseph; Hussey, Aaron; Petrus, Bryan

    2012-09-01

    Tuning a complex multi-loop PID-based control system requires considerable experience. In today's power industry, the number of available qualified tuners is dwindling, and there is a great need for better tuning tools to maintain and improve the performance of complex multivariable processes. Multi-loop PID tuning is the procedure for the online tuning of a cluster of PID controllers operating in a closed loop with a multivariable process. This paper presents the first application of the simultaneous tuning technique to a multi-input-multi-output (MIMO) PID-based nonlinear controller in the power plant control context, with the closed-loop system consisting of a MIMO nonlinear boiler/turbine model and a nonlinear cluster of six PID-type controllers. Although simplified, the dynamics and cross-coupling of the process and the PID cluster are similar to those used in a real power plant. The particular technique selected, iterative feedback tuning (IFT), utilizes the linearized version of the PID cluster for signal conditioning, but the data collection and tuning are carried out on the full nonlinear closed-loop system. Based on the figure of merit for the control system performance, the IFT is shown to deliver performance favorably comparable to that attained through the empirical tuning carried out by an experienced control engineer. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
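
    The iterate-experiment-update structure of such gains tuning can be caricatured in a few lines. True IFT estimates the cost gradient from dedicated closed-loop "gradient experiments" on the plant itself; the sketch below substitutes a finite-difference gradient on a toy first-order plant with a single PI loop (all gains, plant coefficients, and step counts are hypothetical) purely to illustrate the loop of running the closed-loop system, estimating a gradient, and updating the gains.

```python
import numpy as np

def closed_loop_cost(kp, ki, n_steps=200, a=0.9, b=0.1, r=1.0):
    """Mean squared tracking error of a PI loop around y[k+1] = a*y[k] + b*u[k]."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        e = r - y
        integ += e
        u = kp * e + ki * integ
        y = a * y + b * u
        cost += e * e
    return cost / n_steps

gains = np.array([0.5, 0.05])          # initial [Kp, Ki] (hypothetical)
j0 = closed_loop_cost(*gains)
eps, lr = 1e-3, 0.5
for _ in range(30):
    # Stand-in for IFT's gradient experiments: central finite differences
    grad = np.array([
        (closed_loop_cost(gains[0] + eps, gains[1]) -
         closed_loop_cost(gains[0] - eps, gains[1])) / (2 * eps),
        (closed_loop_cost(gains[0], gains[1] + eps) -
         closed_loop_cost(gains[0], gains[1] - eps)) / (2 * eps),
    ])
    candidate = gains - lr * grad
    if closed_loop_cost(*candidate) < closed_loop_cost(*gains):
        gains = candidate              # keep the update only if it improves the cost
    else:
        lr *= 0.5                      # crude backtracking to stay stable
j_final = closed_loop_cost(*gains)
```

    In the paper's setting the same update loop is applied simultaneously to six coupled PID controllers, with the gradient obtained from closed-loop data rather than a model.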

  15. Musical experience sharpens human cochlear tuning.

    PubMed

    Bidelman, Gavin M; Nelms, Caitlin; Bhagat, Shaum P

    2016-05-01

    The mammalian cochlea functions as a filter bank that performs a spectral, Fourier-like decomposition on the acoustic signal. While tuning can be compromised (e.g., broadened with hearing impairment), whether or not human cochlear frequency resolution can be sharpened through experiential factors (e.g., training or learning) has not yet been established. Previous studies have demonstrated sharper psychophysical tuning curves in trained musicians compared to nonmusicians, implying superior peripheral tuning. However, these findings are based on perceptual masking paradigms, and reflect engagement of the entire auditory system rather than cochlear tuning, per se. Here, by directly mapping physiological tuning curves from stimulus frequency otoacoustic emissions (SFOAEs)-cochlear emitted sounds-we show that estimates of human cochlear tuning in a high-frequency cochlear region (4 kHz) are sharper (by a factor of 1.5×) in musicians and improve with the number of years of their auditory training. These findings were corroborated by measurements of psychophysical tuning curves (PTCs) derived via simultaneous masking, which similarly showed sharper tuning in musicians. Comparisons between SFOAEs and PTCs revealed closer correspondence between physiological and behavioral curves in musicians, indicating that tuning is also more consistent between different levels of auditory processing in trained ears. Our findings demonstrate an experience-dependent enhancement in the resolving power of the cochlear sensory epithelium and the spectral resolution of human hearing and provide a peripheral account for the auditory perceptual benefits observed in musicians. Both local and feedback (e.g., medial olivocochlear efferent) mechanisms are discussed as potential mechanisms for experience-dependent tuning. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. How safe is tuning a radio?: using the radio tuning task as a benchmark for distracted driving.

    PubMed

    Lee, Ja Young; Lee, John D; Bärgman, Jonas; Lee, Joonbum; Reimer, Bryan

    2018-01-01

    Drivers engage in non-driving tasks while driving, such as interacting with entertainment systems. Studies have identified glance patterns related to such interactions, and manual radio tuning has been used as a reference task to set an upper bound on the acceptable demand of interactions. Consequently, some view the risk associated with radio tuning as defining the upper limit of glance measures associated with visual-manual in-vehicle activities. However, we have little knowledge about the actual degree of crash risk that radio tuning poses and, by extension, the risk of tasks that have similar glance patterns to the radio tuning task. In the current study, we use counterfactual simulation to take the glance patterns for manual radio tuning tasks from an on-road experiment and apply these patterns to lead-vehicle events observed in naturalistic driving studies. We then quantify how often the glance patterns from radio tuning are associated with rear-end crashes, compared to driving-only situations. We used the pre-crash kinematics from 34 crash events from the SHRP2 naturalistic driving study to investigate the effect of radio tuning in crash-imminent situations, and we also investigated the effect of radio tuning on 2,475 routine braking events from the Safety Pilot project. The counterfactual simulation showed that off-road glances transform some near-crashes that could have been avoided into crashes, and glance patterns observed in the on-road radio tuning experiment produced 2.85-5.00 times more crashes than baseline driving. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. 21 CFR 882.1525 - Tuning fork.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Tuning fork. 882.1525 Section 882.1525 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES NEUROLOGICAL DEVICES Neurological Diagnostic Devices § 882.1525 Tuning fork. (a) Identification. A tuning fork...

  18. 21 CFR 882.1525 - Tuning fork.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Tuning fork. 882.1525 Section 882.1525 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES NEUROLOGICAL DEVICES Neurological Diagnostic Devices § 882.1525 Tuning fork. (a) Identification. A tuning fork...

  19. Tuning Higher Education

    NASA Astrophysics Data System (ADS)

    Carroll, Bradley

    2011-03-01

    In April 2009, the Lumina Foundation launched its Tuning USA project. Faculty teams in selected disciplines from Indiana, Minnesota, and Utah started pilot Tuning programs at their home institutions. Using Europe's Bologna Process as a guide, Utah physicists worked to reach a consensus about the knowledge and skills that should characterize the 2-year, bachelor's, and master's degree levels. I will share my experience as a member of Utah's physics Tuning team, and describe our progress, frustrations, and evolving understanding of the Tuning project's history, methods, and goals.

  20. Self-Tuning Impact Damper for Rotating Blades

    NASA Technical Reports Server (NTRS)

    Duffy, Kirsten P. (Inventor); Brown, Gerald V. (Inventor); Bagley, Ronald L. (Inventor)

    2004-01-01

    A self-tuning impact damper is disclosed that absorbs and dissipates vibration energy in the blades of rotors in compressors and/or turbines, thereby dramatically extending their service life and operational readiness. The self-tuning impact damper uses the rotor speed to tune the resonant frequency of a rattling mass to an engine order excitation frequency. The rattling mass dissipates energy through collisions between the rattling mass and the walls of a cavity of the self-tuning impact damper, as well as through friction between the rattling mass and the base of the cavity. In one embodiment, the self-tuning impact damper has a ball-in-trough configuration with the ball serving as the rattling mass.

  1. Stay tuned: active amplification tunes tree cricket ears to track temperature-dependent song frequency.

    PubMed

    Mhatre, Natasha; Pollack, Gerald; Mason, Andrew

    2016-04-01

    Tree cricket males produce tonal songs, used for mate attraction and male-male interactions. Active mechanics tunes hearing to conspecific song frequency. However, tree cricket song frequency increases with temperature, presenting a problem for tuned listeners. We show that the actively amplified frequency increases with temperature, thus shifting mechanical and neuronal auditory tuning to maintain a match with conspecific song frequency. Active auditory processes are known from several taxa, but their adaptive function has rarely been demonstrated. We show that tree crickets harness active processes to ensure that auditory tuning remains matched to conspecific song frequency, despite changing environmental conditions and signal characteristics. Adaptive tuning allows tree crickets to selectively detect potential mates or rivals over large distances and is likely to bestow a strong selective advantage by reducing mate-finding effort and facilitating intermale interactions. © 2016 The Author(s).

  2. Musician's and Physicist's View on Tuning Keyboard Instruments

    ERIC Educational Resources Information Center

    Lubenow, Martin; Meyn, Jan-Peter

    2007-01-01

    The simultaneous sound of several voices or instruments requires proper tuning to achieve consonance for certain intervals and chords. Most instruments allow enough frequency variation to enable pure tuning while being played. Keyboard instruments such as organ and piano have given frequencies for individual notes and the tuning must be based on a…

  3. Implementation through Innovation: A Literature-Based Analysis of the Tuning Project

    ERIC Educational Resources Information Center

    Pálvölgyi, Krisztián

    2017-01-01

    Tuning Educational Structures in Europe is perhaps the most important higher education innovation platform nowadays. The main objective of the Tuning Project is to develop a tangible approach to implement the action lines of the Bologna Process; thus, implementation and innovation are closely linked in Tuning. However, during its development,…

  4. Auditory steady-state evoked potentials vs. compound action potentials for the measurement of suppression tuning curves in the sedated dog puppy.

    PubMed

    Markessis, Emily; Poncelet, Luc; Colin, Cécile; Hoonhorst, Ingrid; Collet, Grégory; Deltenre, Paul; Moore, Brian C J

    2010-06-01

    Auditory steady-state evoked potential (ASSEP) tuning curves were compared to compound action potential (CAP) tuning curves, both measured at 2 kHz, using sedated beagle puppies. The effect of two types of masker (narrowband noise and sinusoidal) on the tuning curve parameters was assessed. Whatever the masker type, CAP tuning curve parameters were qualitatively and quantitatively similar to the ASSEP ones, with a similar inter-subject variability, but with a greater incidence of upward tip displacement. Whatever the procedure, sinusoidal maskers produced sharper tuning curves than narrowband maskers. Although these differences are not likely to have significant implications for clinical work, from a fundamental point of view, their origin requires further investigation. The same amount of time was needed to record a CAP and an ASSEP 13-point tuning curve. The data further validate the ASSEP technique, which has the advantage of a smaller tendency to produce upward tip shifts than the CAP technique. Moreover, being noninvasive, ASSEP tuning curves can be easily repeated over time in the same subject for clinical and research purposes.

  5. Frequency Tuning of Vibration Absorber Using Topology Optimization

    NASA Astrophysics Data System (ADS)

    Harel, Swapnil Subhash

    A tuned mass absorber is a system for reducing the vibration amplitude of one oscillator by coupling it to a second oscillator. If tuned correctly, the maximum amplitude of the first oscillator in response to a periodic driver will be lowered, and much of the vibration will be 'transferred' to the second oscillator. The tuned vibration absorber (TVA) has been utilized for vibration control in many sectors of civil, automotive, and aerospace engineering for decades since its inception. Time and again, a vibratory system is required to run near resonance. In the past, approaches have been made to design such auxiliary spring-mass tuned absorbers for the safety of structures. This research focuses on the development and optimization of continuously tuned mass absorbers as a substitute for discretely tuned mass absorbers (spring-mass systems). After studying the structural behavior, the boundary conditions and the frequency to which the absorber is to be tuned are determined. The modal analysis approach is used to determine mode shapes and frequencies. The absorber is designed and optimized using the topology optimization tool, which simultaneously designs, optimizes, and tunes the absorber to the desired frequency. The tuned, optimized absorber, after post-processing, is attached to the target structure. The number of absorbers is then increased to widen the bandwidth and thereby improve the safety of the structure over a wide range of frequencies. The frequency response analysis is carried out using various combinations of structures and numbers of absorber cells.
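
    For comparison with such continuously tuned designs, the classical discretely tuned absorber has closed-form optimal parameters: Den Hartog's equal-peak rules for an undamped, harmonically forced primary system give the absorber-to-primary frequency ratio f = 1/(1+μ) and damping ratio ζ² = 3μ/(8(1+μ)³) for mass ratio μ. A sketch, with hypothetical numbers:

```python
import math

def den_hartog_absorber(m_primary, f_primary_hz, mass_ratio):
    """Den Hartog equal-peak tuning of a spring-mass-damper absorber
    attached to an undamped, harmonically forced primary system."""
    mu = mass_ratio
    f_ratio = 1.0 / (1.0 + mu)                       # absorber/primary frequency ratio
    zeta = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    m_a = mu * m_primary                             # absorber mass [kg]
    f_a = f_ratio * f_primary_hz                     # absorber natural frequency [Hz]
    k_a = m_a * (2.0 * math.pi * f_a) ** 2           # absorber stiffness [N/m]
    c_a = 2.0 * zeta * m_a * (2.0 * math.pi * f_a)   # absorber damping [N·s/m]
    return f_ratio, zeta, m_a, k_a, c_a

# Hypothetical primary: 100 kg structure resonant at 10 Hz, 5% mass ratio
f_ratio, zeta, m_a, k_a, c_a = den_hartog_absorber(100.0, 10.0, 0.05)
```

    The topology-optimized continuous absorbers studied in the thesis generalize this idea, but the discrete formulas remain a useful sanity check on any tuned design.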

  6. Properties of V1 Neurons Tuned to Conjunctions of Visual Features: Application of the V1 Saliency Hypothesis to Visual Search behavior

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target. PMID:22719829

  7. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    PubMed

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.

  8. Two-Dimensional Cochlear Micromechanics Measured In Vivo Demonstrate Radial Tuning within the Mouse Organ of Corti

    PubMed Central

    Lee, Hee Yoon; Raphael, Patrick D.; Xia, Anping; Kim, Jinkyung; Grillet, Nicolas; Applegate, Brian E.; Ellerbee Bowden, Audrey K.

    2016-01-01

    The exquisite sensitivity and frequency discrimination of mammalian hearing underlie the ability to understand complex speech in noise. This requires force generation by cochlear outer hair cells (OHCs) to amplify the basilar membrane traveling wave; however, it is unclear how amplification is achieved with sharp frequency tuning. Here we investigated the origin of tuning by measuring sound-induced 2-D vibrations within the mouse organ of Corti in vivo. Our goal was to determine the transfer function relating the radial shear between the structures that deflect the OHC bundle, the tectorial membrane and reticular lamina, to the transverse motion of the basilar membrane. We found that, after normalizing their responses to the vibration of the basilar membrane, the radial vibrations of the tectorial membrane and reticular lamina were tuned. The radial tuning peaked at a higher frequency than transverse basilar membrane tuning in the passive, postmortem condition. The radial tuning was similar in dead mice, indicating that this reflected passive, not active, mechanics. These findings were exaggerated in TectaC1509G/C1509G mice, where the tectorial membrane is detached from OHC stereocilia, arguing that the tuning of radial vibrations within the hair cell epithelium is distinct from tectorial membrane tuning. Together, these results reveal a passive, frequency-dependent contribution to cochlear filtering that is independent of basilar membrane filtering. These data argue that passive mechanics within the organ of Corti sharpen frequency selectivity by defining which OHCs enhance the vibration of the basilar membrane, thereby tuning the gain of cochlear amplification. SIGNIFICANCE STATEMENT Outer hair cells amplify the traveling wave within the mammalian cochlea. The resultant gain and frequency sharpening are necessary for speech discrimination, particularly in the presence of background noise. 
Here we measured the 2-D motion of the organ of Corti in mice and found that the structures that stimulate the outer hair cell stereocilia, the tectorial membrane and reticular lamina, were sharply tuned in the radial direction. Radial tuning was similar in dead mice and in mice lacking a tectorial membrane. This suggests that radial tuning comes from passive mechanics within the hair cell epithelium, and that these mechanics, at least in part, may tune the gain of cochlear amplification. PMID:27488636

  9. [History of the tuning fork. I: Invention of the tuning fork, its course in music and natural sciences. Pictures from the history of otorhinolaryngology, presented by instruments from the collection of the Ingolstadt German Medical History Museum].

    PubMed

    Feldmann, H

    1997-02-01

    G. Cardano, physician, mathematician, and astrologer in Pavia, Italy, in 1550 described how sound may be perceived through the skull. A few years later H. Capivacci, also a physician in Padua, realized that this phenomenon might be used as a diagnostic tool for differentiating between hearing disorders located either in the middle ear or in the acoustic nerve. The German physician G. C. Schelhammer in 1684 was the first to use a common cutlery fork in further developing the experiments initiated by Cardano and Capivacci. For a long time to come, however, there was no demand for this in practical otology. The tuning fork was invented in 1711 by John Shore, trumpeter and lutenist to H. Purcell and G.F. Händel in London. A picture of Händel's own tuning fork, probably the oldest tuning fork in existence, is presented here for the first time. There are a number of anecdotes connected with the inventor of the tuning fork, using plays on words involving the name Shore, and mixing up pitch-pipe and pitchfork. Some of these are related here. The tuning fork as a musical instrument soon became a success throughout Europe. The German physicist E. F. F. Chladni in Wittenberg around 1800 was the first to systematically investigate the mode of vibration of the tuning fork with its nodal points. Besides this, he and others tried to construct a complete musical instrument based on sets of tuning forks, which, however, were not widely accepted. J. H. Scheibler in Germany in 1834 presented a set of 54 tuning forks covering the range from 220 Hz to 440 Hz, at intervals of 4 Hz. J. Lissajous in Paris constructed a very elaborate tuning fork with a resonance box, which was intended to represent the international standard of the musical note A with 435 vibrations per second, but this remained controversial. K. R. Koenig, a German physicist living in Paris, invented a tuning fork which was kept in continuous vibration by a clockwork. H. 
Helmholtz, physiologist in Heidelberg, in 1863 used sets of electromagnetically powered tuning forks for his famous experiments on the sensations of tone. Until the invention of the electronic valve, tuning forks remained indispensable instruments for producing defined sinusoidal vibrations. The history of this development is presented in detail. The diagnostic use of the tuning fork in otology will be described in a separate article.

  10. Tuning characteristics of narrowband THz radiation generated via optical rectification in periodically poled lithium niobate.

    PubMed

    Weiss, C; Torosyan, G; Meyn, J P; Wallenstein, R; Beigang, R; Avetisyan, Y

    2001-04-23

    The tuning properties of pulsed narrowband THz radiation generated via optical rectification in periodically poled lithium niobate have been investigated. Using a disk-shaped periodically poled crystal, tuning was easily accomplished by rotating the crystal around its axis and observing the generated THz radiation in the forward direction. In this way, no beam deflection during tuning was observed. The total tuning range extended from 180 GHz up to 830 GHz and was limited by the poling period of 127 µm, which determines the maximum THz frequency in the forward direction.

  11. Twist-induced tuning in tapered fiber couplers.

    PubMed

    Birks, T A

    1989-10-01

    The power-splitting ratio of fused tapered single-mode fiber couplers can be reversibly tuned by axial twisting without affecting loss. The twist-tuning behavior of a range of different tapered couplers is described. A simple expression for twist-tuning can be derived by representing the effects of twist by a change in the refractive index profile. Good agreement between this expression and experimental results is demonstrated. Repeated tuning over tens of thousands of cycles is found not to degrade coupler performance, and a number of practical applications, including a freely tunable tapered coupler, are described.

  12. Otoacoustic Estimates of Cochlear Tuning: Testing Predictions in Macaque

    NASA Astrophysics Data System (ADS)

    Shera, Christopher A.; Bergevin, Christopher; Kalluri, Radha; Mc Laughlin, Myles; Michelet, Pascal; van der Heijden, Marcel; Joris, Philip X.

    2011-11-01

    Otoacoustic estimates of cochlear frequency selectivity suggest substantially sharper tuning in humans than in common laboratory animals. However, the logic and methodology underlying these estimates remain untested by direct measurements in primates. We report measurements of frequency tuning in macaque monkeys, Old-World primates phylogenetically closer to humans than the small laboratory animals often taken as models of human hearing (e.g., cats, guinea pigs, and chinchillas). We find that measurements of tuning obtained directly from individual nerve fibers and indirectly using otoacoustic emissions both indicate that peripheral frequency selectivity in macaques is significantly sharper than in small laboratory animals, matching that inferred for humans at high frequencies. Our results validate the use of otoacoustic emissions for noninvasive measurement of cochlear tuning and corroborate the finding of sharper tuning in humans.

  13. V1 orientation plasticity is explained by broadly tuned feedforward inputs and intracortical sharpening.

    PubMed

    Teich, Andrew F; Qian, Ning

    2010-03-01

    Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that the plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.

  14. Grid cell spatial tuning reduced following systemic muscarinic receptor blockade

    PubMed Central

    Newman, Ehren L.; Climer, Jason R.; Hasselmo, Michael E.

    2014-01-01

    Grid cells of the medial entorhinal cortex exhibit a periodic and stable pattern of spatial tuning that may reflect the output of a path integration system. This grid pattern has been hypothesized to serve as a spatial coordinate system for navigation and memory function. The mechanisms underlying the generation of this characteristic tuning pattern remain poorly understood. Systemic administration of the muscarinic antagonist scopolamine flattens the typically positive correlation between running speed and entorhinal theta frequency in rats. The loss of this neural correlate of velocity, an important signal for the calculation of path integration, raises the question of what influence scopolamine has on the grid cell tuning as a read out of the path integration system. To test this, the spatial tuning properties of grid cells were compared before and after systemic administration of scopolamine as rats completed laps on a circle track for food rewards. The results show that the spatial tuning of the grid cells was reduced following scopolamine administration. The tuning of head direction cells, in contrast, was not reduced by scopolamine. This is the first report to demonstrate a link between cholinergic function and grid cell tuning. This work suggests that the loss of tuning in the grid cell network may underlie the navigational disorientation observed in Alzheimer's patients and elderly individuals with reduced cholinergic tone. PMID:24493379

  15. Implicit Learning of Nonlocal Musical Rules: A Comment on Kuhn and Dienes (2005)

    ERIC Educational Resources Information Center

    Desmet, Charlotte; Poulin-Charronnat, Benedicte; Lalitte, Philippe; Perruchet, Pierre

    2009-01-01

    In a recent study, G. Kuhn and Z. Dienes (2005) reported that participants previously exposed to a set of musical tunes generated by a biconditional grammar subsequently preferred new tunes that respected the grammar over new ungrammatical tunes. Because the study and test tunes did not share any chunks of adjacent intervals, this result may be…

  16. A New Approach to Identify Optimal Properties of Shunting Elements for Maximum Damping of Structural Vibration Using Piezoelectric Patches

    NASA Technical Reports Server (NTRS)

    Park, Junhong; Palumbo, Daniel L.

    2004-01-01

    The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics that are designed to dissipate vibration energy through a resistive element. In past efforts, most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and the maximum attenuation is limited to the invariant points when the design is based on den Hartog's invariant-points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties, including effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.

  17. Selective enhancement of orientation tuning before saccades.

    PubMed

    Ohl, Sven; Kuper, Clara; Rolfs, Martin

    2017-11-01

Saccadic eye movements cause a rapid sweep of the visual image across the retina and bring the saccade's target into high-acuity foveal vision. Even before saccade onset, visual processing is selectively prioritized at the saccade target. To determine how this presaccadic attention shift exerts its influence on visual selection, we compare the dynamics of perceptual tuning curves before movement onset at the saccade target and in the opposite hemifield. Participants monitored a 30-Hz sequence of randomly oriented gratings for a target orientation. Combining a reverse correlation technique previously used to study orientation tuning in neurons with generalized additive mixed modeling, we found that perceptual reports were tuned to the target orientation. The gain of orientation tuning increased markedly within the last 100 ms before saccade onset. In addition, we observed finer orientation tuning right before saccade onset. This increase in gain and tuning occurred at the saccade target location and was not observed at the incongruent location in the opposite hemifield. The present findings suggest, therefore, that presaccadic attention exerts its influence on vision in a spatially and feature-selective manner, enhancing performance and sharpening feature tuning at the future gaze location before the eyes start moving.

  18. Energy consumption optimization of the total-FETI solver by changing the CPU frequency

    NASA Astrophysics Data System (ADS)

    Horak, David; Riha, Lubomir; Sojka, Radim; Kruzik, Jakub; Beseda, Martin; Cermak, Martin; Schuchart, Joseph

    2017-07-01

The energy consumption of supercomputers is one of the critical problems for the upcoming Exascale supercomputing era. Awareness of power and energy consumption is required on both the software and hardware sides. This paper deals with the energy consumption evaluation of the Finite Element Tearing and Interconnect (FETI) based solvers of linear systems, which is an established method for solving real-world engineering problems. We have evaluated the effect of the CPU frequency on the energy consumption of the FETI solver using a linear elasticity 3D cube synthetic benchmark. In this problem, we have evaluated the effect of frequency tuning on the energy consumption of the essential processing kernels of the FETI method. The paper provides results for two types of frequency tuning: (1) static tuning and (2) dynamic tuning. For static tuning experiments, the frequency is set before execution and kept constant during the runtime. For dynamic tuning, the frequency is changed during the program execution to adapt the system to the actual needs of the application. The paper shows that static tuning brings up to 12% energy savings when compared to the default CPU setting (the highest clock rate). Dynamic tuning improves this by up to a further 3%.
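The static-tuning idea can be illustrated with a toy energy model: energy is power times runtime, dynamic power grows roughly with the cube of frequency, and only the compute-bound fraction of the runtime shrinks as frequency rises. All constants below are illustrative assumptions, not the paper's measurements.

```python
def best_frequency(freqs_ghz, p_static_w=100.0, c_dyn=8.0, work=100.0, mem_frac=0.4):
    """Toy static-tuning search: pick the CPU frequency that minimizes
    energy = power * time. Power ~ static + c*f^3; runtime scales with
    frequency only for the compute-bound fraction of the work.
    Constants are illustrative assumptions, not measured values."""
    f_max = max(freqs_ghz)

    def energy(f):
        # memory-bound fraction of runtime is insensitive to core frequency
        t = work * (mem_frac + (1.0 - mem_frac) * (f_max / f))
        p = p_static_w + c_dyn * f ** 3
        return p * t

    return min(freqs_ghz, key=energy)
```

With these constants the minimum sits at an intermediate frequency rather than the highest clock rate, mirroring the paper's finding that the default (highest-frequency) setting is not energy-optimal; dynamic tuning goes further by re-running this trade-off per kernel.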

  19. Adaptive tuned vibration absorber based on magnetorheological elastomer-shape memory alloy composite

    NASA Astrophysics Data System (ADS)

    Kumbhar, Samir B.; Chavan, S. P.; Gawade, S. S.

    2018-02-01

Shape memory alloy (SMA) is an attractive smart material which can be used as a stiffness tuning element in an adaptive tuned vibration absorber (ATVA). The sharp modulus change in SMA material during phase transformation creates difficulties for smooth tuning to track the forcing frequency and minimize vibrations of the primary system. Moreover, high hysteresis damping in the low-temperature martensitic phase degrades the performance of the vibration absorber. This paper deals with the study of the dynamic response of a system in which SMA and magnetorheological elastomer (MRE) are combined to act as a smart spring-mass-damper system in a tuned vibration absorber. This composite is used as a two-way stiffness tuning element in the ATVA for smooth and continuous tuning and to minimize the adverse effect at low temperature by increasing the equivalent stiffness. The stiffnesses of the SMA element and the MRE are varied by changing the temperature and the strength of the external magnetic field, respectively. The two-way stiffness tuning ability and adaptivity have been demonstrated analytically and experimentally, and the experimental results show good agreement with the analytical results. The proposed composite is able to shift the stiffness, and consequently the natural frequency, of the primary system, as well as reduce its vibration level by a substantial amount.
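If the SMA element and the MRE are idealized as springs acting in parallel on the absorber mass, the two-way tuning reduces to simple algebra: given the SMA's current stiffness, solve for the MRE stiffness that places the absorber's natural frequency at the forcing frequency. The parallel-spring assumption and all names below are ours, not the paper's model.

```python
import math

def mre_stiffness_for_tuning(f_force_hz, mass_kg, k_sma):
    """Two-way stiffness tuning sketch (parallel-spring idealization):
    find the MRE stiffness k_mre such that the absorber natural frequency
    f = sqrt((k_sma + k_mre) / m) / (2*pi) matches the forcing frequency."""
    k_total = mass_kg * (2.0 * math.pi * f_force_hz) ** 2
    k_mre = k_total - k_sma
    if k_mre < 0:
        raise ValueError("SMA stiffness alone already exceeds the required total")
    return k_mre
```

In this idealization, heating the SMA (raising k_sma) would be compensated by weakening the magnetic field (lowering k_mre), which is the smooth, continuous tracking behaviour the composite is designed to provide.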

  20. Automatic machine-learning based identification of jogging periods from accelerometer measurements of adolescents under field conditions

    PubMed Central

    Risteska Stojkoska, Biljana; Standl, Marie; Schulz, Holger

    2017-01-01

Background Assessment of the health benefits associated with physical activity depends on the activity duration, intensity and frequency; their correct identification is therefore very valuable in epidemiological and clinical studies. The aims of this study are: to develop an algorithm for automatic identification of intended jogging periods; and to assess whether identification performance is improved when using two accelerometers at the hip and ankle, compared to using only one at either position. Methods The study used diarized jogging periods and the corresponding accelerometer data from thirty-nine 15-year-old adolescents, collected under field conditions as part of the GINIplus study. The data were obtained from two accelerometers placed at the hip and ankle. An automated feature engineering technique was applied to extract features from the raw accelerometer readings and to select a subset of the most significant features. Four machine learning algorithms were used for classification: logistic regression, Support Vector Machines, Random Forest and Extremely Randomized Trees. Classification was performed using only data from the hip accelerometer, using only data from the ankle accelerometer, and using data from both accelerometers. Results The reported jogging periods were verified by visual inspection and used as the gold standard. After feature selection and tuning of the classification algorithms, all options provided a classification accuracy of at least 0.99, independent of the applied segmentation strategy with sliding windows of either 60 s or 180 s. The best matching ratio, i.e., the length of correctly identified jogging periods relative to the total time including the missed ones, was up to 0.875. It could be further improved, up to 0.967, by application of post-classification rules that considered the duration of breaks and jogging periods. There was no obvious benefit of using two accelerometers; rather, almost the same performance could be achieved from either accelerometer position. Conclusions Machine learning techniques can be used for automatic activity recognition, as they provide very accurate recognition, significantly more accurate than keeping a diary. Identification of jogging periods in adolescents can be performed using only one accelerometer; performance-wise, there is no significant benefit from using accelerometers at both locations. PMID:28880923
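The pipeline described above (windowing, per-window features, classification, then post-classification rules bridging short breaks) can be sketched end to end. The threshold rule is a toy stand-in for the paper's tuned classifiers, and the feature set and constants are our own illustrative choices.

```python
import numpy as np

def window_features(signal, fs, win_s=60):
    """Segment a 1-D acceleration-magnitude signal into non-overlapping
    windows and compute simple per-window features (mean, std)."""
    n = int(win_s * fs)
    wins = signal[: len(signal) // n * n].reshape(-1, n)
    return np.c_[wins.mean(axis=1), wins.std(axis=1)]

def classify_jogging(feats, std_threshold=0.8):
    """Toy stand-in for the tuned classifiers: flag high-variance windows."""
    return feats[:, 1] > std_threshold

def fill_short_breaks(labels):
    """Post-classification rule (assumed form): bridge an isolated
    non-jogging window sandwiched between jogging windows."""
    out = labels.copy()
    for i in range(1, len(out) - 1):
        if not out[i] and out[i - 1] and out[i + 1]:
            out[i] = True
    return out
```

A real implementation would replace the threshold with one of the four classifiers named in the abstract and tune the break-duration rules against the diary-verified gold standard.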

  1. Classification of cardiovascular tissues using LBP based descriptors and a cascade SVM.

    PubMed

    Mazo, Claudia; Alegre, Enrique; Trujillo, Maria

    2017-08-01

Histological images have characteristics, such as texture, shape, colour and spatial structure, that permit the differentiation of each fundamental tissue and organ. Texture is one of the most discriminative features. The automatic classification of tissues and organs based on histology images is an open problem, due to the lack of automatic solutions for treating tissues without pathologies. In this paper, we demonstrate that it is possible to automatically classify cardiovascular tissues using texture information and Support Vector Machines (SVM). Additionally, we show that it is feasible to recognise several cardiovascular organs following the same process. The texture of the histological images was described using Local Binary Patterns (LBP), rotation-invariant LBP (LBPri), Haralick features and different concatenations between them, thereby representing the image content. Using an SVM with linear kernel, we selected the most appropriate descriptor, which for this problem was a concatenation of LBP and LBPri. Due to the small number of images available, we could not follow an approach based on deep learning, but instead selected the classifier that yielded the highest performance by comparing SVM with Random Forest and Linear Discriminant Analysis. Once SVM was selected as the classifier with the highest area under the curve, representing both higher recall and precision, we tuned it by evaluating different kernels, finding that a linear SVM allowed us to accurately separate four classes of tissues: (i) cardiac muscle of the heart, (ii) smooth muscle of the muscular artery, (iii) loose connective tissue, and (iv) smooth muscle of the large vein and the elastic artery. The experimental validation was conducted using 3000 blocks of 100 × 100 pixels, with 600 blocks per class, and the classification was assessed using 10-fold cross-validation. Using LBP as the descriptor, concatenated with LBPri and an SVM with linear kernel, the main four classes of tissues were recognised with an AUC higher than 0.98. A polynomial kernel was then used to separate the elastic artery and vein, yielding an AUC superior to 0.98 in both cases. Following the proposed approach, it is possible to separate with very high precision (AUC greater than 0.98) the fundamental tissues of the cardiovascular system along with some organs, such as the heart, arteries and veins. Copyright © 2017 Elsevier B.V. All rights reserved.
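The basic 8-neighbour LBP descriptor the paper builds on can be computed directly in NumPy: each pixel's code is a byte whose bits record whether each neighbour is at least as bright as the centre, and the image is summarized by the normalized 256-bin histogram of those codes. This is a sketch of the standard operator; a real pipeline would use an optimized library implementation and add the rotation-invariant variant.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern descriptor: per-pixel byte
    codes from neighbour>=centre comparisons, summarized as a normalized
    256-bin histogram over the image interior."""
    c = img[1:-1, 1:-1]  # interior pixels (centres)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```

The resulting histogram (or a concatenation of several such descriptors, as in the paper) is then the feature vector fed to the SVM.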

  2. Implications for New Physics from Fine-Tuning Arguments: II. Little Higgs Models

    NASA Astrophysics Data System (ADS)

    Casas, J. A.; Espinosa, J. R.; Hidalgo, I.

    2005-03-01

We examine the fine-tuning associated with electroweak breaking in Little Higgs scenarios and find it to be always substantial and, generically, much higher than suggested by the rough estimates usually made. This is due to implicit tunings between parameters that can be overlooked at first glance but show up in a more systematic analysis. Focusing on four popular and representative Little Higgs scenarios, we find that the fine-tuning is essentially comparable to that of the Little Hierarchy problem of the Standard Model (which these scenarios attempt to solve) and higher than in supersymmetric models. This does not demonstrate that all Little Higgs models are fine-tuned, but it stresses the need for a careful analysis of this issue in model-building before claiming that a particular model is not fine-tuned. In this respect we identify the main sources of potential fine-tuning that should be watched for in order to construct a successful Little Higgs model, which seems to be a non-trivial goal.
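Quantitative fine-tuning statements like those above are conventionally made with the Barbieri-Giudice sensitivity measure: the logarithmic derivative of an observable (here m_Z^2) with respect to a fundamental parameter. A minimal numerical version, assuming the abstract's analysis uses a measure of this standard form:

```python
def bg_sensitivity(obs_func, p, eps=1e-6):
    """Barbieri-Giudice-style sensitivity Delta = |(p / O) * dO/dp|,
    computed by central finite difference. obs_func maps a fundamental
    parameter p to the observable O (e.g. m_Z^2)."""
    d = (obs_func(p * (1 + eps)) - obs_func(p * (1 - eps))) / (2 * eps * p)
    return abs(p * d / obs_func(p))
```

A model with Delta of order 100 is "tuned at the percent level"; the paper's point is that summing such sensitivities over all implicitly correlated parameters gives larger values than the rough single-parameter estimates.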

  3. Design and verification of wide-band, simultaneous, multi-frequency, tuning circuits for large moment transmitter loops

    NASA Astrophysics Data System (ADS)

    Dvorak, Steven L.; Sternberg, Ben K.; Feng, Wanjie

    2017-03-01

    In this paper we discuss the design and verification of wide-band, multi-frequency, tuning circuits for large-moment Transmitter (TX) loops. Since these multi-frequency, tuned-TX loops allow for the simultaneous transmission of multiple frequencies at high-current levels, they are ideally suited for frequency-domain geophysical systems that collect data while moving, such as helicopter mounted systems. Furthermore, since multi-frequency tuners use the same TX loop for all frequencies, instead of using separate tuned-TX loops for each frequency, they allow for the use of larger moment TX loops. In this paper we discuss the design and simulation of one- and three-frequency tuned TX loops and then present measurement results for a three-frequency, tuned-TX loop.

  4. Note: Enhanced energy harvesting from low-frequency magnetic fields utilizing magneto-mechano-electric composite tuning-fork.

    PubMed

    Yang, Aichao; Li, Ping; Wen, Yumei; Yang, Chao; Wang, Decai; Zhang, Feng; Zhang, Jiajia

    2015-06-01

A magnetic-field energy harvester using a low-frequency magneto-mechano-electric (MME) composite tuning fork is proposed. The MME composite tuning fork consists of a copper tuning fork with piezoelectric Pb(Zr(1-x)Ti(x))O3 (PZT) plates bonded near its fixed end and NdFeB magnets attached at its free ends. Due to the resonance coupling between the fork prongs, the MME composite tuning fork exhibits strong vibration and a high Q value. Experimental results show that the proposed magnetic-field energy harvester using the MME composite tuning fork exhibits approximately 4 times larger maximum output voltage and 7.2 times higher maximum power than a conventional magnetic-field energy harvester using an MME composite cantilever.

  5. An approach to control tuning range and speed in 1D ternary photonic band gap material nano-layered optical filter structures electro-optically

    NASA Astrophysics Data System (ADS)

    Zia, Shahneel; Banerjee, Anirudh

    2016-05-01

This paper demonstrates a way to control the spectrum tuning capability of one-dimensional (1D) ternary photonic band gap (PBG) material nano-layered structures electro-optically. It is shown that not only the tuning range but also the tuning speed of tunable optical filters based on 1D ternary PBG structures can be controlled electro-optically. This approach finds application in tuning-range enhancement of 1D ternary PBG structures and in compensating temperature-sensitive transmission spectrum shifts in 1D ternary PBG structures.

  6. Note: Wide band amplifier for quartz tuning fork sensors with digitally controlled stray capacitance compensation.

    PubMed

    Peng, Ping; Hao, Lifeng; Ding, Ning; Jiao, Weicheng; Wang, Qi; Zhang, Jian; Wang, Rongguo

    2015-11-01

We present a preamplifier design for quartz tuning fork (QTF) sensors in which the stray capacitance is digitally compensated. In this design, the manually controlled variable capacitor is replaced by a pair of varicap diodes whose capacitance can be accurately tuned by a bias voltage. A tuning circuit including a single-side, low-power operational amplifier, a digital-to-analog converter, and a microprocessor is also described, and the tuning process can be conveniently carried out on a personal computer. The noise level of the design was investigated experimentally.

  7. An approach to control tuning range and speed in 1D ternary photonic band gap material nano-layered optical filter structures electro-optically

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zia, Shahneel, E-mail: shahneelzia@gmail.com; Banerjee, Anirudh, E-mail: abanerjee@amity.edu

    2016-05-06

This paper demonstrates a way to control the spectrum tuning capability of one-dimensional (1D) ternary photonic band gap (PBG) material nano-layered structures electro-optically. It is shown that not only the tuning range but also the tuning speed of tunable optical filters based on 1D ternary PBG structures can be controlled electro-optically. This approach finds application in tuning-range enhancement of 1D ternary PBG structures and in compensating temperature-sensitive transmission spectrum shifts in 1D ternary PBG structures.

  8. A novel technique for tuning of co-axial cavity of multi-beam klystron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, Sukalyan, E-mail: sstechno18@gmail.com; Bandyopadhyay, Ayan Kumar; Pal, Debashis

    2016-03-09

Multi-beam klystrons (MBKs) have gained wide acceptance in the research sector for their inherent advantages. However, developing a robust tuning technique for an MBK cavity of coaxial type has remained a challenge, as these designs are prone to asymmetric field distribution under inductive tuning of the cavity. Such asymmetry leads to inhomogeneous beam-wave interaction, an undesirable phenomenon. Described herein is a new type of coaxial cavity that is able to suppress the asymmetry, thereby allowing tuning of the cavity with a single tuning post.

  9. Astronomical tunings of the Oligocene-Miocene transition from Pacific Ocean Site U1334 and implications for the carbon cycle

    NASA Astrophysics Data System (ADS)

    Beddow, Helen M.; Liebrand, Diederik; Wilson, Douglas S.; Hilgen, Frits J.; Sluijs, Appy; Wade, Bridget S.; Lourens, Lucas J.

    2018-03-01

Astronomical tuning of sediment sequences requires both unambiguous cycle pattern recognition in climate proxy records and astronomical solutions, as well as independent information about the phase relationship between the two. Here we present two different astronomically tuned age models for the Oligocene-Miocene transition (OMT) from Integrated Ocean Drilling Program Site U1334 (equatorial Pacific Ocean) to assess the effect tuning has on astronomically calibrated ages and the geologic timescale. These alternative age models (roughly from ˜ 22 to ˜ 24 Ma) are based on different tunings between proxy records and eccentricity: the first age model is based on aligning CaCO3 content (wt%) to Earth's orbital eccentricity, and the second is based on a direct age calibration of benthic foraminiferal stable carbon isotope ratios (δ13C) to eccentricity. To independently test which tuned age model and associated tuning assumptions are in best agreement with independent ages based on tectonic plate-pair spreading rates, we assign the tuned ages to magnetostratigraphic reversals identified in deep-marine magnetic anomaly profiles. Subsequently, we compute tectonic plate-pair spreading rates based on the tuned ages. The resultant alternative spreading-rate histories indicate that the CaCO3-tuned age model is most consistent with a conservative assumption of constant, or linearly changing, spreading rates. The CaCO3-tuned age model thus provides robust ages and durations for polarity chrons C6Bn.1n-C7n.1r, which are not based on astronomical tuning in the latest iteration of the geologic timescale. Furthermore, it provides independent evidence that the relatively large (several tens of thousands of years) time lags documented in the benthic foraminiferal isotope records relative to orbital eccentricity constitute a real feature of the Oligocene-Miocene climate system and carbon cycle.
The age constraints from Site U1334 thus indicate that the delayed responses of the Oligocene-Miocene climate-cryosphere system and (marine) carbon cycle resulted from highly non-linear feedbacks to astronomical forcing.
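The spreading-rate consistency test used above reduces to dividing distances between magnetic anomaly crossings by the age differences of the corresponding tuned reversals; a good tuning should yield near-constant or smoothly varying rates. A simplified sketch (our names, not the paper's code):

```python
def spreading_rates(anomaly_dist_km, tuned_ages_ma):
    """Convert distances between magnetic anomaly crossings (km) and tuned
    reversal ages (Ma) into interval spreading rates (km/Myr == mm/yr).
    Abrupt rate jumps would flag a problematic tuning."""
    rates = []
    for i in range(1, len(tuned_ages_ma)):
        dx = anomaly_dist_km[i] - anomaly_dist_km[i - 1]
        dt = tuned_ages_ma[i] - tuned_ages_ma[i - 1]
        rates.append(dx / dt)
    return rates
```

Comparing the rate histories implied by the two alternative tunings is then a matter of which sequence departs least from a constant or linear trend.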

  10. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. 
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors 0270-6474/18/383252-13$15.00/0.

  11. Seismic design of passive tuned mass damper parameters using active control algorithm

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Ming; Shia, Syuan; Lai, Yong-An

    2018-07-01

Tuned mass dampers are a widely accepted control method to effectively reduce the vibrations of tall buildings. A tuned mass damper employs a damped harmonic oscillator with specific dynamic characteristics, so that the response of structures can be regulated by the additive dynamics. The additive dynamics are, however, similar to the feedback control system in active control. Therefore, the objective of this study is to develop a new tuned mass damper design procedure based on an active control algorithm, i.e., H2/LQG control. This design exploits the similarity to feedback control in the active control algorithm to determine the spring and damper in a tuned mass damper. Given a mass ratio between the damper and structure, the stiffness and damping coefficient of the tuned mass damper are derived by minimizing the response objective function of the primary structure, whose structural properties are known. Varying a single weighting in this objective function yields the optimal TMD design, attained when the minimum peak in the displacement transfer function of the structure with the TMD is met. This study examines various objective functions and derives the associated equations to compute the stiffness and damping coefficient. The relationship between the primary structure and the optimal tuned mass damper is parametrically studied. Performance is evaluated by exploring the H2- and H∞-norms of displacements and accelerations of the primary structure. In time-domain analysis, the damping effectiveness of the tuned mass damper-controlled structures is investigated under impulse excitation. Structures with the optimal tuned mass dampers are also assessed under seismic excitation. As a result, the proposed design procedure produces an effective tuned mass damper to be employed in a structure against earthquakes.
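A natural baseline against which an H2/LQG-based design like the one above can be compared is the classical Den Hartog closed-form tuning (not the paper's own formulas): given the mass ratio, it fixes the TMD's frequency ratio and damping ratio, from which the spring stiffness and damping coefficient follow.

```python
import math

def den_hartog_tmd(mass_ratio, m_structure, w_structure):
    """Classical Den Hartog TMD tuning for an undamped primary structure.
    mass_ratio = m_tmd / m_structure; w_structure in rad/s.
    Returns the TMD spring stiffness and damping coefficient."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)                               # optimal frequency ratio
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal damping ratio
    m_t = mu * m_structure
    w_t = f_opt * w_structure
    k_t = m_t * w_t ** 2                                   # TMD spring stiffness
    c_t = 2.0 * zeta_opt * m_t * w_t                       # TMD damping coefficient
    return k_t, c_t
```

The paper's contribution is to replace these fixed-form formulas with parameters obtained by minimizing an H2/LQG objective, which generalizes to damped structures and different response norms.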

  12. Tuning Features of Chinese Folk Song Singing: A Case Study of Hua'er Music.

    PubMed

    Yang, Yang; Welch, Graham; Sundberg, Johan; Himonides, Evangelos

    2015-07-01

The learning and teaching of different singing styles, such as operatic and Chinese folk singing, is often found to be very challenging in professional music education because of the complexity of varied musical properties and vocalizations. By studying the acoustical and musical parameters of the singing voice, this study identified distinctive tuning characteristics of a particular folk music in China, Hua'er music, to inform folk singing practices that have been hampered by neglect of the music's inherent tuning issues. Thirteen unaccompanied folk song examples from four folk singers were digitally audio recorded in a sound studio. Using an analysis toolkit consisting of Praat, PeakFit, and MS Excel, the fundamental frequencies (F0) of these song examples were extracted into sets of the most-used "anchor pitches", which were further divided into 253 F0 clusters. The interval structures of anchor pitches within each song were analyzed and then compared across the 13 examples, providing parameters that indicate the tuning preference of this particular singing style. The data analyses demonstrated that all singers used a tuning pattern consisting of five major anchor pitches, suggesting a non-equal-tempered bias in singing. This partly verified the pentatonic scale proposed in previous empirical research but also revealed a potential misunderstanding of the studied folk music scale that failed to take intrinsic tuning issues into consideration. This study suggests that, in professional music training, any tuning strategy should be considered in terms of the reference pitch and the likely tuning systems. Any accompanying instruments would need to be tuned to match the underlying tuning bias. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
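Interval structures between anchor pitches are conventionally compared in cents, where an equal-tempered semitone is exactly 100 cents; deviations of extracted F0 clusters from multiples of 100 cents are what reveal a non-equal-tempered bias. A minimal helper (the function name is ours):

```python
import math

def cents(f0_hz, ref_hz):
    """Express an F0 value in cents relative to a reference pitch.
    1200 cents per octave; an equal-tempered semitone is 100 cents."""
    return 1200.0 * math.log2(f0_hz / ref_hz)
```

Mapping each F0 cluster to cents relative to the song's tonal centre and inspecting the residuals against the nearest 100-cent grid point is one straightforward way to quantify the tuning bias the study describes.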

  13. Software feedback for monochromator tuning at UNICAT (abstract)

    NASA Astrophysics Data System (ADS)

    Jemian, Pete R.

    2002-03-01

Automatic tuning of double-crystal monochromators presents an interesting challenge in software. The goal is to either maximize, or hold constant, the throughput of the monochromator. An additional goal of the software feedback is to disable itself when there is no beam and then, at the user's discretion, re-enable itself when the beam returns. These and other routine goals, such as adherence to limits of travel for positioners, are maintained by software controls. Many solutions exist to lock in and maintain a fixed throughput. These include a hardware solution involving a waveform generator and a lock-in amplifier to autocorrelate the movement of a piezoelectric transducer (PZT) providing fine adjustment of the second-crystal Bragg angle. This solution does not work when the positioner is a slow-acting device such as a stepping motor. Proportional-integral-differential (PID) loops have been used to provide feedback through software, but additional controls must be provided to maximize the monochromator throughput. Presented here is a software variation of the PID loop which meets the above goals. Using two floating-point variables as inputs, representing the intensity of x rays measured before and after the monochromator, it attempts to maximize (or hold constant) the ratio of these two inputs by adjusting an output floating-point variable. These floating-point variables are connected to hardware channels corresponding to detectors and positioners. When the inputs go out of range, the software stops making adjustments to the control output. Not limited to monochromator feedback, the software could be used, with beam-steering positioners, to maintain a measure of beam position. An advantage of this software feedback is the flexibility of its various components. It has been used with stepping motors and PZTs as positioners. Various devices such as ion chambers, scintillation counters, photodiodes, and photoelectron collectors have been used as detectors. The software provides significant cost savings over hardware feedback methods. Presently implemented in EPICS, the software is sufficiently general to be used with any automated instrument control system.
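One iteration of a loop in the spirit described above can be sketched as follows: regulate the after/before intensity ratio toward a setpoint, and freeze the output when the incoming beam is too weak. The gains, names, and incremental-PI form are illustrative assumptions, not the EPICS implementation.

```python
def monochromator_feedback(i_before, i_after, setpoint_ratio, state,
                           kp=0.5, ki=0.1, min_beam=1e-3):
    """One step of a PID-style software loop on the throughput ratio.
    'state' carries (integral, output); returns (output, new_state).
    When the upstream intensity is below min_beam, the loop disables
    itself and makes no adjustment."""
    integral, output = state
    if i_before < min_beam:               # no beam: hold the last output
        return output, (integral, output)
    error = setpoint_ratio - i_after / i_before
    integral += error
    output += kp * error + ki * integral  # incremental positioner correction
    return output, (integral, output)
```

The derivative term is omitted for brevity; the key behaviours from the abstract, ratio regulation and self-disabling on beam loss, are both present.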

  14. S5H/DMR6 Encodes a Salicylic Acid 5-Hydroxylase That Fine-Tunes Salicylic Acid Homeostasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yanjun; Zhao, Li; Zhao, Jiangzhe

The phytohormone salicylic acid (SA) plays essential roles in biotic and abiotic responses, plant development, and leaf senescence. 2,5-Dihydroxybenzoic acid (2,5-DHBA or gentisic acid) is one of the most commonly occurring aromatic acids in green plants and is assumed to be generated from SA, but the enzymes involved in its production remain obscure. DMR6 (Downy Mildew Resistant 6, At5g24530) has been proven essential in the plant immunity of Arabidopsis, but its biochemical properties are not well understood. Here we report the discovery and functional characterization of DMR6 as an SA 5-hydroxylase (S5H) that catalyzes the formation of 2,5-DHBA by hydroxylating SA at the C5 position of its phenyl ring in Arabidopsis. S5H/DMR6 specifically converts SA to 2,5-DHBA in vitro and displays higher catalytic efficiency (Kcat/Km = 4.96 × 10^4 M^-1 s^-1) than the previously reported SA 3-hydroxylase (S3H, Kcat/Km = 6.09 × 10^3 M^-1 s^-1) for SA. Interestingly, S5H/DMR6 displays a substrate-inhibition property that may enable automatic control of its enzyme activities. The s5h mutant and s5h s3h double mutant overaccumulate SA and display phenotypes such as smaller growth size, early senescence, and a loss of susceptibility to Pseudomonas syringae pv. tomato DC3000 (Pst DC3000). S5H/DMR6 is sensitively induced by SA/pathogen treatment and is widely expressed from young seedlings to senescing plants, whereas S3H is more specifically expressed at the mature and senescing stages. Collectively, our results disclose the identity of the enzyme required for 2,5-DHBA formation and reveal a mechanism by which plants fine-tune SA homeostasis by mediating SA 5-hydroxylation.

  15. S5H/DMR6 Encodes a Salicylic Acid 5-Hydroxylase That Fine-Tunes Salicylic Acid Homeostasis

    DOE PAGES

    Zhang, Yanjun; Zhao, Li; Zhao, Jiangzhe; ...

    2017-09-12

The phytohormone salicylic acid (SA) plays essential roles in biotic and abiotic responses, plant development, and leaf senescence. 2,5-Dihydroxybenzoic acid (2,5-DHBA or gentisic acid) is one of the most commonly occurring aromatic acids in green plants and is assumed to be generated from SA, but the enzymes involved in its production remain obscure. DMR6 (Downy Mildew Resistant 6, At5g24530) has been proven essential in the plant immunity of Arabidopsis, but its biochemical properties are not well understood. Here we report the discovery and functional characterization of DMR6 as an SA 5-hydroxylase (S5H) that catalyzes the formation of 2,5-DHBA by hydroxylating SA at the C5 position of its phenyl ring in Arabidopsis. S5H/DMR6 specifically converts SA to 2,5-DHBA in vitro and displays higher catalytic efficiency (Kcat/Km = 4.96 × 10^4 M^-1 s^-1) than the previously reported SA 3-hydroxylase (S3H, Kcat/Km = 6.09 × 10^3 M^-1 s^-1) for SA. Interestingly, S5H/DMR6 displays a substrate-inhibition property that may enable automatic control of its enzyme activities. The s5h mutant and s5h s3h double mutant overaccumulate SA and display phenotypes such as smaller growth size, early senescence, and a loss of susceptibility to Pseudomonas syringae pv. tomato DC3000 (Pst DC3000). S5H/DMR6 is sensitively induced by SA/pathogen treatment and is widely expressed from young seedlings to senescing plants, whereas S3H is more specifically expressed at the mature and senescing stages. Collectively, our results disclose the identity of the enzyme required for 2,5-DHBA formation and reveal a mechanism by which plants fine-tune SA homeostasis by mediating SA 5-hydroxylation.

  16. To sort or not to sort: the impact of spike-sorting on neural decoding performance.

    PubMed

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting discarding the noise; expert-sorting including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here.
Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.

  17. To sort or not to sort: the impact of spike-sorting on neural decoding performance

    NASA Astrophysics Data System (ADS)

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting, discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
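    As a concrete illustration of the decoding setup compared above, the following is a minimal Kalman-filter decoder sketch: the state is a 2-D cursor velocity and the observations are binned unit counts related to velocity by a linear tuning model. All dimensions, covariances, and the tuning matrix are invented for the example; this is not the authors' code or data.

```python
import numpy as np

# Minimal Kalman-filter decoder sketch (assumed sizes and noise levels):
# state = 2-D cursor velocity, observations = binned counts, y = H x + noise.
rng = np.random.default_rng(0)
n_units, n_steps = 20, 200

A = 0.95 * np.eye(2)                  # velocity dynamics (smoothness prior)
W = 0.02 * np.eye(2)                  # process noise covariance
H = rng.normal(size=(n_units, 2))     # linear "tuning" of each unit to velocity
Q = 0.5 * np.eye(n_units)             # observation noise covariance

# Simulate a smooth velocity trajectory and noisy observations.
x_true = np.zeros((n_steps, 2))
for t in range(1, n_steps):
    x_true[t] = A @ x_true[t - 1] + rng.multivariate_normal(np.zeros(2), W)
Y = x_true @ H.T + rng.multivariate_normal(np.zeros(n_units), Q, size=n_steps)

# Standard Kalman predict/update recursion.
x_hat = np.zeros(2)
P = np.eye(2)
decoded = np.zeros((n_steps, 2))
for t in range(n_steps):
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + W
    S = H @ P_pred @ H.T + Q              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_hat = x_pred + K @ (Y[t] - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    decoded[t] = x_hat

mse = np.mean((decoded - x_true) ** 2)
```

    With real recordings, H and the noise covariances would be fit from a calibration block, and the observation vector would be whichever unit definition the sorting scheme produces (sorted units or raw threshold crossings).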

  18. The Integrated Safety-Critical Advanced Avionics Communication and Control (ISAACC) System Concept: Infrastructure for ISHM

    NASA Technical Reports Server (NTRS)

    Gwaltney, David A.; Briscoe, Jeri M.

    2005-01-01

    Integrated System Health Management (ISHM) architectures for spacecraft will include hard real-time, critical subsystems and soft real-time monitoring subsystems. Interaction between these subsystems will be necessary and an architecture supporting multiple criticality levels will be required. Demonstration hardware for the Integrated Safety-Critical Advanced Avionics Communication & Control (ISAACC) system has been developed at NASA Marshall Space Flight Center. It is a modular system using a commercially available time-triggered protocol, TTP/C, that supports hard real-time distributed control systems independent of the data transmission medium. The protocol is implemented in hardware and provides guaranteed low-latency messaging with inherent fault-tolerance and fault-containment. Interoperability between modules and systems of modules using TTP/C is guaranteed through definition of messages and the precise message schedule implemented by the master-less Time Division Multiple Access (TDMA) communications protocol. "Plug-and-play" capability for sensors and actuators provides automatically configurable modules supporting sensor recalibration and control algorithm re-tuning without software modification. Modular components of controlled physical system(s) critical to control algorithm tuning, such as pumps or valve components in an engine, can be replaced or upgraded as "plug-and-play" components without modification to the ISAACC module hardware or software. ISAACC modules can communicate with other vehicle subsystems through time-triggered protocols or other communications protocols implemented over Ethernet, MIL-STD-1553 and RS-485/422. Other communication bus physical layers and protocols can be included as required. In this way, the ISAACC modules can be part of a system-of-systems in a vehicle with multi-tier subsystems of varying criticality. The goal of the ISAACC architecture development is control and monitoring of safety-critical systems of a manned spacecraft. These systems include spacecraft navigation and attitude control, propulsion, automated docking, vehicle health management and life support. ISAACC can integrate local critical subsystem health management with subsystems performing long-term health monitoring. The ISAACC system and its relationship to ISHM will be presented.

  19. 1 THz synchronous tuning of two optical synthesizers

    NASA Astrophysics Data System (ADS)

    Neuhaus, Rudolf; Rohde, Felix; Benkler, Erik; Puppe, Thomas; Raab, Christoph; Unterreitmayer, Reinhard; Zach, Armin; Telle, Harald R.; Stuhler, Jürgen

    2016-04-01

    Single-frequency optical synthesizers (SFOS) provide an optical field with arbitrarily adjustable frequency and phase which is phase-coherently linked to a reference signal. Ideally, they combine the spectral resolution of narrow linewidth frequency stabilized lasers with the broad spectral coverage of frequency combs in a tunable fashion. In state-of-the-art SFOSs, tuning across comb lines requires comb line order switching [1, 2], which imposes technical overhead with problems like forbidden frequency gaps or strong phase glitches. Conventional tunable lasers often tune over only tens of GHz before mode-hops occur. Here, we present a novel type of SFOS, which relies on a serrodyne technique with conditional flyback [3], shifting the carrier frequency of the employed frequency comb without an intrusion into the comb generator. It utilizes a new continuously tunable diode laser that tunes mode-hop-free across the full gain spectrum of the integrated laser diode. We investigate the tuning behavior of two identical SFOSs that share a common reference, by comparing the phases of their output signals. Previously, we achieved phase-stable and cycle-slip free frequency tuning over 28.1 GHz with a maximum zero-to-peak phase deviation of 62 mrad [4] when sharing a common comb generator. With the new continuously tunable lasers, the SFOSs tune synchronously across nearly 17800 comb lines (1 THz). The tuning range in this approach can be extended to the full bandwidth of the frequency comb and the 110 nm mode-hop-free tuning range of the diode laser.

  20. Quantum Cascade Laser Tuning by Digital Micromirror Array-controlled External Cavity

    DTIC Science & Technology

    2014-01-01

    P. Vujkovic-Cvijin, B. Gregor, A. C. Samuels, E. S. Roese, "Quantum cascade laser tuning by digital micromirror array-controlled external cavity." …dimensional digital micromirror array (DMA) is described. The laser is tuned by modulating the reflectivity of DMA micromirror pixels under computer control.

  1. Parametrization of Combined Quantum Mechanical and Molecular Mechanical Methods: Bond-Tuned Link Atoms.

    PubMed

    Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G

    2018-05-30

    Combined quantum mechanical and molecular mechanical (QM/MM) methods are the most powerful available methods for high-level treatments of subsystems of very large systems. The treatment of the QM-MM boundary strongly affects the accuracy of QM/MM calculations. For QM/MM calculations having covalent bonds cut by the QM-MM boundary, it has been proposed previously to use a scheme with system-specific tuned fluorine link atoms. Here, we propose a broadly parametrized scheme where the parameters of the tuned F link atoms depend only on the type of bond being cut. In the proposed new scheme, the F link atom is tuned for systems with a certain type of cut bond at the QM-MM boundary instead of for a specific target system, and the resulting link atoms are called bond-tuned link atoms. In principle, the bond-tuned link atoms can be as convenient as the popular H link atoms, and they are especially well adapted for high-throughput and accurate QM/MM calculations. Here, we present the parameters for several kinds of cut bonds along with a set of validation calculations that confirm that the proposed bond-tuned link-atom scheme can be as accurate as the system-specific tuned F link-atom scheme.

  2. Four-part choral synthesis system for investigating intonation in a cappella choral singing.

    PubMed

    Howard, David M; Daffern, Helena; Brereton, Jude

    2013-10-01

    Accurate tuning is an important aspect of singing in harmony in the context of a choir or vocal ensemble. Tuning and 'pitch drift' are concerning factors in performance for even the most accomplished professional choirs when singing a cappella (unaccompanied). In less experienced choirs tuning often lacks precision, typically because individual singers have not developed appropriate listening skills. In order to investigate accuracy of tuning in ensemble singing situations, a chorally appropriate reference is required against which frequency measurements can be made. Since most basic choral singing involves chords in four parts, a four-part reference template is used in which the fundamental frequencies of the notes in each chord can be accurately set. This template can now be used in experiments where three of the reference parts are tuned in any musical temperament (tuning system), in this case equal and just temperaments, and played over headphones to a singer to allow her/his tuning strategy to be investigated. This paper describes a practical implementation of a four-part choral synthesis system in Pure Data (Pd) and its use in an investigation of tuning of notes by individual singers using an exercise originally written to explore pitch drift in a cappella choral singing.
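    The equal- versus just-temperament targets such a reference template must produce can be computed directly. A small sketch (the reference pitch and the chord are illustrative; the actual system described above is implemented in Pure Data):

```python
import math

# Compute the notes of a C major triad under equal temperament and just
# intonation, relative to an assumed C4 reference of 261.63 Hz.
C4 = 261.63  # Hz

def equal_tempered(semitones_above_ref, ref=C4):
    """Frequency n equal-tempered semitones above the reference."""
    return ref * 2 ** (semitones_above_ref / 12)

# A just-intonation major triad uses the frequency ratios 4:5:6.
just_third = C4 * 5 / 4
just_fifth = C4 * 3 / 2
et_third = equal_tempered(4)   # E4
et_fifth = equal_tempered(7)   # G4

def cents(f1, f2):
    """Signed interval from f1 up to f2 in cents (1200 cents per octave)."""
    return 1200 * math.log2(f2 / f1)

# The just major third is about 13.7 cents below the equal-tempered third,
# which is exactly the kind of discrepancy the template lets one probe.
third_diff = cents(just_third, et_third)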

  3. Reducing the fine-tuning of gauge-mediated SUSY breaking

    NASA Astrophysics Data System (ADS)

    Casas, J. Alberto; Moreno, Jesús M.; Robles, Sandra; Rolbiecki, Krzysztof

    2016-08-01

    Despite their appealing features, models with gauge-mediated supersymmetry breaking (GMSB) typically present a high degree of fine-tuning, due to the initial absence of the top trilinear scalar couplings, A_t=0. In this paper, we carefully evaluate such a tuning, showing that it is worse than one per mil in the minimal model. Then, we examine some existing proposals to generate an A_t ≠ 0 term in this context. We find that, although the stops can be made lighter, usually the tuning does not improve (it may be even worse), with some exceptions, which involve the generation of A_t at one loop or tree level. We examine both possibilities and propose a conceptually simplified version of the latter, which is arguably the optimum GMSB setup (with minimal matter content), concerning the fine-tuning issue. The resulting fine-tuning is better than one per mil, still severe but similar to other minimal supersymmetric standard model constructions. We also explore the so-called "little A_t^2/m^2 problem", i.e. the fact that a large A_t-term is normally accompanied by a similar or larger sfermion mass, which typically implies an increase in the fine-tuning. Finally, we find the version of GMSB for which this ratio is optimized, which, nevertheless, does not minimize the fine-tuning.
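    The "per mil" figures quoted above are conventionally computed with the Barbieri–Giudice sensitivity measure (our gloss; the abstract itself does not define its measure), which quantifies how strongly the electroweak scale reacts to each underlying parameter p:

$$\Delta \;\equiv\; \max_p \left| \frac{\partial \ln m_Z^2}{\partial \ln p} \right|$$

    On this scale, a fine-tuning "worse than one per mil" corresponds to Δ ≳ 1000, i.e. a 0.1% change in some input parameter shifts m_Z^2 by order one.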

  4. Precision and Fast Wavelength Tuning of a Dynamically Phase-Locked Widely-Tunable Laser

    NASA Technical Reports Server (NTRS)

    Numata, Kenji; Chen, Jeffrey R.; Wu, Stewart T.

    2012-01-01

    We report a precision and fast wavelength tuning technique demonstrated for a digital-supermode distributed Bragg reflector laser. The laser was dynamically offset-locked to a frequency-stabilized master laser using an optical phase-locked loop, enabling precision fast tuning to and from any frequencies within a 40-GHz tuning range. The offset frequency noise was suppressed to the statically offset-locked level in less than 40 s upon each frequency switch, allowing the laser to retain the absolute frequency stability of the master laser. This technique satisfies stringent requirements for gas sensing lidars and enables other applications that require such well-controlled precision fast tuning.

  5. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
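    A toy version of the tuning loop described above, assuming a three-rule fuzzy controller, a first-order plant, and a naive grid search standing in for the paper's numerical optimizer; all names and values are illustrative, not the spacecraft pointing problem:

```python
# Sketch: membership functions are parameterized, the parameters form a
# design vector (width, gain), and a numerical search minimizes an objective.

def tri(x, center, width):
    """Triangular membership function."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuzzy_control(error, width, gain):
    """3-rule fuzzy controller: negative/zero/positive error sets with
    output singletons -gain, 0, +gain (centroid defuzzification)."""
    mu = [tri(error, c, width) for c in (-1.0, 0.0, 1.0)]
    singletons = [-gain, 0.0, gain]
    s = sum(mu)
    return sum(m * u for m, u in zip(mu, singletons)) / s if s > 0 else 0.0

def objective(width, gain, setpoint=1.0, dt=0.05, steps=200):
    """Integral of squared error for a first-order plant dx/dt = -x + u."""
    x, cost = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        u = fuzzy_control(e, width, gain)
        x += dt * (-x + u)
        cost += dt * e * e
    return cost

# Naive grid search over the design vector (a stand-in for the iterative
# optimization the paper describes).
candidates = [(w, g) for w in (0.5, 1.0, 2.0) for g in (1.0, 2.0, 5.0)]
best_w, best_g = min(candidates, key=lambda p: objective(*p))
```

    The paper's method iterates a design vector against an objective with design constraints; here the "optimization" is a 9-point grid, which is enough to show that the membership-function parameters materially change closed-loop performance.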

  6. Corticofugal modulation of time-domain processing of biosonar information in bats.

    PubMed

    Yan, J; Suga, N

    1996-08-23

    The Jamaican mustached bat has delay-tuned neurons in the inferior colliculus, medial geniculate body, and auditory cortex. The responses of these neurons to an echo are facilitated by a biosonar pulse emitted by the bat when the echo returns with a particular delay from a target located at a particular distance. Electrical stimulation of cortical delay-tuned neurons increases the delay-tuned responses of collicular neurons tuned to the same echo delay as the cortical neurons and decreases those of collicular neurons tuned to different echo delays. Cortical neurons improve information processing in the inferior colliculus by way of the corticocollicular projection.

  7. Analytical design equations for self-tuned Class-E power amplifier.

    PubMed

    Hu, Zhe; Troyk, Philip

    2011-01-01

    For many emerging neural prosthesis designs powered by inductive coupling, the small physical size of the implant requires a large current in the extracorporeal transmitter coil, and the Class-E power amplifier topology is often used for the transmitter design. Tuning of Class-E circuits for efficient operation is difficult, and a self-tuned circuit can facilitate the tuning. The coil current is sensed and used to tune the switching of the transistor in the Class-E circuit in order to maintain high-efficiency operation. Although mathematically complex, the analysis and design procedure for the self-tuned Class-E circuit can be simplified by the current feedback control, which makes the phase angle between the switching pulse and the coil current predetermined. In this paper, explicit analytical design equations are derived, and a detailed design procedure is presented and compared with conventional Class-E design approaches.
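    The abstract does not reproduce its design equations, but the classical idealized Class-E relations (Sokal's results for 50% duty cycle and high loaded Q) give the flavor of such closed-form design. The self-tuned circuit in the paper modifies these, so treat the numbers below as a starting-point sketch, not the paper's equations:

```python
import math

# Classical idealized Class-E relations:
#   P_out = 0.5768 * Vcc^2 / R        (0.5768 = 2 / (pi^2/4 + 1))
#   C1    = 1 / (5.447 * w * R)       (5.447 = (pi^2/4 + 1) * pi/2)
#   peak switch voltage ~ 3.56 * Vcc
def class_e_design(vcc, p_out, freq):
    """Return (load resistance R, shunt capacitance C1, peak switch voltage)."""
    w = 2 * math.pi * freq
    k = math.pi ** 2 / 4 + 1                    # ~ 3.467
    r_load = (2 / k) * vcc ** 2 / p_out
    c1 = 1 / (k * (math.pi / 2) * w * r_load)
    v_peak = 3.56 * vcc
    return r_load, c1, v_peak

# Example: 12 V supply, 5 W output at 1 MHz -> R ~ 16.6 ohm, C1 ~ 1.8 nF.
r, c1, vpk = class_e_design(vcc=12.0, p_out=5.0, freq=1e6)
```

    Note the ~3.56·Vcc switch stress: it is why Class-E transmitters need transistors rated well above the supply voltage, and one reason tuning matters for efficiency and device survival.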

  8. Realizing up-conversion fluorescence tuning in lanthanide-doped nanocrystals by femtosecond pulse shaping method

    PubMed Central

    Zhang, Shian; Yao, Yunhua; Xu, Shuwu; Liu, Pei; Ding, Jingxin; Jia, Tianqing; Qiu, Jianrong; Sun, Zhenrong

    2015-01-01

    The ability to tune color output of nanomaterials is very important for their applications in laser, optoelectronic device, color display and multiplexed biolabeling. Here we first propose a femtosecond pulse shaping technique to realize the up-conversion fluorescence tuning in lanthanide-doped nanocrystals dispersed in the glass. The multiple subpulse formation by a square phase modulation can create different excitation pathways for various up-conversion fluorescence generations. By properly controlling these excitation pathways, the multicolor up-conversion fluorescence can be finely tuned. This color tuning by the femtosecond pulse shaping technique is realized in single material by single-color laser field, which is highly desirable for further applications of the lanthanide-doped nanocrystals. This femtosecond pulse shaping technique opens an opportunity to tune the color output in the lanthanide-doped nanocrystals, which may bring a new revolution in the control of luminescence properties of nanomaterials. PMID:26290391

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai,M.; Ptitsyn, V.; Roser, T.

    Keeping the spin tune in the region free of spin depolarizing resonances is required for accelerating polarized protons to high energy. In RHIC, two snakes are located at opposite sides of each accelerator. They are configured to yield a spin tune of 1/2. Two pairs of spin rotators are located at either side of two detectors in each ring in RHIC to provide longitudinal polarization for the experiments. Since the spin rotation from vertical to longitudinal is localized between the two rotators, the spin rotators do not change the spin tune. However, due to the imperfection of the orbits around the snakes and rotators, the spin tune can be shifted. This note presents the impact of the horizontal orbital angle between the two snakes on the spin tune, as well as the effect of the vertical orbital angle between two rotators at either side of the collision point on the spin tune.

  10. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?

    PubMed

    Tajbakhsh, Nima; Shin, Jae Y; Gurudu, Suryakanth R; Hurst, R Todd; Kendall, Christopher B; Gotway, Michael B; Jianming Liang

    2016-05-01

    Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
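    The layer-wise ("shallow" vs. "deep") tuning scheme compared above can be captured in a few lines: a tuning depth d unfreezes the last d layers of a pre-trained network and keeps the rest frozen. Layer names and the freezing rule here are our illustration, not the paper's implementation:

```python
# Conceptual sketch of layer-wise fine-tuning depth selection.
def layerwise_tuning_plan(layers, depth):
    """Return {layer_name: trainable?} for a given fine-tuning depth,
    unfreezing the last `depth` layers of a pre-trained network."""
    if not 0 <= depth <= len(layers):
        raise ValueError("depth must be between 0 and the number of layers")
    frozen = len(layers) - depth
    return {name: (i >= frozen) for i, name in enumerate(layers)}

# Sweeping depth from shallow (classifier only) to deep (all layers) is the
# paper's recipe for finding the best depth for a given amount of data.
layers = ["conv1", "conv2", "conv3", "fc1", "fc2"]
shallow = layerwise_tuning_plan(layers, depth=1)   # only "fc2" trainable
deep = layerwise_tuning_plan(layers, depth=5)      # full fine-tuning
```

    In a deep-learning framework, the plan would simply toggle each layer's parameters' trainable flag before running the usual training loop on the medical dataset.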

  11. Ankle foot orthosis-footwear combination tuning: an investigation into common clinical practice in the United Kingdom.

    PubMed

    Eddison, Nicola; Chockalingam, Nachiappan; Osborne, Stephen

    2015-04-01

    Ankle foot orthoses are used to treat a wide variety of gait pathologies. Ankle foot orthosis-footwear combination tuning should be routine clinical practice when prescribing an ankle foot orthosis. Current research suggests that failure to tune ankle foot orthosis-footwear combinations can lead to an immediate detrimental effect on function, and in the longer term, it may actually contribute to deterioration. The purpose of this preliminary study was to identify the current level of knowledge clinicians have in the United Kingdom regarding ankle foot orthosis-footwear combination tuning and to investigate common clinical practice regarding ankle foot orthosis-footwear combination tuning among UK orthotists. Cross-sectional survey. A prospective study employing a multi-item questionnaire was sent out to registered orthotists and uploaded onto the official website of the British Association of Prosthetists and Orthotists to be accessed by their members. A total of 41 completed questionnaires were received. The results demonstrate that only 50% of participants use ankle foot orthosis-footwear combination tuning as standard clinical practice. The most prevalent factors preventing participants from carrying out ankle foot orthosis-footwear combination tuning are a lack of access to three-dimensional gait analysis equipment (37%) and a lack of time available in their clinics (27%). Although ankle foot orthosis-footwear combination tuning has been identified as an essential aspect of the prescription of ankle foot orthoses, the results of this study show a lack of understanding of the key principles behind ankle foot orthosis-footwear combination tuning. © The International Society for Prosthetics and Orthotics 2014.

  12. Reducing variable frequency vibrations in a powertrain system with an adaptive tuned vibration absorber group

    NASA Astrophysics Data System (ADS)

    Gao, Pu; Xiang, Changle; Liu, Hui; Zhou, Han

    2018-07-01

    Based on a multiple degrees of freedom dynamic model of a vehicle powertrain system, natural vibration analyses and sensitivity analyses of the eigenvalues are performed to determine the key inertia for each natural vibration of a powertrain system. Then, the results are used to optimize the installation position of each adaptive tuned vibration absorber. According to the relationship between the variable frequency torque excitation and the natural vibration of a powertrain system, the entire vibration frequency band is divided into segments, and the auxiliary vibration absorber and dominant vibration absorber are determined for each sensitive frequency band. The optimum parameters of the auxiliary vibration absorber are calculated based on the optimal frequency ratio and the optimal damping ratio of the passive vibration absorber. The instantaneous change state of the natural vibrations of a powertrain system with adaptive tuned vibration absorbers is studied, and the optimized start and stop tuning frequencies of the adaptive tuned vibration absorber are obtained. These frequencies can be translated into the optimum parameters of the dominant vibration absorber. Finally, the optimal tuning scheme for the adaptive tuned vibration absorber group, which can be used to reduce the variable frequency vibrations of a powertrain system, is proposed, and corresponding numerical simulations are performed. The simulation time history signals are transformed into three-dimensional information related to time, frequency and vibration energy via the Hilbert-Huang transform (HHT). A comprehensive time-frequency analysis is then conducted to verify that the optimal tuning scheme for the adaptive tuned vibration absorber group can significantly reduce the variable frequency vibrations of a powertrain system.
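    The "optimal frequency ratio and the optimal damping ratio of the passive vibration absorber" mentioned above are classically given by Den Hartog's formulas for a tuned absorber on an undamped primary system. A sketch assuming that classical form (which the paper's powertrain-specific procedure may refine):

```python
import math

def den_hartog(mass_ratio):
    """Classical Den Hartog optimum for a passive tuned vibration absorber
    attached to an undamped primary system.
    Returns (optimal frequency ratio, optimal damping ratio)."""
    mu = mass_ratio  # absorber mass / primary mass
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# Example: a 5% absorber mass ratio -> tune the absorber ~4.8% below the
# primary natural frequency with ~12.7% damping.
f_opt, zeta_opt = den_hartog(0.05)
```

    An adaptive tuned absorber then retunes its natural frequency online as the excitation frequency sweeps, which is what the start/stop tuning frequencies in the abstract parameterize.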

  13. A Caveat Note on Tuning in the Development of Coupled Climate Models

    NASA Astrophysics Data System (ADS)

    Dommenget, Dietmar; Rezny, Michael

    2018-01-01

    State-of-the-art coupled general circulation models (CGCMs) have substantial errors in their simulations of climate. In particular, these errors can lead to large uncertainties in the simulated climate response (both globally and regionally) to a doubling of CO2. Currently, tuning of the parameterization schemes in CGCMs is a significant part of their development. It is not clear whether such tuning actually improves models. The tuning process is (in general) neither documented nor reproducible. Alternative methods such as flux correction are not used, nor is it clear if such methods would perform better. In this study, ensembles of perturbed physics experiments are performed with the Globally Resolved Energy Balance (GREB) model to test the impact of tuning. The work illustrates that tuning has, on average, limited skill given the complexity of the system, the limited computing resources, and the limited observations to optimize parameters. While tuning may improve model performance (such as reproducing observed past climate), it will not get closer to the "true" physics nor will it significantly improve future climate change projections. Tuning will introduce artificial compensating error interactions between submodels that will hamper further model development. In turn, flux corrections do perform well in most, but not all, aspects. A main advantage of flux correction is that it is much cheaper, simpler, and more transparent, and it does not introduce artificial error interactions between submodels. These GREB model experiments should be considered as a pilot study to motivate further CGCM studies that address the issues of model tuning.

  14. MODEST: a web-based design tool for oligonucleotide-mediated genome engineering and recombineering

    PubMed Central

    Bonde, Mads T.; Klausen, Michael S.; Anderson, Mads V.; Wallin, Annika I.N.; Wang, Harris H.; Sommer, Morten O.A.

    2014-01-01

    Recombineering and multiplex automated genome engineering (MAGE) offer the possibility to rapidly modify multiple genomic or plasmid sites at high efficiencies. This enables efficient creation of genetic variants including both single mutants with specifically targeted modifications as well as combinatorial cell libraries. Manual design of oligonucleotides for these approaches can be tedious, time-consuming, and may not be practical for larger projects targeting many genomic sites. At present, the change from a desired phenotype (e.g. altered expression of a specific protein) to a designed MAGE oligo, which confers the corresponding genetic change, is performed manually. To address these challenges, we have developed the MAGE Oligo Design Tool (MODEST). This web-based tool allows designing of MAGE oligos for (i) tuning translation rates by modifying the ribosomal binding site, (ii) generating translational gene knockouts and (iii) introducing other coding or non-coding mutations, including amino acid substitutions, insertions, deletions and point mutations. The tool automatically designs oligos based on desired genotypic or phenotypic changes defined by the user, which can be used for high efficiency recombineering and MAGE. MODEST is available for free and is open to all users at http://modest.biosustain.dtu.dk. PMID:24838561
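    The core design step such a tool automates can be sketched in a few lines. This toy code is our illustration, not MODEST's algorithm; the 90-mer length and the lagging-strand orientation are common MAGE conventions rather than values taken from the paper:

```python
# Toy MAGE-style oligo design: build a 90-mer that centers a desired point
# mutation in homology arms copied from the genome.
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMP)[::-1]

def point_mutation_oligo(genome, pos, new_base, length=90, lagging_strand=True):
    """Return an oligo with `new_base` substituted at 0-based `pos`,
    centered in homology arms taken from `genome`."""
    half = length // 2
    if pos < half or pos + half > len(genome):
        raise ValueError("target too close to the sequence end")
    oligo = genome[pos - half:pos] + new_base + genome[pos + 1:pos + half]
    # MAGE oligos conventionally anneal to the lagging strand of the
    # replication fork, hence the optional reverse complement.
    return reverse_complement(oligo) if lagging_strand else oligo

genome = "A" * 60 + "GATTACA" + "A" * 60   # toy sequence
oligo = point_mutation_oligo(genome, pos=63, new_base="C", lagging_strand=False)
```

    MODEST goes much further (ribosome-binding-site tuning, knockouts, codon-aware substitutions), but every mode ultimately emits oligos of this general shape.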

  15. AC propulsion system for an electric vehicle, phase 2

    NASA Astrophysics Data System (ADS)

    Slicker, J. M.

    1983-06-01

    A second-generation prototype ac propulsion system for a passenger electric vehicle was designed, fabricated, tested, installed in a modified Mercury Lynx vehicle and track tested at the Contractor's site. The system consisted of a Phase 2, 18.7 kW rated ac induction traction motor, a 192-volt, battery powered, pulse-width-modulated, transistorized inverter packaged for under rear seat installation, a 2-axis, 2-speed, automatically-shifted mechanical transaxle and a microprocessor-based powertrain/vehicle controller. A diagnostics computer to assist tuning and fault finding was fabricated. DC-to-mechanical-system efficiency varied from 78% to 82% as axle speed/torque ranged from 159 rpm/788 N·m to 65 rpm/328 N·m. Track test efficiency results suggest that the ac system will be equal or superior to dc systems when driving urban cycles. Additional short-term work is being performed under a third contract phase (AC-3) to raise transaxle efficiency to predicted levels, and to improve starting and shifting characteristics. However, the long-term challenge to the system's viability remains inverter cost. A final report on the Phase 2 system, describing Phase 3 modifications, will be issued at the conclusion of AC-3.

  16. Acceleration of the highest energy cosmic rays through proton-neutron conversions in relativistic bulk flows

    NASA Astrophysics Data System (ADS)

    Derishev, E.; Aharonian, F.

    We show that, in the presence of a radiation field, relativistic bulk flows can very quickly accelerate protons and electrons up to the energies limited either by the Hillas criterion or by synchrotron losses. Unlike the traditional approach, we take advantage of continuous photon-induced conversion of charged particle species to neutral ones, and vice versa (proton-neutron or electron-photon). Such a conversion, though it leads to considerable energy losses, allows accelerated particles to increase their energies in each scattering by a factor roughly equal to the bulk Lorentz factor, thus avoiding the need for slow and relatively inefficient diffusive acceleration. The optical depth of the accelerating region with respect to inelastic photon-induced reactions (pair production for electrons and photomeson reactions for protons) should be a substantial fraction of unity. Remarkably, self-tuning of the optical depth is automatically achieved as long as the photon density depends on the distance along the bulk flow. This mechanism can work in Gamma-Ray Bursts (GRBs), Active Galactic Nuclei (AGNs), microquasars, or any other object with relativistic bulk flows embedded in a radiation-rich environment. Both GRBs and AGNs turn out to be capable of producing 10^20 eV cosmic rays.
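    The Hillas criterion invoked above is a confinement bound: a region of size R threaded by magnetic field B cannot hold (and therefore cannot accelerate) particles beyond E_max ~ Z e B R c. A back-of-the-envelope sketch, with constants and example numbers chosen by us:

```python
# Hillas confinement limit: E_max ~ Z * e * B * R * c.
E_CHARGE = 1.602e-19   # C
C_LIGHT = 2.998e8      # m/s

def hillas_limit_eV(charge_number, b_tesla, size_m):
    """Maximum energy (in eV) a region of size `size_m` with field
    `b_tesla` can confine for a particle of charge Z = charge_number."""
    e_joule = charge_number * E_CHARGE * b_tesla * size_m * C_LIGHT
    return e_joule / E_CHARGE

# Example: protons, a 1 microgauss (1e-10 T) field, a 1 kpc (~3.09e19 m)
# region -> roughly 1e18 eV, so reaching 1e20 eV needs stronger fields,
# larger regions, or the bulk-flow energy boosts the abstract describes.
e_max = hillas_limit_eV(1, 1e-10, 3.086e19)
```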

  17. A cable-driven wrist robotic rehabilitator using a novel torque-field controller for human motion training.

    PubMed

    Chen, Weihai; Cui, Xiang; Zhang, Jianbin; Wang, Jianhua

    2015-06-01

    Rehabilitation technologies have great potential in assisted motion training for stroke patients. Considering that wrist motion plays an important role in arm dexterous manipulation in activities of daily living, this paper focuses on developing a cable-driven wrist robotic rehabilitator (CDWRR) for motion training or assistance for subjects with motor disabilities. The CDWRR utilizes the wrist skeletal joints and arm segments as the supporting structure and takes advantage of a cable-driven parallel design to build the system, which brings the properties of flexibility, low cost, and low weight. The controller of the CDWRR is designed based on a virtual torque field, which plans "assist-as-needed" torques for the spherical motion of the wrist in response to orientation deviations during wrist motion training. The torque-field controller can be customized to different levels of rehabilitation training requirements by tuning the field parameters. Additionally, a rapidly convergent parameter self-identification algorithm is developed to obtain the uncertain parameters automatically for the floating wearable structure of the CDWRR. Finally, experiments on a healthy subject are carried out to demonstrate the performance of the controller and the feasibility of the CDWRR for wrist motion training or assistance.

  18. Hand placement near the visual stimulus improves orientation selectivity in V2 neurons

    PubMed Central

    Sergio, Lauren E.; Crawford, J. Douglas; Fallah, Mazyar

    2015-01-01

    Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally-relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much like how the oculomotor system can deploy attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, there was also a significant sharpening of orientation tuning. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, as opposed to oculomotor mechanisms which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach, but also the optimization of vision for their separate requirements for guiding movements. PMID:25717165

  19. An advanced robust method for speed control of switched reluctance motor

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Ming, Zhengfeng; Su, Zhanping; Cai, Zhuang

    2018-05-01

This paper presents an advanced robust controller for the speed system of a switched reluctance motor (SRM) in the presence of nonlinearities, speed ripple, and external disturbances. Adaptive fuzzy control is applied to regulate the motor speed in the outer loop, and a detector is used to determine the rotor position in the inner loop. The fuzzy logic tuning rules are derived from operator experience and specialist knowledge. The fuzzy parameters are adjusted automatically online according to the speed error and its rate of change during the transient period. The designed detector obtains the rotor position accurately in each phase module. Furthermore, a series of contrastive simulations at low, medium, and high speed compares the proposed controller with a proportional-integral-derivative (PID) controller. Simulations show that the proposed robust controller reduces overshoot by at least 3%, rise time by 6%, and settling time by 20%, respectively, especially under external disturbances. Moreover, an actual SRM control system was constructed (220 V, 370 W). The experimental results further prove that the proposed robust controller has excellent dynamic performance and strong robustness.

  20. Comprehensive comparative analysis and identification of RNA-binding protein domains: multi-class classification and feature selection.

    PubMed

    Jahandideh, Samad; Srinivasasainagendra, Vinodh; Zhi, Degui

    2012-11-07

RNA-protein interaction plays an important role in various cellular processes, such as protein synthesis, gene regulation, post-transcriptional gene regulation, alternative splicing, and infection by RNA viruses. In this study, using the Gene Ontology Annotation (GOA) and Structural Classification of Proteins (SCOP) databases, an automatic procedure was designed to capture structurally solved RNA-binding protein domains in different subclasses. Subsequently, we applied a tuned multi-class SVM (TMCSVM), Random Forest (RF), and multi-class ℓ1/ℓq-regularized logistic regression (MCRLR) to analyze and classify RNA-binding protein domains based on a comprehensive set of sequence and structural features. We compared the prediction accuracy of these three state-of-the-art methods. TMCSVM outperforms the other methods, suggesting its potential as a useful tool for facilitating the multi-class prediction of RNA-binding protein domains. On the other hand, MCRLR, by elucidating the contribution of individual features to the predictive accuracy for RNA-binding protein domain subclasses, provides biological insights into the roles of sequence and structure in protein-RNA interactions.

  1. Unsupervised learning of structure in spectroscopic cubes

    NASA Astrophysics Data System (ADS)

    Araya, M.; Mendoza, M.; Solar, M.; Mardones, D.; Bayo, A.

    2018-07-01

We consider the problem of analyzing the structure of spectroscopic cubes using unsupervised machine learning techniques. We propose representing the target's signal as a homogeneous set of volumes through an iterative algorithm that separates the structured emission from the background without overestimating the flux. Besides verifying some basic theoretical properties, the algorithm is designed to be tuned by domain experts, because its parameters have meaningful values in the astronomical context. Nevertheless, we propose a heuristic to estimate the signal-to-noise ratio parameter of the algorithm automatically, directly from the data. The resulting lightweight set of samples (≤ 1% of the original data) offers several advantages. For instance, it is statistically sound and computationally inexpensive to apply well-established pattern recognition and machine learning techniques, such as clustering and dimensionality reduction algorithms. We use ALMA science verification data to validate our method, and present examples of the operations that can be performed with the proposed representation. Even though this approach is focused on providing faster and better analysis tools for the end-user astronomer, it also opens the possibility of content-aware data discovery by applying our algorithm to big data.
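The iterative signal/background separation described above can be sketched as a sigma-clipping-style loop; the stopping rule, the positive-emission threshold, and the 1-D stand-in for a cube are assumptions for illustration, not the authors' exact algorithm.

```python
# Assumed sigma-clipping style loop: estimate the background RMS from samples
# not flagged as signal, flag samples above snr * rms as structured emission,
# and repeat until the selection stabilizes.
def separate(samples, snr=3.0, max_iter=50):
    signal = set()
    rms = 0.0
    for _ in range(max_iter):
        background = [v for i, v in enumerate(samples) if i not in signal]
        rms = (sum(v * v for v in background) / len(background)) ** 0.5
        new_signal = {i for i, v in enumerate(samples) if v > snr * rms}
        if new_signal == signal:
            break                          # selection stabilized
        signal = new_signal
    return signal, rms

# Nineteen noise-like samples plus one bright emission sample at index 19.
cube = [0.1, -0.1, 0.12, -0.08, 0.05, 0.1, -0.12, 0.09, -0.05, 0.11,
        -0.1, 0.08, 0.1, -0.09, 0.07, -0.11, 0.1, -0.06, 0.09, 10.0]
signal, rms = separate(cube)
```

Excluding flagged samples before re-estimating the RMS is what keeps the bright emission from inflating the background estimate, i.e., from "overestimating the flux" of the noise floor.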

  2. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy, and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases for the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasi-global optimal solutions were found, and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology is a crucial asset when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system are presented in detail.
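The automated model-to-test correlation described above can be illustrated on a toy problem: a one-parameter steady-state thermal model tuned by Gauss-Newton iteration with finite-difference sensitivities. The model, names, and values below are hypothetical and far simpler than the JWST thermal models.

```python
# Toy model-to-test correlation (hypothetical, not JWST code): one node with
# heat load q conducted to a sink through an uncertain conductance G, so the
# predicted temperature is T = t_sink + q / G.
def predict(G, q=10.0, t_sink=20.0):
    return t_sink + q / G

def tune(t_meas, g0=1.0, iters=50, h=1e-6):
    """Gauss-Newton on the single residual, with a finite-difference sensitivity."""
    G = g0
    for _ in range(iters):
        r = predict(G) - t_meas                  # model-minus-data residual
        s = (predict(G + h) - predict(G)) / h    # sensitivity dT/dG
        G -= r * s / (s * s)                     # Gauss-Newton update
    return G

# "Test data" generated from a true conductance of 2.0 (predict(2.0) == 25.0).
G_hat = tune(t_meas=25.0)
```

The finite-difference sensitivity plays the role of the precomputed sensitivity library mentioned in the abstract: it can be evaluated before any data arrive and reused during the fit.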

  3. AC propulsion system for an electric vehicle, phase 2

    NASA Technical Reports Server (NTRS)

    Slicker, J. M.

    1983-01-01

    A second-generation prototype ac propulsion system for a passenger electric vehicle was designed, fabricated, tested, installed in a modified Mercury Lynx vehicle and track tested at the Contractor's site. The system consisted of a Phase 2, 18.7 kw rated ac induction traction motor, a 192-volt, battery powered, pulse-width-modulated, transistorized inverter packaged for under rear seat installation, a 2-axis, 2-speed, automatically-shifted mechanical transaxle and a microprocessor-based powertrain/vehicle controller. A diagnostics computer to assist tuning and fault finding was fabricated. Dc-to-mechanical-system efficiency varied from 78% to 82% as axle speed/torque ranged from 159 rpm/788 nm to 65 rpm/328 nm. Track test efficiency results suggest that the ac system will be equal or superior to dc systems when driving urban cycles. Additional short-term work is being performed under a third contract phase (AC-3) to raise transaxle efficiency to predicted levels, and to improve starting and shifting characteristics. However, the long-term challenge to the system's viability remains inverter cost. A final report on the Phase 2 system, describing Phase 3 modifications, will be issued at the conclusion of AC-3.

  4. Water use and time analysis in ablution from taps

    NASA Astrophysics Data System (ADS)

    Zaied, Roubi A.

    2017-09-01

There is a shortage of water resources and excessive use of potable water in our Arab region. Ablution from taps was studied since it is a repeated daily activity that consumes considerable water. Five different tap types were investigated for water consumption patterns, including a traditional mixing tap and an automatic tap. Analysis of 100 experimental observations revealed that 22.7-28.8 % of ablution water is used for washing the feet, and the largest water waste occurs while washing parts of the face. Moreover, 30-47 % of the water consumed in ablution from taps is wasted and could be saved if the tap released water only at moments of need. The push-type tap has recently spread, especially in airports. If it is intended for use in ablution facilities, the batch duration and volume must be tuned. When each batch is 0.25 L of water and lasts 3 s, 3 L are sufficient on average for one complete ablution, which means considerable saving. A cost-benefit model is proposed for using different tap types, and an economic feasibility study is performed on a case study. This analysis can help design better ablution systems.

  5. Trajectory Based Behavior Analysis for User Verification

    NASA Astrophysics Data System (ADS)

    Pao, Hsing-Kuo; Lin, Hong-Yi; Chen, Kuan-Ta; Fadlil, Junaidillah

Many of our activities on computers require a verification step for authorized access. The goal of verification is to tell the true account owner apart from intruders. We propose a general approach for user verification based on user trajectory inputs. The approach is labor-free for users and is likely to resist copying or simulation by non-authorized users or even automatic programs such as bots. Our study focuses on finding the hidden patterns embedded in the trajectories produced by account users. We employ a Markov chain model with Gaussian-distributed transitions to describe the behavior in a trajectory. To distinguish between two trajectories, we propose a novel dissimilarity measure combined with manifold-learned tuning to capture the pairwise relationship. Based on the pairwise relationship, we can plug in any effective classification or clustering method to detect unauthorized access. The method can also be applied to the task of recognition, predicting the trajectory type without a pre-defined identity. Given a trajectory input, the results show that the proposed method can accurately verify the user identity, or suggest who owns the trajectory if the input identity is not provided.
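As a rough illustration of trajectory-based verification, the sketch below replaces the paper's Markov chain with a single Gaussian model of step lengths and scores a candidate trajectory by its average log-likelihood; the trajectories, parameters, and the model itself are illustrative assumptions.

```python
import math

# Single-Gaussian stand-in for the paper's Markov chain (all values assumed):
# enroll on the owner's trajectory, then score candidates by the average
# log-likelihood of their step lengths under the enrolled model.
def fit_steps(traj):
    steps = [math.dist(a, b) for a, b in zip(traj, traj[1:])]
    mu = sum(steps) / len(steps)
    var = sum((s - mu) ** 2 for s in steps) / len(steps)
    return mu, max(var, 1e-9) ** 0.5

def avg_loglik(traj, mu, sigma):
    steps = [math.dist(a, b) for a, b in zip(traj, traj[1:])]
    return sum(-0.5 * ((s - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for s in steps) / len(steps)

owner = [(i, 0.3 * math.sin(i)) for i in range(60)]   # short, steady steps
intruder = [(3 * i, 0.0) for i in range(60)]          # much longer steps
mu, sigma = fit_steps(owner)
ll_owner = avg_loglik(owner, mu, sigma)
ll_intruder = avg_loglik(intruder, mu, sigma)
```

Thresholding such a score is the simplest plug-in decision rule; the paper's dissimilarity measure would replace this likelihood with a pairwise comparison between trajectories.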

  6. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are the selection of the filter's error states and the tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
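ENFAD's second optimization loop — varying a filter parameter and keeping the value with the best Monte Carlo performance — can be sketched for a one-dimensional random-walk filter. The model, candidate grid, and cost below are hypothetical placeholders, not ENFAD internals.

```python
import math, random

# Hypothetical stand-in for the parameter-tuning loop: a 1-D random-walk
# Kalman filter whose process-noise variance q is selected by Monte Carlo.
def run_filter(q, r=1.0, n=200, seed=0):
    """Return the RMSE of the filter over one simulated run."""
    rng = random.Random(seed)
    x_true, x_est, p = 0.0, 0.0, 1.0
    sq_err = 0.0
    for _ in range(n):
        x_true += rng.gauss(0.0, 0.3)              # true process noise (var 0.09)
        z = x_true + rng.gauss(0.0, math.sqrt(r))  # noisy measurement
        p += q                                     # predict
        k = p / (p + r)                            # Kalman gain
        x_est += k * (z - x_est)                   # measurement update
        p *= (1.0 - k)
        sq_err += (x_est - x_true) ** 2
    return math.sqrt(sq_err / n)

def tune_q(candidates, n_mc=20):
    """Keep the candidate with the lowest RMSE averaged over n_mc runs."""
    def cost(q):
        return sum(run_filter(q, seed=s) for s in range(n_mc)) / n_mc
    return min(candidates, key=cost)

best_q = tune_q([0.001, 0.01, 0.09, 1.0])
```

Averaging over many seeded runs is the essential point: a single run could favor a mismatched `q` by chance, while the Monte Carlo average reliably selects the candidate matching the true process noise.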

  7. Color correction pipeline optimization for digital cameras

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.

  8. Classification of breast cancer cytological specimen using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

The paper presents a deep learning approach for automatic classification of breast tumors based on fine-needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign and 25 malignant cases) diagnosed at the Regional Hospital in Zielona Góra. To classify the microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Because images of cytological specimens are very large (on average 200000 × 100000 pixels), they were divided into smaller patches of 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei; therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. The neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as cross-entropy classification loss. Classification accuracy was defined as the percentage of successfully classified validation patches out of the total number of validation patches. The best accuracy, 83%, was obtained by the GoogLeNet model. We observed that a larger share of the misclassified patches belonged to malignant cases.
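The patch-wise pipeline (tile a huge specimen image, classify each patch, aggregate) can be sketched independently of any particular CNN; the stub classifier and the fraction-of-votes aggregation below are illustrative assumptions.

```python
# Sketch of the patch-wise scheme with a stub classifier (all names and the
# decision rule are illustrative; no CNN here).
PATCH = 256

def tile(width, height, patch=PATCH):
    """Yield top-left corners of non-overlapping patch x patch tiles."""
    for y in range(0, height - patch + 1, patch):
        for x in range(0, width - patch + 1, patch):
            yield x, y

def classify_specimen(width, height, patch_classifier):
    """Aggregate per-patch votes (1 = malignant) into a specimen-level score."""
    votes = [patch_classifier(x, y) for x, y in tile(width, height)]
    return sum(votes) / len(votes)

# Stub: pretend patches in the right half of a 1024 x 512 image are malignant.
score = classify_specimen(1024, 512, lambda x, y: 1 if x >= 512 else 0)
```

In a real system the lambda would be replaced by the trained network's prediction for the pixel block at `(x, y)`, and the SVM-based patch filter would run before voting.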

  9. Event-Based control of depth of hypnosis in anesthesia.

    PubMed

    Merigo, Luca; Beschi, Manuel; Padula, Fabrizio; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio

    2017-08-01

In this paper, we propose the use of an event-based control strategy for closed-loop control of the depth of hypnosis in anesthesia, using propofol administration and the bispectral index as the controlled variable. A new event generator with high noise-filtering properties is employed in addition to a PIDPlus controller. The parameters are tuned off-line using genetic algorithms on a given data set of patients. The effectiveness and robustness of the method are verified in simulation by implementing a Monte Carlo method to address intra-patient and inter-patient variability. A comparison with a standard PID control structure shows that the event-based control system reduces the total variation of the manipulated variable by 93% in the induction phase and by 95% in the maintenance phase. The use of event-based automatic control in anesthesia yields a fast induction phase with bounded overshoot and acceptable disturbance rejection, effectively mimicking the behavior of the anesthesiologist.
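A minimal sketch of event-based control, assuming a toy first-order plant and a dead-band event generator (much simpler than the paper's PIDPlus setup), shows why recomputing the control signal only on significant error changes reduces the total variation of the manipulated variable under measurement noise.

```python
import random

# Toy event-based PI loop (plant, gains, and dead-band delta are assumed):
# the control signal u is recomputed only when the measured error has moved
# by more than delta since the last event.
def simulate(event_based, noise, kp=0.8, ki=0.4, delta=0.05, dt=0.1):
    """PI control of the first-order plant x' = -x + u toward set point 1.0."""
    x, u, integ = 0.0, 0.0, 0.0
    last_err = float("inf")
    tv = 0.0                                  # total variation of u
    for w in noise:
        err = 1.0 - (x + w)                   # noisy error measurement
        if not event_based or abs(err - last_err) > delta:
            integ += ki * err * dt
            u_new = kp * err + integ
            tv += abs(u_new - u)
            u = u_new
            last_err = err                    # event: remember the error level
        x += dt * (-x + u)                    # plant step (forward Euler)
    return x, tv

rng = random.Random(3)
noise = [rng.gauss(0.0, 0.02) for _ in range(400)]
x_evt, tv_evt = simulate(True, noise)
x_per, tv_per = simulate(False, noise)        # time-triggered baseline
```

The time-triggered baseline nudges `u` on every noisy sample, so its total variation accumulates noise; the event generator ignores sub-threshold fluctuations while still reacting to real error movement.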

  10. Novel micro-Raman setup with tunable laser excitation for time-efficient resonance Raman microscopy and imaging.

    PubMed

    Stürzl, Ninette; Lebedkin, Sergei; Klumpp, Stefanie; Hennrich, Frank; Kappes, Manfred M

    2013-05-07

We describe a micro-Raman setup allowing for efficient resonance Raman spectroscopy (RRS), i.e., mapping of Raman spectra as a function of tunable laser excitation wavelength. The instrument employs angle-tunable bandpass optical filters which are integrated into software-controlled Raman and laser cleanup filter devices. These automatically follow the excitation laser wavelength and combine tunability with high bandpass transmission as well as high off-band blocking of light. Whereas the spectral intervals which can be simultaneously acquired are bandpass limited to ~350 cm⁻¹, they can be tuned across the spectrum of interest to access all characteristic Raman features. As an illustration of performance, we present Raman mapping of single-walled carbon nanotubes (SWNTs): (i) in a small volume of water-surfactant dispersion as well as (ii) after deposition onto a substrate. A significant improvement in the acquisition time (and efficiency) is demonstrated compared to previous RRS implementations. These results may help to establish (micro) Raman spectral mapping as a routine tool for characterization of SWNTs as well as other materials with a pronounced resonance Raman response in the visible-near-infrared spectral region.

  11. Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    PubMed Central

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2013-01-01

To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, the deployment of robots and the adaptation of their services to new environments are currently tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras enhance the robots' perceptions and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, enabling them to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which demonstrate the good performance of our proposal. PMID:23271604

  12. Automation and Upgrade of Thermal System for Large 38-Year-Young Test Facility

    NASA Technical Reports Server (NTRS)

    Webb, Andrew T.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The Goddard Space Flight Center's Space Environment Simulator (SES) facility has been improved by the upgrade of its thermal control hardware and software. This paper describes the preliminary design process, funding constraints, and the proposed enhancements as well as the installation details, the testing difficulties, and the overall benefits realized from this upgrade. The preliminary design process was discussed in a paper presented in October 1996 and will be recapped in this paper to provide background and comparison to actual product. Structuring the procurement process to match the funding constraints allowed Goddard to enhance its capabilities in an environment of reduced budgets. The installation of the new system into a location that has been occupied for over 38 years was one of the driving design factors for the size of the equipment. The installation was completed on time and under budget. The tuning of the automatic sequences for the new thermal system to the existing shroud system required more time and ultimately presented some setbacks to the vendor and the final completion of the system. However, the end product and its benefits to Goddard's thermal vacuum test portfolio will carry the usefulness of this facility well into the next century.

  13. Automation and Upgrade of Thermal System for Large 38-Year Young Test Facility

    NASA Technical Reports Server (NTRS)

    Webb, Andrew

    2000-01-01

The Goddard Space Flight Center's Space Environment Simulator (SES) facility has been improved by the upgrade of its thermal control hardware and software. This paper describes the preliminary design process, funding constraints, and the proposed enhancements as well as the installation details, the testing difficulties, and the overall benefits realized from this upgrade. The preliminary design process was discussed in a paper presented in October 1996 and will be recapped in this paper to provide background and comparison to actual product. Structuring the procurement process to match the funding constraints allowed Goddard to enhance its capabilities in an environment of reduced budgets. The installation of the new system into a location that has been occupied for over 38 years was one of the driving design factors for the size of the equipment. The installation was completed on time and under budget. The tuning of the automatic sequences for the new thermal system to the existing shroud system required more time and ultimately presented some setbacks to the vendor and the final completion of the system. However, the end product and its benefits to Goddard's thermal vacuum test portfolio will carry the usefulness of this facility well into the next century.

  14. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy, and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases for the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasi-global optimal solutions were found, and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology is a crucial asset when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system are presented in detail.

  15. A system for de-identifying medical message board text.

    PubMed

    Benton, Adrian; Hill, Shawndra; Ungar, Lyle; Chung, Annie; Leonard, Charles; Freeman, Cristin; Holmes, John H

    2011-06-09

There are millions of public posts to medical message boards by users seeking support and information on a wide range of medical conditions. It has been shown that these posts can be used to gain a greater understanding of patients' experiences and concerns. As investigators continue to explore large corpora of medical discussion board data for research purposes, protecting the privacy of the members of these online communities becomes an important challenge that needs to be met. Extant entity recognition methods used for more structured text are not sufficient because message posts present additional challenges: they contain many typographical errors, a larger variety of possible names, terms and abbreviations specific to Internet posts or a particular message board, and mentions of the authors' personal lives. The main contribution of this paper is a system that automatically de-identifies the authors of message board posts, taking these challenges into account. We demonstrate our system on two different message board corpora, one on breast cancer and another on arthritis. We show that our approach significantly outperforms other publicly available named entity recognition and de-identification systems, which have been tuned for more structured text such as operative reports, pathology reports, discharge summaries, or newswire.
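A toy de-identification pass, far simpler than the system described above, can illustrate the idea of masking identifying spans in message-board text; the patterns and the tiny name lexicon are illustrative assumptions.

```python
import re

# Toy de-identification pass: masks e-mail addresses, URLs, and capitalized
# words found in a tiny (hypothetical) first-name lexicon. A real system
# needs far more than regular expressions to handle typos and informal names.
NAME_LEXICON = {"alice", "bob", "carol"}

def deidentify(post):
    post = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", post)
    post = re.sub(r"https?://\S+", "[URL]", post)
    def mask_name(m):
        # Mask only capitalized words that look like known first names.
        return "[NAME]" if m.group(0).lower() in NAME_LEXICON else m.group(0)
    return re.sub(r"\b[A-Z][a-z]+\b", mask_name, post)

out = deidentify("Alice wrote to bob@example.com about http://forum.org/t/12")
```

The lexicon lookup is exactly where such a rule-based pass breaks down on informal posts (misspelled or lowercased names), which is the gap the paper's learned system addresses.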

  16. Local Circuit Inhibition in the Cerebral Cortex as the source of Gain Control and Untuned Suppression

    PubMed Central

    Shapley, Robert M.; Xing, Dajun

    2012-01-01

    Theoretical considerations have led to the concept that the cerebral cortex is operating in a balanced state in which synaptic excitation is approximately balanced by synaptic inhibition from the local cortical circuit. This paper is about the functional consequences of the balanced state in sensory cortex. One consequence is gain control: there is experimental evidence and theoretical support for the idea that local circuit inhibition acts as a local automatic gain control throughout the cortex. Second, inhibition increases cortical feature selectivity: many studies of different sensory cortical areas have reported that suppressive mechanisms contribute to feature selectivity. Synaptic inhibition from the local microcircuit should be untuned (or broadly tuned) for stimulus features because of the microarchitecture of the cortical microcircuit. Untuned inhibition probably is the source of Untuned Suppression that enhances feature selectivity. We studied inhibition’s function in our experiments, guided by a neuronal network model, on orientation selectivity in the primary visual cortex, V1, of the Macaque monkey. Our results revealed that Untuned Suppression, generated by local circuit inhibition, is crucial for the generation of highly orientation-selective cells in V1 cortex. PMID:23036513

  17. Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments.

    PubMed

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2012-12-27

To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, the deployment of robots and the adaptation of their services to new environments are currently tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras enhance the robots' perceptions and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, enabling them to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which demonstrate the good performance of our proposal.

  18. Optically stabilized Erbium fiber frequency comb with hybrid mode-locking and a broad tunable range of repetition rate.

    PubMed

    Yang, Honglei; Wu, Xuejian; Zhang, Hongyuan; Zhao, Shijie; Yang, Lijun; Wei, Haoyun; Li, Yan

    2016-12-01

We present an optically stabilized Erbium fiber frequency comb with a broad repetition-rate tuning range based on a hybrid mode-locked oscillator. We lock two comb modes to narrow-linewidth reference lasers in turn to investigate the best performance of the control loops. The control bandwidth of the fast and slow piezoelectric transducers reaches 70 kHz, while that of pump-current modulation with phase-lead compensation is extended to 32 kHz, exceeding the laser's intrinsic response. Eventually, simultaneous locking of both loops is realized to fully phase-stabilize the comb, which will facilitate precision dual-comb spectroscopy, laser ranging, and timing distribution. In addition, a 1.8-MHz span of the repetition rate is achieved by an automatic optical delay line, which is helpful in building a second comb with a similar repetition rate. The oscillator is housed in a homemade temperature-controlled box with an accuracy of ±0.02 K, which not only maintains a high signal-to-noise ratio of the beat notes with the reference lasers, but also guarantees self-starting into the same mode-locking state every time.

  19. In-flight results of adaptive attitude control law for a microsatellite

    NASA Astrophysics Data System (ADS)

    Pittet, C.; Luzi, A. R.; Peaucelle, D.; Biannic, J.-M.; Mignot, J.

    2015-06-01

Because satellites usually do not experience large changes of mass, center of gravity, or inertia in orbit, linear time-invariant (LTI) controllers have been widely used to control their attitude. But as pointing requirements become more stringent and satellite structures more complex, with large steerable and/or deployable appendages and flexible modes occurring in the control bandwidth, a single LTI controller is no longer sufficient. One solution consists in designing several LTI controllers, one for each set point, but the switching between them is difficult to tune and validate. Another interesting solution is to use adaptive controllers, which offer at least two advantages: first, as the controller automatically and continuously adapts to the set point without changing its structure, no switching logic is needed in the software; second, performance and stability of the closed-loop system can be assessed directly over the whole flight domain. To evaluate the real benefits of adaptive control for satellites, in terms of design, validation, and performance, CNES selected it as an end-of-life experiment on the PICARD microsatellite. This paper describes the design, validation, and in-flight results of the new adaptive attitude control law, compared with the nominal control law.

  20. A Microwave Flow Detector for Gradient Elution Liquid Chromatography.

    PubMed

    Ye, Duye; Wang, Weizheng; Moline, David; Islam, Md Saiful; Chen, Feng; Wang, Pingshan

    2017-10-17

This study presents a microwave flow detector technique for liquid chromatography (LC) applications. The detector is based on a tunable microwave interferometer (MIM) with a vector network analyzer (VNA) for signal measurement and a computer for system control. A microstrip-line-based 0.3 μL flow cell is built and incorporated into the MIM. With syringe-pump injection, the detector is evaluated by measuring a few common chemicals in DI water at multiple frequencies from 0.98 to 7.09 GHz. A minimum detectable quantity (MDQ) of less than 30 ng is demonstrated. An algorithm is provided and used to obtain the sample dielectric permittivity at each frequency point. When connected to a commercial HPLC system and injected with a 10 μL aliquot of 10 000 ppm caffeine DI-water solution, the microwave detector yields a signal-to-noise ratio (SNR) up to 10 under isocratic and gradient elution operations. The maximum sampling rate is 20 Hz. The measurements show that MIM tuning, aided by a digital tunable attenuator (DTA), can automatically adjust MIM operation to retain detector sensitivity when the mobile phase changes. Furthermore, the detector demonstrates a capability to quantify coeluted vitamin E succinate (VES) and vitamin D3 (VD3).

  1. A novel technique for optimal integration of active steering and differential braking with estimation to improve vehicle directional stability.

    PubMed

    Mirzaeinejad, Hossein; Mirzaei, Mehdi; Rafatnia, Sadra

    2018-06-11

    This study deals with enhancing the directional stability of a vehicle turning at high speed on various road conditions using integrated active steering and differential braking systems. The aim is minimal use of intentional asymmetric braking force to compensate for the drawbacks of active steering control, with only a small reduction of vehicle longitudinal speed. To this end, a new optimal multivariable controller is analytically developed for the integrated steering and braking systems based on the prediction of vehicle nonlinear responses. A fuzzy programming scheme extracted from nonlinear phase-plane analysis is also used to manage the two control inputs in various driving conditions. With the proposed fuzzy programming, the weight factors of the control inputs are automatically tuned and softly changed. In order to simulate a real-world control system, some required information about the system states and parameters which cannot be directly measured is estimated using the Unscented Kalman Filter (UKF). Finally, simulation studies are carried out using a validated vehicle model to show the effectiveness of the proposed integrated control system in the presence of model uncertainties and estimation errors. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
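    The UKF's role here is generic state estimation from noisy measurements. Below is a minimal scalar unscented Kalman filter sketch with Julier-style sigma points on a hypothetical mildly nonlinear process; the paper's vehicle model and measurement setup are not reproduced.

```python
import math, random

random.seed(3)
# Minimal scalar UKF (Julier-style sigma points, kappa = 2) on an assumed
# mildly nonlinear process x' = f(x) + w with linear measurement z = x + v.
def f(x):
    return 0.9 * x + 0.5 * math.sin(x)   # hypothetical process model

q, r, kappa = 0.05, 0.4, 2.0             # noise variances, spread parameter
x_est, p_cov = 0.0, 1.0
true_x = 1.0
err_filter = err_meas = 0.0
for _ in range(500):
    true_x = f(true_x) + random.gauss(0, math.sqrt(q))
    z = true_x + random.gauss(0, math.sqrt(r))
    s = math.sqrt((1 + kappa) * p_cov)   # sigma-point spread (state dim n = 1)
    pts = [x_est, x_est + s, x_est - s]
    w = [kappa / (1 + kappa), 0.5 / (1 + kappa), 0.5 / (1 + kappa)]
    fp = [f(px) for px in pts]           # propagate sigma points
    x_pred = sum(wi * fi for wi, fi in zip(w, fp))
    p_pred = q + sum(wi * (fi - x_pred) ** 2 for wi, fi in zip(w, fp))
    k_gain = p_pred / (p_pred + r)       # update for the linear measurement
    x_est = x_pred + k_gain * (z - x_pred)
    p_cov = (1 - k_gain) * p_pred
    err_filter += (x_est - true_x) ** 2
    err_meas += (z - true_x) ** 2
print(err_filter < err_meas)             # filtered estimate beats raw sensor
```

Propagating a small set of deterministically chosen sigma points through the nonlinearity is what lets the UKF avoid the Jacobians an EKF would need.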

  2. Optimal updating magnitude in adaptive flat-distribution sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery

    2017-11-01

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
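    The inverse-time schedule for the single-bin updating scheme can be illustrated on a toy system. The sketch below runs Wang-Landau sampling with the 1/t updating magnitude to recover the known multiplicities of two-dice sums; it illustrates the schedule only, not the bandpass schemes proposed in the paper.

```python
import math, random

random.seed(0)
# Wang-Landau with the 1/t ("inverse-time") updating schedule on a toy
# system: recover the relative multiplicities of two-dice sums (1..6..1).
S = {s: 0.0 for s in range(2, 13)}       # running estimate of ln g(sum)
d1, d2 = 1, 1
lnf = 1.0                                # initial updating magnitude
for t in range(1, 300001):
    i = random.randrange(6) + 1          # propose re-rolling one die
    if random.random() < 0.5:
        new1, new2 = i, d2
    else:
        new1, new2 = d1, i
    old_s, new_s = d1 + d2, new1 + new2
    # accept with min(1, g(old)/g(new)) -> flattens the sum histogram
    if math.log(random.random() + 1e-300) < S[old_s] - S[new_s]:
        d1, d2 = new1, new2
    S[d1 + d2] += lnf                    # single-bin update at current state
    lnf = min(lnf, 11.0 / t)             # 1/t schedule (11 bins)
ratio = math.exp(S[7] - S[2])            # true multiplicity ratio is 6/1
print(round(ratio, 1))
```

Capping the updating magnitude by (number of bins)/t is the simple form of the inverse-time formula; without that decay the bias estimate would never stop fluctuating.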

  3. Knowledge engineering for PACES, the particle accelerator control expert system

    NASA Astrophysics Data System (ADS)

    Lind, P. C.; Poehlman, W. F. S.; Stark, J. W.; Cousins, T.

    1992-04-01

    The KN-3000 used at Defense Research Establishment Ottawa is a Van de Graaff particle accelerator employed primarily to produce monoenergetic neutrons for calibrating radiation detectors. To provide training and assistance for new operators, it was decided to develop an expert system for accelerator operation. Knowledge engineering aspects of the expert system are reviewed. Two important issues are involved: the need to encapsulate expert knowledge into the system in a form that facilitates automatic accelerator operation and to partition the system so that time-consuming inferencing is minimized in favor of faster, more algorithmic control. It is seen that accelerator control will require fast, narrow-minded decision making for rapid fine tuning, but slower and broader reasoning for machine startup, shutdown, fault diagnosis, and correction. It is also important to render the knowledge base in a form conducive to operator training. A promising form of the expert system involves a hybrid system in which high level reasoning is performed on the host machine that interacts with the user, while an embedded controller employs neural networks for fast but limited adjustment of accelerator performance. This partitioning of duty facilitates a hierarchical chain of command yielding an effective mixture of speed and reasoning ability.

  4. A cable-driven wrist robotic rehabilitator using a novel torque-field controller for human motion training

    NASA Astrophysics Data System (ADS)

    Chen, Weihai; Cui, Xiang; Zhang, Jianbin; Wang, Jianhua

    2015-06-01

    Rehabilitation technologies have great potential in assisted motion training for stroke patients. Considering that wrist motion plays an important role in arm dexterous manipulation of activities of daily living, this paper focuses on developing a cable-driven wrist robotic rehabilitator (CDWRR) for motion training or assistance to subjects with motor disabilities. The CDWRR utilizes the wrist skeletal joints and arm segments as the supporting structure and takes advantage of cable-driven parallel design to build the system, which brings the properties of flexibility, low-cost, and low-weight. The controller of the CDWRR is designed based on a virtual torque field, which plans "assist-as-needed" torques for the spherical motion of the wrist in response to orientation deviations during wrist motion training. The torque-field controller can be customized to different levels of rehabilitation training requirements by tuning the field parameters. Additionally, a rapidly convergent parameter self-identification algorithm is developed to obtain the uncertain parameters automatically for the floating wearable structure of the CDWRR. Finally, experiments on a healthy subject are carried out to demonstrate the performance of the controller and the feasibility of the CDWRR on wrist motion training or assistance.

  5. Reduction of capsule endoscopy reading times by unsupervised image mining.

    PubMed

    Iakovidis, D K; Tsevas, S; Polydorou, A

    2010-09-01

    The screening of the small intestine has become painless and easy with wireless capsule endoscopy (WCE), a revolutionary, relatively non-invasive imaging technique performed by a wireless swallowable endoscopic capsule transmitting thousands of video frames per examination. The average time required for the visual inspection of a full 8-h WCE video ranges from 45 to 120 min, depending on the experience of the examiner. In this paper, we propose a novel approach to WCE reading time reduction by unsupervised mining of video frames. The proposed methodology is based on a data reduction algorithm which is applied according to a novel scheme for the extraction of representative video frames from a full length WCE video. It can be used either as a video summarization or as a video bookmarking tool, providing the comparative advantage of being general, unbounded by the finiteness of a training set. The number of frames extracted is controlled by a parameter that can be tuned automatically. Comprehensive experiments on real WCE videos indicate that a significant reduction in the reading times is feasible. In the case of the WCE videos used, this reduction reached 85% without any loss of abnormalities.
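    The general shape of such reading-time reduction can be sketched with a trivial stand-in: keep a frame only when its feature distance to the last kept frame exceeds a threshold, the threshold playing the role of the automatically tuned parameter. This is illustrative only; the paper's unsupervised mining scheme is not reproduced.

```python
# Toy representative-frame selection: keep a frame only when it differs
# enough from the last kept one. Feature values and threshold are made up.
frames = [0.0, 0.05, 0.1, 2.0, 2.1, 2.05, 5.0, 5.02]   # 1-D "frame features"
threshold = 1.0          # stand-in for the automatically tuned parameter
kept = [0]               # always keep the first frame
for i in range(1, len(frames)):
    if abs(frames[i] - frames[kept[-1]]) > threshold:
        kept.append(i)
print(kept)   # -> [0, 3, 6]: one representative per visually distinct run
```

Raising the threshold shrinks the summary (shorter reading time); lowering it keeps more frames, which is exactly the trade-off the tuned parameter controls.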

  6. Optimal updating magnitude in adaptive flat-distribution sampling.

    PubMed

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.

  7. Precision and fast wavelength tuning of a dynamically phase-locked widely-tunable laser.

    PubMed

    Numata, Kenji; Chen, Jeffrey R; Wu, Stewart T

    2012-06-18

    We report a precision and fast wavelength tuning technique demonstrated for a digital-supermode distributed Bragg reflector laser. The laser was dynamically offset-locked to a frequency-stabilized master laser using an optical phase-locked loop, enabling precision fast tuning to and from any frequencies within a ~40-GHz tuning range. The offset frequency noise was suppressed to the statically offset-locked level in less than ~40 μs upon each frequency switch, allowing the laser to retain the absolute frequency stability of the master laser. This technique satisfies stringent requirements for gas sensing lidars and enables other applications that require such well-controlled precision fast tuning.

  8. Fast and wide tuning wavelength-swept source based on dispersion-tuned fiber optical parametric oscillator.

    PubMed

    Zhou, Yue; Cheung, Kim K Y; Li, Qin; Yang, Sigang; Chui, P C; Wong, Kenneth K Y

    2010-07-15

    We demonstrate a dispersion-tuned fiber optical parametric oscillator (FOPO)-based swept source with a sweep rate of 40 kHz and a wavelength tuning range of 109 nm around 1550 nm. The cumulative speed exceeds 4,000,000 nm/s. The FOPO is pumped by a sinusoidally modulated pump, which is driven by a clock sweeping linearly from 1 to 1.0006 GHz. A spool of dispersion-compensating fiber is added inside the cavity to perform dispersion tuning. The instantaneous linewidth is 0.8 nm without the use of any wavelength selective element inside the cavity. 1 GHz pulses with pulse width of 150 ps are generated.

  9. Learning and Tuning of Fuzzy Rules

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1997-01-01

    In this chapter, we review some of the current techniques for learning and tuning fuzzy rules. For clarity, we refer to the process of generating rules from data as the learning problem and distinguish it from tuning an already existing set of fuzzy rules. For learning, we touch on unsupervised learning techniques such as fuzzy c-means, fuzzy decision tree systems, fuzzy genetic algorithms, and linear fuzzy rules generation methods. For tuning, we discuss Jang's ANFIS architecture, Berenji-Khedkar's GARIC architecture and its extensions in GARIC-Q. We show that the hybrid techniques capable of learning and tuning fuzzy rules, such as CART-ANFIS, RNN-FLCS, and GARIC-RB, are desirable in development of a number of future intelligent systems.
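    Of the unsupervised techniques mentioned, fuzzy c-means is compact enough to sketch directly. The snippet below runs the standard FCM iteration (fuzzifier m = 2) on toy 1-D data; the data values and the choice of two clusters are hypothetical.

```python
# Standard fuzzy c-means iteration (fuzzifier m = 2) on toy 1-D data.
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
c = [0.0, 6.0]                        # initial cluster centers (assumed)
for _ in range(50):
    u = []                            # u[i][j]: membership of point i in cluster j
    for x in data:
        d = [max((x - cj) ** 2, 1e-12) for cj in c]   # squared distances
        inv = [1.0 / dj for dj in d]
        tot = sum(inv)
        u.append([v / tot for v in inv])  # m = 2 membership formula
    # centers: membership^m-weighted means of the data (m = 2)
    c = [sum(u[i][j] ** 2 * data[i] for i in range(len(data)))
         / sum(u[i][j] ** 2 for i in range(len(data)))
         for j in range(2)]
print([round(cj, 2) for cj in c])
```

Each converged center, together with a membership function around it, is the raw material from which a fuzzy rule antecedent ("x is near 1.0") can be generated.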

  10. Tune variations in the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Aquilina, N.; Giovannozzi, M.; Lamont, M.; Sammut, N.; Steinhagen, R.; Todesco, E.; Wenninger, J.

    2015-04-01

    The horizontal and vertical betatron tunes of the Large Hadron Collider (LHC) mainly depend on the strength of the quadrupole magnets, but are also affected by the quadrupole component in the main dipoles. In case of systematic misalignments, the sextupole component from the main dipoles and sextupole corrector magnets also affect the tunes due to the feed down effect. During the first years of operation of the LHC, the tunes have been routinely measured and corrected through either a feedback or a feed forward system. In this paper, the evolution of the tunes during injection, ramp and flat top are reconstructed from the beam measurements and the settings of the tune feedback loop and of the feed forward corrections. This gives the obtained precision of the magnetic model of the machine with respect to quadrupole and sextupole components. Measurements at the injection plateau show an unexpected large decay whose origin is not understood. This data is discussed together with the time constants and the dependence on previous cycles. We present results of dedicated experiments that show that this effect does not originate from the decay of the main dipole component. During the ramp, the tunes drift by about 0.022. It is shown that this is related to the precision of tracking the quadrupole field in the machine and this effect is reduced to about 0.01 tune units during flat top.

  11. Tunable Micro- and Nanomechanical Resonators

    PubMed Central

    Zhang, Wen-Ming; Hu, Kai-Ming; Peng, Zhi-Ke; Meng, Guang

    2015-01-01

    Advances in micro- and nanofabrication technologies have enabled the development of novel micro- and nanomechanical resonators which have attracted significant attention due to their fascinating physical properties and growing potential applications. In this review, we have presented a brief overview of the resonance behavior and frequency tuning principles by varying either the mass or the stiffness of resonators. The progress in micro- and nanomechanical resonators using tuning electrode, tuning fork, and suspended channel structures, and those made of graphene, has been reviewed. We have also highlighted some major influencing factors such as large-amplitude effect, surface effect and fluid effect on the performances of resonators. More specifically, we have addressed the effects of axial stress/strain, residual surface stress and adsorption-induced surface stress on the sensing and detection applications and discussed the current challenges. We have focused in particular on the active and passive frequency tuning methods and techniques for micro- and nanomechanical resonator applications. On one hand, we have comprehensively evaluated the advantages and disadvantages of each strategy, including active methods such as electrothermal, electrostatic, piezoelectrical, dielectric, magnetomotive, photothermal, mode-coupling as well as tension-based tuning mechanisms, and passive techniques such as post-fabrication and post-packaging tuning processes. On the other hand, the tuning capability and challenges to integrate reliable and customizable frequency tuning methods have been addressed. We have additionally concluded with a discussion of important future directions for further tunable micro- and nanomechanical resonators. PMID:26501294

  12. Quasi-continuous frequency tunable terahertz quantum cascade lasers with coupled cavity and integrated photonic lattice.

    PubMed

    Kundu, Iman; Dean, Paul; Valavanis, Alexander; Chen, Li; Li, Lianhe; Cunningham, John E; Linfield, Edmund H; Davies, A Giles

    2017-01-09

    We demonstrate quasi-continuous tuning of the emission frequency from coupled cavity terahertz frequency quantum cascade lasers. Such coupled cavity lasers comprise a lasing cavity and a tuning cavity which are optically coupled through a narrow air slit and are operated above and below the lasing threshold current, respectively. The emission frequency of these devices is determined by the Vernier resonance of longitudinal modes in the lasing and the tuning cavities, and can be tuned by applying an index perturbation in the tuning cavity. The spectral coverage of the coupled cavity devices has been increased by reducing the repetition frequency of the Vernier resonance and increasing the ratio of the free spectral ranges of the two cavities. A continuous tuning of the coupled cavity modes has been realized through an index perturbation of the lasing cavity itself by using wide electrical heating pulses at the tuning cavity and exploiting thermal conduction through the monolithic substrate. Single mode emission and discrete frequency tuning over a bandwidth of 100 GHz and a quasi-continuous frequency coverage of 7 GHz at 2.25 THz is demonstrated. An improvement in the side mode suppression and a continuous spectral coverage of 3 GHz is achieved without any degradation of output power by integrating a π-phase shifted photonic lattice in the laser cavity.

  13. Extensive excitatory network interactions shape temporal processing of communication signals in a model sensory system.

    PubMed

    Ma, Xiaofeng; Kohashi, Tsunehiko; Carlson, Bruce A

    2013-07-01

    Many sensory brain regions are characterized by extensive local network interactions. However, we know relatively little about the contribution of this microcircuitry to sensory coding. Detailed analyses of neuronal microcircuitry are usually performed in vitro, whereas sensory processing is typically studied by recording from individual neurons in vivo. The electrosensory pathway of mormyrid fish provides a unique opportunity to link in vitro studies of synaptic physiology with in vivo studies of sensory processing. These fish communicate by actively varying the intervals between pulses of electricity. Within the midbrain posterior exterolateral nucleus (ELp), the temporal filtering of afferent spike trains establishes interval tuning by single neurons. We characterized pairwise neuronal connectivity among ELp neurons with dual whole cell recording in an in vitro whole brain preparation. We found a densely connected network in which single neurons influenced the responses of other neurons throughout the network. Similarly tuned neurons were more likely to share an excitatory synaptic connection than differently tuned neurons, and synaptic connections between similarly tuned neurons were stronger than connections between differently tuned neurons. We propose a general model for excitatory network interactions in which strong excitatory connections both reinforce and adjust tuning and weak excitatory connections make smaller modifications to tuning. The diversity of interval tuning observed among this population of neurons can be explained, in part, by each individual neuron receiving a different complement of local excitatory inputs.

  14. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, guided by a combination of short (<10 days) and longer (1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify model feature sensitivity to parameter changes. The CAPT tests have been found effective in numerous previous studies at identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context, along with assessment in greater detail once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.
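    The survey-then-verify workflow can be caricatured with a scalar stand-in for the model: score a cheap, noisy short-hindcast ensemble to narrow a coarse parameter sweep, then confirm the shortlist with the accurate (expensive) evaluation. The skill functions, parameter range, and all names below are hypothetical.

```python
import random

random.seed(2)
# Toy "survey then verify": noisy short hindcasts narrow a coarse sweep;
# a single accurate long run confirms the shortlist. Entirely illustrative.
def short_hindcast(p):                 # cheap but noisy skill estimate
    return (p - 0.7) ** 2 + random.gauss(0, 0.001)

def long_run(p):                       # expensive, accurate skill estimate
    return (p - 0.7) ** 2

candidates = [i / 10 for i in range(11)]             # coarse PPE-style sweep
scores = {p: short_hindcast(p) for p in candidates}
shortlist = sorted(candidates, key=scores.get)[:3]   # survey + narrow
best = min(shortlist, key=long_run)                  # verify in "climate" mode
print(best)
```

The point of the caricature: eleven cheap evaluations plus three expensive ones locate the optimum that a brute-force sweep of expensive runs would find, which is the turnaround saving the abstract describes.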

  15. Inactivation of the infragranular striate cortex broadens orientation tuning of supragranular visual neurons in the cat.

    PubMed

    Allison, J D; Bonds, A B

    1994-01-01

    Intracortical inhibition is believed to enhance the orientation tuning of striate cortical neurons, but the origin of this inhibition is unclear. To examine the possible influence of ascending inhibitory projections from the infragranular layers of striate cortex on the orientation selectivity of neurons in the supragranular layers, we measured the spatiotemporal response properties of 32 supragranular neurons in the cat before, during, and after neural activity in the infragranular layers beneath the recorded cells was inactivated by iontophoretic administration of GABA. During GABA iontophoresis, the orientation tuning bandwidth of 15 (46.9%) supragranular neurons broadened as a result of increases in response amplitude to stimuli oriented about +/- 20 degrees away from the preferred stimulus angle. The mean (+/- SD) baseline orientation tuning bandwidth (half width at half height) of these neurons was 13.08 +/- 2.3 degrees. Their mean tuning bandwidth during inactivation of the infragranular layers increased to 19.59 +/- 2.54 degrees, an increase of 49.7%. The mean percentage increase in orientation tuning bandwidth of the individual neurons was 47.4%. Four neurons exhibited symmetrical changes in their orientation tuning functions, while 11 neurons displayed asymmetrical changes. The change in form of the orientation tuning functions appeared to depend on the relative vertical alignment of the recorded neuron and the infragranular region of inactivation. Neurons located in close vertical register with the inactivated infragranular tissue exhibited symmetric changes in their orientation tuning functions. The neurons exhibiting asymmetric changes in their orientation tuning functions were located just outside the vertical register. Eight of these 11 neurons also demonstrated a mean shift of 6.67 +/- 5.77 degrees in their preferred stimulus orientation. The magnitude of change in the orientation tuning functions increased as the delivery of GABA was prolonged. Responses returned to normal approximately 30 min after the delivery of GABA was discontinued. We conclude that inhibitory projections from neurons within the infragranular layers of striate cortex in cats can enhance the orientation selectivity of supragranular striate cortical neurons.

  16. Radio tuning effects on visual and driving performance measures : simulator and test track studies.

    DOT National Transportation Integrated Search

    2013-05-01

    Existing driver distraction guidelines for visual-manual device interface operation specify traditional : manual radio tuning as a reference task. This project evaluated the radio tuning reference task through two activities. : The first activity con...

  17. Improved model reduction and tuning of fractional-order PI(λ)D(μ) controllers for analytical rule extraction with genetic programming.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava

    2012-03-01

    Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
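    The time-domain tuning step that the GA performs can be sketched with a crude random search in its place: simulate a hypothetical first-order plant under PID control and accept gain perturbations that lower a weighted error-plus-effort cost. This is a stand-in for the paper's GA/GP machinery, not a reproduction of it.

```python
import random

random.seed(7)
# Random-search stand-in for GA-based time-domain PID tuning.
# Plant (assumed): y' = -y + u; cost = time-weighted |error| + control effort.
def cost(kp, ki, kd):
    y = i = e_prev = 0.0
    dt, J = 0.01, 0.0
    for k in range(1000):                # 10 s step-response simulation
        e = 1.0 - y                      # unit step set point
        i += e * dt
        d = (e - e_prev) / dt
        u = kp * e + ki * i + kd * d
        e_prev = e
        y += (-y + u) * dt
        J += (abs(e) * k * dt + 0.001 * u * u) * dt   # ITAE + effort penalty
    return J

best = (1.0, 0.1, 0.0)                   # arbitrary starting gains
best_j = cost(*best)
for _ in range(300):                     # accept only improving perturbations
    cand = tuple(max(0.0, g + random.gauss(0, 0.3)) for g in best)
    j = cost(*cand)
    if j < best_j:
        best, best_j = cand, j
print(best_j < cost(1.0, 0.1, 0.0))
```

A GA replaces the single random walk with a population, crossover, and selection, and the paper's GP step then regresses the tuned gains against plant parameters to yield closed-form rules; the cost function is the shared ingredient.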

  18. Economy of scale: a motion sensor with variable speed tuning.

    PubMed

    Perrone, John A

    2005-01-26

    We have previously presented a model of how neurons in the primate middle temporal (MT/V5) area can develop selectivity for image speed by using common properties of the V1 neurons that precede them in the visual motion pathway (J. A. Perrone & A. Thiele, 2002). The motion sensor developed in this model is based on two broad classes of V1 complex neurons (sustained and transient). The S-type neuron has low-pass temporal frequency tuning, p(omega), and the T-type has band-pass temporal frequency tuning, m(omega). The outputs from the S and T neurons are combined in a special way (weighted intersection mechanism [WIM]) to generate a sensor tuned to a particular speed, v. Here I go on to show that if the S and T temporal frequency tuning functions have a particular form (i.e., p(omega)/m(omega) = k/omega), then a motion sensor with variable speed tuning can be generated from just two V1 neurons. A simple scaling of the S- or T-type neuron output before it is incorporated into the WIM model produces a motion sensor that can be tuned to a wide continuous range of optimal speeds.
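    The stated ratio condition p(omega)/m(omega) = k/omega implies that the T:S response ratio recovers the stimulus temporal frequency, and hence its speed once the spatial frequency is known. The sketch below uses hypothetical tuning-curve shapes chosen to satisfy that condition; it is one illustrative reading of the construction, not Perrone's model code.

```python
# Hypothetical tuning curves chosen so that p(w)/m(w) = k/w, the condition
# stated in the abstract; the T:S ratio then recovers temporal frequency.
k = 2.0
def p(w):                     # low-pass "sustained" (S) tuning, shape assumed
    return 1.0 / (1.0 + (w / 8.0) ** 2)

def m(w):                     # band-pass "transient" (T) tuning: m = w*p/k
    return w * p(w) / k

v = 4.0                       # stimulus speed (deg/s)
for fs in (0.5, 1.0, 2.0):    # spatial frequencies (cycles/deg)
    w = v * fs                # temporal frequency of the drifting stimulus
    v_hat = k * m(w) / (p(w) * fs)   # T:S ratio rescaled by k and fs
    print(round(v_hat, 2))    # recovers the true speed at every fs
```

Because the ratio cancels the shared low-pass shape, scaling one output before the combination shifts the recovered optimum, which is the retuning mechanism the abstract describes.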

  19. An optimal tuning strategy for tidal turbines

    PubMed Central

    2016-01-01

    Tuning wind and tidal turbines is critical to maximizing their power output. Adopting a wind turbine tuning strategy of maximizing the output at any given time is shown to be an extremely poor strategy for large arrays of tidal turbines in channels. This ‘impatient-tuning strategy’ results in far lower power output, much higher structural loads and greater environmental impacts due to flow reduction than an existing ‘patient-tuning strategy’ which maximizes the power output averaged over the tidal cycle. This paper presents a ‘smart patient tuning strategy’, which can increase array output by up to 35% over the existing strategy. This smart strategy forgoes some power generation early in the half tidal cycle in order to allow stronger flows to develop later in the cycle. It extracts enough power from these stronger flows to produce more power from the cycle as a whole than the existing strategy. Surprisingly, the smart strategy can often extract more power without increasing maximum structural loads on the turbines, while also maintaining stronger flows along the channel. This paper also shows that, counterintuitively, for some tuning strategies imposing a cap on turbine power output to limit loads can increase a turbine’s average power output. PMID:27956870

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Kewisch, J.; Huang, H.

    At RHIC, the spin polarization is preserved with a pair of Siberian snakes on opposite sides of each ring. A polarized proton beam with finite spin tune spread might cross spin resonances multiple times in two cases: when the beam goes through strong intrinsic spin resonances during acceleration, and when the spin flipper's frequency is swept across the spin tune to flip the direction of spin polarization. The consequence in both cases is loss of spin polarization. Therefore, a scheme of minimizing the spin tune spread by matching the dispersion primes at the two snakes was introduced, based on the fact that the spin tune spread is proportional to the difference of the dispersion primes at the two snakes. The scheme was implemented at fixed energies for the spin flipper study and during beam acceleration for better spin polarization transmission efficiency. The effect of minimizing the spin tune spread by matching the dispersion primes was observed and confirmed experimentally. The principle of minimizing the spin tune spread by matching the dispersion primes, the impact on the beam optics, and the effect of a narrower spin tune spread are presented in this report.

  1. An optimal tuning strategy for tidal turbines

    NASA Astrophysics Data System (ADS)

    Vennell, Ross

    2016-11-01

    Tuning wind and tidal turbines is critical to maximizing their power output. Adopting a wind turbine tuning strategy of maximizing the output at any given time is shown to be an extremely poor strategy for large arrays of tidal turbines in channels. This `impatient-tuning strategy' results in far lower power output, much higher structural loads and greater environmental impacts due to flow reduction than an existing `patient-tuning strategy' which maximizes the power output averaged over the tidal cycle. This paper presents a `smart patient tuning strategy', which can increase array output by up to 35% over the existing strategy. This smart strategy forgoes some power generation early in the half tidal cycle in order to allow stronger flows to develop later in the cycle. It extracts enough power from these stronger flows to produce more power from the cycle as a whole than the existing strategy. Surprisingly, the smart strategy can often extract more power without increasing maximum structural loads on the turbines, while also maintaining stronger flows along the channel. This paper also shows that, counterintuitively, for some tuning strategies imposing a cap on turbine power output to limit loads can increase a turbine's average power output.

  2. Wide range optofluidically tunable multimode interference fiber laser

    NASA Astrophysics Data System (ADS)

    Antonio-Lopez, J. E.; Sanchez-Mondragon, J. J.; LiKamWa, P.; May-Arrioja, D. A.

    2014-08-01

    An optofluidically tunable fiber laser based on multimode interference (MMI) effects with a wide tuning range is proposed and demonstrated. The tuning mechanism is based on an MMI fiber filter fabricated using a special fiber known as no-core fiber, which is a multimode fiber (MMF) without cladding. When the MMI filter is covered by liquid, the optical properties of the no-core fiber are modified, which allows us to tune the peak wavelength response of the MMI filter. Rather than applying the liquid to the entire no-core fiber, we change the liquid level along the no-core fiber, which provides a highly linear tuning response. In addition, by selecting an adequate refractive index for the liquid we can also choose the tuning range. We demonstrate the versatility of the optofluidically tunable MMI filter by wavelength tuning two different gain media, erbium-doped fiber and a semiconductor optical amplifier, achieving tuning ranges of 55 and 90 nm, respectively. In both cases, we achieve side-mode suppression ratios (SMSR) better than 50 dB with output power variations of less than 0.76 dB over the whole tuning range.

  3. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Gottwald, James A.; Bliss, Donald B.

    1990-01-01

    The focus is on a noise control method in which aircraft fuselages are lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. This interior noise reduction concept, called alternate resonance tuning (ART), is described both theoretically and experimentally. Problems dealing with tuning single-paneled wall structures for optimum noise reduction using the ART methodology are presented, and three theoretical problems are analyzed. The first analysis is a three-dimensional, full acoustic solution for tuning a panel wall composed of repeating sections with four different panel tunings within each section, where the panels are modeled as idealized spring-mass-damper systems. The second analysis is a two-dimensional, full acoustic solution for a panel geometry influenced by a propagating external pressure field, such as that associated with propeller passage by a fuselage. To reduce the analysis complexity, idealized spring-mass-damper panels are again employed. The final theoretical analysis presents the general four-panel problem with real panel sections, where the effect of higher structural modes is discussed. Results from an experimental program highlight real applications of the ART concept and show the effectiveness of the tuning on real structures.

  4. An optimal tuning strategy for tidal turbines.

    PubMed

    Vennell, Ross

    2016-11-01

    Tuning wind and tidal turbines is critical to maximizing their power output. Adopting a wind turbine tuning strategy of maximizing the output at any given time is shown to be an extremely poor strategy for large arrays of tidal turbines in channels. This 'impatient-tuning strategy' results in far lower power output, much higher structural loads and greater environmental impacts due to flow reduction than an existing 'patient-tuning strategy' which maximizes the power output averaged over the tidal cycle. This paper presents a 'smart patient tuning strategy', which can increase array output by up to 35% over the existing strategy. This smart strategy forgoes some power generation early in the half tidal cycle in order to allow stronger flows to develop later in the cycle. It extracts enough power from these stronger flows to produce more power from the cycle as a whole than the existing strategy. Surprisingly, the smart strategy can often extract more power without increasing maximum structural loads on the turbines, while also maintaining stronger flows along the channel. This paper also shows that, counterintuitively, for some tuning strategies imposing a cap on turbine power output to limit loads can increase a turbine's average power output.
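
    The core trade-off (extracting harder slows the very flow that carries the power) can be illustrated with a toy single-channel model, not the paper's model: a momentum balance with quadratic drag, where the turbine tuning k is an added drag coefficient and cycle-averaged power is swept over constant tunings. All parameters are illustrative and nondimensional.

```python
import math
import numpy as np

# Toy channel model (NOT the paper's model): along-channel momentum balance
# du/dt = F*cos(w*t) - (c0 + k)*u*|u|, with background drag c0 and turbine
# tuning k (added drag); P = k*u^2*|u| is a proxy for extracted power.

def cycle_avg_power(k, F=1.0, w=1.0, c0=0.1, dt=2e-3, n_cycles=10):
    period = 2.0 * math.pi / w
    t_end = n_cycles * period
    u, t = 0.0, 0.0
    power_sum, count = 0.0, 0
    while t < t_end:
        u += dt * (F * math.cos(w * t) - (c0 + k) * u * abs(u))  # Euler step
        t += dt
        if t >= t_end - period:        # average over the final cycle only
            power_sum += k * u * u * abs(u)
            count += 1
    return power_sum / count

ks = np.logspace(-1.5, 2.5, 25)        # sweep of constant ("patient") tunings
powers = np.array([cycle_avg_power(k) for k in ks])
k_best = ks[int(np.argmax(powers))]    # interior optimum
```

    The sweep yields an interior maximum: a timid tuning wastes the flow, while an over-aggressive tuning chokes it, which is why maximizing the cycle-averaged output rather than the instantaneous output matters.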

  5. Facile formation of biomimetic color-tuned superhydrophobic magnesium alloy with corrosion resistance.

    PubMed

    Ishizaki, Takahiro; Sakamoto, Michiru

    2011-03-15

    A color-tuned magnesium alloy with anticorrosive properties and damping capacity was created by means of a simple and inexpensive method. Vertically self-aligned nano- and microsheets were formed on magnesium alloy AZ31 by a chemical-free immersion process in ultrapure water at a temperature of 120 °C, resulting in color expression. The color changed from silver with metallic luster to specific colors such as orange, green, and orchid, depending on the immersion time. The color-tuned magnesium alloy showed anticorrosive performance and damping capacity. In addition, the colored surface with minute surface textures was modified with n-octadecyltrimethoxysilane (ODS), leading to the formation of color-tuned superhydrophobic surfaces. The corrosion resistance of the color-tuned superhydrophobic magnesium alloy was investigated using electrochemical potentiodynamic measurements. Moreover, the color-tuned superhydrophobic magnesium alloy showed high hydrophobicity not just for pure water but also for corrosive liquids such as acidic, basic, and some aqueous salt solutions. In addition, the American Society for Testing and Materials (ASTM) standard D 3359-02 cross-cut tape test was performed to investigate the adhesion of the color-tuned superhydrophobic film to the magnesium alloy surface.

  6. McMillan Lens in a System with Space Charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobach, I.; Nagaitsev, S.; Stern, E.

    Space charge (SC) in a circulating beam in a ring produces both a betatron tune shift and a betatron tune spread. These effects can move some particles onto a machine resonance, making them unstable. Linear beam-optics elements cannot reduce the tune spread induced by SC because of its intrinsically nonlinear nature. We investigate the possibility of mitigating it with a thin McMillan lens providing a nonlinear, axially symmetric kick that is qualitatively opposite to the accumulated SC kick. Experimentally, the proposed concept can be tested in Fermilab's IOTA ring. A thin McMillan lens can be implemented by a short (70 cm) insertion of an electron beam with a specifically chosen transverse density distribution. In this article, to see whether McMillan lenses reduce the tune spread induced by SC, we perform several simulations with the particle tracking code Synergia. We choose beam and lattice parameters such that the tune spread is roughly 0.5 and a beam instability due to the half-integer resonance (0.5) is observed. We then try to reduce emittance growth by shifting the betatron tunes with quadrupole adjustments and reducing the tune spread with McMillan lenses.

  7. Research for Future Training Modeling and Simulation Strategies

    DTIC Science & Technology

    2011-09-01

    it developed an “ecosystem” for the content industry—first for iTunes and now in the iPad for publishers and gamers. The iTunes Store that Apple...launched in 2003 provides an excellent analogy to training users. Initially, users could purchase 200,000 iTunes items. Today, the store has over...its iPod and iTune Store has fundamentally changed the music industry and the way the end users expect to buy things. iPod owners used to buy albums

  8. Self tuning control of wind-diesel power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mufti, M.D.; Balasubramanian, R.; Tripathy, S.C.

    1995-12-31

    This paper proposes effective self-tuning control strategies for isolated wind-diesel power generation systems. Detailed modeling and studies of both single-input single-output (SISO) and multi-input multi-output (MIMO) self-tuning regulators, applied to a typical system, are reported. Further, the effect of introducing a superconducting magnetic energy storage (SMES) unit on the system performance has been investigated. The MIMO self-tuning regulator, controlling the hybrid system and the SMES in a coordinated manner, exhibits the best performance.
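
    The SISO self-tuning idea can be sketched minimally (this is a generic indirect self-tuning regulator, not the paper's MIMO design): recursive least squares identifies a first-order plant online, and a certainty-equivalence law uses the running parameter estimates. The plant coefficients, setpoint, and dither level below are illustrative assumptions.

```python
import numpy as np

# Indirect self-tuning regulator sketch: estimate the first-order plant
# y[k+1] = a*y[k] + b*u[k] by recursive least squares (RLS), then apply the
# certainty-equivalence law u[k] = (r - a_hat*y[k]) / b_hat.

a_true, b_true = 0.9, 0.5          # "unknown" plant (illustrative)
r = 1.0                            # setpoint
theta = np.array([0.0, 1.0])       # [a_hat, b_hat], crude initial guess
P = np.eye(2) * 100.0              # RLS covariance
rng = np.random.default_rng(0)

y = 0.0
for k in range(200):
    a_hat, b_hat = theta
    u = (r - a_hat * y) / b_hat + 0.01 * rng.standard_normal()  # small dither
    y_next = a_true * y + b_true * u                            # plant step
    phi = np.array([y, u])                                      # regressor
    # RLS update of the parameter estimates
    denom = 1.0 + phi @ P @ phi
    K = P @ phi / denom
    theta = theta + K * (y_next - phi @ theta)
    P = P - np.outer(K, phi @ P)
    y = y_next
```

    The dither keeps the regressor persistently exciting so the closed-loop estimates converge; without it, a converged regulator can stop learning.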

  9. Study of the Power Supply Ripple Effect on the Dynamics at SPEAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terebilo, A.; Pellegrini, C.; /UCLA

    For long term stability analysis time variation of tunes is important. We have proposed and tested a technique for measuring the magnitude of this variation. This was made possible by using tune extraction algorithms that require small number of turns thus giving an instantaneous tune of the machine. In this paper we demonstrate the measured effect of the tune modulation with 60 Hz power supplies ripple, power line interference from SLAC linac operating at 30 Hz repetition rate, and nonperiodic variation.
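
    A few-turns tune estimate of the kind this measurement relies on can be sketched as a windowed FFT plus interpolation of the spectral peak, which resolves the tune well below the 1/N bin spacing. The tune value, phase, and turn count below are illustrative; this is a generic method, not the paper's specific algorithm.

```python
import numpy as np

# Few-turns tune estimate: FFT of Hann-windowed turn-by-turn data, then
# parabolic interpolation of the log-magnitude around the peak bin.

def estimate_tune(x):
    n = len(x)
    w = np.hanning(n)
    mag = np.abs(np.fft.rfft(x * w))
    k = int(np.argmax(mag[1:])) + 1            # skip the DC bin
    # Parabolic interpolation of log-magnitude around the peak bin
    a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) / n

turns = np.arange(64)
nu_true = 0.274                                # assumed fractional tune
x = np.cos(2 * np.pi * nu_true * turns + 0.3)  # ideal turn-by-turn signal
nu_est = estimate_tune(x)
```

    With only 64 turns the interpolated estimate is far better than the raw 1/64 bin resolution, which is what allows tracking slow (60 Hz scale) tune modulation turn-window by turn-window.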

  10. A novel scaling law relating the geometrical dimensions of a photocathode radio frequency gun to its radio frequency properties

    NASA Astrophysics Data System (ADS)

    Lal, Shankar; Pant, K. K.; Krishnagopal, S.

    2011-12-01

    Developing a photocathode RF gun with the desired RF properties of the π-mode, such as field balance (eb) ≈ 1, resonant frequency fπ = 2856 MHz, and waveguide-to-cavity coupling coefficient βπ ≈ 1, requires precise tuning of the resonant frequencies of the independent full and half cells (ff and fh), and of the waveguide-to-full-cell coupling coefficient (βf). While contemporary electromagnetic codes and precision machining capability have made it possible to design and machine the independent cells of a photocathode RF gun for the desired RF properties, thereby eliminating the need for tuning, access to such computational resources and machining quality is not very widespread. Therefore, many such structures require tuning after machining by employing conventional tuning techniques that are iterative in nature. Any procedure that improves understanding of the tuning process, and consequently reduces the number of iterations and the associated risks in tuning a photocathode gun, would therefore be useful. In this paper, we discuss a method devised by us to tune a photocathode RF gun for the desired RF properties under operating conditions. We develop and employ a simple scaling law that accounts for the interdependence between the frequencies of the independent cells and the waveguide-to-cavity coupling coefficient, and for the effect of brazing clearance when joining the two cells. The method has been employed to successfully develop multiple 1.6-cell BNL/SLAC/UCLA-type S-band photocathode RF guns with the desired RF properties, without the need to tune them by a tiresome cut-and-measure process. Our analysis also provides physical insight into how the geometrical dimensions affect the RF properties of the photocathode RF gun.
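
    How the independent-cell frequencies couple into the π-mode field balance can be illustrated with the standard two-coupled-resonator circuit model. This is a generic sketch, not the paper's scaling law; the coupling constant is an assumed value chosen to give a 0/π mode separation of a few MHz, typical of BNL-type guns.

```python
import numpy as np

# Two-cell circuit model: the 0- and pi-modes are eigenvectors of
# M = [[f_h^2, -kappa*f_h*f_f], [-kappa*f_h*f_f, f_f^2]], where f_h, f_f
# are the independent half/full-cell frequencies (MHz) and kappa the
# cell-to-cell coupling. The eigenvector component ratio is the field balance.

def field_balance(f_h, f_f, kappa=0.0012):
    m = np.array([[f_h**2, -kappa * f_h * f_f],
                  [-kappa * f_h * f_f, f_f**2]])
    vals, vecs = np.linalg.eigh(m)
    v = vecs[:, int(np.argmax(vals))]      # pi-mode = higher frequency here
    return abs(v[0] / v[1]), float(np.sqrt(vals.max()))

# Cells tuned to the same frequency -> balanced pi-mode fields (eb ~ 1)
eb_tuned, f_pi = field_balance(2856.0, 2856.0)

# A 1 MHz inter-cell detuning visibly unbalances the pi-mode fields
eb_detuned, _ = field_balance(2856.0, 2857.0)
```

    Because the mode separation (set by κ) is only a few MHz, a ~1 MHz cell detuning mixes the modes strongly and shifts the field balance well away from unity, which is why the independent-cell frequencies must be tuned so precisely.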

  11. Cochlear microphonic broad tuning curves

    NASA Astrophysics Data System (ADS)

    Ayat, Mohammad; Teal, Paul D.; Searchfield, Grant D.; Razali, Najwani

    2015-12-01

    It is known that the cochlear microphonic voltage exhibits much broader tuning than does the basilar membrane motion. The most commonly used explanation is that when an electrode is inserted at a particular point inside the scala media, the microphonic potentials of neighbouring hair cells have different phases, leading to cancellation at the electrode's location. In situ recording of functioning outer hair cells (OHCs) to investigate this hypothesis is exceptionally difficult. Therefore, to investigate the discrepancy between the tuning curves of the basilar membrane and those of the cochlear microphonic, and the effect of phase cancellation of adjacent hair cells on the broadness of the cochlear microphonic tuning curves, we use an electromechanical model of the cochlea to devise an experiment. We explore the effect of adjacent hair cells (i.e., longitudinal phase cancellation) on the broadness of the cochlear microphonic tuning curves in different locations. The results of the experiment indicate that active longitudinal coupling (i.e., coupling with active adjacent outer hair cells) only slightly changes the broadness of the CM tuning curves. The results also demonstrate that there is a π phase difference between the potentials produced by the hair bundle and the soma near the place associated with the characteristic frequency based on place-frequency maps (i.e., the best place). We suggest that transversal phase cancellation (caused by the phase difference between the hair bundle and the soma) plays a far more important role than longitudinal phase cancellation in the broadness of the cochlear microphonic tuning curves. Moreover, increasing the modelled longitudinal resistance results in cochlear microphonic curves that exhibit sharper tuning. The results of the simulations suggest that the passive network of the organ of Corti determines the phase difference between the hair bundle and soma, and hence determines the sharpness of the cochlear microphonic tuning curves.

  12. Pruning or tuning? Maturational profiles of face specialization during typical development.

    PubMed

    Zhu, Xun; Bhatt, Ramesh S; Joseph, Jane E

    2016-06-01

    Face processing undergoes significant developmental change with age. Two kinds of developmental changes in face specialization were examined in this study: specialized maturation, or the continued tuning of a region to faces with little change in the tuning to other categories; and competitive interactions, or continued tuning to faces accompanied by decreased tuning to nonfaces (i.e., pruning). Using fMRI, in regions where adults showed a face preference, face- and object-specialization indices were computed for younger children (5-8 years), older children (9-12 years) and adults (18-45 years). The specialization index was scaled to each subject's maximum activation magnitude in each region to control for overall age differences in activation level. Although no regions showed significant face specialization in the younger age group, regions strongly associated with social cognition (e.g., right posterior superior temporal sulcus, right inferior orbital cortex) showed specialized maturation, in which tuning to faces increased with age but there was no pruning of nonface responses. Conversely, regions associated with more basic perceptual processing or motor mirroring (right middle temporal cortex, right inferior occipital cortex, right inferior frontal opercular cortex) showed competitive interactions, in which tuning to faces was accompanied by pruning of object responses with age. The overall findings suggest that cortical maturation for face processing is region-specific and involves both increased tuning to faces and diminished response to nonfaces. Regions that show competitive interactions likely support a more generalized function that is co-opted for face processing with development, whereas regions that show specialized maturation increase their tuning to faces, potentially in an activity-dependent, experience-driven manner.

  13. A parametric model and estimation techniques for the inharmonicity and tuning of the piano.

    PubMed

    Rigaud, François; David, Bertrand; Daudet, Laurent

    2013-05-01

    Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. This paper proposes jointly modeling the inharmonicity and tuning of pianos over the whole compass. While using a small number of parameters, these models are able to reflect both the specifics of instrument design and the tuner's practice. An estimation algorithm is derived that can run either on a set of isolated note recordings or on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting some of the tuner's choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers.
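
    The mechanism behind octave stretching can be sketched from Fletcher's stiff-string formula for partial frequencies. The inharmonicity coefficient and fundamental below are illustrative values, not estimates from the paper.

```python
import math

# Partial frequencies of a stiff piano string (Fletcher's formula):
# f_n = n * f0 * sqrt(1 + B * n^2), with B the inharmonicity coefficient.

def partial(f0, n, B):
    return n * f0 * math.sqrt(1 + B * n * n)

B = 4e-4                         # illustrative mid-range inharmonicity
f0 = 261.63                      # C4 fundamental (Hz), assumed in tune

# The 2nd partial lies sharp of the exact octave 2*f0:
f2 = partial(f0, 2, B)

# "Octave stretching": tune the upper C so its 1st partial matches the
# lower C's 2nd partial, i.e. slightly above an exact 2:1 ratio.
f0_upper = f2 / math.sqrt(1 + B)          # solves partial(f0_upper, 1, B) == f2
stretch_cents = 1200 * math.log2(f0_upper / (2 * f0))
```

    Matching the beating partials rather than the mathematical octave yields a stretch of roughly a cent per octave for this B, and since B varies across the compass and between instruments, a whole-compass parametric model of (B, tuning) is exactly what the estimation algorithm above targets.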

  14. Audience-tuning effects on memory: the role of shared reality.

    PubMed

    Echterhoff, Gerald; Higgins, E Tory; Groll, Stephan

    2005-09-01

    After tuning to an audience, communicators' own memories for the topic often reflect the biased view expressed in their messages. Three studies examined explanations for this bias. Memories for a target person were biased when feedback signaled the audience's successful identification of the target but not after failed identification (Experiment 1). Whereas communicators tuning to an in-group audience exhibited the bias, communicators tuning to an out-group audience did not (Experiment 2). These differences did not depend on communicators' mood but were mediated by communicators' trust in their audience's judgment about other people (Experiments 2 and 3). Message and memory were more closely associated for high than for low trusters. Apparently, audience-tuning effects depend on the communicators' experience of a shared reality.

  15. Continuous tuning of two-section, single-mode terahertz quantum-cascade lasers by fiber-coupled, near-infrared illumination

    NASA Astrophysics Data System (ADS)

    Hempel, Martin; Röben, Benjamin; Niehle, Michael; Schrottke, Lutz; Trampert, Achim; Grahn, Holger T.

    2017-05-01

    The dynamical tuning due to rear facet illumination of single-mode, terahertz (THz) quantum-cascade lasers (QCLs) which employ distributed feedback gratings are compared to the tuning of single-mode QCLs based on two-section cavities. The THz QCLs under investigation emit in the range of 3 to 4.7 THz. The tuning is achieved by illuminating the rear facet of the QCL with a fiber-coupled light source emitting at 777 nm. Tuning ranges of 5.0 and 11.9 GHz under continuous-wave and pulsed operation, respectively, are demonstrated for a single-mode, two-section cavity QCL emitting at about 3.1 THz, which exhibits a side-mode suppression ratio better than -25 dB.

  16. Hardware platforms for MEMS gyroscope tuning based on evolutionary computation using open-loop and closed-loop frequency response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning. The tuning method was tested for the second generation JPL/Boeing Post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.
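
    The evolutionary-tuning loop can be sketched with a generic (1+λ) evolution strategy; this is not JPL's actual algorithm, and the quadratic "plant" below is an assumed stand-in for the measured frequency response of a real device.

```python
import numpy as np

# (1+lambda) evolution strategy searching for bias voltages that minimize
# the split between the two resonant mode frequencies of a simulated gyro.

rng = np.random.default_rng(1)
v_opt = np.array([2.3, -1.1])          # "unknown" optimal biases (assumed)

def freq_split(v):
    """Toy model: mode split (Hz) as a quadratic bowl with a 0.5 Hz floor."""
    return 0.5 + 40.0 * float(np.sum((v - v_opt) ** 2))

parent = np.zeros(2)                   # initial bias voltages
best = freq_split(parent)
sigma = 0.5                            # mutation step size
for gen in range(100):
    children = parent + sigma * rng.standard_normal((8, 2))
    fits = np.array([freq_split(c) for c in children])
    i = int(np.argmin(fits))
    if fits[i] < best:                 # elitist selection: keep any improver
        parent, best = children[i], fits[i]
    sigma *= 0.97                      # slowly anneal the mutation size
```

    Because each fitness evaluation only needs a measured frequency response, the same loop maps directly onto hardware-in-the-loop tuning, where the FPGA platform supplies the open- or closed-loop response for each candidate bias setting.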

  17. Even subtle cultural differences affect face tuning.

    PubMed

    Pavlova, Marina A; Heiz, Julie; Sokolov, Alexander N; Fallgatter, Andreas J; Barisnikov, Koviljka

    2018-01-01

    Culture shapes social cognition in many ways, yet its impact on face tuning remains largely unclear. Here, typically developing females and males from the French-speaking part of Switzerland were presented with a set of Arcimboldo-like Face-n-Food images composed of food ingredients and resembling a face to different degrees. The outcome was compared with previous findings obtained in young adults from south-west Germany. In that study, males exhibited higher thresholds for face tuning on the Face-n-Food task than females. In the Swiss participants, no gender differences in face tuning were found. Strikingly, males from the French-speaking part of Switzerland possess higher sensitivity to faces than their German peers, whereas no difference in face tuning occurs between females. The outcome indicates that even relatively subtle cultural differences, as well as culture-by-gender interactions, can modulate social cognition. Clarifying the nature of cultural impact on face tuning, and on social cognition at large, is of substantial value for understanding a wide range of neuropsychiatric and neurodevelopmental conditions.

  18. Effects of timbre and tempo change on memory for music.

    PubMed

    Halpern, Andrea R; Müllensiefen, Daniel

    2008-09-01

    We investigated the effects of different encoding tasks and of manipulations of two supposedly surface parameters of music on implicit and explicit memory for tunes. In two experiments, participants were first asked either to categorize the instrument or to judge the familiarity of 40 unfamiliar short tunes. Subsequently, participants were asked to give explicit and implicit memory ratings for a list of 80 tunes, which included the 40 previously heard. Half of the 40 previously heard tunes differed in timbre (Experiment 1) or tempo (Experiment 2) in comparison with the first exposure. A third experiment compared similarity ratings of the tunes that varied in timbre or tempo. Analysis of variance (ANOVA) results suggest, first, that the encoding task made no difference for either memory mode. Second, timbre and tempo change both impaired explicit memory, whereas tempo change additionally made implicit tune recognition worse. Results are discussed in the context of implicit memory for nonsemantic materials and the possible differences in timbre and tempo in musical representations.

  19. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique which is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
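
    The residual-tuning idea can be sketched for a scalar filter (an illustrative innovation-matching scheme, not the WIRE filter itself): a consistent filter has innovation variance E[v²] = P_pred + R, so a running average of v² − P_pred estimates the measurement noise online, in parallel with the filter. The noise values, adaptation rate, and random-walk model below are assumptions.

```python
import numpy as np

# Scalar Kalman filter with residual (innovation-based) tuning of the
# measurement noise R, running in parallel with the filter itself.

rng = np.random.default_rng(42)
Q, R_true = 0.1, 4.0               # true process / measurement noise
R_hat, alpha = 1.0, 0.02           # deliberately wrong initial R; EMA rate

x_true, x_hat, P = 0.0, 0.0, 1.0
for _ in range(3000):
    x_true += np.sqrt(Q) * rng.standard_normal()           # random-walk truth
    z = x_true + np.sqrt(R_true) * rng.standard_normal()   # measurement
    P_pred = P + Q                                         # predict
    v = z - x_hat                                          # innovation
    # residual tuning: match observed innovation power to P_pred + R_hat
    R_hat = max(1e-3, (1 - alpha) * R_hat + alpha * (v * v - P_pred))
    K = P_pred / (P_pred + R_hat)                          # update, adapted R
    x_hat += K * v
    P = (1 - K) * P_pred
```

    If R_hat is wrong, the residual sequence is inconsistent with the filter's own covariance and the update pushes R_hat toward the value that whitens the residuals, which is the essence of the residual tuning method described above.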

  20. Precision corrections to fine tuning in SUSY

    DOE PAGES

    Buckley, Matthew R.; Monteux, Angelo; Shih, David

    2017-06-20

    Requiring that the contributions of supersymmetric particles to the Higgs mass are not highly tuned places upper limits on the masses of superpartners — in particular the higgsino, stop, and gluino. We revisit the details of the tuning calculation and introduce a number of improvements, including RGE resummation, two-loop effects, a proper treatment of UV vs. IR masses, and threshold corrections. This improved calculation more accurately connects the tuning measure with the physical masses of the superpartners at LHC-accessible energies. After these refinements, the tuning bound on the stop is now also sensitive to the masses of the 1st and 2nd generation squarks, which limits how far these can be decoupled in Effective SUSY scenarios. We find that, for a fixed level of tuning, our bounds can allow for heavier gluinos and stops than previously considered. Despite this, the natural region of supersymmetry is under pressure from the LHC constraints, with high messenger scales particularly disfavored.
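
    The measure underlying such bounds is, in its simplest form, the Barbieri-Giudice sensitivity of the Z mass to the fundamental parameters; the paper's refinements (RGE resummation, two-loop effects, thresholds) modify the coefficients but not this structure. A sketch of the tree-level relations:

```latex
% Barbieri-Giudice fine-tuning measure over fundamental parameters p_i:
\Delta = \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right|
% Tree-level EWSB condition at moderate-to-large \tan\beta:
\frac{m_Z^2}{2} \simeq -m_{H_u}^2 - |\mu|^2
% e.g. the higgsino contribution alone gives
\Delta_\mu \simeq \frac{2|\mu|^2}{m_Z^2}
\quad\Longrightarrow\quad
\mu \lesssim 290\,\mathrm{GeV}\,\sqrt{\Delta/20}
```

    The higgsino term enters the Z-mass relation directly at tree level, which is why its bound is the most robust; the stop and gluino bounds arise through loop corrections to m_Hu^2 and are the ones most affected by the paper's refinements.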
