Authoring of Adaptive Computer Assisted Assessment of Free-Text Answers
ERIC Educational Resources Information Center
Alfonseca, Enrique; Carro, Rosa M.; Freire, Manuel; Ortigosa, Alvaro; Perez, Diana; Rodriguez, Pilar
2005-01-01
Adaptation techniques can be applied not only to the multimedia contents or navigational possibilities of a course, but also to the assessment. In order to facilitate the authoring of adaptive free-text assessment and its integration within adaptive web-based courses, Adaptive Hypermedia techniques and Free-text Computer Assisted Assessment are…
Quality factors and local adaption (with applications in Eulerian hydrodynamics)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, W.P.
1992-06-17
Adapting the mesh to suit the solution is a technique commonly used for solving both ODEs and PDEs. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we will describe a new micro-unstructuring technique, LAM (for Local Adaptive Mesh), as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption scheme uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors, and the relation between these and truncation errors is discussed. The time advance scheme, the adaption strategy, and the effect of different adaption parameters on numerical solutions are described.
The adaptive observer [Liapunov synthesis, single-input single-output, and reduced observers]
NASA Technical Reports Server (NTRS)
Carroll, R. L.
1973-01-01
An adaptive means is described for determining the state of a linear time-invariant differential system having unknown parameters, generating the state from available measurements for use in systems in which the criteria defining acceptable state behavior mandate a control that depends upon unavailable measurements. A single-input single-output adaptive observer and the reduced adaptive observer are developed. The basic ideas behind both the adaptive observer and the nonadaptive observer are examined. The Liapunov synthesis technique is surveyed and applied to the adaptation algorithm for the adaptive observer.
ERIC Educational Resources Information Center
Hinton, Devon E.; Pich, Vuth; Hofmann, Stefan G.; Otto, Michael W.
2013-01-01
In this article we illustrate how we utilize acceptance and mindfulness techniques in our treatment (Culturally Adapted CBT, or CA-CBT) for traumatized refugees and ethnic minority populations. We present a Nodal Network Model (NNM) of Affect to explain the treatment's emphasis on body-centered mindfulness techniques and its focus on psychological…
Adaptive Educational Software by Applying Reinforcement Learning
ERIC Educational Resources Information Center
Bennane, Abdellah
2013-01-01
This paper addresses the introduction of intelligence into teaching software. In the software development process, learning techniques are used to adapt the teaching software to the characteristics of the student. Generally, artificial intelligence techniques such as reinforcement learning and Bayesian networks are used to adapt…
Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi
2017-06-01
Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation when it is compared with equivalent full coarse-grained and full atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by the Amdahl's Law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches where a reduction of degrees of freedom in a multi scale fashion occurs.
A Deformable Smart Skin for Continuous Sensing Based on Electrical Impedance Tomography.
Visentin, Francesco; Fiorini, Paolo; Suzuki, Kenji
2016-11-16
In this paper, we present a low-cost, adaptable, and flexible pressure sensor that can be applied as a smart skin over both stiff and deformable media. The sensor can be easily adapted for use in applications related to the fields of robotics, rehabilitation, or consumer electronic devices. In order to remove most of the stiff components that block the flexibility of the sensor, we based the sensing capability on the use of a tomographic technique known as Electrical Impedance Tomography. The technique allows the internal structure of the domain under study to be inferred by reconstructing its conductivity map. By applying the technique to a material that changes its resistivity according to applied forces, it is possible to identify these changes and then localise the area where the force was applied. We tested the system when applied to flat and curved surfaces. For all configurations, we evaluated the artificial skin's ability to detect forces applied over a single point, forces applied over multiple points, and changes in the underlying geometry. The results are all promising, and open the way for the application of such sensors in different robotic contexts where deformability is the key point.
Adaptive Control Using Residual Mode Filters Applied to Wind Turbines
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Balas, Mark J.
2011-01-01
Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a model reference direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will augment the adaptive controller using a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. We apply these theoretical results to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine that has minimum phase zeros.
Development of adaptive control applied to chaotic systems
NASA Astrophysics Data System (ADS)
Rhode, Martin Andreas
1997-12-01
Continuous-time derivative control and adaptive map-based recursive feedback control techniques are used to control chaos in a variety of systems and in situations that are of practical interest. The theoretical part of the research includes a review of the fundamental concepts of control theory in the context of their application to deterministic chaotic systems, the development of a new adaptive algorithm to identify the linear system properties necessary for control, and the extension of the recursive proportional feedback control technique, RPF, to high-dimensional systems. Chaos control was applied to models of a thermal pulsed combustor, electro-chemical dissolution, and the hyperchaotic Rossler system. Important implications for combustion engineering were suggested by successful control of the model of the thermal pulsed combustor. The system was automatically tracked while maintaining control into regions of parameter and state space where no stable attractors exist. In a simulation of the electrochemical dissolution system, application of derivative control to stabilize a steady state, and adaptive RPF to stabilize a period-one orbit, was demonstrated. The high-dimensional adaptive control algorithm was applied in a simulation using the Rossler hyperchaotic system, where a period-two orbit with two unstable directions was stabilized and tracked over a wide range of a system parameter. In the experimental part, the electrochemical system was studied in parameter space, by scanning the applied potential and the frequency of the rotating copper disk. The automated control algorithm is demonstrated to be effective when applied to stabilize a period-one orbit in the experiment. We show the necessity of small random perturbations applied to the system in order to both learn the dynamics and control the system at the same time. The simultaneous learning and control capability is shown to be an important part of the active feedback control.
Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel
2017-06-15
Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
Adaptive vibration control of structures under earthquakes
NASA Astrophysics Data System (ADS)
Lew, Jiann-Shiun; Juang, Jer-Nan; Loh, Chin-Hsiung
2017-04-01
techniques, for structural vibration suppression under earthquakes. Various control strategies have been developed to protect structures from natural hazards and improve the comfort of occupants in buildings. However, there has been little development of adaptive building control with the integration of real-time system identification and control design. Generalized predictive control, which combines the process of real-time system identification and the process of predictive control design, has received widespread acceptance and has been successfully applied to various test-beds. This paper presents a formulation of the predictive control scheme for adaptive vibration control of structures under earthquakes. Comprehensive simulations are performed to demonstrate and validate the proposed adaptive control technique for earthquake-induced vibration of a building.
Model-Biased, Data-Driven Adaptive Failure Prediction
NASA Technical Reports Server (NTRS)
Leen, Todd K.
2004-01-01
This final report, which contains a research summary and a viewgraph presentation, addresses clustering and data simulation techniques for failure prediction. The researchers applied their techniques to both helicopter gearbox anomaly detection and segmentation of Earth Observing System (EOS) satellite imagery.
NASA Astrophysics Data System (ADS)
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-06-01
A new anisotropic hr-adaptive mesh technique has been applied to the modelling of multiscale transport phenomena, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the advantage of the anisotropic hr-adaptive model is its ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) transport phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Ni, Ming-Jiu, E-mail: mjni@ucas.ac.cn
2014-01-01
The numerical simulation of magnetohydrodynamic (MHD) flows with complex boundaries has been a topic of great interest in the development of a fusion reactor blanket because of the difficulty of accurately simulating the Hartmann layers and side layers along arbitrary geometries. An adaptive version of a consistent and conservative scheme has been developed for simulating MHD flows. In addition, the present study forms the first attempt to apply the cut-cell approach to irregular wall-bounded MHD flows, which is more flexible and more conveniently implemented under an adaptive mesh refinement (AMR) technique. It employs a Volume-of-Fluid (VOF) approach to represent the fluid–conducting wall interface, which makes it possible to solve fluid–solid coupled magnetic problems, with emphasis on how the electric field solver is implemented when the conductivity is discontinuous in a cut-cell. For the irregular cut-cells, a conservative interpolation technique is applied to calculate the Lorentz force at the cell center. It is also shown how the consistent and conservative scheme is implemented on fine/coarse mesh boundaries when the AMR technique is used. The applied numerical schemes are validated by five test simulations, and excellent agreement was obtained for all the cases considered, while also showing good consistency and conservative properties.
Auto-adaptive finite element meshes
NASA Technical Reports Server (NTRS)
Richter, Roland; Leyland, Penelope
1995-01-01
Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of ICASE test cases that are particularly interesting for unsteady flow simulations.
Multireference adaptive noise canceling applied to the EEG.
James, C J; Hagan, M T; Jones, R D; Bones, P J; Carroll, G J
1997-08-01
The technique of multireference adaptive noise canceling (MRANC) is applied to enhance transient nonstationarities in the electroencephalogram (EEG), with the adaptation implemented by means of a multilayer-perceptron artificial neural network (ANN). The method was applied to recorded EEG segments and the performance on documented nonstationarities was recorded. The results show that the neural network (nonlinear) implementation gives an improvement in performance (i.e., in the signal-to-noise ratio (SNR) of the nonstationarities) compared to a linear implementation of MRANC. In both cases an improvement in the SNR was obtained. The advantage of the spatial filtering aspect of MRANC is highlighted when its performance is compared to that of inverse autoregressive filtering of the EEG, a purely temporal filter.
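The published comparison used a multilayer perceptron for the nonlinear adaptation; the sketch below shows only the linear LMS baseline of a multireference noise canceller, with illustrative filter length and step size.

```python
import numpy as np

def mranc_lms(primary, references, taps=8, mu=0.01):
    """Linear multireference adaptive noise canceller (LMS baseline).

    primary    : 1-D array, channel containing the transient of interest plus
                 background activity correlated with the references.
    references : 2-D array (n_refs, n_samples) of reference channels.
    Returns the error signal e = primary - estimate, in which activity
    common to the references is attenuated.
    """
    n_refs, n = references.shape
    w = np.zeros((n_refs, taps))
    out = np.zeros(n)
    for k in range(taps, n):
        x = references[:, k - taps:k][:, ::-1]   # recent samples, newest first
        y = np.sum(w * x)                        # noise estimate
        e = primary[k] - y                       # enhanced signal
        w += mu * e * x                          # LMS weight update
        out[k] = e
    return out
```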
Investigations in adaptive processing of multispectral data
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Horwitz, H. M.
1973-01-01
Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by a multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified, and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.
Adaptive control of an exoskeleton robot with uncertainties on kinematics and dynamics.
Brahmi, Brahim; Saad, Maarouf; Ochoa-Luna, Cristobal; Rahman, Mohammad H
2017-07-01
In this paper, we propose a new adaptive control technique based on nonlinear sliding mode control (JSTDE) that takes kinematic and dynamic uncertainties into account. This approach is applied to an exoskeleton robot with uncertain kinematics and dynamics. The adaptation design is based on Time Delay Estimation (TDE). The proposed strategy does not require well-defined dynamic and kinematic models of the robot system. The update laws are designed using a Lyapunov function to solve the adaptation problem systematically, proving closed-loop stability and ensuring asymptotic convergence of the output tracking errors. Experimental results show the effectiveness and feasibility of the JSTDE technique in dealing with the variation of the unknown nonlinear dynamics and kinematics of the exoskeleton model.
NASA Astrophysics Data System (ADS)
Stupin, Daniil D.; Koniakhin, Sergei V.; Verlov, Nikolay A.; Dubina, Michael V.
2017-05-01
The time-domain technique for impedance spectroscopy consists of computing the excitation voltage and current response Fourier images by fast or discrete Fourier transformation and calculating their relation. Here we propose an alternative method for processing the excitation voltage and current response to derive a system impedance spectrum, based on a fast and flexible adaptive filtering method. We show the equivalence between the problem of adaptive filter learning and deriving the system impedance spectrum. To be specific, we express the impedance via the adaptive filter weight coefficients. The noise-canceling property of adaptive filtering is also justified. Using an RLC circuit as a model system, we experimentally show that adaptive filtering yields correct admittance spectra and element ratings under high-noise conditions where the Fourier-transform technique fails. By providing additional sensitivity for impedance spectroscopy, adaptive filtering can be applied to otherwise impossible-to-interpret time-domain impedance data. The advantages of adaptive filtering are demonstrated with practical living-cell impedance measurements.
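A minimal sketch of the core idea (impedance expressed through adaptive-filter weights), assuming an LMS-identified FIR map from voltage to current; the authors' exact filter structure, normalisation, and noise handling are not reproduced here.

```python
import numpy as np

def impedance_via_adaptive_filter(voltage, current, taps=64, mu=1e-3, fs=1.0):
    """LMS-identify the FIR response mapping excitation voltage to current
    (an admittance impulse response), then take Z(f) = 1 / Y(f) from the
    learned weights.  Parameter values are illustrative."""
    w = np.zeros(taps)
    for k in range(taps, len(voltage)):
        x = voltage[k - taps:k][::-1]
        e = current[k] - w @ x
        w += mu * e * x                           # LMS update
    admittance = np.fft.rfft(w)                   # Y(f) from the weights
    freqs = np.fft.rfftfreq(taps, d=1.0 / fs)
    with np.errstate(divide="ignore", invalid="ignore"):
        impedance = 1.0 / admittance
    return freqs, impedance
```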
NIR technique in the classification of cotton leaf grade
USDA-ARS?s Scientific Manuscript database
Near infrared (NIR) spectroscopy, a useful technique due to the speed, ease of use, and adaptability to on-line or off-line implementation, has been applied to perform the qualitative classification and quantitative prediction of cotton quality characteristics, including trash index. One term to as...
Lobzin, V S; Tsatskina, N D
1989-01-01
A total of 192 patients with Bell's palsy were studied. In 32 of them, a biofeedback training technique with EMG feedback was applied to accelerate the restoration of the mimetic muscles. Clinical and electrophysiological data confirmed the efficiency of this technique in terms of considerably accelerated rehabilitation.
Practical techniques for enhancing the high-frequency MASW method
USDA-ARS?s Scientific Manuscript database
For soil exploration in the vadose zone, a high-frequency multi-channel analysis of surface waves (HF-MASW) method has been developed. In the study, several practical techniques were applied to enhance the overtone image of the HF-MASW method. They included (1) the self-adaptive MASW method using a ...
Exploring NIR technique in rapid prediction of cotton trash components
USDA-ARS?s Scientific Manuscript database
Near infrared (NIR) spectroscopy, a useful technique due to the speed, ease of use, and adaptability to on-line or off-line implementation, has been applied to perform the qualitative classification and quantitative prediction on a number of cotton quality indices, including cotton trash from HVI, S...
Mofid, Omid; Mobayen, Saleh
2018-01-01
Adaptive control methods are developed for stability and tracking control of flight systems in the presence of parametric uncertainties. This paper offers a design technique for adaptive sliding mode control (ASMC) for finite-time stabilization of unmanned aerial vehicle (UAV) systems with parametric uncertainties. Applying the Lyapunov stability concept and the finite-time convergence idea, the recommended control method guarantees that the states of the quad-rotor UAV converge to the origin with a finite-time convergence rate. Furthermore, an adaptive-tuning scheme is proposed to estimate the unknown parameters of the quad-rotor UAV at any moment. Finally, simulation results are presented to demonstrate the effectiveness of the offered technique compared to previous methods.
Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems
ERIC Educational Resources Information Center
Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul
2009-01-01
Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…
Efficient generation of discontinuity-preserving adaptive triangulations from range images.
Garcia, Miguel Angel; Sappa, Angel Domingo
2004-10-01
This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two and one half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
Speckle noise reduction of 1-look SAR imagery
NASA Technical Reports Server (NTRS)
Nathan, Krishna S.; Curlander, John C.
1987-01-01
Speckle noise is inherent to synthetic aperture radar (SAR) imagery. Since the degradation of the image due to this noise results in uncertainties in the interpretation of the scene and in a loss of apparent resolution, it is desirable to filter the image to reduce this noise. In this paper, an adaptive algorithm based on the calculation of the local statistics around a pixel is applied to 1-look SAR imagery. The filter adapts to the nonstationarity of the image statistics since the size of the blocks is very small compared to that of the image. The performance of the filter is measured in terms of the equivalent number of looks (ENL) of the filtered image and the resulting resolution degradation. The results are compared to those obtained from different techniques applied to similar data. The local adaptive filter (LAF) significantly increases the ENL of the final image. The associated loss of resolution is also lower than that for other commonly used speckle reduction techniques.
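A rough sketch of a local-statistics adaptive filter of the kind described above (closely related to the classical Lee filter); the window size and the 1-look multiplicative noise model are assumptions, not parameters taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_adaptive_filter(img, window=7, looks=1):
    """Local-statistics speckle filter for an intensity SAR image."""
    img = img.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    noise_var = 1.0 / looks            # speckle coefficient of variation^2 (n looks)
    # weight -> 1 in textured areas (keep detail), -> 0 in homogeneous areas
    weight = np.clip((var - noise_var * mean**2) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + weight * (img - mean)
```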
Grid generation for the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.; Erlebacher, Gordon
1989-01-01
A general survey of grid generators is presented with a concern for understanding why grids are necessary, how they are applied, and how they are generated. After an examination of the need for meshes, the overall applications setting is established with a categorization of the various connectivity patterns. This is split between structured grids and unstructured meshes. Altogether, the categorization establishes the foundation upon which grid generation techniques are developed. The two primary categories are algebraic techniques and partial differential equation techniques. These are each split into basic parts, and accordingly are individually examined in some detail. In the process, the interrelations between the various parts are accented. From the established background in the primary techniques, consideration is shifted to the topic of interactive grid generation and then to adaptive meshes. The setting for adaptivity is established with a suitable means to monitor severe solution behavior. Adaptive grids are considered first and are followed by adaptive triangular meshes. Then the consideration shifts to the temporal coupling between grid generators and PDE-solvers. To conclude, a reflection upon the discussion, herein, is given.
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
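To make the remapping idea concrete, the sketch below measures first-order entropy before and after a horizontal DPCM remapping, one of the remapping techniques named in the abstract; the prediction direction and integer handling are illustrative choices.

```python
import numpy as np

def first_order_entropy(values):
    """Shannon entropy (bits/sample) of the value histogram."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def dpcm_remap(image):
    """Horizontal DPCM: replace each pixel by its difference from the
    previous pixel in the row, lowering the entropy of typical imagery."""
    residual = image.astype(np.int32).copy()
    residual[:, 1:] -= image[:, :-1].astype(np.int32)
    return residual

# img = ...  # a 2-D integer satellite band
# print(first_order_entropy(img), first_order_entropy(dpcm_remap(img)))
```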
NASA Astrophysics Data System (ADS)
Ibrahim, Wael Refaat Anis
The present research involves the development of several fuzzy expert systems for power quality analysis and diagnosis. Intelligent systems for the prediction of abnormal system operation were also developed. The performance of all intelligent modules developed was either enhanced or completely produced through adaptive fuzzy learning techniques. Neuro-fuzzy learning is the main adaptive technique utilized. The work presents a novel approach to the interpretation of power quality from the perspective of the continuous operation of a single system. The research includes an extensive literature review pertaining to the applications of intelligent systems to power quality analysis. Basic definitions and signature events related to power quality are introduced. In addition, detailed discussions of various artificial intelligence paradigms as well as wavelet theory are included. A fuzzy-based intelligent system capable of identifying normal from abnormal operation for a given system was developed. Adaptive neuro-fuzzy learning was applied to enhance its performance. A group of fuzzy expert systems that could perform full operational diagnosis were also developed successfully. The developed systems were applied to the operational diagnosis of 3-phase induction motors and rectifier bridges. A novel approach for learning power quality waveforms and trends was developed. The technique, which is adaptive neuro fuzzy-based, learned, compressed, and stored the waveform data. The new technique was successfully tested using a wide variety of power quality signature waveforms, and using real site data. The trend-learning technique was incorporated into a fuzzy expert system that was designed to predict abnormal operation of a monitored system. The intelligent system learns and stores, in compressed format, trends leading to abnormal operation. The system then compares incoming data to the retained trends continuously. If the incoming data matches any of the learned trends, an alarm is instigated predicting the advent of system abnormal operation. The incoming data could be compared to previous trends as well as matched to trends developed through computer simulations and stored using fuzzy learning.
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
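For context, a minimal single-objective harmony search loop is sketched below; the SaMOHS algorithm's multi-objective ranking and its parameter-setting-free adaptation of hmcr/par are not shown, and all parameter values are illustrative.

```python
import numpy as np

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    """Basic harmony search minimising `cost` over box `bounds` [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    memory = rng.uniform(lo, hi, size=(hms, dim))        # harmony memory
    costs = np.apply_along_axis(cost, 1, memory)
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                      # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                   # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                        # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        c = cost(new)
        worst = int(np.argmax(costs))
        if c < costs[worst]:                             # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = int(np.argmin(costs))
    return memory[best], costs[best]
```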
The design and implementation of radar clutter modelling and adaptive target detection techniques
NASA Astrophysics Data System (ADS)
Ali, Mohammed Hussain
The analysis and reduction of radar clutter is investigated. Clutter is the term applied to unwanted radar reflections from land, sea, precipitation, and/or man-made objects. A great deal of useful information regarding the characteristics of clutter can be obtained by the application of frequency domain analytical methods. Thus, some considerable time was spent assessing the various techniques available and their possible application to radar clutter. In order to better understand clutter, use of a clutter model was considered desirable. There are many techniques which will enable a target to be detected in the presence of clutter. One of the most flexible of these is that of adaptive filtering. This technique was thoroughly investigated and a method for improving its efficacy was devised. The modified adaptive filter employed differential adaption times to enhance detectability. Adaptation time as a factor relating to target detectability is a new concept and was investigated in some detail. It was considered desirable to implement the theoretical work in dedicated hardware to confirm that the modified clutter model and the adaptive filter technique actually performed as predicted. The equipment produced is capable of operation in real time and provides an insight into real time DSP applications. This equipment is sufficiently rapid to produce a real time display on the actual PPI system. Finally a software package was also produced which would simulate the operation of a PPI display and thus ease the interpretation of the filter outputs.
A Phase-Only technique for enhancing the high-frequency MASW method
USDA-ARS?s Scientific Manuscript database
For soil exploration in the vadose zone, a high-frequency multi-channel analysis of surface waves (HF-MASW) method has been developed. In the study, several practical techniques were applied to enhance the overtone image of the HF-MASW method. They included (1) the self-adaptive MASW method using a ...
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
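A non-iterative sketch of the underlying relationship, assuming the delay is recovered from the slope of the averaged cross-power-spectrum phase; the paper's adaptive algorithm instead iterates on a cost built from the phase-fit slope and its standard error.

```python
import numpy as np

def delay_from_cross_spectrum_phase(x, y, fs, nfft=1024):
    """Estimate the time delay between two coherent signals from the slope
    of the cross-power-spectrum phase (phase ~ -2*pi*f*delay)."""
    n_seg = len(x) // nfft
    cross = np.zeros(nfft // 2 + 1, dtype=complex)
    for k in range(n_seg):                       # segment-averaged cross-spectrum
        X = np.fft.rfft(x[k * nfft:(k + 1) * nfft])
        Y = np.fft.rfft(y[k * nfft:(k + 1) * nfft])
        cross += np.conj(X) * Y
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    phase = np.unwrap(np.angle(cross))
    slope, _ = np.polyfit(freqs, phase, 1)       # linear fit to the phase
    return -slope / (2.0 * np.pi)                # delay in seconds
```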
A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images
NASA Technical Reports Server (NTRS)
Memon, Nasir D.; Galatsanos, Nikolas
1995-01-01
In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques relative to traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. Nonetheless, it was revealed that the k-fold test is a practical but high-cost technique for complete scanning of the applied data and avoiding over-fitting.
Geology orbiter comparison study
NASA Technical Reports Server (NTRS)
Cutts, J. A. J.; Blasius, K. R.; Davis, D. R.; Pang, K. D.; Shreve, D. C.
1977-01-01
Instrument requirements of planetary geology orbiters were examined with the objective of determining the feasibility of applying standard instrument designs to a host of terrestrial targets. Within the basic discipline area of geochemistry, gamma-ray, X-ray fluorescence, and atomic spectroscopy remote sensing techniques were considered. Within the discipline area of geophysics, the complementary techniques of gravimetry and radar were studied. Experiments using these techniques were analyzed for comparison at the Moon, Mercury, Mars and the Galilean satellites. On the basis of these comparative assessments, the adaptability of each sensing technique was judged as a basic technique for many targets, as a single instrument applied to many targets, as a single instrument used in different mission modes, and as an instrument capability for nongeoscience objectives.
Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin
2015-01-01
A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and an Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results reveal that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. In addition, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
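Sigma scaling is a standard fitness-scaling device; the sketch below shows its common textbook form, and the constant c and the zero floor may differ from the variant used in IAGA.

```python
import numpy as np

def sigma_scale(fitness, c=2.0):
    """Sigma scaling of raw fitness values to moderate selection pressure.

    Individuals far above the mean keep an advantage, but the spread is
    normalised by the population standard deviation, which helps avoid
    premature convergence early in the run."""
    fitness = np.asarray(fitness, dtype=float)
    mean, sigma = fitness.mean(), fitness.std()
    if sigma == 0.0:                 # all individuals equal: no selection pressure
        return np.ones_like(fitness)
    return np.maximum(1.0 + (fitness - mean) / (c * sigma), 0.0)
```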
An Approach to V&V of Embedded Adaptive Systems
NASA Technical Reports Server (NTRS)
Liu, Yan; Yerramalla, Sampath; Fuller, Edgar; Cukic, Bojan; Gururajan, Srikaruth
2004-01-01
Rigorous Verification and Validation (V&V) techniques are essential for high assurance systems. Lately, the performance of some of these systems has been enhanced by embedded adaptive components in order to cope with environmental changes. Although the ability to adapt is appealing, it actually poses a problem in terms of V&V. Since uncertainties induced by environmental changes have a significant impact on system behavior, the applicability of conventional V&V techniques is limited. In safety-critical applications such as flight control systems, the mechanisms of change must be observed, diagnosed, accommodated and well understood prior to deployment. In this paper, we propose a non-conventional V&V approach suitable for online adaptive systems. We apply our approach to an intelligent flight control system that employs a particular type of Neural Network (NN) as the adaptive learning paradigm. The presented methodology consists of a novelty detection technique and online stability monitoring tools. The novelty detection technique is based on Support Vector Data Description, which detects novel (abnormal) data patterns. The online stability monitoring tools, based on Lyapunov stability theory, detect unstable learning behavior in neural networks. Case studies based on a high-fidelity simulator of NASA's Intelligent Flight Control System demonstrate a successful application of the presented V&V methodology.
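Support Vector Data Description is closely related to a one-class SVM with an RBF kernel; as a stand-in sketch, the snippet below uses scikit-learn's OneClassSVM for novelty detection, with illustrative nu/gamma values and placeholder data rather than anything from the NASA simulator.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train on nominal data only, then flag patterns the boundary does not cover.
rng = np.random.default_rng(0)
nominal = rng.normal(size=(500, 4))                 # placeholder nominal feature vectors
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(nominal)

new_batch = rng.normal(size=(10, 4))                # incoming data to screen
is_novel = detector.predict(new_batch) == -1        # -1 marks novel (abnormal) patterns
print(is_novel)
```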
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
NASA Astrophysics Data System (ADS)
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-10-01
An integrated method combining advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the advantage of the anisotropic hr-adaptive model is its ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes. Performance achieved in the 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.
ERIC Educational Resources Information Center
Weiss, J.; Egea-Cortines, M.
2008-01-01
We have been teaching applied molecular genetics to engineers and adapted the teaching methodology to the European Credit Transfer System. We teach core principles of genetics that are universal and form the conceptual basis of most molecular technologies. The course then teaches widely used techniques and finally shows how different techniques…
NASA Astrophysics Data System (ADS)
Huang, Xiaolei; Dong, Hui; Qiu, Yang; Li, Bo; Tao, Quan; Zhang, Yi; Krause, Hans-Joachim; Offenhäusser, Andreas; Xie, Xiaoming
2018-01-01
Power-line harmonic interference and fixed-frequency noise peaks may cause stripe-artifacts in ultra-low field (ULF) magnetic resonance imaging (MRI) in an unshielded environment and in a conductively shielded room. In this paper we describe an adaptive suppression method to eliminate these artifacts in MRI images. This technique utilizes spatial correlation of the interference from different positions, and is realized by subtracting the outputs of the reference channel(s) from those of the signal channel(s) using wavelet analysis and the least squares method. The adaptive suppression method is first implemented to remove the image artifacts in simulation. We then experimentally demonstrate the feasibility of this technique by adding three orthogonal superconducting quantum interference device (SQUID) magnetometers as reference channels to compensate the output of one 2nd-order gradiometer. The experimental results show great improvement in the imaging quality in both 1D and 2D MRI images at two common imaging frequencies, 1.3 kHz and 4.8 kHz. At both frequencies, the effective compensation bandwidth is as high as 2 kHz. Furthermore, we examine the longitudinal relaxation times of the same sample before and after compensation, and show that the MRI properties of the sample did not change after applying adaptive suppression. This technique can effectively increase the imaging bandwidth and be applied to ULF MRI detected by either SQUIDs or Faraday coil in both an unshielded environment and a conductively shielded room.
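A sketch of the least-squares part of the compensation only, assuming reference-channel samples arranged column-wise; the wavelet decomposition applied before fitting in the paper is omitted.

```python
import numpy as np

def adaptive_suppression(signal_ch, reference_chs):
    """Subtract the least-squares projection of the reference channels from
    the signal channel, removing interference common to both.

    signal_ch     : 1-D array (n_samples,) from the signal channel
    reference_chs : 2-D array (n_samples, n_refs) from the reference channels
    """
    coeffs, *_ = np.linalg.lstsq(reference_chs, signal_ch, rcond=None)
    return signal_ch - reference_chs @ coeffs
```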
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing
2017-11-01
The integration of an edge-preserving filtering technique in the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is further performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, the regional growth rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map; then, a majority voting mechanism is implemented to generate the final output. Extensive experiments show that the proposed method achieves competitive performance compared with several traditional approaches.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
An Approach for Autonomy: A Collaborative Communication Framework for Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Dufrene, Warren Russell, Jr.
2005-01-01
Research done during the last three years has studied the emergent properties of Complex Adaptive Systems (CAS). The deployment of Artificial Intelligence (AI) techniques applied to remote Unmanned Aerial Vehicles has led the author to investigate applications of CAS within the field of Autonomous Multi-Agent Systems. The core objective of current research efforts is focused on the simplicity of Intelligent Agents (IA) and the modeling of these agents within complex systems. This research effort looks at the communication, interaction, and adaptability of multiple agents as applied to complex systems control. The embodiment concept applied to robotics has application possibilities within multi-agent frameworks. A new framework for agent awareness within a virtual 3D world concept is possible where the vehicle is composed of collaborative agents. This approach has many possibilities for applications to complex systems. This paper describes the development of an approach to apply this virtual framework to the NASA Goddard Space Flight Center (GSFC) tetrahedron structure developed under the Autonomous Nano Technology Swarm (ANTS) program and the Super Miniaturized Addressable Reconfigurable Technology (SMART) architecture program. These projects represent an innovative set of novel concepts deploying adaptable, self-organizing structures composed of many tetrahedrons. This technology is pushing current applied Agents Concepts to new levels of requirements and adaptability.
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
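The patented adaptive modified Gauss-Newton algorithm itself is not reproduced here; the sketch below only illustrates the general class of update it belongs to, a Gauss-Newton step with a crude adaptive step size, and the function names and acceptance rule are assumptions.

```python
import numpy as np

def gauss_newton_adaptive(residual, jacobian, w0, iters=50, rate=1.0):
    """Generic Gauss-Newton iteration with an adaptive step size.

    residual(w) -> r, shape (m,)    vector of model errors
    jacobian(w) -> J, shape (m, n)  d r / d w
    Minimises 0.5 * ||r(w)||^2 starting from w0."""
    w = np.asarray(w0, dtype=float)
    cost = 0.5 * np.sum(residual(w) ** 2)
    for _ in range(iters):
        r, J = residual(w), jacobian(w)
        step = np.linalg.lstsq(J, r, rcond=None)[0]   # Gauss-Newton direction
        w_new = w - rate * step
        cost_new = 0.5 * np.sum(residual(w_new) ** 2)
        if cost_new < cost:                           # accept step, grow the rate
            w, cost, rate = w_new, cost_new, min(rate * 2.0, 1.0)
        else:                                         # reject step, shrink the rate
            rate *= 0.5
    return w
```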
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, L; Fried, D; Fave, X
Purpose: To investigate how different image preprocessing techniques, their parameters, and different boundary handling techniques can augment the information of features and improve the features' differentiating capability. Methods: Twenty-seven NSCLC patients with a solid tumor volume and no visually obvious necrotic regions in the simulation CT images were identified. Fourteen of these patients had a necrotic region visible in their pre-treatment PET images (necrosis group), and thirteen had no visible necrotic region in the pre-treatment PET images (non-necrosis group). We investigated how image preprocessing can impact the ability of radiomics image features extracted from the CT to differentiate between the two groups. It is expected that the histogram in the necrosis group is more negatively skewed, and the uniformity from the necrosis group is less. Therefore, we analyzed two first-order features, skewness and uniformity, on the image inside the GTV in the intensity range [−20HU, 180HU] under the combination of several image preprocessing techniques: (1) applying the isotropic Gaussian or anisotropic diffusion smoothing filter with a range of parameters (Gaussian smoothing: size=11, sigma=0:0.1:2.3; anisotropic smoothing: iteration=4, kappa=0:10:110); (2) applying the boundary-adapted Laplacian filter; and (3) applying the adaptive upper threshold for the intensity range. A 2-tailed t-test was used to evaluate the differentiating capability of CT features on pre-treatment PET necrosis. Results: Without any preprocessing, no differences in either skewness or uniformity were observed between the two groups. After applying appropriate Gaussian filters (sigma>=1.3) or anisotropic filters (kappa>=60) with the adaptive upper threshold, skewness was significantly more negative in the necrosis group (p<0.05). By applying the boundary-adapted Laplacian filtering after the appropriate Gaussian filters (0.5<=sigma<=1.1) or anisotropic filters (20<=kappa<=50), the uniformity was significantly lower in the necrosis group (p<0.05). Conclusion: Appropriate selection of image preprocessing techniques allows radiomics features to extract more useful information and thereby improve prediction models based on these features.
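A minimal sketch of the first-order feature computation described above, assuming a Gaussian pre-filter, a fixed rather than adaptive upper threshold, and an arbitrary bin count; it is not the study's exact pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew

def skewness_and_uniformity(ct_hu, gtv_mask, sigma=1.3, hu_range=(-20, 180), bins=64):
    """Skewness and histogram uniformity of smoothed CT intensities inside the
    GTV, restricted to the stated HU window.

    ct_hu    : 3-D (or 2-D) array of CT intensities in Hounsfield units
    gtv_mask : boolean array of the same shape marking the GTV voxels
    """
    smoothed = gaussian_filter(ct_hu.astype(float), sigma=sigma)
    vals = smoothed[gtv_mask]
    vals = vals[(vals >= hu_range[0]) & (vals <= hu_range[1])]
    hist, _ = np.histogram(vals, bins=bins, range=hu_range)
    p = hist / max(hist.sum(), 1)
    return skew(vals), float(np.sum(p ** 2))    # uniformity = sum of p_i squared
```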
Procedures are described for analysis of animal samples using tissue culture techniques that may be adapted for assessment of solid, particulate, liquid and water samples contaminated with Cryptosporidium parvum.
Adaptive Behavior for Mobile Robots
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2009-01-01
The term "System for Mobility and Access to Rough Terrain" (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, for enabling a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope, and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Conceived for original application aboard Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploration of remote planets, SMART could also be applied to autonomous terrestrial vehicles to be used for search, rescue, and/or exploration on rough terrain.
Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)
1999-01-01
A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.
Adaptive optical fluorescence microscopy.
Ji, Na
2017-03-31
The past quarter century has witnessed rapid developments of fluorescence microscopy techniques that enable structural and functional imaging of biological specimens at unprecedented depth and resolution. The performance of these methods in multicellular organisms, however, is degraded by sample-induced optical aberrations. Here I review recent work on incorporating adaptive optics, a technology originally applied in astronomical telescopes to combat atmospheric aberrations, to improve image quality of fluorescence microscopy for biological imaging.
Rapid Structured Volume Grid Smoothing and Adaption Technique
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2006-01-01
A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow-solving computational fluid dynamics (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time therefore enabled the assessment of approximately twice the number of damage scenarios as was previously possible during the allocated investigation time.
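The cited Taubin filter works by alternating positive and negative Laplacian smoothing passes so that a signal is smoothed without shrinking. The Python sketch below applies such a lambda|mu pass to a 1-D array, e.g. an arc-length distribution along one grid line; the parameter values and the simple endpoint treatment are illustrative assumptions, and the coordinate transformations performed by the Volume Grid Manipulator are not reproduced.

```python
import numpy as np

def taubin_smooth(f, lam=0.5, mu=-0.53, passes=20):
    """Taubin lambda|mu signal smoothing (Taubin, 1995) applied to a 1-D
    array.  The alternating positive/negative steps smooth the signal
    without the shrinkage of pure Laplacian smoothing."""
    f = f.astype(float).copy()
    for _ in range(passes):
        for factor in (lam, mu):
            # discrete Laplacian, with zero contribution at the endpoints
            lap = np.zeros_like(f)
            lap[1:-1] = 0.5 * (f[:-2] + f[2:]) - f[1:-1]
            f += factor * lap
    return f
```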
Rapid Structured Volume Grid Smoothing and Adaption Technique
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2004-01-01
A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow-solving computational fluid dynamics (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time therefore enabled the assessment of approximately twice the number of damage scenarios as was previously possible during the allocated investigation time.
Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution
NASA Astrophysics Data System (ADS)
Tian, Y.; Rao, C. H.; Wei, K.
2008-10-01
Adaptive optics can only partially compensate for image blurring caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by the frame selection technique, are then deconvolved. No a priori knowledge is used except the positivity constraint. The method has been applied to the image restoration of celestial bodies observed by the 1.2 m telescope equipped with a 61-element adaptive optics system at Yunnan Observatory. The results showed that the method can effectively improve images partially corrected by adaptive optics.
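Frame selection of this kind is commonly implemented by ranking short-exposure frames with a sharpness metric and keeping only the best fraction before deconvolution. The following Python sketch uses the sum of squared, mean-subtracted intensities as the metric; both the metric and the retained fraction are illustrative assumptions rather than the paper's exact criterion.

```python
import numpy as np

def select_frames(frames, keep_fraction=0.1):
    """Rank short-exposure frames by a simple sharpness metric and keep
    the best fraction, to be passed on to blind deconvolution."""
    frames = np.asarray(frames, dtype=float)          # shape (n_frames, ny, nx)
    metric = np.sum((frames - frames.mean(axis=(1, 2), keepdims=True)) ** 2,
                    axis=(1, 2))
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(metric)[::-1][:n_keep]          # highest-sharpness frames
    return frames[best]
```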
Free-energy landscapes from adaptively biased methods: Application to quantum systems
NASA Astrophysics Data System (ADS)
Calvo, F.
2010-10-01
Several parallel adaptive biasing methods are applied to the calculation of free-energy pathways along reaction coordinates, choosing as a difficult example the double-funnel landscape of the 38-atom Lennard-Jones cluster. In the case of classical statistics, the Wang-Landau and adaptively biased molecular-dynamics (ABMD) methods are both found efficient if multiple walkers and replication and deletion schemes are used. An extension of the ABMD technique to quantum systems, implemented through the path-integral MD framework, is presented and tested on Ne38 against the quantum superposition method.
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Nonlinear adaptive inverse control via the unified model neural network
NASA Astrophysics Data System (ADS)
Jeng, Jin-Tsong; Lee, Tsu-Tian
1999-03-01
In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
Severity-Based Adaptation with Limited Data for ASR to Aid Dysarthric Speakers
Mustafa, Mumtaz Begum; Salim, Siti Salwah; Mohamed, Noraini; Al-Qatab, Bassam; Siong, Chng Eng
2014-01-01
Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty in obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, which are also limited in size, the obvious solution for building a speech acoustic model of impaired speech is to employ adaptation techniques. However, the issues that have not been addressed in existing studies on adaptation for speech impairment are as follows: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates these two issues for dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques such as maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired-speech acoustic model is measured in terms of word error rate (WER), with further assessments including phoneme insertion, substitution and deletion rates. Unimpaired speech, when combined with limited high-quality speech-impaired data, improves the performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech, based on the statistical analysis of the WER. Phoneme substitution was found to be the biggest contributing factor to WER in dysarthric speech at all levels of severity. The results show that speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data. PMID:24466004
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam towards the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab.
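For reference, the baseline LCMV weights that the swarm-based optimizers above set out to improve can be written in closed form as w = R^-1 C (C^H R^-1 C)^-1 f, which minimizes output power subject to the linear constraints C^H w = f. The Python sketch below computes these weights for a uniform linear array; the steering-vector convention, element spacing, and example angles are illustrative assumptions, and the PSO/DM-AIS/GSA weight refinement is not shown.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, d_over_lambda=0.5):
    """Uniform linear array steering vector (illustrative convention)."""
    n = np.arange(n_elements)
    phase = 2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def lcmv_weights(R, constraint_angles, responses, n_elements):
    """Classical LCMV weights w = R^-1 C (C^H R^-1 C)^-1 f, minimising
    output power subject to C^H w = f (e.g. unit gain on the desired
    user and nulls on interferers)."""
    C = np.column_stack([steering_vector(a, n_elements) for a in constraint_angles])
    f = np.asarray(responses, dtype=complex)
    Rinv_C = np.linalg.solve(R, C)
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)

# Example: 8-element array, unit gain at 0 deg, null at 40 deg
# w = lcmv_weights(R, [0, 40], [1, 0], 8)
```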
Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam towards the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab. PMID:25147859
Huang, Xiaolei; Dong, Hui; Qiu, Yang; Li, Bo; Tao, Quan; Zhang, Yi; Krause, Hans-Joachim; Offenhäusser, Andreas; Xie, Xiaoming
2018-01-01
Power-line harmonic interference and fixed-frequency noise peaks may cause stripe artifacts in ultra-low-field (ULF) magnetic resonance imaging (MRI) in an unshielded environment and in a conductively shielded room. In this paper we describe an adaptive suppression method to eliminate these artifacts in MRI images. This technique utilizes the spatial correlation of the interference at different positions, and is realized by subtracting the outputs of the reference channel(s) from those of the signal channel(s) using wavelet analysis and the least squares method. The adaptive suppression method is first implemented to remove the image artifacts in simulation. We then experimentally demonstrate the feasibility of this technique by adding three orthogonal superconducting quantum interference device (SQUID) magnetometers as reference channels to compensate the output of a second-order gradiometer. The experimental results show great improvement in imaging quality in both 1D and 2D MRI images at two common imaging frequencies, 1.3 kHz and 4.8 kHz. At both frequencies, the effective compensation bandwidth is as high as 2 kHz. Furthermore, we examine the longitudinal relaxation times of the same sample before and after compensation, and show that the MRI properties of the sample did not change after applying adaptive suppression. This technique can effectively increase the imaging bandwidth and can be applied to ULF MRI detected by either SQUIDs or a Faraday coil in both an unshielded environment and a conductively shielded room. Copyright © 2017 Elsevier Inc. All rights reserved.
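The core of such reference-channel compensation is a least-squares fit of the reference outputs to the signal channel, followed by subtraction of the fitted interference estimate. The Python sketch below shows only this step; the wavelet sub-band decomposition described above is omitted, and the variable names are assumptions.

```python
import numpy as np

def adaptive_suppress(signal_ch, reference_chs):
    """Least-squares reference-channel subtraction: fit coefficients c so
    that reference_chs @ c best matches the interference in the signal
    channel, then subtract that estimate.  The published method performs
    this fit per wavelet sub-band; that step is omitted here."""
    X = np.column_stack(reference_chs)        # shape (n_samples, n_refs)
    c, *_ = np.linalg.lstsq(X, signal_ch, rcond=None)
    return signal_ch - X @ c
```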
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis
This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.
Development of the One-Sided Nonlinear Adaptive Doppler Shift Estimation
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Singh, Upendra N.; Kavaya, Michael J.; Serror, Judith A.
2009-01-01
The new development of a one-sided nonlinear adaptive Doppler shift estimation technique (NADSET) is introduced. The background of the algorithm and a brief overview of NADSET are presented. The new technique is applied to the wind parameter estimates from a 2-micron wavelength coherent Doppler lidar system called VALIDAR, located at NASA Langley Research Center in Virginia. The new technique enhances wind parameters such as Doppler shift and power estimates in low signal-to-noise ratio (SNR) regimes by using the estimates in high SNR regimes as the algorithm scans the range bins from low to high altitude. The original NADSET utilizes the statistics in both the lower and the higher range bins to refine the wind parameter estimates in between. The results of the two different approaches of NADSET are compared.
Transverse Pupil Shifts for Adaptive Optics Non-Common Path Calibration
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.
2011-01-01
A simple new way of obtaining absolute wavefront measurements with a laboratory Fizeau interferometer was recently devised. In that case, the observed wavefront map is the difference of two cavity surfaces, those of the mirror under test and of an unknown reference surface on the Fizeau's transmission flat. The absolute surface of each can be determined by applying standard wavefront reconstruction techniques to two grids of absolute surface height differences of the mirror under test, obtained from pairs of measurements made with slight transverse shifts in X and Y. Adaptive optics systems typically provide an actuated periscope between the wavefront sensor (WFS) and the common-mode optics, used for lateral registration of the deformable mirror (DM) to the WFS. This periscope permits independent adjustment of either the pupil or the focal spot incident on the WFS. It would be used to give the required lateral pupil motion between common and non-common segments, analogous to the lateral shifts of the two phase contributions in the lab Fizeau. The technique is based on a completely new approach to calibration of phase. It offers unusual flexibility with regard to the transverse spatial frequency scales probed, and will give results quite quickly, making use of no auxiliary equipment other than that built into the adaptive optics system. The new technique may be applied to provide novel calibration information about other optical systems in which the beam may be shifted transversely in a controlled way.
Vonck, Sharona; Staelens, Anneleen Simone; Bollen, Ine; Broekx, Lien; Gyselaers, Wilfried
2016-10-12
The maternal cardiovascular system adapts quickly when embryo implantation is recognized by the body. Those adaptations play an important role, as normal cardiovascular adaptation is a requirement for a normal course of pregnancy. Disturbed adaptation predisposes to potential hypertensive disorders later in pregnancy [1-3]. This report aims to briefly inform obstetricians, general practitioners and midwives, who are the key players in detecting and treating hypertensive disorders during pregnancy. The PubMed database was used as the main tool to find studies involving clearly defined first-trimester hemodynamic changes in normal pregnancies and hypertensive pregnancies. In addition, the bibliographies of these studies were investigated for further relevant literature. A comprehensive overview is given of the normal adaptations in the cardiovascular tree in a first-trimester pregnancy. Additionally, signs of abnormal cardiovascular changes observed in the first trimester are described, together with the normal reference range for each non-invasive, easily applicable technique for maternal hemodynamic assessment. With a combination of techniques, it is possible to integrate and evaluate the maternal heart, veins and arteries at 12 weeks of pregnancy. Applying those techniques in the daily clinic opens perspectives for prevention and prophylactic treatment, aiming at a reduction of the risk of hypertension during pregnancy.
Sim, K S; Yeap, Z X; Tso, C P
2016-11-01
An improvement to the existing technique of quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using piecewise cubic Hermite interpolation (PCHIP) is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods, nearest neighbor, first-order interpolation and the original PCHIP, are compared against the performance of the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
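A minimal sketch of the underlying autocorrelation-based SNR estimate, using the plain (non-adaptive) PCHIP interpolant, is given below in Python: the noise-corrupted zero-lag value of the autocorrelation is replaced by a PCHIP extrapolation from neighbouring lags, and the spike height is taken as the noise variance. The 1-D row-averaged autocorrelation, the lag window, and the mean-subtraction are illustrative assumptions; the adaptive tuning that defines ATPCHIP is not reproduced.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def snr_from_autocorrelation(image):
    """Estimate SNR by extrapolating the noise-free zero-lag value of the
    autocorrelation with PCHIP."""
    rows = image.astype(float)
    # 1-D autocorrelation averaged over image rows (illustrative choice)
    acf = np.mean([np.correlate(r - r.mean(), r - r.mean(), mode='full')
                   for r in rows], axis=0)
    mid = len(acf) // 2
    lags = np.arange(-5, 6)
    values = acf[mid + lags]
    known = lags != 0                      # exclude the noise-corrupted zero lag
    r0_hat = PchipInterpolator(lags[known], values[known])(0.0)
    noise_var = acf[mid] - r0_hat          # height of the zero-lag noise spike
    return r0_hat / noise_var              # signal variance / noise variance
```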
Adaptive neural control for a class of nonlinear time-varying delay systems with unknown hysteresis.
Liu, Zhi; Lai, Guanyu; Zhang, Yun; Chen, Xin; Chen, Chun Lung Philip
2014-12-01
This paper investigates the fusion of an unknown-direction hysteresis model with adaptive neural control techniques for time-delayed continuous-time nonlinear systems without strict-feedback form. In contrast with previous works on the hysteresis phenomenon, the direction of the modified Bouc-Wen hysteresis model investigated here is unknown. To reduce the computational burden in the adaptation mechanism, an optimized adaptation method is successfully applied to the control design. Based on the Lyapunov-Krasovskii method, two neural-network-based adaptive control algorithms are constructed to guarantee that all the system states and adaptive parameters remain bounded, and that the tracking error converges to an adjustable neighborhood of the origin. Finally, some numerical examples are provided to validate the effectiveness of the proposed control methods.
Marginal adaptation of composite resins under two adhesive techniques.
Dačić, Stefan; Veselinović, Aleksandar M; Mitić, Aleksandar; Nikolić, Marija; Cenić, Milica; Dačić-Simonović, Dragica
2016-11-01
In the present research, different adhesive techniques were used to place fillings with composite resins. After the application of the etch-and-rinse or self-etch adhesive technique, the marginal adaptation of composite fillings was estimated from the length of margins without gaps and from the microretention of resin in enamel and dentin. The study material consisted of 40 extracted teeth. Twenty Class V cavities were treated with 35% phosphoric acid and restored after rinsing with the Adper Single Bond 2 and Filtek Ultimate-ASB/FU 3M ESPE composite system. The remaining 20 cavities were restored with the Adper Easy One-AEO/FU 3M ESPE composite system. Marginal adaptation of the composite fillings was examined using a scanning electron microscope (SEM). The etch-and-rinse adhesive technique showed a significantly higher percentage of margin length without gaps (in enamel: 92.5%, in dentin: 57.3%), compared with the self-etch technique, which showed a lower percentage of margin length without gaps, in enamel 70.4% (p < .001) and in dentin 22.6% (p < .05). In the first technique, microretention was composed of adhesive and hybrid layers as well as resin tags in the interprismatic spaces of enamel, while the dentin microretention was composed of adhesive and hybrid layers with resin tags in the dentinal canals. In the second technique, resin tags were rarely seen and a microgap was dominant along the border of the restoration margins. The SEM analysis showed better marginal adaptation of the composite resin to enamel and dentin, with better microretention, when the etch-and-rinse adhesive procedure was applied. © 2016 Wiley Periodicals, Inc.
Systems design analysis applied to launch vehicle configuration
NASA Technical Reports Server (NTRS)
Ryan, R.; Verderaime, V.
1993-01-01
As emphasis shifts from optimum-performance aerospace systems to least life-cycle costs, systems designs must seek, adapt, and innovate cost-improvement techniques from design through operations. The systems design process of concept, definition, and design was assessed for the types and flow of total quality management techniques that may be applicable in a launch vehicle systems design analysis. Techniques discussed are task ordering, quality leverage, concurrent engineering, Pareto's principle, robustness, quality function deployment, criteria, and others. These cost-oriented techniques are as applicable to aerospace systems design analysis as to any large commercial system.
Disturbance Accommodating Adaptive Control with Application to Wind Turbines
NASA Technical Reports Server (NTRS)
Frost, Susan
2012-01-01
Adaptive control techniques are well suited to applications that have unknown modeling parameters and poorly known operating conditions. Many physical systems experience external disturbances that are persistent or continually recurring. Flexible structures and systems with compliance between components often form a class of systems that fail to meet standard requirements for adaptive control. For these classes of systems, a residual mode filter can restore the ability of the adaptive controller to perform in a stable manner. New theory will be presented that enables adaptive control with accommodation of persistent disturbances using residual mode filters. After a short introduction to some of the control challenges of large utility-scale wind turbines, this theory will be applied to a high-fidelity simulation of a wind turbine.
A New Adaptive Framework for Collaborative Filtering Prediction
Almosallam, Ibrahim A.; Shang, Yi
2010-01-01
Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924
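As a rough illustration of the z-score normalisation on which the framework above is built, the Python sketch below makes a plain user-based prediction from z-score-normalised ratings with Pearson similarities. The adaptive blending of global statistics with item-based values according to data density, which is the paper's main contribution, is not shown; the matrix layout and similarity choice are assumptions.

```python
import numpy as np

def predict_zscore_cf(R, user, item):
    """User-based CF prediction on z-score-normalised ratings.  R is a
    ratings matrix with np.nan marking missing entries."""
    mask = ~np.isnan(R)
    means = np.nanmean(R, axis=1)
    stds = np.nanstd(R, axis=1) + 1e-9
    Z = (R - means[:, None]) / stds[:, None]

    num, den = 0.0, 0.0
    for v in range(R.shape[0]):
        if v == user or not mask[v, item]:
            continue
        common = mask[user] & mask[v]
        if common.sum() < 2:
            continue
        sim = np.corrcoef(R[user, common], R[v, common])[0, 1]  # Pearson similarity
        if np.isnan(sim):
            continue
        num += sim * Z[v, item]
        den += abs(sim)
    z_hat = num / den if den > 0 else 0.0
    return means[user] + stds[user] * z_hat      # map back to the rating scale
```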
A New Adaptive Framework for Collaborative Filtering Prediction.
Almosallam, Ibrahim A; Shang, Yi
2008-06-01
Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system.
CCD filter and transform techniques for interference excision
NASA Technical Reports Server (NTRS)
Borsuk, G. M.; Dewitt, R. N.
1976-01-01
The theoretical and some experimental results of a study aimed at applying CCD filter and transform techniques to the problem of interference excision within communications channels were presented. Adaptive noise (interference) suppression was achieved by the modification of received signals such that they were orthogonal to the recently measured noise field. CCD techniques were examined to develop real-time noise excision processing. They were recursive filters, circulating filter banks, transversal filter banks, an optical implementation of the chirp Z transform, and a CCD analog FFT.
Ponsford, Anthony; McKerracher, Rick; Ding, Zhen; Moo, Peter; Yee, Derek
2017-07-07
Canada's third-generation HFSWR forms the foundation of a maritime domain awareness system that provides enforcement agencies with real-time persistent surveillance out to and beyond the 200 nautical mile exclusive economic zone (EEZ). Cognitive sense-and-adapt technology and dynamic spectrum management ensure robust and resilient operation in the highly congested High Frequency (HF) band. Dynamic spectrum access enables the system to simultaneously operate on two frequencies on a non-interference and non-protected basis, without impacting other spectrum users. Sense-and-adapt technologies ensure that the system instantaneously switches to a new vacant channel on detection of another user or an unwanted jamming signal. Adaptive signal processing techniques mitigate electrical noise, interference and clutter. Sense-and-adapt techniques applied at the detector and tracker stages maximize the probability of track initiation whilst minimizing the probability of false or otherwise erroneous track data.
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient than traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics, which have to be computed numerically. Here we propose a strategy for a cheap and accurate estimation of such parameters, which essentially consists of minimizing the leading term of the local truncation error, whose expression is provided by a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
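To make the exponential-fitting idea concrete, the Python sketch below shows a fitted central difference for the second derivative that is exact for cos(omega*x) and sin(omega*x): the classical denominator h^2 is replaced by 2*(1 - cos(omega*h))/omega^2, which reduces to h^2 as omega*h tends to zero. This is a minimal illustration under the assumption of a single known fitting frequency omega; the paper's estimation of the parameter by minimizing the leading truncation-error term, and the implicit-explicit time discretization, are not reproduced.

```python
import numpy as np

def fitted_second_difference(u, h, omega):
    """Exponentially fitted central difference for u''(x), exact for
    u = sin(omega*x) and u = cos(omega*x) on a uniform grid of spacing h."""
    denom = 2.0 * (1.0 - np.cos(omega * h)) / omega**2   # replaces h**2
    d2 = np.zeros_like(u, dtype=float)
    d2[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / denom
    return d2
```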
Entropy-based adaptive attitude estimation
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.
2018-03-01
Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms of the extended Kalman filter and cubature Kalman filter for attitude estimation of a low earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.
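For context, the innovation-based adaptation mentioned above conventionally estimates the measurement noise covariance from a sliding window of filter innovations, e.g. R_hat = (1/N) * sum(nu_k nu_k^T) - H P^- H^T. The Python sketch below implements this baseline together with a crude eigenvalue clipping to keep the estimate positive definite; the notation and the safeguard are assumptions for illustration, and the relative-entropy and confidence-level techniques proposed in the paper are not shown.

```python
import numpy as np

def innovation_based_R(innovations, H, P_prior):
    """Conventional windowed innovation-based estimate of the measurement
    noise covariance in a Kalman-type filter.  The window size (length of
    `innovations`) and the positive-definiteness safeguard are exactly the
    drawbacks the adaptive schemes above are designed to address."""
    nus = np.asarray(innovations)                  # shape (N, m)
    C_nu = nus.T @ nus / len(nus)                  # sample innovation covariance
    R_hat = C_nu - H @ P_prior @ H.T
    # crude safeguard: clip eigenvalues so the estimate stays positive definite
    eigvals, eigvecs = np.linalg.eigh(R_hat)
    return eigvecs @ np.diag(np.maximum(eigvals, 1e-12)) @ eigvecs.T
```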
Ponsford, Anthony; McKerracher, Rick; Ding, Zhen; Moo, Peter; Yee, Derek
2017-01-01
Canada’s third-generation HFSWR forms the foundation of a maritime domain awareness system that provides enforcement agencies with real-time persistent surveillance out to and beyond the 200 nautical mile exclusive economic zone (EEZ). Cognitive sense-and-adapt technology and dynamic spectrum management ensure robust and resilient operation in the highly congested High Frequency (HF) band. Dynamic spectrum access enables the system to simultaneously operate on two frequencies on a non-interference and non-protected basis, without impacting other spectrum users. Sense-and-adapt technologies ensure that the system instantaneously switches to a new vacant channel on detection of another user or an unwanted jamming signal. Adaptive signal processing techniques mitigate electrical noise, interference and clutter. Sense-and-adapt techniques applied at the detector and tracker stages maximize the probability of track initiation whilst minimizing the probability of false or otherwise erroneous track data. PMID:28686198
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
An efficient implementation of Forward-Backward Least-Mean-Square Adaptive Line Enhancers
NASA Technical Reports Server (NTRS)
Yeh, H.-G.; Nguyen, T. M.
1995-01-01
An efficient implementation of the forward-backward least-mean-square (FBLMS) adaptive line enhancer is presented in this article. Without changing the characteristics of the FBLMS adaptive line enhancer, the proposed implementation technique reduces multiplications by 25% and additions by 12.5% in two successive time samples in comparison with those operations of direct implementation in both prediction and weight control. The proposed FBLMS architecture and algorithm can be applied to digital receivers for enhancing signal-to-noise ratio to allow fast carrier acquisition and tracking in both stationary and nonstationary environments.
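The forward-backward refinement itself is specific to the article, but the underlying adaptive line enhancer structure is standard: a delayed copy of the input drives an LMS predictor, so narrowband components are reinforced at the predictor output while broadband noise remains in the error. The Python sketch below shows this conventional one-directional LMS line enhancer; the tap count, delay, and step size are illustrative, and the reduced-complexity forward-backward update is not reproduced.

```python
import numpy as np

def lms_line_enhancer(x, n_taps=32, delay=1, mu=0.01):
    """Standard LMS adaptive line enhancer: predict the current sample
    from a delayed tap vector; narrowband (sinusoidal) components are
    enhanced in y, broadband noise is left in the prediction error."""
    w = np.zeros(n_taps)
    y = np.zeros_like(x, dtype=float)
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]   # delayed regressor
        y[n] = w @ u
        e = x[n] - y[n]
        w += mu * e * u                             # LMS weight update
    return y
```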
Multiconjugate adaptive optics applied to an anatomically accurate human eye model.
Bedggood, P A; Ashman, R; Smith, G; Metha, A B
2006-09-04
Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.
NASA Astrophysics Data System (ADS)
Steinbock, Michael J.; Hyde, Milo W.
2012-10-01
Adaptive optics is used in applications such as laser communication, remote sensing, and laser weapon systems to estimate and correct for atmospheric distortions of propagated light in real time. Within an adaptive optics system, a reconstruction process interprets the raw wavefront sensor measurements and calculates an estimate of the unwrapped phase function to be sent through a control law and applied to a wavefront correction device. This research is focused on adaptive optics using a self-referencing interferometer wavefront sensor, which directly measures the wrapped wavefront phase. Therefore, its measurements must be reconstructed for use on a continuous-facesheet deformable mirror. In testing and evaluating a novel class of branch-point-tolerant wavefront reconstructors based on the post-processing congruence operation technique, an increase in Strehl ratio compared to a traditional least squares reconstructor was noted even in non-scintillated fields. To investigate this further, this paper uses wave-optics simulations to eliminate many of the variables of a hardware adaptive optics system, so as to focus on the reconstruction techniques alone. The simulation results, along with a discussion of the physical reasoning for this phenomenon, are provided. For any application using a self-referencing interferometer wavefront sensor with low signal levels or high localized wavefront gradients, understanding this phenomenon is critical when applying a traditional least squares wavefront reconstructor.
Complex adaptive systems: A new approach for understanding health practices.
Gomersall, Tim
2018-06-22
This article explores the potential of complex adaptive systems theory to inform behaviour change research. A complex adaptive system describes a collection of heterogeneous agents interacting within a particular context, adapting to each other's actions. In practical terms, this implies that behaviour change is 1) socially and culturally situated; 2) highly sensitive to small baseline differences in individuals, groups, and intervention components; and 3) determined by multiple components interacting "chaotically". Two approaches to studying complex adaptive systems are briefly reviewed. Agent-based modelling is a computer simulation technique that allows researchers to investigate "what if" questions in a virtual environment. Applied qualitative research techniques, on the other hand, offer a way to examine what happens when an intervention is pursued in real-time, and to identify the sorts of rules and assumptions governing social action. Although these represent very different approaches to complexity, there may be scope for mixing these methods - for example, by grounding models in insights derived from qualitative fieldwork. Finally, I will argue that the concept of complex adaptive systems offers one opportunity to gain a deepened understanding of health-related practices, and to examine the social psychological processes that produce health-promoting or damaging actions.
Kumar, M; Mishra, S K
2017-01-01
Clinical magnetic resonance imaging (MRI) images may be corrupted by a mixture of different types of noise, such as Rician, Gaussian, and impulse noise. Most of the available filtering algorithms are noise specific, linear, and non-adaptive. There is a need to develop a nonlinear adaptive filter that adapts itself to the requirement and can be effectively applied to suppress mixed noise from different MRI images. In view of this, a novel nonlinear neural-network-based adaptive filter, i.e., a functional link artificial neural network (FLANN) whose weights are trained by a recently developed derivative-free meta-heuristic technique, i.e., teaching-learning-based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and evaluating a nonparametric statistical test. The convergence curve and computational time are also included for investigating the efficiency of the proposed as well as the competitive filters. The simulation outcomes show that the proposed filter outperforms the other adaptive filters. The proposed filter can be hybridized with other evolutionary techniques and utilized for removing different noises and artifacts from other medical images more competently.
Sizing up Asteroids at Lick Observatory with Adaptive Optics
NASA Astrophysics Data System (ADS)
Drummond, Jack D.; Christou, J.
2006-12-01
Using the Shane 3 meter telescope with adaptive optics at Lick Observatory, we have determined the triaxial dimensions and rotational poles of five asteroids, 3 Juno, 4 Vesta, 16 Psyche, 87 Sylvia, and 324 Bamberga. Parametric blind deconvolution was applied to images obtained mostly at 2.5 microns in 2004 and 2006. This is the first time Bamberga’s pole has been determined, and the results for the other four asteroids are in agreement with the analysis of decades of lightcurves by others. The techniques developed here to find sizes, shapes, and poles, in only one or two nights, can be applied to smaller asteroids that are resolved with larger telescopes.
Seismic migration in generalized coordinates
NASA Astrophysics Data System (ADS)
Arias, C.; Duque, L. F.
2017-06-01
Reverse time migration (RTM) is a technique widely used nowadays to obtain images of the earth’s sub-surface using artificially produced seismic waves. This technique has been developed for zones with a flat surface, and when it is applied to zones with rugged topography some corrections must be introduced in order to adapt it. This can produce defects in the final image called artifacts. We introduce a simple mathematical map that transforms a scenario with rugged topography into a flat one. The three steps of RTM can be applied in a way similar to the conventional ones simply by changing the Laplacian in the acoustic wave equation for a generalized one. We present a test of this technique using the Canadian foothills SEG velocity model.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
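A minimal sketch of recursive triangle-patch subdivision of the kind described above is given below in Python: a patch is split into four sub-triangles whenever the surface deviates from the flat patch by more than a tolerance at the edge midpoints, so samples concentrate near edges and corners. The flatness test, the centroid sampling of accepted patches, and the parameterisation via a surface(p) callable are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def adaptive_sample(surface, tri, tol, depth=0, max_depth=6, out=None):
    """Recursively subdivide a parametric triangle patch and collect one
    sample per accepted patch.  `surface(p)` maps a 2-D parameter point
    to a 3-D surface point; `tri` is a tuple of three 2-D corner points."""
    if out is None:
        out = []
    a, b, c = tri
    mids = [(a + b) / 2, (b + c) / 2, (c + a) / 2]
    # deviation of the surface at each edge midpoint from the chord midpoint
    dev = max(np.linalg.norm(surface(m) - (surface(p) + surface(q)) / 2)
              for m, (p, q) in zip(mids, [(a, b), (b, c), (c, a)]))
    if dev <= tol or depth >= max_depth:
        out.append(surface((a + b + c) / 3))        # sample at the patch centroid
        return out
    ab, bc, ca = mids
    for sub in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        adaptive_sample(surface, sub, tol, depth + 1, max_depth, out)
    return out
```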
Trust Management in Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.; Haack, Jereme N.; Fink, Glenn A.
2009-07-07
Reputation-based trust management techniques can address issues such as insider threat as well as quality of service issues that may be malicious in nature. However, trust management techniques must be adapted to the unique needs of the architectures and problem domains to which they are applied. Certain characteristics of swarms such as their lightweight ephemeral nature and indirect communication make this adaptation especially challenging. In this paper we look at the trust issues and opportunities in mobile agent swarm-based autonomic systems and find that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarms. We also analyze the applicability of trust management research as it has been applied to architectures with similar characteristics. Finally, we specify required characteristics for trust management mechanisms to be used to monitor the trustworthiness of the entities in a swarm-based autonomic computing system.
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches that have been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to resolving this problem, but applying optimization brings its own difficulties. Hence, in this paper we propose an AAM-based face recognition technique that is capable of resolving the AAM fitting problem by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We have used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy.
Singh, Sheetal; Shih, Shyh-Jen; Vaughan, Andrew T M
2014-01-01
Current techniques for examining the global creation and repair of DNA double-strand breaks are restricted in their sensitivity, and such techniques mask any site-dependent variations in breakage and repair rate or fidelity. We present here a system for analyzing the fate of documented DNA breaks, using the MLL gene as an example, through application of ligation-mediated PCR. Here, a simple asymmetric double-stranded DNA adapter molecule is ligated to experimentally induced DNA breaks and subjected to seminested PCR using adapter- and gene-specific primers. The rate of appearance and loss of specific PCR products allows detection of both the break and its repair. Using the additional technique of inverse PCR, the presence of misrepaired products (translocations) can be detected at the same site, providing information on the fidelity of the ligation reaction in intact cells. Such techniques may be adapted for the analysis of DNA breaks and rearrangements introduced into any identifiable genomic location. We have also applied parallel sequencing for the high-throughput analysis of inverse PCR products to facilitate the unbiased recording of all rearrangements located at a specific genomic location.
Application and evaluation of a combination of Socratic and learning through discussion techniques.
van Aswegen, E J; Brink, H I; Steyn, P J
2001-11-01
This article has its genesis in the inquirer's interest in the need for internalizing critical thinking, creative thinking and reflective skills in adult learners. As part of a broader study, the inquirer used a combination of two techniques over a period of nine months, namely Socratic discussion/questioning and the Learning Through Discussion technique. The inquirer elected to use mainly qualitative methods within this inquiry, because they were seen as more adaptable for dealing with multiple realities and more sensitive and adaptable to the many shaping influences and value patterns that may be encountered (Lincoln & Guba, 1989). Purposive sampling was used, and the sample size (n = 10) was determined by the willingness of potential participants to enlist in the chosen techniques. Feedback from participants was obtained: (1) verbally after each discussion session, and (2) in written format after completion of the course content. The final/summative evaluation was obtained through a semi-structured questionnaire. This was deemed necessary, in that the participants were already studying for the end-of-year examination. For the purpose of this condensed report, the inquirer reflected only on the feedback obtained with the help of the questionnaire. The empirical study showed that, in spite of various adaptation problems experienced, eight (8) of the ten (10) participants felt positive toward the applied techniques.
Advanced techniques and technology for efficient data storage, access, and transfer
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Miller, Warner
1991-01-01
Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near-optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard-form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but, additionally, the second-stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
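The combination of a predictive preprocessor and an adaptive entropy coder described above is the Rice-style scheme on which the early NASA lossless compression standards are based. The Python sketch below illustrates the idea with a previous-sample predictor, the usual signed-to-unsigned residual mapping, and per-block selection of the Golomb-Rice parameter k that yields the shortest code; the block size, the predictor, and the omission of low-entropy code options are simplifying assumptions, not the exact chip algorithm.

```python
def map_residual(d):
    """Map signed prediction residuals to non-negative integers
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode_block(residuals, k):
    """Golomb-Rice code: unary-coded quotient, then k remainder bits."""
    bits = ""
    for v in residuals:
        bits += "1" * (v >> k) + "0"
        if k:
            bits += format(v & ((1 << k) - 1), "0{}b".format(k))
    return bits

def adaptive_rice_encode(samples, block=16, k_max=8):
    """Predictive preprocessing (previous-sample predictor) followed by
    per-block selection of the Rice parameter k giving the shortest
    code, which keeps efficiency near optimum as statistics change."""
    mapped, prev = [], 0
    for s in samples:
        mapped.append(map_residual(s - prev))
        prev = s
    out = []
    for i in range(0, len(mapped), block):
        blk = mapped[i:i + block]
        best_k = min(range(k_max + 1),
                     key=lambda k: len(rice_encode_block(blk, k)))
        out.append((best_k, rice_encode_block(blk, best_k)))
    return out
```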
Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong
2018-01-01
Stimulated emission depletion (STED) microscopy is one of the far-field optical microscopy techniques that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the beam profile of the depletion beam may be distorted by aberrations of the optical system and the inhomogeneity of specimens' optical properties, resulting in compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect, no matter how much depletion power is applied to the specimen. Previously, several adaptive optics approaches have been explored to compensate for aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured by using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2017-09-01
Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
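The LMS algorithm tested in the paper is a standard adaptive filter; the sketch below shows its generic noise-cancelling form on synthetic data, assuming a reference input correlated with the noise. It is not the authors' GOCE processing chain, and the tap count and step size are placeholder values.

```python
# Illustrative LMS adaptive noise-cancelling filter (generic textbook form,
# not the GOCE gradient processing of the paper). A reference input correlated
# with the noise is filtered so that the error output tracks the clean signal.
import numpy as np

def lms_filter(desired, reference, n_taps=32, mu=0.01):
    """Return the filter output and the error (cleaned) signal."""
    w = np.zeros(n_taps)                  # adaptive weights
    y = np.zeros_like(desired)
    e = np.zeros_like(desired)
    for n in range(n_taps, len(desired)):
        x = reference[n - n_taps:n][::-1] # most recent samples first
        y[n] = w @ x                      # filter output
        e[n] = desired[n] - y[n]          # estimation error
        w += 2 * mu * e[n] * x            # LMS weight update
    return y, e

# Example: sinusoid buried in correlated noise
rng = np.random.default_rng(0)
t = np.arange(4000)
noise = rng.standard_normal(4000)
signal = np.sin(2 * np.pi * 0.01 * t)
desired = signal + np.convolve(noise, [0.5, 0.3, 0.2], mode="same")
y, cleaned = lms_filter(desired, noise)
```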
NASA Astrophysics Data System (ADS)
Liu, Jian; Xu, Rui
2018-04-01
Chaotic synchronisation has attracted extensive attention due to its potential application in secure communication. This paper is concerned with the problem of adaptive synchronisation for two different kinds of memristor-based neural networks with time delays in leakage terms. By applying set-valued maps and differential inclusions theories, synchronisation criteria are obtained via the linear matrix inequality technique, which guarantee that the drive system is synchronised with the response system under adaptive control laws. Finally, a numerical example is given to illustrate the feasibility of our theoretical results, and two schemes for secure communication are introduced based on the chaotic masking method.
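The sketch below illustrates the general idea of adaptive synchronisation with a drive/response pair of classical Lorenz systems and a feedback gain updated by a simple adaptation law. It is only a generic illustration; the delayed memristor-based networks and LMI-based criteria of the paper are not reproduced, and the gain and step size are arbitrary.

```python
# Minimal sketch of adaptive synchronisation for a drive/response pair, using
# classical Lorenz dynamics instead of the delayed memristive networks of the
# paper. The feedback gain k grows with the synchronisation error until the
# response system locks onto the drive system.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, gamma = 1e-3, 5.0
drive = np.array([1.0, 1.0, 1.0])
resp = np.array([5.0, -5.0, 10.0])
k = 0.0                                       # adaptive coupling gain

for _ in range(200_000):
    e = resp - drive                          # synchronisation error
    drive = drive + dt * lorenz(drive)
    resp = resp + dt * (lorenz(resp) - k * e) # adaptive feedback control
    k = k + dt * gamma * float(e @ e)         # adaptation law: k_dot = gamma * ||e||^2

print("final error norm:", np.linalg.norm(resp - drive), "gain:", k)
```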
NASA Astrophysics Data System (ADS)
Ripamonti, Francesco; Resta, Ferruccio; Borroni, Massimo; Cazzulani, Gabriele
2014-04-01
A new method for the real-time identification of mechanical system modal parameters is used in order to design different adaptive control logics aiming to reduce the vibrations in a carbon fiber plate smart structure, which is instrumented with three piezoelectric actuators, three accelerometers and three strain gauges. The real-time identification is based on a recursive subspace tracking algorithm whose outputs are elaborated by an ARMA model. A statistical approach is finally applied to choose the correct modal parameter values. These are given as input to model-based control logics such as gain scheduling and adaptive LQR control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunsman, David Marvin; Aldemir, Tunc; Rutt, Benjamin
2008-05-01
This LDRD project has produced a tool that makes probabilistic risk assessments (PRAs) of nuclear reactors - analyses which are very resource intensive - more efficient. PRAs of nuclear reactors are being increasingly relied on by the United States Nuclear Regulatory Commission (U.S.N.R.C.) for licensing decisions for current and advanced reactors. Yet, PRAs are produced much as they were 20 years ago. The work here applied a modern systems analysis technique to the accident progression analysis portion of the PRA; the technique was a system-independent multi-task computer driver routine. Initially, the objective of the work was to fuse the accident progression event tree (APET) portion of a PRA to the dynamic system doctor (DSD) created by Ohio State University. Instead, during the initial efforts, it was found that the DSD could be linked directly to a detailed accident progression phenomenological simulation code - the type on which APET construction and analysis relies, albeit indirectly - and thereby directly create and analyze the APET. The expanded DSD computational architecture and infrastructure that was created during this effort is called ADAPT (Analysis of Dynamic Accident Progression Trees). ADAPT is a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. A simulator abstraction layer was developed, and a generic driver was implemented for executing simulators on a distributed environment. As a demonstration of the use of the methodological tool, ADAPT was applied to quantify the likelihood of competing accident progression pathways occurring for a particular accident scenario in a particular reactor type using MELCOR, an integrated severe accident analysis code developed at Sandia. (ADAPT was intentionally created with flexibility, however, and is not limited to interacting with only one code. With minor coding changes to input files, ADAPT can be linked to other such codes.) The results of this demonstration indicate that the approach can significantly reduce the resources required for Level 2 PRAs. From the phenomenological viewpoint, ADAPT can also treat the associated epistemic and aleatory uncertainties. This methodology can also be used for analyses of other complex systems. Any complex system can be analyzed using ADAPT if the workings of that system can be displayed as an event tree, there is a computer code that simulates how those events could progress, and that simulator code has switches to turn on and off system events, phenomena, etc. Using and applying ADAPT to particular problems is not human independent. While the human resources for the creation and analysis of the accident progression are significantly decreased, knowledgeable analysts are still necessary for a given project to apply ADAPT successfully. This research and development effort has met its original goals and then exceeded them.
Deriving amplitude equations for weakly-nonlinear oscillators and their generalizations
NASA Astrophysics Data System (ADS)
O'Malley, Robert E., Jr.; Williams, David B.
2006-06-01
Results by physicists on renormalization group techniques have recently sparked interest in the singular perturbations community of applied mathematicians. The survey paper, [Phys. Rev. E 54(1) (1996) 376-394], by Chen et al. demonstrated that many problems which applied mathematicians solve using disparate methods can be solved using a single approach. Analysis of that renormalization group method by Mudavanhu and O'Malley [Stud. Appl. Math. 107(1) (2001) 63-79; SIAM J. Appl. Math. 63(2) (2002) 373-397], among others, indicates that the technique can be streamlined. This paper carries that analysis several steps further to present an amplitude equation technique which is both well adapted for use with a computer algebra system and easy to relate to the classical methods of averaging and multiple scales.
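As an example of the kind of result such an amplitude-equation technique produces, the classical weakly nonlinear van der Pol oscillator is shown below together with its averaged/renormalization-group amplitude equation; this is a standard textbook result, not one taken from the paper.

```latex
% Classical example (textbook result, not reproduced from the paper): the
% weakly nonlinear van der Pol oscillator and the amplitude equation obtained
% by averaging / renormalization group analysis.
\[
  \ddot{y} + y = \epsilon\,(1 - y^{2})\,\dot{y}, \qquad 0 < \epsilon \ll 1 .
\]
Writing $y(t) \approx A(\tau)\,e^{it} + \bar{A}(\tau)\,e^{-it}$ with the slow
time $\tau = \epsilon t$, elimination of secular terms yields the amplitude
equation
\[
  \frac{dA}{d\tau} = \frac{A}{2}\left(1 - |A|^{2}\right),
\]
so $|A| \to 1$ and the oscillator settles on a limit cycle of amplitude
$\max_t y \approx 2$.
```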
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-13
... individuals with disabilities; selecting, designing, fitting, customizing, adapting, applying, maintaining... specially designed services for an infant or toddler with a disability and the family of such infant or... impartial mediator who is trained in effective mediation techniques. Medical services. Those evaluative...
ERIC Educational Resources Information Center
Hensen, Cory; Clare, Tami Lasseter; Barbera, Jack
2018-01-01
Fluorescence spectroscopy experiments are frequently taught as part of upper-division teaching laboratories. To expose undergraduate students to an applied fluorescence technique, a corrosion detection method, using quenching, was adapted from authentic research for an instrumental analysis laboratory. In the experiment, students acquire…
Lessons from historical rangeland revegetation for today's restoration
Bruce A. Roundy
1999-01-01
Rangeland revegetation in the Western United States historically was applied at a large scale for soil conservation and forage production purposes. Principles of revegetation that have developed over years of research include matching site potential and plant materials adaption, use of appropriate seedbed preparation and sowing techniques, and development of large...
Xu, Zihao; Yang, Chengliang; Zhang, Peiguang; Zhang, Xingyun; Cao, Zhaoliang; Mu, Quanquan; Sun, Qiang; Xuan, Li
2017-08-30
To date, more than eight large-aperture telescopes (larger than eight meters) worldwide are equipped with adaptive optics systems. Due to limitations such as the difficulty of increasing the actuator count of deformable mirrors, most of them work in the infrared waveband. A novel two-step high-resolution optical imaging approach is proposed by applying the phase diversity (PD) technique to the open-loop liquid crystal adaptive optics system (LC AOS) for visible-light high-resolution adaptive imaging. Because the traditional PD approach is not suitable for the LC AOS, a novel PD strategy is proposed that reduces the wavefront estimation error caused by the non-modulated light generated by the liquid crystal spatial light modulator (LC SLM) and makes the residual distortions after open-loop correction smaller. Moreover, the LC SLM can introduce any aberration, which allows free selection of the phase diversity. The estimation errors are greatly reduced in both simulations and experiments. The resolution of the reconstructed image is greatly improved in terms of both subjective visual quality and the highest discernible spatial resolution. This technique can be widely used in large-aperture telescopes for astronomical observations of targets such as terrestrial planets and quasars, and can also be used in other applications related to wavefront correction.
NASA Astrophysics Data System (ADS)
Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui
2012-04-01
Automatic speech processing systems are widely used in everyday life, for example in mobile communication, speech and speaker recognition, and assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain an intelligible speech signal that is also more pleasant to listen to, noise reduction is essential. In this paper, a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech from additive non-stationary noises that occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement, which may lead to a new path in speech processing. In the proposed technique, at first, discrete BWT is applied to noisy speech to derive TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than the existing algorithms at lower input SNR due to modified soft level-dependent thresholding on time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated for a speaker recognition task under a noisy environment. The recognition results show that the TADBWT technique yields better performance when compared to alternate methods, specifically at lower input SNR.
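To illustrate the underlying wavelet-thresholding idea, the sketch below denoises a synthetic signal with the ordinary discrete wavelet transform and level-dependent soft thresholding (PyWavelets). The bionic wavelet transform and the time-adaptive scaling factor of the TADBWT method are not reproduced; the wavelet, decomposition level and threshold rule are generic choices.

```python
# Minimal wavelet-thresholding denoiser for a speech-like signal, sketched with
# the ordinary discrete wavelet transform from PyWavelets. Only level-dependent
# soft thresholding of detail coefficients is illustrated here.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db8", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    denoised = [coeffs[0]]                              # keep approximation coefficients
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745      # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(x)))       # universal threshold
        denoised.append(pywt.threshold(detail, thr, mode="soft"))
    return pywt.waverec(denoised, wavelet)[: len(x)]

# Example: noisy synthetic "speech" (amplitude-modulated tone)
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
noisy = clean + 0.4 * np.random.randn(len(t))
enhanced = wavelet_denoise(noisy)
```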
Control allocation-based adaptive control for greenhouse climate
NASA Astrophysics Data System (ADS)
Su, Yuanping; Xu, Lihong; Goodman, Erik D.
2018-04-01
This paper presents an adaptive approach to greenhouse climate control, as part of an integrated control and management system for greenhouse production. In this approach, an adaptive control algorithm is first derived to guarantee the asymptotic convergence of the closed-loop system with uncertainty; using that algorithm, a controller is designed to satisfy the demands for heat and mass fluxes that maintain inside temperature, humidity and CO2 concentration at their desired values. Second, instead of applying the original adaptive control inputs directly, a control allocation technique is applied to distribute the demanded heat and mass fluxes to the actuators by minimising tracking errors and energy consumption. To find an energy-saving solution, both single-objective optimisation (SOO) and multiobjective optimisation (MOO) in the control allocation structure are considered. The advantage of the proposed approach is that it does not require any a priori knowledge of the uncertainty bounds, and the simulation results illustrate the effectiveness of the proposed control scheme. The results also indicate that MOO saves more energy in the control process.
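A minimal way to picture the control allocation step is as a bounded least-squares problem that trades off flux-tracking error against actuator effort. The sketch below assumes a hypothetical actuator effectiveness matrix, demand vector and saturation limits; it is not the greenhouse model or the SOO/MOO formulation of the paper.

```python
# Minimal sketch of control allocation as bounded least squares: distribute the
# demanded fluxes v among redundant actuators u so that ||B u - v||^2 + lam*||u||^2
# is minimised within actuator limits. B, v and the bounds are hypothetical numbers.
import numpy as np
from scipy.optimize import lsq_linear

B = np.array([[1.0, 0.8, 0.0, 0.2],     # heat flux produced per unit actuation
              [0.0, 0.3, 1.0, 0.1],     # water-vapour flux
              [0.1, 0.0, 0.2, 1.0]])    # CO2 flux
v = np.array([120.0, 35.0, 10.0])       # fluxes demanded by the adaptive law
lam = 0.05                              # energy-use weighting

A = np.vstack([B, np.sqrt(lam) * np.eye(B.shape[1])])
b = np.concatenate([v, np.zeros(B.shape[1])])
res = lsq_linear(A, b, bounds=(0.0, 100.0))   # actuators saturate at 0..100
print("actuator commands:", res.x, "residual flux error:", B @ res.x - v)
```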
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation
NASA Astrophysics Data System (ADS)
Sleesongsom, S.; Bureerat, S.
2018-03-01
This paper proposes an extension of a path generation concept from our previous work by adding a new constraint handling technique. The proposed technique was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with a new constraint handling technique, which is a kind of penalty technique. In the comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning-based optimization (SAP-TLBO) and the original TLBO. In this study, two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original one.
Chaffin, Brian C; Shuster, William D; Garmestani, Ahjond S; Furio, Brooke; Albro, Sandra L; Gardiner, Mary; Spring, MaLisa; Green, Olivia Odom
2016-12-01
Green infrastructure installations such as rain gardens and bioswales are increasingly regarded as viable tools to mitigate stormwater runoff at the parcel level. The use of adaptive management to implement and monitor green infrastructure projects as experimental attempts to manage stormwater has not been adequately explored as a way to optimize green infrastructure performance or increase social and political acceptance. Efforts to improve stormwater management through green infrastructure suffer from the complexity of overlapping jurisdictional boundaries, as well as interacting social and political forces that dictate the flow, consumption, conservation and disposal of urban wastewater flows. Within this urban milieu, adaptive management-rigorous experimentation applied as policy-can inform new wastewater management techniques such as the implementation of green infrastructure projects. In this article, we present a narrative of scientists and practitioners working together to apply an adaptive management approach to green infrastructure implementation for stormwater management in Cleveland, Ohio. In Cleveland, contextual legal requirements and environmental factors created an opportunity for government researchers, stormwater managers and community organizers to engage in the development of two distinct sets of rain gardens, each borne of unique social, economic and environmental processes. In this article we analyze social and political barriers to applying adaptive management as a framework for implementing green infrastructure experiments as policy. We conclude with a series of lessons learned and a reflection on the prospects for adaptive management to facilitate green infrastructure implementation for improved stormwater management. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pasma, J. H.; Schouten, A. C.; Aarts, R. G. K. M.; Meskers, C. G. M.; Maier, A. B.; van der Kooij, H.
2015-01-01
Standing balance requires multijoint coordination between the ankles and hips. We investigated how humans adapt their multijoint coordination to adjust to various conditions and whether the adaptation differed between healthy young participants and healthy elderly. Balance was disturbed by push/pull rods, applying two continuous and independent force disturbances at the level of the hip and between the shoulder blades. In addition, external force fields were applied, represented by an external stiffness at the hip, either stabilizing or destabilizing the participants' balance. Multivariate closed-loop system-identification techniques were used to describe the neuromuscular control mechanisms by quantifying the corrective joint torques as a response to body sway, represented by frequency response functions (FRFs). Model fits on the FRFs resulted in an estimation of time delays, intrinsic stiffness, reflexive stiffness, and reflexive damping of both the ankle and hip joint. The elderly generated similar corrective joint torques but had reduced body sway compared with the young participants, corresponding to the increased FRF magnitude with age. When a stabilizing or destabilizing external force field was applied at the hip, both young and elderly participants adapted their multijoint coordination by lowering or respectively increasing their neuromuscular control actions around the ankles, expressed in a change of FRF magnitude. However, the elderly adapted less compared with the young participants. Model fits on the FRFs showed that elderly had higher intrinsic and reflexive stiffness of the ankle, together with higher time delays of the hip. Furthermore, the elderly adapted their reflexive stiffness around the ankle joint less compared with young participants. These results imply that elderly were stiffer and were less able to adapt to external force fields. PMID:26719084
Turbulent Output-Based Anisotropic Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Carlson, Jan-Renee
2010-01-01
Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to high O(10^7) Reynolds number turbulent flows, a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under-resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.
Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems
NASA Astrophysics Data System (ADS)
Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B.
2018-05-01
Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimize the performance of such systems. Previous studies already proved that the Gemini Multi-Conjugated AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provided excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can be considered, therefore, as an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic measurements of the wind with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to any present or new-generation facility supported by WFAO systems. The interest of this study is, therefore, well beyond the optimization of GeMS performance.
Azarnivand, Ali; Chitsaz, Nastaran
2015-02-01
Most of the arid and semi-arid regions are located in the developing countries, while the availability of water in adequate quantity and quality is an essential condition to approach sustainable development. In this research, "enhanced Driving force-Pressure-State-Impact-Response (eDPSIR)" sustainability framework was applied to deal with water shortage in Yazd, an arid province of Iran. Then, the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique was integrated into the driven components of eDPSIR, to quantify the inter-linkages among fundamental anthropogenic indicators (i.e. causes and effects). The paper's structure included: (1) identifying the indicators of DPSIR along with structuring eDPSIR causal networks, (2) using the DEMATEL technique to evaluate the inter-relationships among the causes and effects along with determining the key indicators, (3) decomposing the problem into a system of hierarchies, (4) employing the analytic hierarchy process (AHP) technique to evaluate the weight of each criterion, and (5) applying complex proportional assessment with Grey interval numbers (COPRAS-G) method to obtain the most conclusive adaptive policy response. The systematic quantitative analysis of causes and effects revealed that the root sources of water shortage in the study area were the weak enforcement of law and regulations, decline of available freshwater resources for development, and desertification consequences. According to the results, mitigating the water shortage in Yazd could be feasible by implementation of such key adaptive policy-responses as providing effective law enforcement, updating the standards and regulations, providing social learning, and boosting stakeholders' collaboration.
Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT
ERIC Educational Resources Information Center
Cheng, Ying; Chang, Hua-Hua; Yi, Qing
2007-01-01
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
The Amateur Zoologist: Explorations and Investigations. Amateur Science Series.
ERIC Educational Resources Information Center
Dykstra, Mary
This book contains over 30 investigations and activities that can be used or adapted for science fair projects. It outlines basic techniques and procedures that can be applied to zoological investigations. Projects and activities described include: finding out how many different kinds of insects and other arthropods live in nearby fields and…
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
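The anisotropic diffusion process on which adaptive smoothing is based can be sketched in a few lines for a single band (Perona-Malik form): uniform regions are smoothed while strong edges are preserved. The multi-band, edge-enhancing modifications described in the paper are not reproduced here, and the iteration count, conduction constant and time step are arbitrary choices.

```python
# Single-band sketch of the anisotropic diffusion underlying adaptive smoothing
# (Perona-Malik style): conduction is large in flat regions and small across
# strong edges, so edges are preserved while uniform regions are smoothed.
import numpy as np

def adaptive_smooth(img, n_iter=20, kappa=15.0, dt=0.15):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (north, south, east, west)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # conduction coefficients: ~1 in flat regions, ~0 across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```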
Multichannel-Hadamard calibration of high-order adaptive optics systems.
Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai
2014-06-02
We present a novel technique of calibrating the interaction matrix for high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are firstly divided into a series of channels according to their coupling relationship, and then the voltage-oriented Hadamard method is applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared experimentally with the voltage-oriented Hadamard-only method and the multichannel-only method. Results show that the multichannel-Hadamard method can produce significant improvement in interaction matrix measurement.
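A minimal sketch of the Hadamard part of the calibration is shown below: the deformable mirror is driven with columns of a Hadamard matrix rather than single-actuator pokes, and the per-actuator interaction matrix is recovered by multiplying the measured responses by the (scaled) transpose. The channel grouping of the multichannel-Hadamard method is not reproduced, and the wavefront-sensor readout is simulated by a hypothetical linear response.

```python
# Minimal sketch of Hadamard-pattern interaction-matrix calibration. The DM is
# driven with columns of a Hadamard matrix H (all actuators active at once,
# improving SNR); the per-actuator interaction matrix is recovered as R H^T / n,
# since H H^T = n I. measure_slopes() is a stand-in for the real WFS readout.
import numpy as np
from scipy.linalg import hadamard

n_act, n_slopes, amp = 64, 128, 0.1        # n_act must be a power of two here
rng = np.random.default_rng(0)
true_IM = rng.standard_normal((n_slopes, n_act))   # hypothetical "true" system

def measure_slopes(voltages):
    """Stand-in for the wavefront-sensor readout: linear response plus noise."""
    return true_IM @ voltages + 0.01 * rng.standard_normal(n_slopes)

H = hadamard(n_act).astype(float)
responses = []
for j in range(n_act):
    plus = measure_slopes(+amp * H[:, j])
    minus = measure_slopes(-amp * H[:, j])
    responses.append((plus - minus) / (2 * amp))    # push-pull removes static offsets
R = np.column_stack(responses)                      # one column per Hadamard pattern
estimated_IM = R @ H.T / n_act                      # back to per-actuator pokes
print("reconstruction error:", np.abs(estimated_IM - true_IM).max())
```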
SWAT system performance predictions
NASA Astrophysics Data System (ADS)
Parenti, Ronald R.; Sasiela, Richard J.
1993-03-01
In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools that have been applied in this study include a numerical approach in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical analysis of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario.
Soft computing methods for geoidal height transformation
NASA Astrophysics Data System (ADS)
Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.
2009-07-01
Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
NASA Astrophysics Data System (ADS)
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
Xia, Kewei; Huo, Wei
2016-05-01
This paper presents a robust adaptive neural networks control strategy for spacecraft rendezvous and docking with the coupled position and attitude dynamics under input saturation. The backstepping technique is applied to design a relative attitude controller and a relative position controller, respectively. The dynamics uncertainties are approximated by radial basis function neural networks (RBFNNs). A novel switching controller is constructed from an adaptive neural network controller that dominates in its active region, combined with an extra robust controller that prevents invalidation of the RBFNNs from destroying the stability of the system outside the neural active region. An auxiliary signal is introduced to compensate the input saturation with an anti-windup technique, and a command filter is employed to approximate the derivative of the virtual control in the backstepping procedure. Global uniform ultimate boundedness of the relative states is proved via Lyapunov theory. A simulation example demonstrates the effectiveness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony
Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches that have been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can be applied to resolve this problem; however, applying optimization effectively is itself a common problem. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive ABC algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy. PMID:25165748
NASA Astrophysics Data System (ADS)
Sun, Huafei; Darmofal, David L.
2014-12-01
In this paper we propose a new high-order solution framework for interface problems on non-interface-conforming meshes. The framework consists of a discontinuous Galerkin (DG) discretization, a simplex cut-cell technique, and an output-based adaptive scheme. We first present a DG discretization with a dual-consistent output evaluation for elliptic interface problems on interface-conforming meshes, and then extend the method to handle multi-physics interface problems, in particular conjugate heat transfer (CHT) problems. The method is then applied to non-interface-conforming meshes using a cut-cell technique, where the interface definition is completely separate from the mesh generation process. No assumption is made on the interface shape (other than Lipschitz continuity). We then equip our strategy with an output-based adaptive scheme for an accurate output prediction. Through numerical examples, we demonstrate high-order convergence for elliptic interface problems and CHT problems with both smooth and non-smooth interface shapes.
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Putt, C. W.; Giamati, C. C.
1981-01-01
Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.
NASA Technical Reports Server (NTRS)
1974-01-01
A handbook that explains the basic Delphi methodology and discusses modified Delphi techniques is presented. The selection of communications experts to participate in a study, the construction of questionnaires on potential communications developments, and requisite technology is treated. No two modified Delphi studies were the same, which reflects the flexibility and adaptability of the technique. Each study must be specifically tailored to a particular case, and consists of seeking a consensus of opinion among experts about a particular subject and attendant conditions that may prevail in the future.
An image filtering technique for SPIDER visible tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonnesu, N., E-mail: nicola.fonnesu@igi.cnr.it; Agostini, M.; Brombin, M.
2014-02-15
The tomographic diagnostic developed for the beam generated in the SPIDER facility (100 keV, 50 A prototype negative ion source of ITER neutral beam injector) will characterize the two-dimensional particle density distribution of the beam. The simulations described in the paper show that instrumental noise has a large influence on the maximum achievable resolution of the diagnostic. To reduce its impact on beam pattern reconstruction, a filtering technique has been adapted and implemented in the tomography code. This technique is applied to the simulated tomographic reconstruction of the SPIDER beam, and the main results are reported.
Regression techniques for oceanographic parameter retrieval using space-borne microwave radiometry
NASA Technical Reports Server (NTRS)
Hofer, R.; Njoku, E. G.
1981-01-01
Variations of conventional multiple regression techniques are applied to the problem of remote sensing of oceanographic parameters from space. The techniques are specifically adapted to the scanning multichannel microwave radiometer (SMMR) launched on the Seasat and Nimbus 7 satellites to determine ocean surface temperature, wind speed, and atmospheric water content. The retrievals are studied primarily from a theoretical viewpoint, to illustrate the retrieval error structure, the relative importance of different radiometer channels, and the tradeoffs between spatial resolution and retrieval accuracy. Comparisons between regressions using simulated and actual SMMR data are discussed; they show similar behavior.
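The retrieval idea can be sketched as ordinary multiple linear regression from multichannel brightness temperatures to a geophysical parameter. The example below uses synthetic placeholder data, not SMMR measurements, and a single parameter (sea-surface temperature) for brevity.

```python
# Minimal sketch of a multiple-regression retrieval: fit linear coefficients that
# map multichannel brightness temperatures to a geophysical parameter on a
# training set, then apply them to a new observation. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_chan = 500, 5
TB = 150 + 100 * rng.random((n_train, n_chan))          # brightness temperatures [K]
true_coeffs = np.array([0.10, -0.05, 0.20, 0.02, -0.08])
sst = 10.0 + TB @ true_coeffs + 0.3 * rng.standard_normal(n_train)

X = np.column_stack([np.ones(n_train), TB])             # intercept + channels
coeffs, *_ = np.linalg.lstsq(X, sst, rcond=None)        # regression coefficients

new_obs = np.array([1.0, 180.0, 210.0, 195.0, 160.0, 175.0])
print("retrieved parameter:", new_obs @ coeffs)
```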
NASA Astrophysics Data System (ADS)
Song, Qi; Song, Y. D.; Cai, Wenchuan
2011-09-01
Although backstepping control design approach has been widely utilised in many practical systems, little effort has been made in applying this useful method to train systems. The main purpose of this paper is to apply this popular control design technique to speed and position tracking control of high-speed trains. By integrating adaptive control with backstepping control, we develop a control scheme that is able to address not only the traction and braking dynamics ignored in most existing methods, but also the uncertain friction and aerodynamic drag forces arisen from uncertain resistance coefficients. As such, the resultant control algorithms are able to achieve high precision train position and speed tracking under varying operation railway conditions, as validated by theoretical analysis and numerical simulations.
NASA Technical Reports Server (NTRS)
Balas, Mark; Frost, Susan
2012-01-01
Flexible structures containing a large number of modes can benefit from adaptive control techniques which are well suited to applications that have unknown modeling parameters and poorly known operating conditions. In this paper, we focus on a direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend our adaptive control theory to accommodate troublesome modal subsystems of a plant that might inhibit the adaptive controller. In some cases the plant does not satisfy the requirements of Almost Strict Positive Realness. Instead, there may be a modal subsystem that inhibits this property. This section will present new results for our adaptive control theory. We will modify the adaptive controller with a Residual Mode Filter (RMF) to compensate for the troublesome modal subsystem, or the Q modes. Here we present the theory for adaptive controllers modified by RMFs, with attention to the issue of disturbances propagating through the Q modes. We apply the theoretical results to a flexible structure example to illustrate the behavior with and without the residual mode filter.
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS), and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
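A minimal scalar version of the adaptive (Kalman-filter) correction idea is sketched below: the forecast bias is treated as a slowly varying hidden state that is updated whenever a new observation arrives. The process and observation noise variances are hypothetical tuning constants, and the ensemble-aware regression of the proposed method is not reproduced.

```python
# Minimal scalar sketch of adaptive (Kalman-filter) forecast correction: the
# forecast bias is a slowly varying hidden state, updated with each new
# observation and subtracted from the next raw forecast.
import numpy as np

def kalman_bias_correct(forecasts, observations, q=0.05, r=1.0):
    bias, p = 0.0, 1.0                      # bias estimate and its variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)          # apply the current bias estimate
        p += q                              # predict step: bias drifts slowly
        innovation = (f - o) - bias         # observed bias minus estimate
        k = p / (p + r)                     # Kalman gain
        bias += k * innovation              # update step
        p *= (1 - k)
    return np.array(corrected)

# toy example: forecasts warm-biased by ~1.5 degrees
rng = np.random.default_rng(0)
truth = 15 + 5 * np.sin(np.linspace(0, 6, 200))
obs = truth + 0.3 * rng.standard_normal(200)
fcst = truth + 1.5 + 0.8 * rng.standard_normal(200)
print("raw MAE:", np.abs(fcst - obs).mean(),
      "corrected MAE:", np.abs(kalman_bias_correct(fcst, obs) - obs).mean())
```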
The adaptation of GDL motion recognition system to sport and rehabilitation techniques analysis.
Hachaj, Tomasz; Ogiela, Marek R
2016-06-01
The main novelty of this paper is the adaptation of the Gesture Description Language (GDL) methodology to sport and rehabilitation data analysis and classification. In this paper we show that the Lua language can be successfully used to adapt the GDL classifier to those tasks. The newly applied scripting language allows easy extension and integration of the classifier with other software technologies and applications. The obtained execution speed allows using the methodology in real-time motion capture data processing, where the capture frequency ranges from 100 Hz to even 500 Hz depending on the number of features or classes to be calculated and recognized. Due to this fact, the proposed methodology can be used with high-end motion capture systems. We anticipate that using this novel, efficient and effective method will greatly help both sport trainers and physiotherapists in their practice. The proposed approach can be directly applied to motion capture data kinematics analysis (evaluation of motion without regard to the forces that cause that motion). The ability to apply pattern recognition methods to GDL descriptions can be utilized in virtual reality environments and used for sport training or rehabilitation treatment.
Matías, J M; Taboada, J; Ordóñez, C; Nieto, P G
2007-08-17
This article describes a methodology to model the degree of remedial action required to make short stretches of a roadway suitable for dangerous goods transport (DGT), particularly pollutant substances, using different variables associated with the characteristics of each segment. Thirty-one factors determining the impact of an accident on a particular stretch of road were identified and subdivided into two major groups: accident probability factors and accident severity factors. Given the number of factors determining the state of a particular road segment, the only viable statistical methods for implementing the model were machine learning techniques, such as multilayer perceptron networks (MLPs), classification trees (CARTs) and support vector machines (SVMs). The results produced by these techniques on a test sample were more favourable than those produced by traditional discriminant analysis, irrespective of whether dimensionality reduction techniques were applied. The best results were obtained using SVMs specifically adapted to ordinal data. This technique takes advantage of the ordinal information contained in the data without penalising the computational load. Furthermore, the technique permits the estimation of the utility function that is latent in expert knowledge.
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on feedback formulation and has become one of the most preferred multiple suppression methods used. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may result in conventional adaptive multiple subtraction methods being barely able to effectively suppress multiples in actual production. This paper introduces a combined adaptive multiple attenuation method based on the optimized event tracing technique and extended Wiener filtering. The method firstly uses multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time window FK filtering method. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can then be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is an ideal method for suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection
NASA Astrophysics Data System (ADS)
Li, Gang; McDonald, Geoff L.; Zhao, Qing
2017-01-01
This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency and time domain information of vibration signals. Such information is incorporated in a time varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signal. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has low computation burden and does not need a priori knowledge of the machine under the no-fault condition which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data. Meanwhile, the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) method to verify the feasibility and performance of the SSBAT method.
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
DOT National Transportation Integrated Search
1985-03-01
A report is offered on a study of the information activities within the Right-of-Way section of ADOT. The objectives of the study were to adapt and apply techniques to measure user-perceived needs, satisfaction and utility of services provided Right-...
Applied photo interpretation for airbrush cartography
NASA Technical Reports Server (NTRS)
Inge, J. L.; Bridges, P. M.
1976-01-01
New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls insure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.
Bortolotto, Tissiana; Melian, Karla; Krejci, Ivo
2013-10-01
The present study attempted to find a simple direct adhesive restorative technique for the restoration of Class 2 cavities. A self-etch adhesive system with a dual-cured core buildup composite resin (paste 1 + paste 2) was evaluated in its ability to restore proximo-occlusal cavities with margins located on enamel and dentin. The groups were: A, cavity filling (cf) with paste 1 (light-curing component) by using a layering technique; B, cf by mixing both pastes, bulk insertion, and dual curing; and C, cf by mixing both pastes, bulk insertion, and chemical curing. Two control groups (D, negative, bulk; and E, positive, layering technique) were included by restoring cavities with a classic three-step etch-and-rinse adhesive and a universal restorative composite resin. SEM margin analysis was performed before and after thermomechanical loading in a chewing simulator. Percentages (mean ± SD) of "continuous margins" were improved by applying the material in bulk and letting it self cure (54 ± 6) or dual cure (59 ± 9), and no significant differences were observed between these two groups and the positive control (44 ± 19). The present study showed that the dual-cured composite resin tested has the potential to be used as bulk filling material for Class 2 restorations. When used as filling materials, dual-cure composite resins placed in bulk can provide marginal adaptation similar to light-cured composites applied with a complex stratification technique.
Adaptive laboratory evolution -- principles and applications for biotechnology.
Dragosits, Martin; Mattanovich, Diethard
2013-07-01
Adaptive laboratory evolution is a frequent method in biological studies to gain insights into the basic mechanisms of molecular evolution and adaptive changes that accumulate in microbial populations during long term selection under specified growth conditions. Although it has been performed regularly for more than 25 years, the advent of transcript and cheap next-generation sequencing technologies has resulted in many recent studies that successfully applied this technique in order to engineer microbial cells for biotechnological applications. Adaptive laboratory evolution has some major benefits as compared with classical genetic engineering but also some inherent limitations. However, recent studies show how some of the limitations may be overcome in order to successfully incorporate adaptive laboratory evolution in microbial cell factory design. Over the last two decades important insights into nutrient and stress metabolism of relevant model species were acquired, whereas some other aspects such as niche-specific differences of non-conventional cell factories are not completely understood. Altogether, the current status and future perspectives highlight the importance and potential of adaptive laboratory evolution as an approach in biotechnological engineering.
Adaptive Wavelet Coding Applied in a Wireless Control System.
Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O
2017-12-13
Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by a wireless link. These results enable the use of this technique in a wireless link control loop.
Imbalanced Learning for Functional State Assessment
NASA Technical Reports Server (NTRS)
Li, Feng; McKenzie, Frederick; Li, Jiang; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom
2011-01-01
This paper presents results of several imbalanced learning techniques applied to operator functional state assessment where the data is highly imbalanced, i.e., some functional states (majority classes) have many more training samples than other states (minority classes). Conventional machine learning techniques usually tend to classify all data samples into majority classes and perform poorly for minority classes. In this study, we implemented five imbalanced learning techniques, including random under-sampling, random over-sampling, synthetic minority over-sampling technique (SMOTE), borderline-SMOTE and adaptive synthetic sampling (ADASYN) to solve this problem. Experimental results on a benchmark driving test dataset show that accuracies for minority classes could be improved dramatically with a cost of slight performance degradations for majority classes.
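As a concrete illustration of the resampling strategies named above, the following sketch compares them on a synthetic three-class problem using the scikit-learn and imbalanced-learn libraries. The dataset, class weights, and classifier are placeholder assumptions, not the driving-test data or models from the study.

```python
# Hedged sketch: comparing resampling strategies on an imbalanced dataset.
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE, ADASYN, RandomOverSampler, BorderlineSMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic, highly imbalanced three-class problem (85% / 10% / 5%)
X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.85, 0.10, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {
    "none": None,
    "random under-sampling": RandomUnderSampler(random_state=0),
    "random over-sampling": RandomOverSampler(random_state=0),
    "SMOTE": SMOTE(random_state=0),
    "borderline-SMOTE": BorderlineSMOTE(random_state=0),
    "ADASYN": ADASYN(random_state=0),
}

for name, sampler in samplers.items():
    Xr, yr = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = RandomForestClassifier(random_state=0).fit(Xr, yr)
    print(name, "resampled class counts:", Counter(yr))
    print(classification_report(y_te, clf.predict(X_te), digits=3))
```

In a run of this kind, per-class recall for the minority classes typically rises after resampling at the cost of a modest drop for the majority class, mirroring the trade-off reported above.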
A comparison between different coronagraphic data reduction techniques
NASA Astrophysics Data System (ADS)
Carolo, E.; Vassallo, D.; Farinato, J.; Bergomi, M.; Bonavita, M.; Carlotti, A.; D'Orazi, V.; Greggio, D.; Magrin, D.; Mesa, D.; Pinna, E.; Puglisi, A.; Stangalini, M.; Verinaud, C.; Viotto, V.
2016-07-01
A robust post-processing technique is mandatory for analysing coronagraphic high contrast imaging data. Angular Differential Imaging (ADI) and Principal Component Analysis (PCA) are the most used approaches to suppress the quasi-static structure present in the Point Spread Function (PSF) for revealing planets at different separations from the host star. In this work, we present the comparison between ADI and PCA applied to the System of coronagraphy with High order Adaptive optics from R to K band (SHARK-NIR), which will be implemented at the Large Binocular Telescope (LBT). The comparison has been carried out by using as starting point the simulated wavefront residuals of the LBT Adaptive Optics (AO) system, in different observing conditions. Accurate tests for tuning the post-processing parameters to obtain the best performance from each technique were performed in various seeing conditions (0.4"-1") for star magnitudes ranging from 8 to 12, with particular care in finding the best compromise between quasi-static speckle subtraction and planet detection.
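The PCA branch of such a reduction can be sketched in a few lines: build a speckle basis from the frame cube itself, subtract each frame's projection onto that basis, then derotate by the parallactic angle and stack. This is a generic ADI/PCA outline under assumed inputs (frame cube, angle list, component count), not the SHARK-NIR pipeline or its tuned parameters.

```python
import numpy as np
from scipy.ndimage import rotate

def pca_adi(cube, parallactic_angles, n_components=5):
    """cube: (n_frames, ny, nx) coronagraphic frames; angles in degrees."""
    n, ny, nx = cube.shape
    data = cube.reshape(n, -1)
    data = data - data.mean(axis=0)            # remove the mean (stellar) frame
    # Principal components of the quasi-static speckle pattern via SVD
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    basis = vt[:n_components]                  # (k, ny*nx)
    model = (data @ basis.T) @ basis           # projection onto the speckle basis
    residual = (data - model).reshape(n, ny, nx)
    # Derotate each residual frame to a common sky orientation and combine;
    # the rotation sign convention depends on the instrument.
    derot = [rotate(residual[i], -parallactic_angles[i], reshape=False, order=1)
             for i in range(n)]
    return np.median(derot, axis=0)
```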
A tale of two rain gardens: Barriers and bridges to adaptive ...
Green infrastructure installations such as rain gardens and bioswales are increasingly regarded as viable tools to mitigate stormwater runoff at the parcel level. The use of adaptive management to implement and monitor green infrastructure projects as experimental attempts to manage stormwater has not been adequately explored as a way to optimize green infrastructure performance or increase social and political acceptance. Efforts to improve stormwater management through green infrastructure suffer from the complexity of overlapping jurisdictional boundaries, as well as interacting social and political forces that dictate the flow, consumption, conservation and disposal of urban wastewater flows. Within this urban milieu, adaptive management—rigorous experimentation applied as policy—can inform new wastewater management techniques such as the implementation of green infrastructure projects. In this article, we present a narrative of scientists and practitioners working together to apply an adaptive management approach to green infrastructure implementation for stormwater management in Cleveland, Ohio. In Cleveland, contextual legal requirements and environmental factors created an opportunity for government researchers, stormwater managers and community organizers to engage in the development of two distinct sets of rain gardens, each borne of unique social, economic and environmental processes. In this article we analyze social and political barriers to app
Principles, Techniques, and Applications of Tissue Microfluidics
NASA Technical Reports Server (NTRS)
Wade, Lawrence A.; Kartalov, Emil P.; Shibata, Darryl; Taylor, Clive
2011-01-01
The principle of tissue microfluidics and its resultant techniques have been applied to cell analysis. Building microfluidics to suit a particular tissue sample would allow the rapid, reliable, inexpensive, highly parallelized, selective extraction of chosen regions of tissue for purposes of further biochemical analysis. Furthermore, the applicability of the techniques ranges beyond the described pathology application. For example, they would also allow the posing and successful answering of new sets of questions in many areas of fundamental research. The proposed integration of microfluidic techniques and tissue slice samples is called "tissue microfluidics" because it molds the microfluidic architectures in accordance with each particular structure of each specific tissue sample. Thus, microfluidics can be built around the tissues, following the tissue structure, or alternatively, the microfluidics can be adapted to the specific geometry of particular tissues. By contrast, the traditional approach is that microfluidic devices are structured in accordance with engineering considerations, while the biological components in applied devices are forced to comply with these engineering presets.
NASA Astrophysics Data System (ADS)
Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.
2018-02-01
This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with the dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying Barrier Lyapunov Function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online parameter updates for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec
2015-06-01
A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% saving of total simulation time in the Landau and two-stream instability test cases, respectively.
Aircraft operability methods applied to space launch vehicles
NASA Astrophysics Data System (ADS)
Young, Douglas
1997-01-01
The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques including contrast enhancement and illumination correction on retinal image quality, a comparative study was carried out. We studied and implemented a few illumination correction and contrast enhancement techniques on color retinal images to find out the best technique for optimum image enhancement. To compare and choose the best illumination correction technique we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficients of variation in the red component. The quotient and homomorphic filtering methods after the dividing method presented good results based on their low coefficients of variation. The contrast limited adaptive histogram equalization increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy. The contrast limited adaptive histogram equalization technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques including the dividing method using the median filter to estimate background, quotient based and homomorphic filtering were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique, such as CLAHE, to fundus images showed good potential for enhancing vasculature segmentation.
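A minimal sketch of the two steps the study found most effective, illumination correction by division with a median-filter background estimate followed by CLAHE, is shown below using OpenCV and SciPy; the file name, median-filter window, clip limit, and tile size are illustrative assumptions.

```python
import cv2
import numpy as np
from scipy.ndimage import median_filter

bgr = cv2.imread("fundus.png")                      # hypothetical input image
green = bgr[:, :, 1].astype(np.float32)             # green channel carries most vessel contrast

# Illumination correction: divide by a smooth background estimate ("dividing method")
background = median_filter(green, size=61) + 1e-6
corrected = green / background
corrected = cv2.normalize(corrected, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Contrast-limited adaptive histogram equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(corrected)
cv2.imwrite("fundus_enhanced.png", enhanced)
```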
On residual stresses and homeostasis: an elastic theory of functional adaptation in living matter.
Ciarletta, P; Destrade, M; Gower, A L
2016-04-26
Living matter can functionally adapt to external physical factors by developing internal tensions, easily revealed by cutting experiments. Nonetheless, residual stresses intrinsically have a complex spatial distribution, and destructive techniques cannot be used to identify a natural stress-free configuration. This work proposes a novel elastic theory of pre-stressed materials. Imposing physical compatibility and symmetry arguments, we define a new class of free energies explicitly depending on the internal stresses. This theory is finally applied to the study of arterial remodelling, proving its potential for the non-destructive determination of the residual tensions within biological materials.
Sandwich Structure Risk Reduction in Support of the Payload Adapter Fitting
NASA Technical Reports Server (NTRS)
Nettles, A. T.; Jackson, J. R.; Guin, W. E.
2018-01-01
Reducing risk for utilizing honeycomb sandwich structure for the Space Launch System payload adapter fitting includes determining what parameters need to be tested for damage tolerance to ensure a safe structure. Specimen size and boundary conditions are the most practical parameters to use in damage tolerance inspection. The effect of impact over core splices and of foreign object debris between the facesheet and core is assessed. The effect of enhancing damage tolerance by applying an outer layer of woven carbon fiber cloth is examined. A simple repair technique for barely visible impact damage that restores all compression strength is presented.
Optimal model-based sensorless adaptive optics for epifluorescence microscopy.
Pozzi, Paolo; Soloviev, Oleg; Wilding, Dean; Vdovin, Gleb; Verhaegen, Michel
2018-01-01
We report on a universal sample-independent sensorless adaptive optics method, based on modal optimization of the second moment of the fluorescence emission from a point-like excitation. Our method employs a sample-independent precalibration, performed only once for the particular system, to establish the direct relation between the image quality and the aberration. The method is potentially applicable to any form of microscopy with epifluorescence detection, including the practically important case of incoherent fluorescence emission from a three dimensional object, through minor hardware modifications. We have applied the technique successfully to a widefield epifluorescence microscope and to a multiaperture confocal microscope.
Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Sethian, James A.
2006-01-01
Borrowing from techniques developed for conservation law equations, we have developed both monotone and higher order accurate numerical schemes which discretize the Hamilton-Jacobi and level set equations on triangulated domains. The use of unstructured meshes containing triangles (2D) and tetrahedra (3D) easily accommodates mesh adaptation to resolve disparate level set feature scales with a minimal number of solution unknowns. The minisymposium talk will discuss these algorithmic developments and present sample calculations using our adaptive triangulation algorithm applied to various moving interface problems such as etching, deposition, and curvature flow.
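For orientation, the sketch below shows one explicit step of a monotone (Godunov upwind) scheme for the level set equation phi_t + F|grad phi| = 0 with non-negative speed F, written on a regular Cartesian grid with periodic boundaries for brevity; the paper's contribution concerns the analogous monotone and higher-order constructions on unstructured triangulated meshes, which are not reproduced here.

```python
import numpy as np

def advance_level_set(phi, speed, dx, dt):
    """One explicit step of phi_t + F*|grad phi| = 0, Godunov upwinding, F >= 0.
    phi: 2-D level set array; dt must satisfy a CFL condition for stability.
    np.roll gives periodic boundaries, which is only a convenience for the demo."""
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences in x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences in x
    dym = (phi - np.roll(phi, 1, axis=1)) / dx
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx
    grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2 +
                   np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
    return phi - dt * speed * grad
```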
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
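The locally adaptive Wiener filter underlying such approaches can be sketched for the single-channel case as a shrinkage toward the local mean, weighted by the estimated local signal variance. The window size and noise-variance estimate below are assumptions, and the channel-to-channel statistical modeling and super-resolution steps of the paper are not included.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(img, window=5, noise_var=None):
    """Pixel-adaptive Wiener filtering of a 2-D float image: pixels in smooth
    regions are pulled toward the local mean, detailed regions are preserved."""
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)       # local signal + noise variance
    if noise_var is None:
        noise_var = var.mean()                       # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)
```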
Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K
2008-01-01
Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.
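A minimal example of the baseline operation, adaptive ODE integration of a streamline through a velocity field, is given below with SciPy's RK45 integrator. The analytic field stands in for interpolated finite volume or discontinuous Galerkin data, whose inter-element discontinuities are exactly what the filtering strategy above is meant to mitigate.

```python
import numpy as np
from scipy.integrate import solve_ivp

def velocity(t, p):
    # Placeholder smooth 2-D velocity field (solid-body rotation); a simulation
    # would instead interpolate discontinuous per-element field data here.
    x, y = p
    return [-y, x]

# Adaptive RK45 integration of dx/dt = u(x); the solver's embedded error
# control implicitly assumes the field is smooth along the path.
sol = solve_ivp(velocity, (0.0, 10.0), [1.0, 0.0], rtol=1e-6, atol=1e-9,
                dense_output=True)
streamline = sol.sol(np.linspace(0.0, 10.0, 200)).T   # sampled streamline points
```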
NASA Astrophysics Data System (ADS)
Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.
2002-04-01
Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
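A toy illustration of content-adaptive embedding, one of the strategies discussed above, is to restrict LSB changes to pixels in textured (high local variance) regions, where steganalysis statistics are less sensitive. The threshold, window size, and index-sharing convention below are illustrative assumptions, not the schemes analyzed in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_lsb_embed(cover, bits, var_threshold=25.0, window=3):
    """Embed message bits in the LSBs of pixels whose local variance exceeds a
    threshold, i.e. only in textured regions where the change is less detectable."""
    img = cover.astype(np.float64)
    local_mean = uniform_filter(img, window)
    local_var = uniform_filter(img * img, window) - local_mean ** 2
    stego = cover.copy()
    flat = stego.ravel()                                   # view into stego
    candidates = np.flatnonzero(local_var.ravel() > var_threshold)
    usable = candidates[: len(bits)]
    payload = np.asarray(bits[: len(usable)], dtype=stego.dtype)
    flat[usable] = (flat[usable] & 0xFE) | payload         # overwrite the LSBs
    return stego, usable   # the receiver needs the same selection rule or the indices
```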
Experiments with recursive estimation in astronomical image processing
NASA Technical Reports Server (NTRS)
Busko, I.
1992-01-01
Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or with position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties, such as average signal intensity, signal-to-noise ratio and the autocorrelation function, to drive the adaptive processor. Software was developed under IRAF, and as such will be made available to interested users.
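As a one-dimensional illustration of the recursive estimation idea, the sketch below runs a scalar Kalman filter along a scan line with a random-walk signal model; the process and noise variances are assumed constants, whereas the adaptive processing described above would retune them from local statistics such as signal-to-noise or autocorrelation.

```python
import numpy as np

def recursive_denoise(samples, process_var=1e-3, noise_var=0.04):
    """Scalar Kalman (recursive) estimator run along one scan line.
    The state is the underlying intensity, modeled as a slow random walk."""
    x, p = float(samples[0]), 1.0
    out = np.empty(len(samples), dtype=float)
    for i, z in enumerate(samples):
        p = p + process_var                    # predict: variance grows with the walk
        k = p / (p + noise_var)                # Kalman gain
        x = x + k * (z - x)                    # update with the new pixel value
        p = (1.0 - k) * p
        out[i] = x
    return out
```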
Morgan, Jessica I. W.
2016-01-01
Purpose: Over the past 25 years, optical coherence tomography (OCT) and adaptive optics (AO) ophthalmoscopy have revolutionised our ability to non-invasively observe the living retina. The purpose of this review is to highlight the techniques and human clinical applications of recent advances in OCT and adaptive optics scanning laser/light ophthalmoscopy (AOSLO) ophthalmic imaging. Recent findings: Optical coherence tomography retinal and optic nerve head (ONH) imaging technology allows high resolution in the axial direction resulting in cross-sectional visualisation of retinal and ONH lamination. Complementary AO ophthalmoscopy gives high resolution in the transverse direction resulting in en face visualisation of retinal cell mosaics. Innovative detection schemes applied to OCT and AOSLO technologies (such as spectral domain OCT, OCT angiography, confocal and non-confocal AOSLO, fluorescence, and AO-OCT) have enabled high contrast between retinal and ONH structures in three dimensions and have allowed in vivo retinal imaging to approach that of histological quality. In addition, both OCT and AOSLO have shown the capability to detect retinal reflectance changes in response to visual stimuli, paving the way for future studies to investigate objective biomarkers of visual function at the cellular level. Increasingly, these imaging techniques are being applied to clinical studies of the normal and diseased visual system. Summary: Optical coherence tomography and AOSLO technologies are capable of elucidating the structure and function of the retina and ONH noninvasively with unprecedented resolution and contrast. The techniques have proven their worth in both basic science and clinical applications and each will continue to be utilised in future studies for many years to come. PMID:27112222
NASA Technical Reports Server (NTRS)
Choe, C. Y.; Tapley, B. D.
1975-01-01
A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
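For reference, a NumPy sketch of the scalar-measurement Potter-style square-root update that this work builds on is given below; the paper's contribution, propagating the covariance square root in lower triangular form for the discrete observation case, is not reproduced, and all variable names are placeholders.

```python
import numpy as np

def potter_update(x, s, h, z, r):
    """One scalar-measurement Potter square-root covariance update.
    x: state estimate (n,), s: covariance square root with P = s @ s.T,
    h: measurement row (n,), z: scalar observation, r: measurement variance."""
    f = s.T @ h                                # f = S^T h
    alpha = 1.0 / (f @ f + r)
    gamma = alpha / (1.0 + np.sqrt(alpha * r))
    k = alpha * (s @ f)                        # Kalman gain
    x_new = x + k * (z - h @ x)
    s_new = s - gamma * np.outer(s @ f, f)     # updated square root of P
    return x_new, s_new
```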
Three examples of applied remote sensing of vegetation
NASA Technical Reports Server (NTRS)
Rouse, J. W., Jr.; Benton, A. R., Jr.; Toler, R. W.; Haas, R. H.
1975-01-01
Case studies in which remote sensing techniques were adapted to assist in the solution of particular problem situations in Texas involving vegetation are described. In each case, the final sensing technique developed for operational use by the concerned organizations employed photographic sensors which were optimized through studies of the spectral reflectance characteristics of the vegetation species and background conditions unique to the problem being considered. The three examples described are: (1) Assisting Aquatic Plant Monitoring and Control; (2) Improving Vegetation Utilization in Urban Planning; and (3) Enforcing the Quarantine of Diseased Crops.
Adaptive Control of Non-Minimum Phase Modal Systems Using Residual Mode Filters. Parts 1 and 2
NASA Technical Reports Server (NTRS)
Balas, Mark J.; Frost, Susan
2011-01-01
Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will modify the adaptive controller with a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. This paper will be divided into two parts. Here in Part I we will review the basic adaptive control approach and introduce the primary ideas. In Part II, we will present the RMF methodology and complete the proofs of all our results. Also, we will apply the above theoretical results to a simple flexible structure example to illustrate the behavior with and without the residual mode filter.
Strip Yield Model Numerical Application to Different Geometries and Loading Conditions
NASA Technical Reports Server (NTRS)
Hatamleh, Omar; Forman, Royce; Shivakumar, Venkataraman; Lyons, Jed
2006-01-01
A new numerical method based on the strip-yield analysis approach was developed for calculating the Crack Tip Opening Displacement (CTOD). This approach can be applied for different crack configurations having infinite and finite geometries, and arbitrary applied loading conditions. The new technique adapts the boundary element / dislocation density method to obtain crack-face opening displacements at any point on a crack, and succeeds by obtaining requisite values as a series of definite integrals, the functional parts of each being evaluated exactly in a closed form.
A Chebyshev matrix method for spatial modes of the Orr-Sommerfeld equation
NASA Technical Reports Server (NTRS)
Danabasoglu, G.; Biringen, S.
1989-01-01
The Chebyshev matrix collocation method is applied to obtain the spatial modes of the Orr-Sommerfeld equation for Poiseuille flow and the Blasius boundary layer. The problem is linearized by the companion matrix technique for the semi-infinite domain using a mapping transformation. The method can be easily adapted to problems with different boundary conditions requiring different transformations.
NASA Astrophysics Data System (ADS)
Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi
2007-07-01
Intensity-modulated radiation therapy (IMRT) utilizes a nonuniform beam profile to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, the problem of intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of the volunteer's breathing can be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.
Extracting neuronal functional network dynamics via adaptive Granger causality analysis.
Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash
2018-04-24
Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
Adaptive neural network/expert system that learns fault diagnosis for different structures
NASA Astrophysics Data System (ADS)
Simon, Solomon H.
1992-08-01
Corporations need better real-time monitoring and control systems to improve productivity by watching quality and increasing production flexibility. The innovative technology to achieve this goal is evolving in the form of artificial intelligence and neural networks applied to sensor processing, fusion, and interpretation. By using these advanced AI techniques, we can leverage existing systems and add value to conventional techniques. Neural networks and knowledge-based expert systems can be combined into intelligent sensor systems which provide real-time monitoring, control, evaluation, and fault diagnosis for production systems. Neural network-based intelligent sensor systems are more reliable because they can provide continuous, non-destructive monitoring and inspection. Use of neural networks can result in sensor fusion and the ability to model highly non-linear systems. Improved models can provide a foundation for more accurate performance parameters and predictions. We discuss a research software/hardware prototype which integrates neural networks, expert systems, and sensor technologies and which can adapt across a variety of structures to perform fault diagnosis. The flexibility and adaptability of the prototype in learning two structures is presented. Potential applications are discussed.
A generic efficient adaptive grid scheme for rocket propulsion modeling
NASA Technical Reports Server (NTRS)
Mo, J. D.; Chow, Alan S.
1993-01-01
The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for the predictions of internal one-, two-dimensional and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of ALUNS computer code. The calculations of one-dimensional shock tube wave propagation and two-dimensional shock wave capture, wave-wave interactions, shock wave-boundary interactions show that the developed scheme is stable, accurate and extremely robust. The adaptive grid generator produced a very favorable grid network by a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will be presented in the conference, too. This research work is being supported by NASA/MSFC.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
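The validation step mentioned above relies on sequential probability ratio tests; as a generic illustration, the sketch below implements Wald's SPRT for detecting a shift in a Gaussian mean. The hypothesized means, variance, and error rates are placeholder parameters, not values from the patented method.

```python
import numpy as np

def sprt_gaussian(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for a Gaussian mean shift:
    accumulate the log-likelihood ratio and stop at the first boundary crossing."""
    upper = np.log((1.0 - beta) / alpha)       # accept H1 ("fault") boundary
    lower = np.log(beta / (1.0 - alpha))       # accept H0 ("normal") boundary
    llr = 0.0
    for i, x in enumerate(samples, start=1):
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            return "normal", i
    return "undecided", len(samples)
```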
NASA Astrophysics Data System (ADS)
Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.
1993-08-01
Inspired by the time delays that occur in neurobiological signal transmission, we describe an adaptive time delay neural network (ATNN), which is a powerful dynamic learning technique for spatiotemporal pattern transformation and temporal sequence identification. The dynamic properties of this network are formulated through the adaptation of time delays and synapse weights, which are adjusted on-line based on gradient descent rules according to the evolution of observed inputs and outputs. We have applied the ATNN to examples that possess spatiotemporal complexity, with temporal sequences that are completed by the network; the ATNN can thus be applied to pattern completion. Simulation results show that the ATNN learns the topology of circular and figure-eight trajectories within 500 on-line training iterations, and reproduces the trajectories dynamically with very high accuracy. The ATNN was also trained to model the Fourier series expansion of the sum of different odd harmonics. The resulting network provides more flexibility and efficiency than the TDNN and allows the network to seek optimal values for time delays as well as optimal synapse weights.
Robust, Practical Adaptive Control for Launch Vehicles
NASA Technical Reports Server (NTRS)
Orr, Jeb. S.; VanZwieten, Tannen S.
2012-01-01
A modern mechanization of a classical adaptive control concept is presented with an application to launch vehicle attitude control systems. Due to a rigorous flight certification environment, many adaptive control concepts are infeasible when applied to high-risk aerospace systems; methods of stability analysis are either intractable for high complexity models or cannot be reconciled in light of classical requirements. Furthermore, many adaptive techniques appearing in the literature are not suitable for application to conditionally stable systems with complex flexible-body dynamics, as is often the case with launch vehicles. The present technique is a multiplicative forward loop gain adaptive law similar to that used for the NASA X-15 flight research vehicle. In digital implementation with several novel features, it is well-suited to application on aerodynamically unstable launch vehicles with thrust vector control via augmentation of the baseline attitude/attitude-rate feedback control scheme. The approach is compatible with standard design features of autopilots for launch vehicles, including phase stabilization of lateral bending and slosh via linear filters. In addition, the method of assessing flight control stability via classical gain and phase margins is not affected under reasonable assumptions. The algorithm's ability to recover from certain unstable operating regimes can in fact be understood in terms of frequency-domain criteria. Finally, simulation results are presented that confirm the ability of the algorithm to improve performance and robustness in realistic failure scenarios.
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported in. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost and high performance microprocessors. Recently, we have developed a new hybrid state-space self tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
Musculoskeletal modelling in dogs: challenges and future perspectives.
Dries, Billy; Jonkers, Ilse; Dingemanse, Walter; Vanwanseele, Benedicte; Vander Sloten, Jos; van Bree, Henri; Gielen, Ingrid
2016-05-18
Musculoskeletal models have proven to be a valuable tool in human orthopaedics research. Recently, veterinary research started taking an interest in the computer modelling approach to understand the forces acting upon the canine musculoskeletal system. While many of the methods employed in human musculoskeletal models can be applied to canine musculoskeletal models, not all techniques are applicable. This review summarizes the important parameters necessary for modelling, as well as the techniques employed in human musculoskeletal models and the limitations in transferring techniques to canine modelling research. The major challenges in future canine modelling research are likely to centre around devising alternative techniques for obtaining maximal voluntary contractions, as well as finding scaling factors to adapt a generalized canine musculoskeletal model to represent specific breeds and subjects.
Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I
2011-09-26
A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
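To make the idea of an adaptive penalty concrete, the sketch below shows a small real-coded GA that rescales its penalty weight each generation from the fraction of feasible individuals. This is a generic simplification for illustration, not the Nanakorn scheme or the hybrid VTVH MCD fitting code used in the study; `objective` and `constraint` are assumed to be vectorized over a population array.

```python
import numpy as np

def ga_adaptive_penalty(objective, constraint, bounds, pop_size=60, gens=200):
    """Minimize objective(x) subject to constraint(x) <= 0 with a real-coded GA.
    objective/constraint map a (pop_size, dim) array to a (pop_size,) array.
    The penalty weight adapts to the share of feasible individuals each generation."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    penalty = 1.0
    for _ in range(gens):
        viol = np.maximum(constraint(pop), 0.0)
        feasible_frac = np.mean(viol == 0.0)
        penalty *= 0.9 if feasible_frac > 0.5 else 1.1   # adapt the penalty weight
        fitness = objective(pop) + penalty * viol
        # Binary tournament selection
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Arithmetic crossover plus Gaussian mutation, clipped to the bounds
        mates = parents[rng.permutation(pop_size)]
        a = rng.uniform(size=(pop_size, 1))
        children = a * parents + (1.0 - a) * mates
        children += rng.normal(scale=0.05 * (hi - lo), size=children.shape)
        pop = np.clip(children, lo, hi)
    final_fit = objective(pop) + penalty * np.maximum(constraint(pop), 0.0)
    return pop[np.argmin(final_fit)]
```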
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame selection technique, and multi-frame blind deconvolution is then performed. No a priori knowledge is required except for the positivity constraint in the blind deconvolution. The use of multiple frames benefits the stability and convergence of the blind deconvolution algorithm. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve the images partially corrected by adaptive optics.
Ellis, Maggie; Astell, Arlene
2017-01-01
Loss of verbal language production makes people with dementia appear unreachable. We previously presented a case study applying nonverbal communication techniques with a lady with dementia who could no longer speak, which we termed Adaptive Interaction. The current small-n study examines the applicability of Adaptive Interaction as a general tool for uncovering the communication repertoires of non-verbal individuals living with dementia. Communicative responses of 30 interaction sessions were coded and analysed in two conditions: Standard (Baseline) and Adaptive Interaction (Intervention). All participants retained the ability to interact plus a unique communication repertoire comprising a variety of nonverbal components, spanning eye gaze, emotion expression, and movement. In comparison to Baseline sessions, Intervention sessions were characterised by more smiling, looking at ME and imitation behaviour from the people with dementia. These findings allude to the potential of Adaptive Interaction as the basis for interacting with people living with dementia who can no longer speak.
Evolving RBF neural networks for adaptive soft-sensor design.
Alexandridis, Alex
2013-12-01
This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: On the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC Motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
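The second adaptation level described above, adjustment of the output weights by recursive least squares with exponential forgetting, can be sketched as follows; the initialization constants and forgetting factor are illustrative, and the structural level (adding or deleting RBF centers with the fuzzy means algorithm) is not included.

```python
import numpy as np

class RLSForgetting:
    """Recursive least squares with exponential forgetting for the output-layer
    weights of an RBF soft-sensor (a sketch of the weight-adaptation level only)."""
    def __init__(self, n_params, forgetting=0.98, delta=1e3):
        self.w = np.zeros(n_params)            # output weights
        self.P = delta * np.eye(n_params)      # inverse correlation matrix
        self.lam = forgetting

    def update(self, phi, y):
        """phi: RBF activations for the current sample, y: measured output."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)     # gain vector
        err = y - self.w @ phi
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.w @ phi                    # updated soft-sensor prediction
```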
NASA Astrophysics Data System (ADS)
Ribeiro, Moisés V.
2004-12-01
This paper introduces adaptive fuzzy equalizers with variable step size for broadband power line (PL) communications. Based on delta-bar-delta and local Lipschitz estimation updating rules, feedforward, and decision feedback approaches, we propose singleton and nonsingleton fuzzy equalizers with variable step size to cope with the intersymbol interference (ISI) effects of PL channels and the hardness of the impulse noises generated by appliances and nonlinear loads connected to low-voltage power grids. The computed results show that the convergence rates of the proposed equalizers are higher than the ones attained by the traditional adaptive fuzzy equalizers introduced by J. M. Mendel and his students. Additionally, some interesting BER curves reveal that the proposed techniques are efficient for mitigating the above-mentioned impairments.
Fission gas bubble identification using MATLAB's image processing toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.
Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. - Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
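A compact approximation of the described pipeline using scikit-image is sketched below: edge-preserving bilateral denoising, Sauvola adaptive thresholding, and connected-component measurement of void count, mean size, and porosity. The file name, filter parameters, and the assumption that voids appear dark are placeholders; the frequency-domain curtaining-removal step is omitted.

```python
import numpy as np
from skimage import io, filters, measure, restoration

img = io.imread("umo_micrograph.tif", as_gray=True).astype(np.float64)  # placeholder file

# Edge-preserving noise removal, then Sauvola adaptive thresholding
denoised = restoration.denoise_bilateral(img, sigma_color=0.05, sigma_spatial=3)
thresh = filters.threshold_sauvola(denoised, window_size=25, k=0.2)
voids = denoised < thresh                      # assumes gas voids image darker than matrix

# Connected-component statistics: count, mean void size, average porosity
labels = measure.label(voids)
props = measure.regionprops(labels)
areas = np.array([p.area for p in props])
print("void count:", len(props))
print("mean void area (px):", areas.mean() if len(areas) else 0.0)
print("porosity (area fraction):", voids.mean())
```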
Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.
Kim, Eunwoo; Park, HyunWook
2017-02-01
The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
An Unsupervised Approach for Extraction of Blood Vessels from Fundus Images.
Dash, Jyotiprava; Bhoi, Nilamani
2018-04-26
Pathological disorders may arise from small changes in retinal blood vessels that may later lead to blindness. Hence, the accurate segmentation of blood vessels is becoming a challenging task for pathological analysis. This paper offers an unsupervised recursive method for extraction of blood vessels from ophthalmoscope images. First, a vessel-enhanced image is generated with the help of gamma correction and contrast-limited adaptive histogram equalization (CLAHE). Next, the vessels are extracted iteratively by applying an adaptive thresholding technique. At last, a final vessel segmented image is produced by applying a morphological cleaning operation. Evaluations are conducted on the publicly available digital retinal images for vessel extraction (DRIVE) and Child Heart And Health Study in England (CHASE_DB1) databases using nine different measurements. The proposed method achieves average accuracies of 0.957 and 0.952 on the DRIVE and CHASE_DB1 databases respectively.
Adaptive OFDM Waveform Design for Spatio-Temporal-Sparsity Exploited STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
In this chapter, we describe a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. The motivation for employing an OFDM signal is that it improves the target detectability with respect to the interfering signals by increasing the frequency diversity of the system. However, due to the addition of one extra dimension in terms of frequency, the adaptive degrees-of-freedom in an OFDM-STAP also increase. Therefore, to avoid the construction of a fully adaptive OFDM-STAP, we develop a sparsity-based STAP algorithm. We observe that the interference spectrum is inherently sparse in the spatio-temporal domain, as the clutter responses occupy only a diagonal ridge on the spatio-temporal plane and the jammer signals interfere only from a few spatial directions. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller number of secondary data samples compared to the other existing STAP techniques, and produces nearly optimum STAP performance. In addition to designing the STAP filter, we optimally design the transmit OFDM signals by maximizing the output signal-to-interference-plus-noise ratio (SINR) in order to improve the STAP performance. The computation of the output SINR depends on the estimated value of the interference covariance matrix, which we obtain by applying the sparse recovery algorithm. Therefore, we analytically assess the effects of the synthesized OFDM coefficients on the sparse recovery of the interference covariance matrix by computing the coherence measure of the sparse measurement matrix. Our numerical examples demonstrate the STAP performance achieved by the sparsity-based technique and adaptive waveform design.
Adaptive histogram equalization in digital radiography of destructive skeletal lesions.
Braunstein, E M; Capek, P; Buckwalter, K; Bland, P; Meyer, C R
1988-03-01
Adaptive histogram equalization, an image-processing technique that distributes pixel values of an image uniformly throughout the gray scale, was applied to 28 plain radiographs of bone lesions, after they had been digitized. The non-equalized and equalized digital images were compared by two skeletal radiologists with respect to lesion margins, internal matrix, soft-tissue mass, cortical breakthrough, and periosteal reaction. Receiver operating characteristic (ROC) curves were constructed on the basis of the responses. Equalized images were superior to nonequalized images in determination of cortical breakthrough and presence or absence of periosteal reaction. ROC analysis showed no significant difference in determination of margins, matrix, or soft-tissue masses.
Krause, Mark A
2015-07-01
Inquiry into evolutionary adaptations has flourished since the modern synthesis of evolutionary biology. Comparative methods, genetic techniques, and various experimental and modeling approaches are used to test adaptive hypotheses. In psychology, the concept of adaptation is broadly applied and is central to comparative psychology and cognition. The concept of an adaptive specialization of learning is a proposed account for exceptions to general learning processes, as seen in studies of Pavlovian conditioning of taste aversions, sexual responses, and fear. The evidence generally consists of selective associations forming between biologically relevant conditioned and unconditioned stimuli, with conditioned responses differing in magnitude, persistence, or other measures relative to non-biologically relevant stimuli. Selective associations for biologically relevant stimuli may suggest adaptive specializations of learning, but do not necessarily confirm adaptive hypotheses as conceived of in evolutionary biology. Exceptions to general learning processes do not necessarily default to an adaptive specialization explanation, even if experimental results "make biological sense". This paper examines the degree to which hypotheses of adaptive specializations of learning in sexual and fear response systems have been tested using methodologies developed in evolutionary biology (e.g., comparative methods, quantitative and molecular genetics, survival experiments). A broader aim is to offer perspectives from evolutionary biology for testing adaptive hypotheses in psychological science.
Adaptive laboratory evolution – principles and applications for biotechnology
2013-01-01
Adaptive laboratory evolution is a frequent method in biological studies to gain insights into the basic mechanisms of molecular evolution and the adaptive changes that accumulate in microbial populations during long-term selection under specified growth conditions. Although the approach has been performed regularly for more than 25 years, the advent of transcript and cheap next-generation sequencing technologies has resulted in many recent studies that successfully applied this technique to engineer microbial cells for biotechnological applications. Adaptive laboratory evolution has some major benefits compared with classical genetic engineering, but also some inherent limitations. However, recent studies show how some of the limitations may be overcome in order to successfully incorporate adaptive laboratory evolution in microbial cell factory design. Over the last two decades important insights into the nutrient and stress metabolism of relevant model species were acquired, whereas some other aspects, such as niche-specific differences of non-conventional cell factories, are not completely understood. Altogether, the current status and its future perspectives highlight the importance and potential of adaptive laboratory evolution as an approach in biotechnological engineering. PMID:23815749
Antarctic Atmospheric Infrasound.
1981-11-30
Auroral infrasonic waves and the atmospheric test of a nuclear weapon in China were all recorded and analyzed in real time by the new system. ... Detection Enhancement by a Pure State Filter, 16 February 1981: the great success of the polarization filter technique with infrasonic data led to ... Report contents include: (1) project chronology; (2) summary of data collected; (3) Antarctic infrasonic signals; (4) noise suppression using data-adaptive polarization filters (excerpt truncated).
DOT National Transportation Integrated Search
1985-03-01
A report is offered on a study of the information activities within the Right-of-Way section of ADOT. The objectives of the study were to adapt and apply techniques to measure user-perceived needs, satisfaction and utility of services provided Right-...
Microleakage Evaluation at Implant-Abutment Interface Using Radiotracer Technique
Siadat, Hakimeh; Arshad, Mahnaz; Mahgoli, Hossein-Ali; Fallahi, Babak
2016-01-01
Objectives: Microbial leakage through the implant-abutment (I-A) interface results in bacterial colonization in two-piece implants. The aim of this study was to compare microleakage rates in three types of Replace abutments, namely Snappy, GoldAdapt, and customized ceramic, using radiotracing. Materials and Methods: Three groups, one for each abutment type, of five implants and one positive and one negative control were considered (a total of 17 regular body implants). A torque of 35 N·cm was applied to the abutments. The samples were immersed in thallium-201 radioisotope solution for 24 hours to let the radiotracers leak through the I-A interface. Then, gamma photons received from the radiotracers were counted using a gamma counter device. In the next phase, a cyclic fatigue loading process was applied, followed by the same steps of immersion in the radioactive solution and photon counting. Results: The rate of microleakage significantly increased (P≤0.05) in all three types of abutments (i.e., Snappy, GoldAdapt, and ceramic) after cyclic loading. No statistically significant differences were observed between abutment types after cyclic loading. Conclusions: Microleakage significantly increases after cyclic loading in all three Replace abutments (GoldAdapt, Snappy, ceramic). The lowest microleakage before and after cyclic loading was observed in GoldAdapt, followed by Snappy and ceramic. PMID:28392814
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablonowski, Christiane
The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
Robust video super-resolution with registration efficiency adaptation
NASA Astrophysics Data System (ADS)
Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen
2010-07-01
Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, strongly relies on the efficiency of the motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially-varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of the SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences, and experimental results clearly demonstrate the effectiveness of the proposed method.
Real-time Measurement of Epithelial Barrier Permeability in Human Intestinal Organoids.
Hill, David R; Huang, Sha; Tsai, Yu-Hwai; Spence, Jason R; Young, Vincent B
2017-12-18
Advances in the 3D culture of intestinal tissues, obtained through biopsy or generated from pluripotent stem cells via directed differentiation, have resulted in sophisticated in vitro models of the intestinal mucosa. Leveraging these emerging model systems will require adaptation of tools and techniques developed for 2D culture systems and animals. Here, we describe a technique for measuring epithelial barrier permeability in human intestinal organoids in real time. This is accomplished by microinjection of fluorescently-labeled dextran and imaging on an inverted microscope fitted with epifluorescent filters. Real-time measurement of barrier permeability in intestinal organoids facilitates the generation of high-resolution temporal data in human intestinal epithelial tissue, although this technique can also be applied to fixed-timepoint imaging approaches. This protocol is readily adaptable for the measurement of epithelial barrier permeability following exposure to pharmacologic agents, bacterial products or toxins, or live microorganisms. With minor modifications, this protocol can also serve as a general primer on microinjection of intestinal organoids, and users may choose to supplement this protocol with additional or alternative downstream applications following microinjection.
Visible near-diffraction-limited lucky imaging with full-sky laser-assisted adaptive optics
NASA Astrophysics Data System (ADS)
Basden, A. G.
2014-08-01
Both lucky imaging techniques and adaptive optics require natural guide stars, limiting sky-coverage, even when laser guide stars are used. Lucky imaging techniques become less successful on larger telescopes unless adaptive optics is used, as the fraction of images obtained with well-behaved turbulence across the whole telescope pupil becomes vanishingly small. Here, we introduce a technique combining lucky imaging techniques with tomographic laser guide star adaptive optics systems on large telescopes. This technique does not require any natural guide star for the adaptive optics, and hence offers full sky-coverage adaptive optics correction. In addition, we introduce a new method for lucky image selection based on residual wavefront phase measurements from the adaptive optics wavefront sensors. We perform Monte Carlo modelling of this technique, and demonstrate I-band Strehl ratios of up to 35 per cent in 0.7 arcsec mean seeing conditions with 0.5 m deformable mirror pitch and full adaptive optics sky-coverage. We show that this technique is suitable for use with lucky imaging reference stars as faint as magnitude 18, and fainter if more advanced image selection and centring techniques are used.
Fung, Albert; Kelly, Paul; Tait, Gordon; Greig, Paul D; McGilvray, Ian D
2016-01-01
The potential for integrating real-time surgical video and state-of-the art animation techniques has not been widely applied to surgical education. This paper describes the use of new technology for creating videos of liver, pancreas and transplant surgery, annotating them with 3D animations, resulting in a freely-accessible online resource: The Toronto Video Atlas of Liver, Pancreas and Transplant Surgery ( http://tvasurg.ca ). The atlas complements the teaching provided to trainees in the operating room, and the techniques described in this study can be readily adapted by other surgical training programmes.
Kernel and divergence techniques in high energy physics separations
NASA Astrophysics Data System (ADS)
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We demonstrate the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the particle accelerator Tevatron at the DØ experiment in Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it and proposals to expand this technique to other subsystems.
Applications of Kalman filtering to real-time trace gas concentration measurements
NASA Technical Reports Server (NTRS)
Leleux, D. P.; Claps, R.; Chen, W.; Tittel, F. K.; Harman, T. L.
2002-01-01
A Kalman filtering technique is applied to the simultaneous detection of NH3 and CO2 with a diode-laser-based sensor operating at 1.53 micrometers. This technique is developed for improving the sensitivity and precision of trace gas concentration levels based on direct overtone laser absorption spectroscopy in the presence of various sensor noise sources. Filter performance is demonstrated to be adaptive to real-time noise and data statistics. Additionally, filter operation is successfully performed with dynamic ranges differing by three orders of magnitude. Details of Kalman filter theory applied to the acquired spectroscopic data are discussed. The effectiveness of this technique is evaluated by performing NH3 and CO2 concentration measurements and utilizing it to monitor varying ammonia and carbon dioxide levels in a bioreactor for water reprocessing, located at the NASA-Johnson Space Center. Results indicate a sensitivity enhancement of six times, in terms of improved minimum detectable absorption by the gas sensor.
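The abstract does not give the filter equations, so the following is only a minimal sketch of a scalar Kalman filter with a random-walk state, smoothing a synthetic noisy concentration trace; the process and measurement variances are assumed values, not those of the instrument described above.

import numpy as np

def kalman_smooth(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                     # predict: random-walk state model
        k = p / (p + r)               # Kalman gain
        x = x + k * (z - x)           # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

noisy = 2.0 + 0.1 * np.random.randn(500)   # synthetic concentration readings
smooth = kalman_smooth(noisy)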
Adaptive array technique for differential-phase reflectometry in QUEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idei, H., E-mail: idei@triam.kyushu-u.ac.jp; Hanada, K.; Zushi, H.
2014-11-15
A Phased Array Antenna (PAA) was considered as the launching and receiving antenna in reflectometry to attain good directivity in the applied microwave range. A well-focused beam was obtained in a launching antenna application, and differential-phase evolution was properly measured using a metal reflector plate in the proof-of-principle experiment at low-power test facilities. Differential-phase evolution was also evaluated using the PAA in the Q-shu University Experiment with Steady State Spherical Tokamak (QUEST). A beam-forming technique was applied in receiving phased-array antenna measurements. In the QUEST device, which should be considered a large oversized cavity, a standing-wave effect was clearly observed with perturbed phase evolution. A new approach using the derivative of the measured field with respect to the propagating wavenumber was proposed to eliminate the standing-wave effect.
Information Processing Techniques Program. Volume II. Communications- Adaptive Internetting
1977-09-30
... LABORATORY. Information Processing Techniques Program, Volume II: Communications-Adaptive Internetting. Semiannual Technical Summary Report to the ... MASSACHUSETTS. Abstract: This report describes work performed on the Communications-Adaptive Internetting program sponsored by the Information ... Keywords: information processing techniques; network speech terminal; communications-adaptive internetting; links; digital voice communications; time-varying.
Wavelet-based adaptive thresholding method for image segmentation
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl
2001-05-01
A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
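A minimal sketch of the idea, assuming PyWavelets: decompose the image, attenuate the detail coefficients, and reconstruct to obtain a smooth threshold surface. The wavelet, decomposition level, attenuation factor and offset are illustrative assumptions, not the paper's parameters.

import numpy as np
import pywt

def wavelet_threshold_surface(image, wavelet="db4", level=4, atten=0.1):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    damped = [coeffs[0]] + [tuple(atten * d for d in details) for details in coeffs[1:]]
    surface = pywt.waverec2(damped, wavelet)                # high-frequency-reduced signal
    return surface[: image.shape[0], : image.shape[1]]      # crop padding, if any

def segment(image, offset=0.02):
    return image > (wavelet_threshold_surface(image) + offset)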
Adaptive mesh refinement techniques for the immersed interface method applied to flow problems
Li, Zhilin; Song, Peng
2013-01-01
In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515–527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763
Yu, Zhaoxu; Li, Shugang; Yu, Zhaosheng; Li, Fangfei
2018-04-01
This paper investigates the problem of output feedback adaptive stabilization for a class of nonstrict-feedback stochastic nonlinear systems with both unknown backlashlike hysteresis and unknown control directions. A new linear state transformation is applied to the original system, and then, control design for the new system becomes feasible. By combining the neural network's (NN's) parameterization, variable separation technique, and Nussbaum gain function method, an input-driven observer-based adaptive NN control scheme, which involves only one parameter to be updated, is developed for such systems. All closed-loop signals are bounded in probability and the error signals remain semiglobally bounded in the fourth moment (or mean square). Finally, the effectiveness and the applicability of the proposed control design are verified by two simulation examples.
Employing continuous quality improvement in community-based substance abuse programs.
Chinman, Matthew; Hunter, Sarah B; Ebener, Patricia
2012-01-01
This article aims to describe continuous quality improvement (CQI) for substance abuse prevention and treatment programs in a community-based organization setting. CQI (e.g., plan-do-study-act (PDSA) cycles), as applied in healthcare and industry, was adapted for substance abuse prevention and treatment programs in a community setting. The authors assessed the resources needed, acceptability and CQI feasibility for ten programs by evaluating CQI training workshops with program staff and a series of three qualitative interviews over a nine-month implementation period with program participants. The CQI activities, PDSA cycle progress, effort, enthusiasm, benefits and challenges were examined. Results indicated that CQI was feasible and acceptable for community-based substance abuse prevention and treatment programs; however, some notable resource challenges remain. Future studies should examine CQI impact on service quality and intended program outcomes. The study was conducted on a small number of programs. It did not assess CQI impact on service quality and intended program outcomes. Practical implications: this project shows that it is feasible to adapt CQI techniques and processes for community-based substance abuse prevention and treatment programs. These techniques may help community-based program managers to improve service quality and achieve program outcomes. This is one of the first studies to adapt traditional CQI techniques for community-based settings delivering substance abuse prevention and treatment programs.
The Effect of Multispectral Image Fusion Enhancement on Human Efficiency
2017-03-20
... human visual system by applying a technique commonly used in visual perception research: ideal observer analysis. Using this approach, we establish ... applications, analytic techniques, and procedural methods used across studies. This paper uses ideal observer analysis to establish a framework that allows ... augmented similarly to incorporate research involving more complex stimulus content. Additionally, the ideal observer can be adapted for a number of
Local adaptive contrast enhancement for color images
NASA Astrophysics Data System (ADS)
Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer
2007-04-01
A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is an important factor here; improper local lighting impairs the visibility of details or even entire objects. When a human is observing a scene with different kinds of lighting, such as shadows, they will need to see details in both the dark and light parts of the scene. For grey-value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply the contrast enhancement to color images by applying a grey-value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal remain the same. Care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can, for instance, be used by operators monitoring the movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should be both easy to recognize and to track. This requires optimal local contrast, and is sometimes much helped by color when tracking a person with colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
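A simplified sketch of the luminance-channel idea, assuming scikit-image: enhance only the L channel in CIELAB and leave the chromatic channels untouched. The saturation limiting and gamut mapping steps described above are omitted, and CLAHE stands in for whichever grey-value enhancement algorithm is used.

import numpy as np
from skimage import color, exposure

def enhance_color_contrast(rgb, clip_limit=0.01):
    # rgb: float image in [0, 1]
    lab = color.rgb2lab(rgb)
    L = lab[..., 0] / 100.0                                   # scale L to [0, 1] for CLAHE
    lab[..., 0] = exposure.equalize_adapthist(L, clip_limit=clip_limit) * 100.0
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)              # chroma channels unchanged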
Advances in the in-field detection of microorganisms in ice.
Barnett, Megan J; Pearce, David A; Cullen, David C
2012-01-01
The historic view of ice-bound ecosystems has been one of a predominantly lifeless environment, where microorganisms certainly exist but are assumed to be either completely inactive or in a state of long-term dormancy. However, this standpoint has been progressively overturned in the past 20 years as studies have started to reveal the importance of microbial life in the functioning of these environments. Our present knowledge of the distribution, taxonomy, and metabolic activity of such microbial life has been derived primarily from laboratory-based analyses of collected field samples. To date, only a restricted range of life detection and characterization techniques have been applied in the field. Specific examples include direct observation and DNA-based techniques (microscopy, specific stains, and community profiling based on PCR amplification), the detection of biomarkers (such as adenosine triphosphate), and measurements of metabolism [through the uptake and incorporation of radiolabeled isotopes or chemical alteration of fluorescent substrates (umbelliferones are also useful here)]. On-going improvements in technology mean that smaller and more robust life detection and characterization systems are continually being designed, manufactured, and adapted for in-field use. Adapting technology designed for other applications is the main source of new methodology, and the range of techniques is currently increasing rapidly. Here we review the current use of technology and techniques to detect and characterize microbial life within icy environments and specifically its deployment to in-field situations. We discuss the necessary considerations, limitations, and adaptations, review emerging technologies, and highlight the future potential. Successful application of these new techniques to in-field studies will certainly generate new insights into the way ice bound ecosystems function. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polo, J.; Wilbert, S.; Ruiz-Arias, J. A.
2016-07-01
At any site, the bankability of a projected solar power plant largely depends on the accuracy and general quality of the solar radiation data generated during the solar resource assessment phase. The term 'site adaptation' has recently started to be used in the framework of solar energy projects to refer to the improvement that can be achieved in satellite-derived solar irradiance and model data when short-term local ground measurements are used to correct systematic errors and bias in the original dataset. This contribution presents a preliminary survey of different possible techniques that can improve long-term satellite-derived and model-derived solar radiation data through the use of short-term on-site ground measurements. The possible approaches that are reported here may be applied in different ways, depending on the origin and characteristics of the uncertainties in the modeled data. This work, which is the first step of a forthcoming in-depth assessment of methodologies for site adaptation, has been done within the framework of the International Energy Agency Solar Heating and Cooling Programme Task 46 'Solar Resource Assessment and Forecasting.'
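In its simplest form, site adaptation can be a linear correction fitted over the overlap period between the satellite series and the short-term ground measurements and then applied to the full long-term series. The sketch below shows only that baseline; the survey above covers a range of more elaborate approaches.

import numpy as np

def fit_site_adaptation(satellite_overlap, ground_overlap):
    # least-squares linear mapping from satellite-derived to ground-measured values
    slope, intercept = np.polyfit(satellite_overlap, ground_overlap, deg=1)
    return slope, intercept

def apply_site_adaptation(satellite_longterm, slope, intercept):
    return slope * np.asarray(satellite_longterm) + intercept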
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Lijun Long; Jun Zhao
2017-04-01
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched setting to the switched setting by designing switched first-order filters, which overcomes the problem of multiple "explosion of complexity." Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers for the subsystems are explicitly designed. It is proved that the proposed approach guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system under a class of switching signals with average dwell time, and convergence of the tracking errors to a small neighborhood of the origin. A two-inverted-pendulums system is provided to demonstrate the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.
2006-01-01
The signal processing aspect of a 2-μm wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar), and its signal processing program estimates and displays various wind parameters in real time as data acquisition occurs. The goal is to improve the quality of the current estimates, such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose, and its performance is analyzed using wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimates by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for this deterioration by adaptively utilizing the statistics of the Doppler shift estimates in the strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The authenticity of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals. We compare the performance of compression schemes based on decomposition by filter banks or by the wavelet packet transform, seeking the best compression ratio, the best quality and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances the quality.
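A sketch of the band-adaptive quantization idea, assuming PyWavelets: decompose the signal with a wavelet packet, quantize each band with a step proportional to that band's own dynamic range, and reconstruct. The wavelet, depth and step fraction are assumptions made for illustration, not the paper's settings.

import numpy as np
import pywt

def compress_eeg(signal, wavelet="db4", level=4, step_fraction=0.02):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    for node in wp.get_level(level):
        band = node.data
        step = step_fraction * (band.max() - band.min() + 1e-12)  # step scaled to band range
        node.data = np.round(band / step) * step                  # uniform quantization per band
    return wp.reconstruct(update=True)[: len(signal)]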
Adaptive texture filtering for defect inspection in ultrasound images
NASA Astrophysics Data System (ADS)
Zmola, Carl; Segal, Andrew C.; Lovewell, Brian; Nash, Charles
1993-05-01
The use of ultrasonic imaging to analyze defects and characterize materials is critical in the development of non-destructive testing and non-destructive evaluation (NDT/NDE) tools for manufacturing. To develop better quality control and reliability in the manufacturing environment, advanced image processing techniques are useful. For example, through the use of texture filtering on ultrasound images, we have been able to filter characteristic textures from highly-textured C-scan images of materials. The materials have highly regular characteristic textures which are of the same resolution and dynamic range as other important features within the image. By applying texture filters and adaptively modifying their filter response, we have examined a family of filters for removing these textures.
Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model
NASA Astrophysics Data System (ADS)
Kim, Sangjo; Kim, Kuisoon; Son, Changmin
2018-04-01
An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps; second, local performance parameters such as the temperatures at component interfaces and the shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of the adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of the local adaptation factors were generated based on the difference between the first adapted engine model and the performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on the performance test data in the sea-level static condition. In a flight condition at 20,000 ft and Mach 0.9, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5% to 3.3%. Moreover, there was further improvement in the comparison of the low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2% to 0.4%.
Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.
Candy, J V
2015-09-01
The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
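To make the sequential Bayesian machinery concrete, here is a generic bootstrap particle filter (predict, weight, resample) tracking a slowly varying scalar from noisy data. It is only an illustration of the framework; the processor described above jointly tracks modal functions and environmental parameters and is considerably more involved.

import numpy as np

def particle_filter(measurements, n_particles=500, proc_std=0.05, meas_std=0.2):
    particles = np.random.randn(n_particles)                 # samples from an assumed prior
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in measurements:
        particles += proc_std * np.random.randn(n_particles)            # predict (random walk)
        weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)     # likelihood weighting
        weights /= weights.sum()
        estimates.append(np.dot(weights, particles))                    # posterior mean estimate
        idx = np.random.choice(n_particles, n_particles, p=weights)     # resample
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)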
Adaptive phase k-means algorithm for waveform classification
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin
2018-01-01
Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive-phase waveform classification method, called the adaptive phase k-means, is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. The model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variation and is a good tool for seismic facies analysis.
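One simple way to realize a phase-adaptive distance, assuming SciPy, is to rotate the phase of the model trace via its analytic signal so that it best matches the data trace before taking a Euclidean distance. This is a hedged illustration of the idea, not necessarily the authors' exact metric.

import numpy as np
from scipy.signal import hilbert

def phase_adaptive_distance(x, y):
    ya = hilbert(y)                               # analytic signal of the model trace
    a, b = np.dot(x, ya.real), np.dot(x, ya.imag)
    phi = -np.arctan2(b, a)                       # rotation that best aligns y with x
    y_rot = np.real(np.exp(1j * phi) * ya)        # phase-rotated model trace
    return np.linalg.norm(x - y_rot)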
Application of adaptive filters in denoising magnetocardiogram signals
NASA Astrophysics Data System (ADS)
Khan, Pathan Fayaz; Patel, Rajesh; Sengottuvel, S.; Saipriya, S.; Swain, Pragyna Parimita; Gireesan, K.
2017-05-01
Magnetocardiography (MCG) is the measurement of the weak magnetic fields of the heart using Superconducting QUantum Interference Devices (SQUIDs). Though the measurements are performed inside magnetically shielded rooms (MSR) to reduce external electromagnetic disturbances, interference caused by sources inside the shielded room cannot be attenuated. The work presented here reports the application of adaptive filters to denoise MCG signals. Two adaptive noise cancellation approaches, namely the least mean squares (LMS) algorithm and the recursive least squares (RLS) algorithm, are applied to denoise MCG signals and the results are compared. It is found that both algorithms effectively remove noisy wiggles from MCG traces, significantly improving the quality of the cardiac features in the traces. The calculated signal-to-noise ratio (SNR) of the denoised MCG traces is found to be slightly higher for the LMS algorithm than for the RLS algorithm. The results encourage the use of adaptive techniques to suppress noise due to the power line frequency and its harmonics, which occur frequently in biomedical measurements.
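A minimal LMS noise canceller, for illustration: a reference noise channel (for example, a power-line pickup) is filtered and subtracted from the primary MCG channel, and the filter weights are updated at every sample. The filter length and step size are assumptions, not values from the study.

import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary, dtype=float)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]      # most recent reference samples
        e = primary[n] - np.dot(w, x)          # error = primary minus noise estimate
        w += 2.0 * mu * e * x                  # LMS weight update
        cleaned[n] = e
    return cleaned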
Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.
Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith
2010-09-01
Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H_{\\infty}$ Control.
Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen
2018-04-01
In this paper, based on the adaptive critic learning technique, the control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data and event driven design approach. The nonlinear control problem is formulated as a two-player zero-sum differential game and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in that the data driven learning identifier is combined with the event driven design formulation, in order to develop the adaptive critic controller, thereby accomplishing the nonlinear control. The event driven optimal control law and the time driven worst case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue of integrating data-based control and event-triggering mechanism into establishing advanced adaptive critic systems.
Tunable laser techniques for improving the precision of observational astronomy
NASA Astrophysics Data System (ADS)
Cramer, Claire E.; Brown, Steven W.; Lykke, Keith R.; Woodward, John T.; Bailey, Stephen; Schlegel, David J.; Bolton, Adam S.; Brownstein, Joel; Doherty, Peter E.; Stubbs, Christopher W.; Vaz, Amali; Szentgyorgyi, Andrew
2012-09-01
Improving the precision of observational astronomy requires not only new telescopes and instrumentation, but also advances in observing protocols, calibrations and data analysis. The Laser Applications Group at the National Institute of Standards and Technology in Gaithersburg, Maryland has been applying advances in detector metrology and tunable laser calibrations to problems in astronomy since 2007. Using similar measurement techniques, we have addressed a number of seemingly disparate issues: precision flux calibration for broad-band imaging, precision wavelength calibration for high-resolution spectroscopy, and precision PSF mapping for fiber spectrographs of any resolution. In each case, we rely on robust, commercially-available laboratory technology that is readily adapted to use at an observatory. In this paper, we give an overview of these techniques.
Survey of adaptive image coding techniques
NASA Technical Reports Server (NTRS)
Habibi, A.
1977-01-01
The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
[Impact of digital technology on clinical practices: perspectives from surgery].
Zhang, Y; Liu, X J
2016-04-09
Digital medical technologies, or computer-aided medical procedures, refer to imaging, 3D reconstruction, virtual design, 3D printing, navigation-guided surgery and robot-assisted surgery techniques. These techniques are integrated into conventional surgical procedures to create new clinical protocols that are known as "digital surgical techniques". Conventional health care is characterized by subjective experience, while digital medical technologies bring quantifiable information, transferable data, repeatable methods and predictable outcomes into clinical practice. Being integrated into clinical practice, digital techniques facilitate surgical care by improving outcomes and reducing risks. Digital techniques are becoming increasingly popular in trauma surgery, orthopedics, neurosurgery, plastic and reconstructive surgery, imaging and the anatomic sciences. Robot-assisted surgery is also evolving and being applied in general surgery, cardiovascular surgery and orthopedic surgery. The rapid development of digital medical technologies is changing healthcare and clinical practice. It is therefore important for all clinicians to purposefully adapt to these technologies and improve their clinical outcomes.
Adaptive Flight Control for Aircraft Safety Enhancements
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Gregory, Irene M.; Joshi, Suresh M.
2008-01-01
This poster presents the current adaptive control research being conducted at NASA ARC and LaRC in support of the Integrated Resilient Aircraft Control (IRAC) project. The technique "Approximate Stability Margin Analysis of Hybrid Direct-Indirect Adaptive Control" has been developed at NASA ARC to address the needs for stability margin metrics for adaptive control that potentially enables future V&V of adaptive systems. The technique "Direct Adaptive Control With Unknown Actuator Failures" is developed at NASA LaRC to deal with unknown actuator failures. The technique "Adaptive Control with Adaptive Pilot Element" is being researched at NASA LaRC to investigate the effects of pilot interactions with adaptive flight control that can have implications of stability and performance.
Adaptive sleep-wake discrimination for wearable devices.
Karlen, Walter; Floreano, Dario
2011-04-01
Sleep/wake classification systems that rely on physiological signals suffer from intersubject differences that make accurate classification with a single, subject-independent model difficult. To overcome the limitations of intersubject variability, we suggest a novel online adaptation technique that updates the sleep/wake classifier in real time. The objective of the present study was to evaluate the performance of a newly developed adaptive classification algorithm that was embedded on a wearable sleep/wake classification system called SleePic. The algorithm processed ECG and respiratory effort signals for the classification task and applied behavioral measurements (obtained from accelerometer and press-button data) for the automatic adaptation task. When trained as a subject-independent classifier algorithm, the SleePic device was only able to correctly classify 74.94 ± 6.76% of the human-rated sleep/wake data. By using the suggested automatic adaptation method, the mean classification accuracy could be significantly improved to 92.98 ± 3.19%. A subject-independent classifier based on activity data only showed a comparable accuracy of 90.44 ± 3.57%. We demonstrated that subject-independent models used for online sleep-wake classification can successfully be adapted to previously unseen subjects without the intervention of human experts or off-line calibration.
Optimal spectral tracking--adapting to dynamic regime change.
Brittain, John-Stuart; Halliday, David M
2011-01-30
Real world data do not always obey the statistical restraints imposed upon them by sophisticated analysis techniques. In spectral analysis for instance, an ergodic process--the interchangeability of temporal for spatial averaging--is assumed for a repeat-trial design. Many evolutionary scenarios, such as learning and motor consolidation, do not conform to such linear behaviour and should be approached from a more flexible perspective. To this end we previously introduced the method of optimal spectral tracking (OST) in the study of trial-varying parameters. In this extension to our work we modify the OST routines to provide an adaptive implementation capable of reacting to dynamic transitions in the underlying system state. In so doing, we generalise our approach to characterise both slow-varying and rapid fluctuations in time-series, simultaneously providing a metric of system stability. The approach is first applied to a surrogate dataset and compared to both our original non-adaptive solution and spectrogram approaches. The adaptive OST is seen to display fast convergence and desirable statistical properties. All three approaches are then applied to a neurophysiological recording obtained during a study on anaesthetic monitoring. Local field potentials acquired from the posterior hypothalamic region of a deep brain stimulation patient undergoing anaesthesia were analysed. The characterisation of features such as response delay, time-to-peak and modulation brevity are considered. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lopez Lopez, Roberto
2013-02-01
This work describes the concept, design, development, evolution and application of the FastCam instrument. FastCam is an image photometer for astronomy with image capture at high frame rates and at the diffraction limit, designed to apply the Lucky Imaging technique to medium- and large-sized (1.5 to 4 m) telescopes. The Lucky Imaging technique allows ground-based telescopes to achieve the resolution limit for astronomical images under suitable conditions. This work describes the atmospheric problems and the active and adaptive optics techniques used to solve them, as well as the fundamentals of Lucky Imaging. The considerations behind the project development and the design parameters are described. Then, the optical design and the different adaptations to several telescopes are reviewed. Next, some of the scientific results obtained thanks to this project are shown, both in positional astronomy and in complex structures in globular clusters and binary systems. The different designs arising from the basic idea, and the instruments now in development that are expanding the system's capabilities and the technique, are explained. Other possible applications to fields in which image sharpness is necessary despite fluctuations or instabilities of the observing system are also pointed out: ophthalmology, video control, etc.
NASA Astrophysics Data System (ADS)
de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.
In previous works a methodology was defined, based on the design of a genetic algorithm, GAP, and an incremental training technique adapted to learning series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements an automatic search for crisp trading rules, taking as the objectives of the training both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for an eight-year period of the S&P500 index. The achieved tuning of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work shows that the proposed methodology is valid for different assets in a different market than in previous work.
Spectrum analysis on quality requirements consideration in software design documents.
Kaiya, Haruhiko; Umemura, Masahiro; Ogata, Shinpei; Kaijiri, Kenji
2013-12-01
Software quality requirements defined in the requirements analysis stage should be implemented in the final products, such as source codes and system deployment. To guarantee this meta-requirement, quality requirements should be considered in the intermediate stages, such as the design stage or the architectural definition stage. We propose a novel method for checking whether quality requirements are considered in the design stage. In this method, a technique called "spectrum analysis for quality requirements" is applied not only to requirements specifications but also to design documents. The technique enables us to derive the spectrum of a document, and quality requirements considerations in the document are numerically represented in the spectrum. We can thus objectively identify whether the considerations of quality requirements in a requirements document are adapted to its design document. To validate the method, we applied it to commercial software systems with the help of a supporting tool, and we confirmed that the method worked well.
NASA Technical Reports Server (NTRS)
Parnell, Gregory S.; Rowell, William F.; Valusek, John R.
1987-01-01
In recent years there has been increasing interest in applying the computer based problem solving techniques of Artificial Intelligence (AI), Operations Research (OR), and Decision Support Systems (DSS) to analyze extremely complex problems. A conceptual framework is developed for successfully integrating these three techniques. First, the fields of AI, OR, and DSS are defined and the relationships among the three fields are explored. Next, a comprehensive adaptive design methodology for AI and OR modeling within the context of a DSS is described. These observations are made: (1) the solution of extremely complex knowledge problems with ill-defined, changing requirements can benefit greatly from the use of the adaptive design process, (2) the field of DSS provides the focus on the decision making process essential for tailoring solutions to these complex problems, (3) the characteristics of AI, OR, and DSS tools appears to be converging rapidly, and (4) there is a growing need for an interdisciplinary AI/OR/DSS education.
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithms are their conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be nonGaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.
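The abstract gives no algorithmic detail, so the sketch below shows one classical recursive least-squares route to the same goal, not necessarily the proposed algorithm: a noiseless sinusoid obeys x[n] = 2 cos(w) x[n-1] - x[n-2], so the single coefficient c = 2 cos(w) can be estimated recursively and the frequency recovered as arccos(c/2).

import numpy as np

def rls_frequency(x):
    c, p = 0.0, 1e3                        # coefficient estimate and its "covariance"
    for n in range(2, len(x)):
        phi = x[n - 1]
        y = x[n] + x[n - 2]                # model: y = c * phi
        k = p * phi / (1.0 + phi * p * phi)
        c += k * (y - c * phi)             # recursive least-squares update
        p = (1.0 - k * phi) * p
    return np.arccos(np.clip(c / 2.0, -1.0, 1.0))   # radians per sample

w_true = 0.3
x = np.cos(w_true * np.arange(2000)) + 0.02 * np.random.randn(2000)
w_est = rls_frequency(x)                   # approaches w_true for moderate noise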
Improving Estimates Of Phase Parameters When Amplitude Fluctuates
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.
1989-01-01
Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.
Applying Hyperspectral Imaging to Heart Rate Estimation for Adaptive Automation
2013-03-01
... Shoji, Takae, Kuge, & Yamamura, 2009). In this study 15 participants performed three MATB trials in the same order and came back three days in a ... Miyake, Yamada, Shoji, Takae, Kuge, & Yamamura, 2009). They found that LF/HF did not correlate with difficulty level; however, they did find that the LF ... 0.1 Hz) component did show high test/retest correlations (Miyake, Yamada, Shoji, Takae, Kuge, & Yamamura, 2009). Although this technique shows much
Advances in numerical and applied mathematics
NASA Technical Reports Server (NTRS)
South, J. C., Jr. (Editor); Hussaini, M. Y. (Editor)
1986-01-01
This collection of papers covers some recent developments in numerical analysis and computational fluid dynamics. Some of these studies are of a fundamental nature. They address basic issues such as intermediate boundary conditions for approximate factorization schemes, existence and uniqueness of steady states for time dependent problems, and pitfalls of implicit time stepping. The other studies deal with modern numerical methods such as total variation diminishing schemes, higher order variants of vortex and particle methods, spectral multidomain techniques, and front tracking techniques. There is also a paper on adaptive grids. The fluid dynamics papers treat the classical problems of incompressible flows in helically coiled pipes, vortex breakdown, and transonic flows.
Assume-Guarantee Abstraction Refinement Meets Hybrid Systems
NASA Technical Reports Server (NTRS)
Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas
2014-01-01
Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high order finite difference approximations, and semi-analytical calculation of boundary conditions are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
Some practical universal noiseless coding techniques
NASA Technical Reports Server (NTRS)
Rice, R. F.
1979-01-01
Some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms to solving practical problems is obtained because most real data sources can be simply transformed into this form by appropriate preprocessing. These algorithms have exhibited performance only slightly above all entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably under a measured average data entropy may be observed when data characteristics are changing over the measurement span.
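A minimal sketch of the adaptive idea follows, under the assumption that the source symbols have already been preprocessed into small nonnegative integers: each block is coded with the cheapest Golomb-Rice parameter, signalled by a short header. This is a generic Rice-style coder for illustration, not the exact set of code options analyzed in the report; block size and header width are arbitrary choices.

```python
def rice_encode_block(block, k):
    """Golomb-Rice code a block of nonnegative ints with parameter k."""
    bits = []
    for v in block:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                                  # unary quotient
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]   # k-bit remainder
    return bits

def adaptive_rice_encode(samples, block_size=16, k_options=range(0, 8)):
    """Pick the cheapest Rice parameter per block (sent as a 3-bit header)."""
    out = []
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        costs = {k: len(rice_encode_block(block, k)) for k in k_options}
        k = min(costs, key=costs.get)
        header = [(k >> j) & 1 for j in range(2, -1, -1)]      # option identifier
        out += header + rice_encode_block(block, k)
    return out

# usage: bit count adapts to the local statistics of the data
print(len(adaptive_rice_encode([3, 1, 0, 2, 7, 1, 0, 0] * 4)))
```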
NASA Technical Reports Server (NTRS)
Wing, L. D.
1979-01-01
Simplified analytical techniques of sounding rocket programs are suggested as a means of bringing the cost of thermal analysis of the Get Away Special (GAS) payloads within acceptable bounds. Particular attention is given to two methods adapted from sounding rocket technology - a method in which the container and payload are assumed to be divided in half vertically by a thermal plane of symmetry, and a method which considers the container and its payload to be an analogous one-dimensional unit having the real or correct container top surface area for radiative heat transfer and a fictitious mass and geometry which model the average thermal effects.
Compressed sensing system considerations for ECG and EMG wireless biosensors.
Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J
2012-04-01
Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.
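To make the sub-Nyquist acquisition/recovery loop concrete, a hedged sketch follows using a random Gaussian sensing matrix and orthogonal matching pursuit on a synthetic sparse signal. Real ECG/EMG signals would first be expressed in a sparsifying basis, and the paper's encoder bit-precision and thresholding considerations are not modelled; all names and sizes are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sub-Nyquist sensing matrix (4x compression)
y = A @ x_true                                  # compressed measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))           # near-zero for exactly sparse input
```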
Head-mounted active noise control system with virtual sensing technique
NASA Astrophysics Data System (ADS)
Miyazaki, Nobuhiro; Kajikawa, Yoshinobu
2015-03-01
In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.
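As a hedged illustration of narrowband adaptive cancellation only, the sketch below implements a classical two-weight LMS notch driven by an internally generated reference at the noise frequency, operating on a recorded signal rather than a physical acoustic loop. The secondary-path and virtual-sensing models that the paper identifies from HATS measurements are omitted, and the function and parameter names are illustrative.

```python
import numpy as np

def adaptive_notch(error_mic, f0, fs, mu=0.01):
    """Two-weight LMS notch: synthesizes an anti-noise tone at f0.

    `error_mic` is the measured narrowband noise; the residual `e` stands
    in for the signal at the (virtual) error point.  Secondary-path
    modelling, as used in a real feedback ANC system, is not included.
    """
    w = np.zeros(2)
    y = np.zeros(len(error_mic))
    e = np.zeros(len(error_mic))
    for n, d in enumerate(error_mic):
        x = np.array([np.cos(2 * np.pi * f0 * n / fs),
                      np.sin(2 * np.pi * f0 * n / fs)])   # internal reference
        y[n] = w @ x                    # anti-noise estimate
        e[n] = d - y[n]                 # residual after cancellation
        w += 2 * mu * e[n] * x          # LMS weight update
    return e, y
```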
NASA Astrophysics Data System (ADS)
Gao, Gang; Wang, Jinzhi; Wang, Xianghua
2017-05-01
This paper investigates fault-tolerant control (FTC) for feedback linearisable systems (FLSs) and its application to an aircraft. To ensure desired transient and steady-state behaviours of the tracking error under actuator faults, the dynamic effect caused by the actuator failures on the error dynamics of a transformed model is analysed, and three control strategies are designed. The first FTC strategy is proposed as a robust controller, which relies on the explicit information about several parameters of the actuator faults. To eliminate the need for these parameters and the input chattering phenomenon, the robust control law is later combined with the adaptive technique to generate the adaptive FTC law. Next, the adaptive control law is further improved to achieve the prescribed performance under more severe input disturbance. Finally, the proposed control laws are applied to an air-breathing hypersonic vehicle (AHV) subject to actuator failures, which confirms the effectiveness of the proposed strategies.
Adaptive bit plane quadtree-based block truncation coding for image compression
NASA Astrophysics Data System (ADS)
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, which depends on its MSE loss encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with some other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
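The baseline the paper builds on, absolute moment block truncation coding, can be sketched as follows; quadtree splitting, Sobel edge detection, and adaptive bit-plane transmission are omitted, and the block size and function names are illustrative.

```python
import numpy as np

def ambtc_block(block):
    """Absolute moment BTC for one block: a bitmap plus two reconstruction levels."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, lo, hi

def ambtc_image(img, bs=4):
    """Encode/decode a grayscale image with plain AMBTC and return the reconstruction."""
    out = np.empty(img.shape, dtype=float)
    for r in range(0, img.shape[0], bs):
        for c in range(0, img.shape[1], bs):
            blk = img[r:r + bs, c:c + bs].astype(float)
            bitmap, lo, hi = ambtc_block(blk)
            out[r:r + bs, c:c + bs] = np.where(bitmap, hi, lo)
    return out
```

Each block is thus stored as one bitmap and two means, which is what gives BTC-type coders their low complexity.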
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Devin A., E-mail: dmatthews@utexas.edu; Stanton, John F.
2015-02-14
The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi-PZT-ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. Improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.
NASA Astrophysics Data System (ADS)
Evans, William J.; Yoo, Choong-Shik; Lee, Geun Woo; Cynn, Hyunchae; Lipp, Magnus J.; Visbeck, Ken
2007-07-01
We have developed a unique device, a dynamic diamond anvil cell (dDAC), which repetitively applies a time-dependent load/pressure profile to a sample. This capability allows studies of the kinetics of phase transitions and metastable phases at compression (strain) rates of up to 500 GPa/s (~0.16 s⁻¹ for a metal). Our approach adapts electromechanical piezoelectric actuators to a conventional diamond anvil cell design, which enables precise specification and control of a time-dependent applied load/pressure. Existing DAC instrumentation and experimental techniques are easily adapted to the dDAC to measure the properties of a sample under the varying load/pressure conditions. This capability addresses the sparsely studied regime of dynamic phenomena between static research (diamond anvil cells and large volume presses) and dynamic shock-driven experiments (gas guns, explosive, and laser shock). We present an overview of a variety of experimental measurements that can be made with this device.
DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.
Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.
NASA Astrophysics Data System (ADS)
Heddam, Salim; Kisi, Ozgur
2018-04-01
In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T) are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality variables data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The water quality data selected consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), which were used as inputs to the LSSVM, MARS and M5T models. The three models were applied for each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model among all others differs from one station to another.
Kalman filter estimation of human pilot-model parameters
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.
1975-01-01
The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
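A minimal linear Kalman filter step is sketched below to make the residual-monitoring idea concrete; the paper itself applies an extended Kalman filter to retarded differential-difference equations, which this generic sketch does not reproduce. Consistently large normalized innovations are the kind of diverging-residual symptom the abstract describes, and are what an adaptive modification would react to.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    Returns the updated state, covariance, and the normalized innovation
    squared, which can be monitored to detect divergence (persistently
    large values suggest model assumption errors).
    """
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    nu = z - H @ x_pred                         # innovation (residual)
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    nis = float(nu @ np.linalg.inv(S) @ nu)     # normalized innovation squared
    return x_new, P_new, nis
```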
Robot Science Autonomy in the Atacama Desert and Beyond
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wettergreen, David S.
2013-01-01
Science-guided autonomy augments rovers with reasoning to make observations and take actions related to the objectives of scientific exploration. When rovers can directly interpret instrument measurements then scientific goals can inform and adapt ongoing navigation decisions. These autonomous explorers will make better scientific observations and collect massive, accurate datasets. In current astrobiology studies in the Atacama Desert we are applying algorithms for science autonomy to choose effective observations and measurements. Rovers are able to decide when and where to take follow-up actions that deepen scientific understanding. These techniques apply to planetary rovers, which we can illustrate with algorithms now used by Mars rovers and by discussing future missions.
Popescu, Elena Anda; Barlow, Steven M; Venkatesan, Lalit; Wang, Jingyan; Popescu, Mihai
2013-06-01
Cortical adaptation in the primary somatosensory cortex (SI) has been probed using different stimulation modalities and recording techniques, in both human and animal studies. In contrast, considerably less knowledge has been gained about the adaptation profiles in other areas of the cortical somatosensory network. Using magnetoencephalography (MEG), we examined the patterns of short-term adaptation for evoked responses in SI and somatosensory association areas during tactile stimulation applied to the glabrous skin of the hand. Cutaneous stimuli were delivered as trains of serial pulses with a constant frequency of 2 Hz and 4 Hz in separate runs, and a constant inter-train interval of 5 s. The unilateral stimuli elicited transient responses to the serial pulses in the train, with several response components that were separated by independent component analysis. Subsequent source reconstruction techniques identified regional generators in the contralateral SI and somatosensory association areas in the posterior parietal cortex (PPC). Activity in the bilateral secondary somatosensory cortex (i.e., SII/PV) was also identified, although less consistently across subjects. The dynamics of the evoked activity in each area and the frequency-dependent adaptation effects were assessed from the changes in the relative amplitude of serial responses in each train. We show that the adaptation profiles in SI and PPC areas can be quantitatively characterized from neuromagnetic recordings using tactile stimulation, with the sensitivity to repetitive stimulation increasing from SI to PPC. A similar approach for SII/PV has proven less straightforward, potentially due to the tendency of these areas to respond selectively to certain stimuli. Copyright © 2011 Wiley Periodicals, Inc.
Applied adaptive disturbance rejection using output redefinition on magnetic bearings
NASA Astrophysics Data System (ADS)
Matras, Alex Logan
Recent work has shown Adaptive Disturbance Rejection to be an effective technique for rejecting forces due to imbalance, runout and base motion disturbances on flywheels supported by magnetic bearings over a large span of frequencies. Often the applicability of some of the adaptive methods is limited because they require certain properties (such as almost-strict positive realness) that magnetic bearings do not possess. In this thesis, one method for adaptive disturbance rejection, called Adaptive Feedforward Cancellation (AFC), is modified to allow for a much wider range of frequencies to be rejected. This is accomplished by redefining the output of the original system to be the output from a reduced order state estimator instead. This can give a new system with an infinite gain margin. Additionally, the adaptation laws for the two disturbance rejection gains are slightly modified so that each adapts to a different signal in order to provide the best performance. A detailed model of a magnetic bearing is developed and computer simulations based on that model are performed to give an initial test of the new control law. A state-of-the-art magnetic bearing setup is then developed and used to implement the new control laws and determine their effectiveness. The results are successful and validate the new ideas that are presented.
The environmental genomics of metazoan thermal adaptation
Porcelli, D; Butlin, R K; Gaston, K J; Joly, D; Snook, R R
2015-01-01
Continued and accelerating change in the thermal environment places an ever-greater priority on understanding how organisms are going to respond. The paradigm of ‘move, adapt or die', regarding ways in which organisms can respond to environmental stressors, stimulates intense efforts to predict the future of biodiversity. Assuming that extinction is an unpalatable outcome, researchers have focussed attention on how organisms can shift in their distribution to stay in the same thermal conditions or can stay in the same place by adapting to a changing thermal environment. How likely these respective outcomes might be depends on the answer to a fundamental evolutionary question, namely what genetic changes underpin adaptation to the thermal environment. The increasing access to and decreasing costs of next-generation sequencing (NGS) technologies, which can be applied to both model and non-model systems, provide a much-needed tool for understanding thermal adaptation. Here we consider broadly what is already known from non-NGS studies about thermal adaptation, then discuss the benefits and challenges of different NGS methodologies to add to this knowledge base. We then review published NGS genomics and transcriptomics studies of thermal adaptation to heat stress in metazoans and compare these results with previous non-NGS patterns. We conclude by summarising emerging patterns of genetic response and discussing future directions using these increasingly common techniques. PMID:25735594
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro and macro scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
Real-time vehicle noise cancellation techniques for gunshot acoustics
NASA Astrophysics Data System (ADS)
Ramos, Antonio L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald
2012-06-01
Acoustical sniper positioning systems rely on the detection and direction-of-arrival (DOA) estimation of the shockwave and the muzzle blast in order to provide an estimate of a potential snipers location. Field tests have shown that detecting and estimating the DOA of the muzzle blast is a rather difficult task in the presence of background noise sources, e.g., vehicle noise, especially in long range detection and absorbing terrains. In our previous work presented in the 2011 edition of this conference we highlight the importance of improving the SNR of the gunshot signals prior to the detection and recognition stages, aiming at lowering the false alarm and miss-detection rates and, thereby, increasing the reliability of the system. This paper reports on real-time noise cancellation techniques, like Spectral Subtraction and Adaptive Filtering, applied to gunshot signals. Our model assumes the background noise as being short-time stationary and uncorrelated to the impulsive gunshot signals. In practice, relatively long periods without signal occur and can be used to estimate the noise spectrum and its first and second order statistics as required in the spectral subtraction and adaptive filtering techniques, respectively. The results presented in this work are supported with extensive simulations based on real data.
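A hedged sketch of the spectral-subtraction component follows: the noise magnitude spectrum is estimated from an initial signal-free segment (matching the abstract's assumption of relatively long periods without gunshots) and subtracted frame by frame with a spectral floor. Frame length, floor value, and the simple Hann overlap-add are illustrative choices, not the authors' implementation.

```python
import numpy as np

def spectral_subtraction(x, fs, noise_seconds=0.5, frame=512, hop=256, floor=0.05):
    """Basic magnitude spectral subtraction with 50% overlap-add.

    Assumes the first `noise_seconds` of `x` contain background noise only.
    A Hann analysis window at 50% overlap sums to ~1, so the modified
    frames can simply be added back; `floor` limits musical noise.
    """
    win = np.hanning(frame)
    n_noise = int(noise_seconds * fs)
    noise_frames = [np.abs(np.fft.rfft(x[s:s + frame] * win))
                    for s in range(0, n_noise - frame, hop)]
    noise_mag = np.mean(noise_frames, axis=0)      # noise spectrum estimate

    y = np.zeros(len(x))
    for s in range(0, len(x) - frame, hop):
        X = np.fft.rfft(x[s:s + frame] * win)
        mag = np.maximum(np.abs(X) - noise_mag, floor * noise_mag)
        y[s:s + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(X)), n=frame)
    return y
```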
High-order multiband encoding in the heart.
Cunningham, Charles H; Wright, Graham A; Wood, Michael L
2002-10-01
Spatial encoding with multiband selective excitation (e.g., Hadamard encoding) has been restricted to a small number of slices because the RF pulse becomes unacceptably long when more than about eight slices are encoded. In this work, techniques to shorten multiband RF pulses, and thus allow larger numbers of slices, are investigated. A method for applying the techniques while retaining the capability of adaptive slice thickness is outlined. A tradeoff between slice thickness and pulse duration is shown. Simulations and experiments with the shortened pulses confirmed that motion-induced excitation profile blurring and phase accrual were reduced. The connection between gradient hardware limitations, slice thickness, and flow sensitivity is shown. Excitation profiles for encoding 32 contiguous slices of 1-mm thickness were measured experimentally, and the artifact resulting from errors in timing of RF pulse relative to gradient was investigated. A multiband technique for imaging 32 contiguous 2-mm slices, with adaptive slice thickness, was developed and demonstrated for coronary artery imaging in healthy subjects. With the ability to image high numbers of contiguous slices, using relatively short (1-2 ms) RF pulses, multiband encoding has been advanced further toward practical application. Copyright 2002 Wiley-Liss, Inc.
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built using hashing to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables using any type of hashing algorithm. Meanwhile, multiple table search also lacks a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered in the standard hashing techniques. To solve the above problems, in this paper, we first regard the table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies normalized dominant set to finding the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore the reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme to enable fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate such scheme into the multiple table search using a fast, yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. In this paper, both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both the naive construction methods and the state-of-the-art hashing algorithms.
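For context, a minimal multi-table hashing sketch with random hyperplanes is given below; candidates hit in more tables are ranked first, which is only a crude stand-in for the paper's query-adaptive bitwise weighting. The dominant-set table construction and boosting-based reciprocal tables are not reproduced, and all class and parameter names are illustrative.

```python
import numpy as np

class MultiTableLSH:
    """Baseline multi-table random-hyperplane hashing (not the paper's
    dominant-set/boosting construction, just the scheme it improves on)."""

    def __init__(self, dim, n_tables=4, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
        self.tables = [dict() for _ in range(n_tables)]

    def _key(self, planes, x):
        return tuple((planes @ x > 0).astype(int))        # binary bucket code

    def index(self, data):
        self.data = np.asarray(data)
        for planes, table in zip(self.planes, self.tables):
            for i, x in enumerate(self.data):
                table.setdefault(self._key(planes, x), []).append(i)

    def query(self, q, top=5):
        votes = {}
        for planes, table in zip(self.planes, self.tables):
            for i in table.get(self._key(planes, q), []):
                votes[i] = votes.get(i, 0) + 1             # hits in more tables rank higher
        cand = sorted(votes, key=lambda i: (-votes[i],
                      np.linalg.norm(self.data[i] - q)))
        return cand[:top]
```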
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable for unsteady simulations and refinement and coarsening of the grid does not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth- and sixth-order are explored. Provided the criteria for refinement is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depend on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enable understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are light-weight enough to not require significant computational time and yield significant reductions in grid size.
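A toy one-dimensional sketch of the flag-and-refine step follows, using a normalized solution-jump sensor. The thesis's unstructured finite-volume machinery, hanging-node handling, coarsening, and MPI repartitioning are far beyond this illustration; the thresholds and names are arbitrary.

```python
import numpy as np

def flag_intervals(u, refine_tol=0.1, coarsen_tol=0.01):
    """Flag 1-D intervals from a normalized solution-jump sensor.

    Intervals with a large jump are marked for refinement, those with a
    tiny jump could be coarsened (merging is not implemented here).
    """
    jump = np.abs(np.diff(u)) / (np.abs(u).max() + 1e-12)
    return jump > refine_tol, jump < coarsen_tol

def refine_grid(x, u, refine):
    """Split flagged intervals at their midpoints, interpolating u linearly."""
    new_x, new_u = [x[0]], [u[0]]
    for i in range(1, len(x)):
        if refine[i - 1]:
            new_x.append(0.5 * (x[i - 1] + x[i]))
            new_u.append(0.5 * (u[i - 1] + u[i]))
        new_x.append(x[i])
        new_u.append(u[i])
    return np.array(new_x), np.array(new_u)

# usage: refine around a steep front in a smooth profile
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50 * (x - 0.5))
refine, _ = flag_intervals(u)
x2, u2 = refine_grid(x, u, refine)
print(len(x), "->", len(x2), "points")
```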
Neural network based adaptive control for nonlinear dynamic regimes
NASA Astrophysics Data System (ADS)
Shin, Yoonghyun
Adaptive control designs using neural networks (NNs) based on dynamic inversion are investigated for aerospace vehicles which are operated at highly nonlinear dynamic regimes. NNs play a key role as the principal element of adaptation to approximately cancel the effect of inversion error, which subsequently improves robustness to parametric uncertainty and unmodeled dynamics in nonlinear regimes. An adaptive control scheme previously named 'composite model reference adaptive control' is further developed so that it can be applied to multi-input multi-output output feedback dynamic inversion. It can have adaptive elements in both the dynamic compensator (linear controller) part and/or in the conventional adaptive controller part, also utilizing state estimation information for NN adaptation. This methodology has more flexibility and thus hopefully greater potential than conventional adaptive designs for adaptive flight control in highly nonlinear flight regimes. The stability of the control system is proved through Lyapunov theorems, and validated with simulations. The control designs in this thesis also include the use of 'pseudo-control hedging' techniques which are introduced to prevent the NNs from attempting to adapt to various actuation nonlinearities such as actuator position and rate saturations. Control allocation is introduced for the case of redundant control effectors including thrust vectoring nozzles. A thorough comparison study of conventional and NN-based adaptive designs for a system under a limit cycle, wing-rock, is included in this research, and the NN-based adaptive control designs demonstrate their performances for two highly maneuverable aerial vehicles, NASA F-15 ACTIVE and FQM-117B unmanned aerial vehicle (UAV), operated under various nonlinearities and uncertainties.
NASA Astrophysics Data System (ADS)
Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir
2013-07-01
The knowledge of groundwater table fluctuations is important in agricultural lands as well as in studies related to groundwater utilization and management levels. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting at prediction horizons from the following day up to 7 days ahead. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years of data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records.
Mass Spectrometry Imaging of Complex Microbial Communities.
Dunham, Sage J B; Ellis, Joseph F; Li, Bin; Sweedler, Jonathan V
2017-01-17
In the two decades since mass spectrometry imaging (MSI) was first applied to visualize the distribution of peptides across biological tissues and cells, the technique has become increasingly effective and reliable. MSI excels at providing complementary information to existing methods for molecular analysis-such as genomics, transcriptomics, and metabolomics-and stands apart from other chemical imaging modalities through its capability to generate information that is simultaneously multiplexed and chemically specific. Today a diverse family of MSI approaches are applied throughout the scientific community to study the distribution of proteins, peptides, and small-molecule metabolites across many biological models. The inherent strengths of MSI make the technique valuable for studying microbial systems. Many microbes reside in surface-attached multicellular and multispecies communities, such as biofilms and motile colonies, where they work together to harness surrounding nutrients, fend off hostile organisms, and shield one another from adverse environmental conditions. These processes, as well as many others essential for microbial survival, are mediated through the production and utilization of a diverse assortment of chemicals. Although bacterial cells are generally only a few microns in diameter, the ecologies they influence can encompass entire ecosystems, and the chemical changes that they bring about can occur over time scales ranging from milliseconds to decades. Because of their incredible complexity, our understanding of and influence over microbial systems requires detailed scientific evaluations that yield both chemical and spatial information. MSI is well-positioned to fulfill these requirements. With small adaptations to existing methods, the technique can be applied to study a wide variety of chemical interactions, including those that occur inside single-species microbial communities, between cohabitating microbes, and between microbes and their hosts. In recognition of this potential for scientific advancement, researchers have adapted MSI methodologies for the specific needs of the microbiology research community. As a result, workflows exist for imaging microbial systems with many of the common MSI ionization methods. Despite this progress, there is substantial room for improvements in instrumentation, sample preparation, and data interpretation. This Account provides a brief overview of the state of technology in microbial MSI, illuminates selected applications that demonstrate the potential of the technique, and highlights a series of development challenges that are needed to move the field forward. In the coming years, as microbial MSI becomes easier to use and more universally applicable, the technique will evolve into a fundamental tool widely applied throughout many divisions of science, medicine, and industry.
Analyzing coastal environments by means of functional data analysis
NASA Astrophysics Data System (ADS)
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transformation to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
An adaptive array antenna for mobile satellite communications
NASA Technical Reports Server (NTRS)
Milne, Robert
1988-01-01
The adaptive array is linearly polarized and consists essentially of a driven lambda/4 monopole surrounded by an array of parasitic elements all mounted on a ground plane of finite size. The parasitic elements are all connected to ground via pin diodes. By applying suitable bias voltages, the desired parasitic elements can be activated and made highly reflective. The directivity and pointing of the antenna beam can be controlled in both the azimuth and elevation planes using high speed digital switching techniques. The antenna RF losses are negligible and the maximum gain is close to the theoretical value determined by the effective aperture size. The antenna is compact, has a low profile, is inexpensive to manufacture and can handle high transmitter power.
Time-delayed directional beam phased array antenna
Fund, Douglas Eugene; Cable, John William; Cecil, Tony Myron
2004-10-19
An antenna comprising a phased array of quadrifilar helix or other multifilar antenna elements and a time-delaying feed network adapted to feed the elements. The feed network can employ a plurality of coaxial cables that physically bridge a microstrip feed circuitry to feed power signals to the elements. The cables provide an incremental time delay which is related to their physical lengths, such that replacing cables having a first set of lengths with cables having a second set of lengths functions to change the time delay and shift or steer the antenna's main beam. Alternatively, the coaxial cables may be replaced with a programmable signal processor unit adapted to introduce the time delay using signal processing techniques applied to the power signals.
Adaptive computational methods for aerothermal heating analysis
NASA Technical Reports Server (NTRS)
Price, John M.; Oden, J. Tinsley
1988-01-01
The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.
Fayers, Peter M
2007-01-01
We review the papers presented at the NCI/DIA conference, to identify areas of controversy and uncertainty, and to highlight those aspects of item response theory (IRT) and computer adaptive testing (CAT) that require theoretical or empirical research in order to justify their application to patient reported outcomes (PROs). IRT and CAT offer exciting potential for the development of a new generation of PRO instruments. However, most of the research into these techniques has been in non-healthcare settings, notably in education. Educational tests are very different from PRO instruments, and consequently problematic issues arise when adapting IRT and CAT to healthcare research. Clinical scales differ appreciably from educational tests, and symptoms have characteristics distinctly different from examination questions. This affects the transferring of IRT technology. Particular areas of concern when applying IRT to PROs include inadequate software, difficulties in selecting models and communicating results, insufficient testing of local independence and other assumptions, and a need of guidelines for estimating sample size requirements. Similar concerns apply to differential item functioning (DIF), which is an important application of IRT. Multidimensional IRT is likely to be advantageous only for closely related PRO dimensions. Although IRT and CAT provide appreciable potential benefits, there is a need for circumspection. Not all PRO scales are necessarily appropriate targets for this methodology. Traditional psychometric methods, and especially qualitative methods, continue to have an important role alongside IRT. Research should be funded to address the specific concerns that have been identified.
Bayesian Decision Tree for the Classification of the Mode of Motion in Single-Molecule Trajectories
Türkcan, Silvan; Masson, Jean-Baptiste
2013-01-01
Membrane proteins move in heterogeneous environments with spatially (sometimes temporally) varying friction and with biochemical interactions with various partners. It is important to reliably distinguish different modes of motion to improve our knowledge of the membrane architecture and to understand the nature of interactions between membrane proteins and their environments. Here, we present an analysis technique for single molecule tracking (SMT) trajectories that can determine the preferred model of motion that best matches observed trajectories. The method is based on Bayesian inference to calculate the posterior probability of an observed trajectory according to a certain model. Information theory criteria, such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), and modified AIC (AICc), are used to select the preferred model. The considered group of models includes free Brownian motion, and confined motion in 2nd or 4th order potentials. We determine the best information criteria for classifying trajectories. We tested its limits through simulations matching large sets of experimental conditions and we built a decision tree. This decision tree first uses the BIC to distinguish between free Brownian motion and confined motion. In a second step, it classifies the confining potential further using the AIC. We apply the method to experimental Clostridium perfringens ε-toxin (CPεT) receptor trajectories to show that these receptors are confined by a spring-like potential. An adaptation of this technique was applied on a sliding window in the temporal dimension along the trajectory. We applied this adaptation to experimental CPεT trajectories that lose confinement due to disaggregation of confining domains. This new technique adds another dimension to the discussion of SMT data. The mode of motion of a receptor might hold more biologically relevant information than the diffusion coefficient or domain size and may be a better tool to classify and compare different SMT experiments. PMID:24376584
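A much-reduced sketch of the first split of such a decision tree (free Brownian motion versus spring-like confinement, scored with BIC and AIC) is given below. The likelihoods use standard closed-form Gaussian fits rather than the full Bayesian-inference machinery of the paper; the 4th-order potential model and the sliding-window adaptation are omitted, and all names are illustrative.

```python
import numpy as np

def gauss_ll(resid, var):
    """Log-likelihood of zero-mean Gaussian residuals with common variance."""
    return -0.5 * resid.size * np.log(2 * np.pi * var) - 0.5 * np.sum(resid**2) / var

def classify_trajectory(traj):
    """Score free diffusion vs. spring-like (Ornstein-Uhlenbeck) confinement.

    `traj` is an (n, 2) array of positions.  BIC is used for the first
    split, mirroring the decision tree described in the abstract.
    """
    xy = traj - traj.mean(axis=0)            # center on the putative domain
    prev, nxt = xy[:-1], xy[1:]
    n = prev.size                            # scalar observations (both axes)

    # Model 1: free Brownian motion, increments ~ N(0, 2 D dt); one parameter.
    dx = nxt - prev
    var_free = np.mean(dx**2)
    ll_free = gauss_ll(dx, var_free)

    # Model 2: harmonic confinement as an AR(1)/OU process; two parameters.
    a = np.sum(prev * nxt) / np.sum(prev**2)       # relaxation factor e^{-k dt}
    resid = nxt - a * prev
    var_ou = np.mean(resid**2)
    ll_ou = gauss_ll(resid, var_ou)

    scores = {
        "free":     {"BIC": 1 * np.log(n) - 2 * ll_free, "AIC": 2 * 1 - 2 * ll_free},
        "confined": {"BIC": 2 * np.log(n) - 2 * ll_ou,   "AIC": 2 * 2 - 2 * ll_ou},
    }
    best = min(scores, key=lambda m: scores[m]["BIC"])
    return best, scores
```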
NASA Astrophysics Data System (ADS)
Robinson, Tyler D.; Crisp, David
2018-05-01
Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state-the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply this approach to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
A Review of Imaging Techniques for Plant Phenotyping
Li, Lei; Zhang, Qin; Huang, Danfeng
2014-01-01
Given the rapid development of plant genomic technologies, a lack of access to plant phenotyping capabilities limits our ability to dissect the genetics of quantitative traits. Effective, high-throughput phenotyping platforms have recently been developed to solve this problem. In high-throughput phenotyping platforms, a variety of imaging methodologies are being used to collect data for quantitative studies of complex traits related to the growth, yield and adaptation to biotic or abiotic stress (disease, insects, drought and salinity). These imaging techniques include visible imaging (machine vision), imaging spectroscopy (multispectral and hyperspectral remote sensing), thermal infrared imaging, fluorescence imaging, 3D imaging and tomographic imaging (MRT, PET and CT). This paper presents a brief review on these imaging techniques and their applications in plant phenotyping. The features used to apply these imaging techniques to plant phenotyping are described and discussed in this review. PMID:25347588
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
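A minimal sketch of the motion-estimation use of the structure tensor follows: spatiotemporal gradients are stacked into a 3x3 tensor and the total-least-squares flow is read off the eigenvector of its smallest eigenvalue. A single displacement is estimated for the whole region (a per-pixel version would apply the same algebra in local windows), and the extension to diffusion constants and decay rates described in the abstract is not shown; names and the simple gradient stencils are illustrative.

```python
import numpy as np

def structure_tensor_flow(frames, t=1):
    """Total-least-squares motion estimate from the 3x3 structure tensor.

    `frames` is a (T, H, W) image sequence with T >= 3; central
    differences (with wrap-around at the edges) are used for the
    gradients around frame index `t`.
    """
    I = frames.astype(float)
    Ix = 0.5 * (np.roll(I[t], -1, axis=1) - np.roll(I[t], 1, axis=1))
    Iy = 0.5 * (np.roll(I[t], -1, axis=0) - np.roll(I[t], 1, axis=0))
    It = 0.5 * (I[t + 1] - I[t - 1])

    g = np.stack([Ix.ravel(), Iy.ravel(), It.ravel()])   # 3 x N gradient matrix
    J = g @ g.T                                          # structure tensor
    w, V = np.linalg.eigh(J)
    e = V[:, 0]                                          # eigenvector of smallest eigenvalue
    u, v = e[0] / e[2], e[1] / e[2]                      # flow satisfying Ix*u + Iy*v + It ~ 0
    return u, v
```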
Spectroscopic vector analysis for fast pattern quality monitoring
NASA Astrophysics Data System (ADS)
Sohn, Younghoon; Ryu, Sungyoon; Lee, Chihoon; Yang, Yusin
2018-03-01
In the semiconductor industry, fast and effective measurement of pattern variation has been a key challenge for assuring mass-production quality. Pattern measurement techniques such as conventional CD-SEMs or Optical CDs have been extensively used, but these techniques are increasingly limited in terms of measurement throughput and time spent in modeling. In this paper we propose a time-effective pattern monitoring method through a direct spectrum-based approach. In this technique, a wavelength band sensitive to a specific pattern change is selected from the spectroscopic ellipsometry signal scattered by the pattern to be measured, and the amplitude and phase variation in the wavelength band are analyzed as a measurement index of the pattern change. This pattern change measurement technique is applied to several process steps and its applicability is verified. Due to its fast and simple analysis, the method can be adapted to massive process variation monitoring, maximizing measurement throughput.
Adaptive wall technology for minimization of wall interferences in transonic wind tunnels
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.
1988-01-01
Modern experimental techniques to improve free air simulations in transonic wind tunnels by use of adaptive wall technology are reviewed. Considered are the significant advantages of adaptive wall testing techniques with respect to wall interferences, Reynolds number, tunnel drive power, and flow quality. The application of these testing techniques relies on making the test section boundaries adjustable and using a rapid wall adjustment procedure. A historical overview shows how the disjointed development of these testing techniques, since 1938, is closely linked to available computer support. An overview of Adaptive Wall Test Section (AWTS) designs shows a preference for use of relatively simple designs with solid adaptive walls in 2- and 3-D testing. Operational aspects of AWTS's are discussed with regard to production type operation where adaptive wall adjustments need to be quick. Both 2- and 3-D data are presented to illustrate the quality of AWTS data over the transonic speed range. Adaptive wall technology is available for general use in 2-D testing, even in cryogenic wind tunnels. In 3-D testing, more refinement of the adaptive wall testing techniques is required before more widespread use can be planned.
Granular Flow Graph, Adaptive Rule Generation and Tracking.
Pal, Sankar Kumar; Chakraborty, Debarati Bhunia
2017-12-01
A new method of adaptive rule generation in a granular computing framework is described based on a rough rule base and granular flow graph, and applied to video tracking. In the process, several new concepts and operations are introduced, and methodologies are formulated with superior performance. The flow graph enables the definition of an intelligent technique for rule-base adaptation, where its characteristics in mapping the relevance of attributes and rules in the decision-making system are exploited. Two new features, namely, expected flow graph and mutual dependency between flow graphs, are defined to make the flow graph applicable to the tasks of both training and validation. All these techniques are performed at the neighborhood granular level. A way of forming spatio-temporal 3-D granules of arbitrary shape and size is introduced. The rough flow graph-based adaptive granular rule-based system, thus produced for unsupervised video tracking, is capable of handling the uncertainties and incompleteness in frames, able to overcome the incompleteness of information that arises without initial manual interaction, and provides superior performance with a gain in computation time. The cases of partial overlapping and detecting unpredictable changes are handled efficiently. It is shown that the neighborhood granulation provides a balanced tradeoff between speed and accuracy as compared to pixel-level computation. The quantitative indices used for evaluating the performance of tracking do not require any information on ground truth as in the other methods. Superiority of the algorithm to nonadaptive and other recent ones is demonstrated extensively.
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
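As a rough illustration of the idea described above (learning a system model from streaming data and falling back on an engineered model outside the training regime), the following Python sketch fits a linear model online and blends it with a stand-in "engineered" model when the input moves outside the range seen so far. The class name, the blending rule, and the solar-array example are assumptions for illustration, not the thesis' method.

```python
# Hedged sketch of adaptive modeling from streaming telemetry (illustration only;
# variable names and the blending rule are assumptions, not the thesis' method).
import numpy as np

class OnlineLinearModel:
    """Learns y ~ w.x + b from a data stream with stochastic gradient descent."""
    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr
        self.x_min = np.full(n_features, np.inf)   # track the input range seen so far
        self.x_max = np.full(n_features, -np.inf)

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        err = (self.w @ x + self.b) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err
        self.x_min = np.minimum(self.x_min, x)
        self.x_max = np.maximum(self.x_max, x)

    def predict(self, x, engineered_model=None):
        x = np.asarray(x, dtype=float)
        learned = self.w @ x + self.b
        if engineered_model is None:
            return learned
        # Knowledge fusion: lean on the engineered model outside the seen range.
        novelty = np.mean((x < self.x_min) | (x > self.x_max))
        return (1.0 - novelty) * learned + novelty * engineered_model(x)

# usage: array current as a function of sun angle and load (synthetic data)
rng = np.random.default_rng(0)
model = OnlineLinearModel(n_features=2, lr=0.05)
engineered = lambda x: 2.0 * x[0] + 0.5 * x[1]      # stand-in physics-based model
for _ in range(2000):
    x = rng.uniform(0, 1, size=2)
    y = 2.1 * x[0] + 0.4 * x[1] + rng.normal(0, 0.01)
    model.update(x, y)
print(model.predict([0.5, 0.5]), model.predict([1.5, 0.5], engineered))
```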
Planning Beyond the Next Trial in Adaptive Experiments: A Dynamic Programming Approach.
Kim, Woojae; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I
2017-11-01
Experimentation is at the heart of scientific inquiry. In the behavioral and neural sciences, where only a limited number of observations can often be made, it is ideal to design an experiment that leads to the rapid accumulation of information about the phenomenon under study. Adaptive experimentation has the potential to accelerate scientific progress by maximizing inferential gain in such research settings. To date, most adaptive experiments have relied on myopic, one-step-ahead strategies in which the stimulus on each trial is selected to maximize inference on the next trial only. A lingering question in the field has been how much additional benefit would be gained by optimizing beyond the next trial. A range of technical challenges has prevented this important question from being addressed adequately. This study applies dynamic programming (DP), a technique applicable for such full-horizon, "global" optimization, to model-based perceptual threshold estimation, a domain that has been a major beneficiary of adaptive methods. The results provide insight into conditions that will benefit from optimizing beyond the next trial. Implications for the use of adaptive methods in cognitive science are discussed. Copyright © 2016 Cognitive Science Society, Inc.
Model and experiments to optimize co-adaptation in a simplified myoelectric control system.
Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A
2018-04-01
To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that are developed in the field of brain-machine interfaces, and that are beginning to be used in myoelectric controls. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied to muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. The simplified context used here made it possible to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
Spatially variant apodization for squinted synthetic aperture radar images.
Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo
2007-08-01
Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that improves sidelobe level and preserves resolution at the same time. This method implements a bidimensional finite impulse response filter with adaptive taps depending on image information. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focusing on strip synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behaviour are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images in order to demonstrate the good quality achieved along with efficient computation.
Fully customized placement of orthodontic miniplates: a novel clinical technique
2014-01-01
Introduction The initial stability and survival rate of orthodontic mini-implants are highly dependent on the amount of cortical bone at their insertion site. In areas with limited bone availability, mini-plates are preferred to provide effective skeletal anchorage. The purpose of this paper was to present a new clinical technique for the insertion of mini-plates. Methods In order to apply this new technique, a cone-beam image of the insertion area is required. Software (Galaxy; Sirona, Bensheim, Germany) is used to construct a three-dimensional image of the scanned area and to virtually determine the exact location of the mini-plate as well as the position of the fixation screws. A stereolithographic model (STL) is then created by means of a three-dimensional scanner. Prior to its surgical insertion, the bone plate is adapted to the stereolithographic model. Finally, a custom transfer jig is fabricated in order to assist with accurate placement of the mini-plate intra-operatively. Results The presented technique minimizes intra-operative decision making, because the final position of the bone plate is determined pre-surgically. This significantly reduces the duration of the surgical procedure and improves its outcome. Conclusions A novel method for surgical placement of orthodontic mini-plates is presented. The technique facilitates accurate adaptation of mini-plates and insertion of retaining surgical screws; thereby enabling clinicians to more confidently increase the use of bone plates, especially in anatomical areas where the success of non-osseointegrated mini-screws is less favorable. PMID:24886597
An adaptive learning control system for large flexible structures
NASA Technical Reports Server (NTRS)
Thau, F. E.
1985-01-01
The objective of the research has been to study the design of adaptive/learning control systems for the control of large flexible structures. In the first activity an adaptive/learning control methodology for flexible space structures was investigated. The approach was based on using a modal model of the flexible structure dynamics and an output-error identification scheme to identify modal parameters. In the second activity, a least-squares identification scheme was proposed for estimating both modal parameters and modal-to-actuator and modal-to-sensor shape functions. The technique was applied to experimental data obtained from the NASA Langley beam experiment. In the third activity, a separable nonlinear least-squares approach was developed for estimating the number of excited modes, shape functions, modal parameters, and modal amplitude and velocity time functions for a flexible structure. In the final research activity, a dual-adaptive control strategy was developed for regulating the modal dynamics and identifying modal parameters of a flexible structure. A min-max approach was used for finding an input to provide modal parameter identification while not exceeding reasonable bounds on modal displacement.
Wall interference tests of a CAST 10-2/DOA 2 airfoil in an adaptive-wall test section
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.
1987-01-01
A wind-tunnel investigation of a CAST 10-2/DOA 2 airfoil model has been conducted in the adaptive-wall test section of the Langley 0.3-Meter Transonic Cryogenic Tunnel (TCT) and in the National Aeronautical Establishment High Reynolds Number Two-Dimensional Test Facility. The primary goal of the tests was to assess two different wall-interference correction techniques: adaptive test-section walls and classical analytical corrections. Tests were conducted over a Mach number range from 0.3 to 0.8 and over a chord Reynolds number range from 6 million to 70 million. The airfoil aerodynamic characteristics from the tests in the 0.3-m TCT have been corrected for wall interference by the movement of the adaptive walls. No additional corrections for any residual interference have been applied to the data, to allow comparison with the classically corrected data from the same model in the conventional National Aeronautical Establishment facility. The data are presented graphically in this report as integrated force-and-moment coefficients and chordwise pressure distributions.
Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm
NASA Astrophysics Data System (ADS)
Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong
2018-06-01
The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. In order to obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter algorithm should satisfy Gaussian white noise conditions. The detection performance is degraded if anomalies affect the observation model or dynamic model. The adaptive Kalman filter algorithm, applied to clock frequency anomaly detection, uses the residuals given by the prediction to build an adaptive factor; the predicted state covariance matrix is corrected in real time by this adaptive factor. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified by a frequency jump simulation, a frequency drift jump simulation and measured data from an atomic clock using the chi-square test.
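To make the residual-driven idea concrete, the following Python sketch inflates the predicted covariance of a scalar Kalman filter whenever the normalized innovation exceeds a threshold. The random-walk frequency model, the threshold c0, and the inflation rule are assumptions for illustration; they are not the paper's exact formulation.

```python
# Minimal sketch of residual-driven adaptive covariance inflation in a scalar
# Kalman filter (assumed model; not the paper's exact formulation).
import numpy as np

def adaptive_kalman(z, q=1e-6, r=1e-3, c0=2.5):
    """Track a slowly varying clock frequency from noisy measurements z.
    The adaptive factor inflates the predicted covariance when the
    normalized innovation exceeds the threshold c0."""
    x, p = z[0], r
    estimates = []
    for zk in z[1:]:
        # prediction (random-walk frequency model)
        x_pred, p_pred = x, p + q
        # normalized innovation
        s = p_pred + r
        nu = zk - x_pred
        t = abs(nu) / np.sqrt(s)
        alpha = 1.0 if t <= c0 else (t / c0)    # adaptive factor
        p_pred *= alpha                          # real-time covariance correction
        # update
        k = p_pred / (p_pred + r)
        x = x_pred + k * nu
        p = (1 - k) * p_pred
        estimates.append(x)
    return np.array(estimates)

# usage: simulate a frequency jump at sample 500
rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(500), 5e-2 * np.ones(500)])
z = truth + rng.normal(0, np.sqrt(1e-3), size=1000)
print(adaptive_kalman(z)[[498, 520, 998]])
```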
Projection Operator: A Step Towards Certification of Adaptive Controllers
NASA Technical Reports Server (NTRS)
Larchev, Gregory V.; Campbell, Stefan F.; Kaneshige, John T.
2010-01-01
One of the major barriers to wider use of adaptive controllers in commercial aviation is the lack of appropriate certification procedures. In order to be certified by the Federal Aviation Administration (FAA), an aircraft controller is expected to meet a set of guidelines on functionality and reliability while not negatively impacting other systems or the safety of aircraft operations. Due to their inherent time-variant and non-linear behavior, adaptive controllers cannot be certified via the metrics used for linear conventional controllers, such as gain and phase margin. The Projection Operator is a robustness augmentation technique that bounds the output of a non-linear adaptive controller while conforming to the Lyapunov stability rules. It can also be used to limit the control authority of the adaptive component so that the said control authority can be arbitrarily close to that of a linear controller. In this paper we will present the results of applying the Projection Operator to a Model-Reference Adaptive Controller (MRAC), varying the amount of control authority, and comparing the controller's performance and stability characteristics with those of a linear controller. We will also show how adjusting Projection Operator parameters can make it easier for the controller to satisfy the certification guidelines by enabling a tradeoff between the controller's performance and robustness.
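The sketch below illustrates the generic smooth projection operator used in adaptive control to keep parameter estimates inside a bounded set; the ball radius theta_max, the boundary-layer width eps, and the quadratic boundary function are assumptions for illustration, not the flight controller studied in the paper.

```python
# Hedged sketch of a projection operator that bounds adaptive gains during an
# MRAC-style update (simplified; not the paper's flight controller).
import numpy as np

def project(theta, theta_dot, theta_max, eps=0.1):
    """Smooth projection: inside the boundary layer near ||theta|| = theta_max,
    attenuate the component of theta_dot that pushes theta further outward."""
    f = (theta @ theta - theta_max**2) / (eps * theta_max**2)   # convex boundary fn
    grad = 2.0 * theta / (eps * theta_max**2)
    if f > 0 and theta_dot @ grad > 0:
        # remove the outward component, scaled by how deep we are in the boundary layer
        theta_dot = theta_dot - f * (grad @ theta_dot) / (grad @ grad) * grad
    return theta_dot

# usage: adaptive law theta_dot = -gamma * e * x, kept near ||theta|| <= 2;
# theta sits just inside the boundary layer, so the outward update is attenuated
theta = np.array([2.05, 0.1])
raw_update = np.array([0.5, 0.05])
print(project(theta, raw_update, theta_max=2.0))
```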
Adaptive coding of MSS imagery. [Multi Spectral band Scanners
NASA Technical Reports Server (NTRS)
Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.
1977-01-01
A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.
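As a small illustration of one of the listed schemes, the following Python sketch implements a one-band adaptive DPCM coder whose quantizer step size follows recent residual magnitude. The step-adaptation rule and the constants are assumptions for illustration; the paper evaluates several adaptive schemes on Landsat data, not this exact coder.

```python
# Minimal adaptive DPCM sketch for one image band (illustrative only).
import numpy as np

def adaptive_dpcm(row, base_step=2.0):
    """Predict each sample from the previous reconstructed sample and quantize
    the residual with a step size adapted to recent residual magnitude."""
    recon, codes = [float(row[0])], []
    step = base_step
    for x in row[1:]:
        resid = float(x) - recon[-1]
        q = int(np.round(resid / step))
        codes.append(q)
        recon.append(recon[-1] + q * step)
        # adapt: widen the step in busy regions, shrink it in flat ones
        step = np.clip(0.9 * step + 0.1 * max(abs(resid), 1e-3), 0.5, 16.0)
    return np.array(codes), np.array(recon)

row = np.array([100, 102, 103, 120, 160, 158, 157])
codes, recon = adaptive_dpcm(row)
print(codes, np.abs(recon - row).max())   # transmitted symbols and max reconstruction error
```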
Hopkins, D S; Phoenix, R D; Abrahamsen, T C
1997-09-01
A technique for the fabrication of light-activated maxillary record bases is described. The use of a segmental polymerization process provides improved palatal adaptation by minimizing the effects of polymerization shrinkage. Utilization of this technique results in record bases that are well adapted to the corresponding master casts.
Planning Complex Projects Automatically
NASA Technical Reports Server (NTRS)
Henke, Andrea L.; Stottler, Richard H.; Maher, Timothy P.
1995-01-01
Automated Manifest Planner (AMP) computer program applies combination of artificial-intelligence techniques to assist both expert and novice planners, reducing planning time by orders of magnitude. Gives planners flexibility to modify plans and constraints easily, without need for programming expertise. Developed specifically for planning space shuttle missions 5 to 10 years ahead, with modifications, applicable in general to planning other complex projects requiring scheduling of activities depending on other activities and/or timely allocation of resources. Adaptable to variety of complex scheduling problems in manufacturing, transportation, business, architecture, and construction.
Identifying sediment sources in the sediment TMDL process
Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph P.; Landy, R.B.; Gorman Sanisaca, Lillian E.
2015-01-01
Sediment is an important pollutant contributing to aquatic-habitat degradation in many waterways of the United States. This paper discusses the application of sediment budgets in conjunction with sediment fingerprinting as tools to determine the sources of sediment in impaired waterways. These approaches complement monitoring, assessment, and modeling of sediment erosion, transport, and storage in watersheds. Combining the sediment fingerprinting and sediment budget approaches can help determine specific adaptive management plans and techniques applied to targeting hot spots or areas of high erosion.
Diagnostic Molecular Microbiology: A 2018 Snapshot.
Fairfax, Marilynn Ransom; Bluth, Martin H; Salimnia, Hossein
2018-06-01
Molecular biological techniques have evolved expeditiously and in turn have been applied to the detection of infectious disease. Maturation of these technologies and their coupling with related technological advancement in fluorescence, electronics, digitization, nanodynamics, and sensors among others have afforded clinical medicine additional tools toward expedient identification of infectious organisms at concentrations and sensitivities previously unattainable. These advancements have been adapted in select settings toward addressing clinical demands for more timely and effective patient management. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Floyd, A.; Liljedahl, A. K.; Gens, R.; Prakash, A.; Mann, D. H.
2011-12-01
A combined use of remote sensing techniques, modeling and in-situ measurements is a pragmatic approach to study arctic hydrology, given the vastness, complexity, and logistical challenges posed by most arctic watersheds. Remote sensing techniques can provide tools to assess the geospatial variations that form the integrated response of a river system and therefore provide important details for studying climate change effects on the remote arctic environment. The proposed study tests the applicability of remote sensing and modeling techniques to map, monitor and compare river temperatures and river break-up in the coastal and foothill sections of the Kuparuk River, which is an intensely studied watershed. We co-registered about one hundred synthetic aperture radar (SAR) images from the RADARSAT-1, ERS-1 and ERS-2 satellites, acquired during the months of May through July for the period between 1999 and 2010. Co-registration involved a Fast Fourier Transform (FFT) match of amplitude images. The offsets were then applied to the radiometrically corrected SAR images, converted to dB values, to generate an image stack. We applied a mask to extract pixels representing only the river, and used an adaptive threshold to delineate open water from frozen areas. The variation in river break-up can be bracketed by defining open vs. frozen river conditions. Summer river surface water temperatures will be simulated through the well-established HEC-RAS hydrologic software package and validated with field measurements. The three-pronged approach of using remote sensing, modeling and field measurements demonstrated in this study can be adapted to work for other watersheds across the Arctic.
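The abstract does not specify how the adaptive threshold is chosen, so the following Python sketch uses an Otsu-style threshold on the dB pixel values as an illustrative stand-in for separating open water from frozen river pixels; the simulated backscatter distributions are assumptions.

```python
# Hedged sketch of an adaptive (Otsu-style) threshold separating open water
# from frozen river pixels in a SAR backscatter (dB) image.
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximizing between-class variance of a 1-D sample."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float) / hist.sum()
    w0 = np.cumsum(w)                       # class probabilities below threshold
    w1 = 1.0 - w0
    mu = np.cumsum(w * centers)
    mu_t = mu[-1]
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid])**2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# usage: river pixels in dB (smooth open water typically returns lower backscatter)
rng = np.random.default_rng(2)
db = np.concatenate([rng.normal(-18, 2, 4000), rng.normal(-8, 2, 6000)])
t = otsu_threshold(db)
open_water_fraction = np.mean(db < t)
print(round(t, 1), round(open_water_fraction, 2))
```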
Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays
Li, David Day-Uei; Ameer-Beg, Simon; Arlt, Jochen; Tyndall, David; Walker, Richard; Matthews, Daniel R.; Visitkul, Viput; Richardson, Justin; Henderson, Robert K.
2012-01-01
We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performances are demonstrated on the data obtained from 0.13 μm CMOS SPAD arrays and the multiple-decay data obtained from scanning PMT systems. In vivo two photon fluorescence lifetime imaging data of FITC-albumin labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how different algorithms perform on bi-decay data. The proposed techniques are capable of producing lifetime images with enough contrast. PMID:22778606
A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.
Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong
2017-04-01
This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to the electrocardiogram (ECG), the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT/BIH database. The power reduction is demonstrated using a Bluetooth transceiver, with power consumption reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
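The lossy-plus-residual structure can be illustrated with a few lines of Python: a coarse quantizer provides the lossy stream, and the compressed residual allows exact reconstruction on demand. The quantization step and the use of zlib as a stand-in for the paper's entropy coder are assumptions for illustration.

```python
# Hedged sketch of the lossy-plus-residual idea: coarse quantization gives the
# lossy stream, and the entropy-coded residual (zlib stands in for the paper's
# entropy coder) allows lossless reconstruction on demand.
import numpy as np, zlib

def hybrid_compress(ecg, step=8):
    lossy = np.round(ecg / step).astype(np.int16)           # lossy stream
    resid = (ecg - lossy * step).astype(np.int16)           # residual error
    resid_stream = zlib.compress(resid.tobytes(), level=9)  # lossless residual
    return lossy, resid_stream

def hybrid_decompress(lossy, resid_stream, step=8):
    resid = np.frombuffer(zlib.decompress(resid_stream), dtype=np.int16)
    return lossy.astype(np.int16) * step + resid             # exact reconstruction

rng = np.random.default_rng(3)
ecg = np.cumsum(rng.integers(-4, 5, size=5000)).astype(np.int16)  # synthetic signal
lossy, resid_stream = hybrid_compress(ecg)
restored = hybrid_decompress(lossy, resid_stream)
print(np.array_equal(restored, ecg), len(resid_stream), ecg.nbytes)
```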
A Robust Deep Model for Improved Classification of AD/MCI Patients
Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang
2015-01-01
Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María
2015-01-01
The aim of this study was to obtain a predictive model able to perform early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, the central segregation was studied successfully using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. For this purpose, the most important physical-chemical parameters are considered. The results of the present study are two-fold. In the first place, the significance of each physical-chemical variable on the segregation is presented through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed, and coefficients of determination of 0.93 for continuity factor estimation and 0.95 for average width, respectively, were obtained when the MARS technique was applied to the experimental dataset. The agreement between experimental data and the model confirmed the good performance of the latter.
Improving the Held and Karp Approach with Constraint Programming
NASA Astrophysics Data System (ADS)
Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan
Held and Karp have proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explored two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
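For readers unfamiliar with the relaxation the CP filtering builds on, the following Python sketch computes a single Held-Karp 1-tree lower bound (minimum spanning tree over all nodes but one, plus the two cheapest edges at the excluded node). It is an illustrative simplification: the node penalties and subgradient updates of the full Held-Karp procedure, and the CP filtering itself, are not reproduced.

```python
# Illustrative sketch of the Held-Karp 1-tree lower bound (no node penalties).
import numpy as np

def prim_mst_cost(dist, nodes):
    """Cost of a minimum spanning tree over `nodes` (Prim's algorithm)."""
    nodes = list(nodes)
    in_tree = {nodes[0]}
    cost = 0.0
    while len(in_tree) < len(nodes):
        best = min((dist[i, j], j) for i in in_tree for j in nodes if j not in in_tree)
        cost += best[0]
        in_tree.add(best[1])
    return cost

def one_tree_bound(dist):
    """1-tree: MST over nodes 1..n-1 plus the two cheapest edges incident to node 0."""
    n = dist.shape[0]
    mst = prim_mst_cost(dist, range(1, n))
    two_cheapest = np.sort(dist[0, 1:])[:2].sum()
    return mst + two_cheapest

# usage on a small symmetric Euclidean instance
pts = np.array([[0, 0], [0, 3], [4, 3], [4, 0], [2, 1]])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(round(one_tree_bound(dist), 2))   # lower bound on the optimal tour length
```

Held and Karp iterate such bound evaluations, adjusting per-node penalties by subgradient ascent so that the 1-tree approaches a tour; the CP contribution in the paper filters edge domains using the same spanning-tree structure.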
Bidding Agents That Perpetrate Auction Fraud
NASA Astrophysics Data System (ADS)
Trevathan, Jarrod; McCabe, Alan; Read, Wayne
This paper presents a software bidding agent that inserts fake bids on the seller's behalf to inflate an auction's price. This behaviour is referred to as shill bidding. Shill bidding is strictly prohibited by online auctioneers, as it defrauds unsuspecting buyers by forcing them to pay more for the item. The malicious bidding agent was constructed to aid in developing shill detection techniques. We have previously documented a simple shill bidding agent that incrementally increases the auction price until it reaches the desired profit target, or it becomes too risky to continue bidding. This paper presents an adaptive shill bidding agent which, when used over a series of auctions with substitutable items, can revise its strategy based on bidding behaviour in past auctions. The adaptive agent applies a novel prediction technique referred to as the Extremum Consistency (EC) algorithm, to determine the optimal price to aim for. The EC algorithm has successfully been used in handwritten signature verification for determining the maximum and minimum values in an input stream. The agent's ability to inflate the price has been tested in a simulated marketplace and experimental results are presented.
Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment
NASA Astrophysics Data System (ADS)
Soliz, P.; Larichev, A.; Zamora, G.; Murillo, S.; Barriga, E. S.
2010-02-01
Researchers have sought to gain greater insight into the mechanisms of the retina and the optic disc at high spatial resolutions that would enable the visualization of small structures such as photoreceptors and nerve fiber bundles. The sources of retinal image quality degradation are aberrations within the human eye, which limit the achievable resolution and the contrast of small image details. To overcome these fundamental limitations, researchers have been applying adaptive optics (AO) techniques to correct for the aberrations. Today, deformable mirror based adaptive optics devices have been developed to overcome the limitations of standard fundus cameras, but at prices that are typically unaffordable for most clinics. In this paper we demonstrate a clinically viable fundus camera with auto-focus and astigmatism correction that is easy to use and has improved resolution. We have shown that removal of low-order aberrations results in significantly better resolution and quality images. Additionally, through the application of image restoration and super-resolution techniques, the images present considerably improved quality. The improvements lead to enhanced visualization of retinal structures associated with pathology.
An efficient incremental learning mechanism for tracking concept drift in spam filtering
Sheu, Jyh-Jian; Chu, Ko-Tsung; Li, Nien-Feng; Lee, Cheng-Chi
2017-01-01
This research presents an in-depth analysis of knowledge about spam and proposes an efficient spam filtering method with the ability to adapt to a dynamic environment. We focus on the analysis of email headers and apply a decision tree data mining technique to look for association rules about spam. Then, we propose an efficient systematic filtering method based on these association rules. Our systematic method has the following major advantages: (1) It checks only the header sections of emails, unlike current spam filtering methods that have to analyze the full email content; meanwhile, the email filtering accuracy is expected to be enhanced. (2) To address the problem of concept drift, we propose a window-based technique to estimate the degree of concept drift for each unknown email, which helps our filtering method recognize the occurrence of spam. (3) We propose an incremental learning mechanism for our filtering method to strengthen its ability to adapt to a dynamic environment. PMID:28182691
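The window-based drift idea can be illustrated with a short Python sketch that compares the classifier's recent error rate against its historical rate and flags drift when the gap exceeds a tolerance. The window length, the tolerance, and the statistic are assumptions for illustration, not the paper's exact estimator.

```python
# Hedged sketch of a window-based concept-drift check for an incremental filter.
from collections import deque

class DriftMonitor:
    def __init__(self, window=200, tolerance=0.10):
        self.recent = deque(maxlen=window)   # 1 = misclassified, 0 = correct
        self.seen, self.errors = 0, 0
        self.tolerance = tolerance

    def add(self, misclassified):
        self.recent.append(int(misclassified))
        self.seen += 1
        self.errors += int(misclassified)

    def drift_detected(self):
        if len(self.recent) < self.recent.maxlen:
            return False                     # wait until the window is full
        overall = self.errors / self.seen
        windowed = sum(self.recent) / len(self.recent)
        return windowed - overall > self.tolerance

# usage: feed per-email outcomes; retrain incrementally when drift is flagged
monitor = DriftMonitor(window=100)
for outcome in [0] * 500 + [1] * 60:         # error burst after a distribution shift
    monitor.add(outcome)
print(monitor.drift_detected())
```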
Enabling Incremental Query Re-Optimization
Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau
2017-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, with consideration of object motion. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas including the eyes and lips need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics
NASA Astrophysics Data System (ADS)
Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.
2006-06-01
Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.
NASA Astrophysics Data System (ADS)
Mohd Yunos, Zuriahati; Shamsuddin, Siti Mariyam; Ismail, Noriszura; Sallehuddin, Roselina
2013-04-01
An artificial neural network (ANN) with the back-propagation (BP) algorithm and ANFIS were chosen as alternative techniques for modeling motor insurance claims. In particular, the ANN and ANFIS techniques are applied to model and forecast Malaysian motor insurance data categorized into four claim types: third party property damage (TPPD), third party bodily injury (TPBI), own damage (OD) and theft. This study aims to determine whether ANN and ANFIS models are capable of accurately predicting motor insurance claims. Changes were made to the network structure: the number of input nodes, the number of hidden nodes and the pre-processing techniques were examined, and a cross-validation technique was used to improve the generalization ability of the ANN and ANFIS models. Based on the empirical studies, the prediction performance of the ANN and ANFIS models is improved by using different numbers of input nodes and hidden nodes, and also various sizes of data. The experimental results reveal that the ANFIS model outperformed the ANN model. Both models are capable of producing reliable predictions for the Malaysian motor insurance claims and hence the proposed method can be applied as an alternative for predicting claim frequency and claim severity.
NASA Astrophysics Data System (ADS)
Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan
2018-05-01
A novel self-adaptive background current compensation circuit applied to infrared focal plane arrays is proposed in this paper; it can compensate the background current generated under different conditions. A double-threshold detection strategy is designed to estimate and eliminate the background currents, which significantly reduces the hardware overhead and improves uniformity among pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. Test results from a 4 × 4 experimental chip showed that the proposed circuit achieves high precision, wide applicability and high intelligence. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation and imaging verification of the uncooled infrared focal plane array, have also been completed.
AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System
NASA Astrophysics Data System (ADS)
Wang, R.; Harris, C.; Wicenec, A.
2016-07-01
In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. Having this in mind, we looked into various storage backend techniques that can possibly enable parallel I/O for CTDS by implementing new storage managers. After carrying out benchmarks that showed the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS Storage Manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.
de Lamare, Rodrigo C; Sampaio-Neto, Raimundo
2008-11-01
A space-time adaptive decision feedback (DF) receiver using recurrent neural networks (RNNs) is proposed for joint equalization and interference suppression in direct-sequence code-division multiple-access (DS-CDMA) systems equipped with antenna arrays. The proposed receiver structure employs dynamically driven RNNs in the feedforward section for equalization and multiaccess interference (MAI) suppression and a finite impulse response (FIR) linear filter in the feedback section for performing interference cancellation. A data selective gradient algorithm, based upon the set-membership (SM) design framework, is proposed for the estimation of the coefficients of RNN structures and is applied to the estimation of the parameters of the proposed neural receiver structure. Simulation results show that the proposed techniques achieve significant performance gains over existing schemes.
Network analysis of a financial market based on genuine correlation and threshold method
NASA Astrophysics Data System (ADS)
Namaki, A.; Shirazi, A. H.; Raei, R.; Jafari, G. R.
2011-10-01
A financial market is an example of an adaptive complex network consisting of many interacting units. This network reflects the market's behavior. In this paper, we use the Random Matrix Theory (RMT) notion of the largest eigenvector of the correlation matrix to specify the market mode of the stock network. For better risk management, we clean the correlation matrix by removing the market mode from the data and then construct this matrix based on the residuals. We show that this technique has an important effect on the correlation coefficient distribution by applying it to the Dow Jones Industrial Average (DJIA). To study the topological structure of a network we apply the market-mode removal technique and the threshold method to the Tehran Stock Exchange (TSE) as an example. We show that this network follows a power-law model in certain intervals. We also show the behavior of clustering coefficients and component numbers of this network for different thresholds. These outputs are useful for both theoretical and practical purposes such as asset allocation and risk management.
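The market-mode removal step can be sketched in a few lines of Python: project the standardized returns onto the largest eigenvector of the correlation matrix, regress that component out of each stock, and build the threshold network from the residual correlations. The regression form, the threshold value, and the synthetic data are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: remove the market mode (largest eigenvector of the correlation
# matrix) from return data before building a threshold network.
import numpy as np

def market_mode_residuals(returns):
    """returns: T x N matrix of stock returns. Regress out the market mode."""
    z = (returns - returns.mean(0)) / returns.std(0)
    corr = np.corrcoef(z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)          # eigenvalues in ascending order
    market = z @ eigvec[:, -1]                     # projection on the largest eigenvector
    beta = (z.T @ market) / (market @ market)      # per-stock loading on the market mode
    return z - np.outer(market, beta)              # residual ("genuine") returns

def threshold_network(returns, threshold=0.3):
    resid = market_mode_residuals(returns)
    genuine_corr = np.corrcoef(resid, rowvar=False)
    n = genuine_corr.shape[0]
    return (np.abs(genuine_corr) > threshold) & ~np.eye(n, dtype=bool)

# usage on synthetic returns driven by one shared market factor
rng = np.random.default_rng(4)
common = rng.normal(size=(500, 1))
returns = 0.7 * common + rng.normal(size=(500, 20))
print(threshold_network(returns).sum() // 2, "edges after removing the market mode")
```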
NASA Technical Reports Server (NTRS)
1999-01-01
F&S Inc. developed and commercialized fiber optic and microelectromechanical systems (MEMS)-based instrumentation for harsh environments encountered in the aerospace industry. The NASA SBIR programs have provided F&S the funds and the technology to develop ruggedized coatings and coating techniques that are applied during the optical fiber draw process. The F&S optical fiber fabrication facility and developed coating methods enable F&S to manufacture specialty optical fiber with custom designed refractive index profiles and protective or active coatings. F&S has demonstrated sputtered coatings using metals and ceramics and combinations of each, and has also developed techniques to apply thin coatings of specialized polyimides formulated at NASA Langley Research Center. With these capabilities, F&S has produced cost-effective, reliable instrumentation and sensors capable of withstanding temperatures up to 800 °C and continues building commercial sales with corporate partners and private funding. More recently, F&S has adapted the same sensing platforms to provide the rapid detection and identification of chemical and biological agents.
Using cluster analysis for medical resource decision making.
Dilts, D; Khamalah, J; Plotkin, A
1995-01-01
Escalating costs of health care delivery have in the recent past often made the health care industry investigate, adapt, and apply those management techniques relating to budgeting, resource control, and forecasting that have long been used in the manufacturing sector. A strategy that has contributed much in this direction is the definition and classification of a hospital's output into "products" or groups of patients that impose similar resource or cost demands on the hospital. Existing classification schemes have frequently employed cluster analysis in generating these groupings. Unfortunately, the myriad articles and books on clustering and classification contain few formalized selection methodologies for choosing a technique for solving a particular problem, hence they often leave the novice investigator at a loss. This paper reviews the literature on clustering, particularly as it has been applied in the medical resource-utilization domain, addresses the critical choices facing an investigator in the medical field using cluster analysis, and offers suggestions (using the example of clustering low-vision patients) for how such choices can be made.
3D multiscale crack propagation using the XFEM applied to a gas turbine blade
NASA Astrophysics Data System (ADS)
Holl, Matthias; Rogge, Timo; Loehnert, Stefan; Wriggers, Peter; Rolfes, Raimund
2014-01-01
This work presents a new multiscale technique to investigate advancing cracks in three-dimensional space. This fully adaptive multiscale technique is designed to take into account cracks of different length scales efficiently, by enabling fine scale domains locally in regions of interest, i.e. where stress concentrations and high stress gradients occur. Due to crack propagation, these regions change during the simulation process. Cracks are modeled using the extended finite element method, such that an accurate and powerful numerical tool is achieved. Restricting ourselves to linear elastic fracture mechanics, the J-integral yields an accurate solution for the stress intensity factors, and with the criterion of maximum hoop stress, a precise direction of growth. If necessary, the crack surface computed on the finest scale is finally transferred to the corresponding scale. In a final step, the model is applied to a quadrature point of a gas turbine blade, to compute crack growth on the microscale of a real structure.
PSO/ACO algorithm-based risk assessment of human neural tube defects in Heshun County, China.
Liao, Yi Lan; Wang, Jin Feng; Wu, Ji Lei; Wang, Jiao Jiao; Zheng, Xiao Ying
2012-10-01
To develop a new technique for assessing the risk of birth defects, which are a major cause of infant mortality and disability in many parts of the world. The region of interest in this study was Heshun County, the county in China with the highest rate of neural tube defects (NTDs). A hybrid particle swarm optimization/ant colony optimization (PSO/ACO) algorithm was used to quantify the probability of NTDs occurring at villages with no births. The hybrid PSO/ACO algorithm is a form of artificial intelligence adapted for hierarchical classification. It is a powerful technique for modeling complex problems involving impacts of causes. The algorithm was easy to apply, with the accuracy of the results being 69.5%±7.02% at the 95% confidence level. The proposed method is simple to apply, has acceptable fault tolerance, and greatly enhances the accuracy of calculations. Copyright © 2012 The Editorial Board of Biomedical and Environmental Sciences. Published by Elsevier B.V. All rights reserved.
IGA: A Simplified Introduction and Implementation Details for Finite Element Users
NASA Astrophysics Data System (ADS)
Agrawal, Vishal; Gautam, Sachin S.
2018-05-01
Isogeometric analysis (IGA) is a recently introduced technique that employs the Computer Aided Design (CAD) concept of Non-uniform Rational B-splines (NURBS) tool to bridge the substantial bottleneck between the CAD and finite element analysis (FEA) fields. The simplified transition of exact CAD models into the analysis alleviates the issues originating from geometrical discontinuities and thus, significantly reduces the design-to-analysis time in comparison to traditional FEA technique. Since its origination, the research in the field of IGA is accelerating and has been applied to various problems. However, the employment of CAD tools in the area of FEA invokes the need of adapting the existing implementation procedure for the framework of IGA. Also, the usage of IGA requires the in-depth knowledge of both the CAD and FEA fields. This can be overwhelming for a beginner in IGA. Hence, in this paper, a simplified introduction and implementation details for the incorporation of NURBS based IGA technique within the existing FEA code is presented. It is shown that with little modifications, the available standard code structure of FEA can be adapted for IGA. For the clear and concise explanation of these modifications, step-by-step implementation of a benchmark plate with a circular hole under the action of in-plane tension is included.
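A minimal piece of the NURBS machinery that such an implementation must add to a standard FEA code is the B-spline basis evaluation via the Cox-de Boor recursion, sketched below in Python. The knot vector and degree are illustrative assumptions; this is not the paper's code.

```python
# Small sketch of the Cox-de Boor recursion for B-spline basis functions, the
# building block of the NURBS basis used in IGA.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# usage: quadratic basis on an open knot vector; values sum to 1 (partition of unity)
knots = [0, 0, 0, 0.5, 1, 1, 1]
u = 0.3
vals = [bspline_basis(i, 2, u, knots) for i in range(4)]
print(vals, sum(vals))
```

In an isogeometric code these basis values (weighted to form the rational NURBS basis) replace the Lagrange shape functions at each quadrature point, which is the main modification to the standard FEA element loop.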
A Systematic Comparison of Data Selection Criteria for SMT Domain Adaptation
Chao, Lidia S.; Lu, Yi; Xing, Junwen
2014-01-01
Data selection has shown significant improvements in effective use of training data by extracting sentences from large general-domain corpora to adapt statistical machine translation (SMT) systems to in-domain data. This paper performs an in-depth analysis of three different sentence selection techniques. The first one is cosine tf-idf, which comes from the realm of information retrieval (IR). The second is perplexity-based approach, which can be found in the field of language modeling. These two data selection techniques applied to SMT have been already presented in the literature. However, edit distance for this task is proposed in this paper for the first time. After investigating the individual model, a combination of all three techniques is proposed at both corpus level and model level. Comparative experiments are conducted on Hong Kong law Chinese-English corpus and the results indicate the following: (i) the constraint degree of similarity measuring is not monotonically related to domain-specific translation quality; (ii) the individual selection models fail to perform effectively and robustly; but (iii) bilingual resources and combination methods are helpful to balance out-of-vocabulary (OOV) and irrelevant data; (iv) finally, our method achieves the goal to consistently boost the overall translation performance that can ensure optimal quality of a real-life SMT system. PMID:24683356
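The perplexity-based criterion mentioned above is typically a cross-entropy difference between an in-domain and a general-domain language model. The Python sketch below illustrates the scoring idea with add-one unigram models; real systems use n-gram language models, and the toy sentences are assumptions for illustration.

```python
# Hedged sketch of perplexity-based selection (cross-entropy difference) with
# simple add-one unigram models; illustrative only, not the paper's LMs.
import math
from collections import Counter

def unigram_logprob(sentence, counts, total, vocab):
    return sum(math.log((counts[w] + 1) / (total + vocab)) for w in sentence.split())

def ce_difference(sentence, in_counts, gen_counts):
    vocab = len(set(in_counts) | set(gen_counts))
    n = max(len(sentence.split()), 1)
    h_in = -unigram_logprob(sentence, in_counts, sum(in_counts.values()), vocab) / n
    h_gen = -unigram_logprob(sentence, gen_counts, sum(gen_counts.values()), vocab) / n
    return h_in - h_gen     # lower = more in-domain-like

in_domain = ["the court finds the appeal allowed", "the ordinance shall apply"]
general = ["the weather is nice today", "we went to the market yesterday"]
in_counts = Counter(w for s in in_domain for w in s.split())
gen_counts = Counter(w for s in general for w in s.split())
candidates = ["the appeal shall apply to the court", "the market is nice"]
print(sorted(candidates, key=lambda s: ce_difference(s, in_counts, gen_counts)))
```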
Magnetic tweezers for the measurement of twist and torque.
Lipfert, Jan; Lee, Mina; Ordu, Orkide; Kerssemakers, Jacob W J; Dekker, Nynke H
2014-05-19
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a "conventional" magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the "conventional" magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
Adaptive enhancement for nonuniform illumination images via nonlinear mapping
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Huang, Qian; Hu, Jing
2017-09-01
Nonuniform illumination images suffer from degenerated details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposure regions should be lightened, whereas overexposure areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as local luminance varies, thus is suitable for manipulating complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thus trying to avoid exaggerated colors at dark areas and depressed colors at highly bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.
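To illustrate the idea of a demarcation that follows local luminance, the Python sketch below computes a local mean with a box filter and applies a gamma that lifts dark neighborhoods and dims bright ones. The box-filter radius and the specific gamma mapping are assumptions for illustration; they are not the authors' nonlinear functions.

```python
# Hedged sketch of a locally adaptive luminance mapping: the demarcation between
# under- and over-exposure follows the local mean luminance.
import numpy as np

def box_mean(img, radius=15):
    """Local mean via an integral image (box filter)."""
    pad = np.pad(img, radius, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # leading zero row/column
    k = 2 * radius + 1
    h, w = img.shape
    s = c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]
    return s / (k * k)

def adaptive_enhance(luma):
    """luma in [0, 1]; gamma < 1 lifts dark regions, gamma > 1 dims bright ones."""
    local = box_mean(luma)
    gamma = 0.5 + local                       # demarcation varies with local luminance
    return np.clip(luma ** gamma, 0.0, 1.0)

rng = np.random.default_rng(5)
luma = np.clip(rng.normal(0.3, 0.2, size=(64, 64)), 0, 1)   # underexposed test patch
print(round(luma.mean(), 3), round(adaptive_enhance(luma).mean(), 3))
```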
Jhin, Changho; Hwang, Keum Taek
2014-01-01
Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted by quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed by using quantum chemical descriptors of anthocyanins calculated by semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of flavylium cation, and ionization potential (I) of quinoidal base were significantly correlated with radical scavenging activities of anthocyanins. These descriptors were used as independent variables for QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency with Q-square of 0.82 and 0.86, respectively. PMID:25153627
Calculating intensities using effective Hamiltonians in terms of Coriolis-adapted normal modes.
Karthikeyan, S; Krishnan, Mangala Sunder; Carrington, Tucker
2005-01-15
The calculation of rovibrational transition energies and intensities is often hampered by the fact that vibrational states are strongly coupled by Coriolis terms. Because it invalidates the use of perturbation theory for the purpose of decoupling these states, the coupling makes it difficult to analyze spectra and to extract information from them. One either ignores the problem and hopes that the effect of the coupling is minimal or one is forced to diagonalize effective rovibrational matrices (rather than diagonalizing effective rotational matrices). In this paper we apply a procedure, based on a quantum mechanical canonical transformation for deriving decoupled effective rotational Hamiltonians. In previous papers we have used this technique to compute energy levels. In this paper we show that it can also be applied to determine intensities. The ideas are applied to the ethylene molecule.
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
NASA Astrophysics Data System (ADS)
Guo, Qiang
Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for the aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations on time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometer to several micrometer while the spatial dimension is usually described with kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to the spatial aerosol dynamic systems. Wavelet Galerkin method is proposed to solve the aerosol dynamic equations on time and particle size due to the fact that aerosol distribution changes strongly along size direction and the wavelet technique can solve it very efficiently. Daubechies' wavelets are considered in the study due to the fact that they possess useful properties like orthogonality, compact support, exact representation of polynomials to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of adaptive multiresolution technique and the characteristics method. On the aspect of theoretical analysis, the global existence and uniqueness of solutions of continuous time wavelet numerical methods for the nonlinear aerosol dynamics are proved by using Schauder's fixed point theorem and the variational technique. Optimal error estimates are derived for both continuous and discrete time wavelet Galerkin schemes. We further derive reliable and efficient a posteriori error estimate which is based on stable multiresolution wavelet bases and an adaptive space-time algorithm for efficient solution of linear parabolic differential equations. The adaptive space refinement strategies based on the locality of corresponding multiresolution processes are proved to converge. At last, we develop efficient numerical methods by combining the wavelet methods proposed in previous parts and the splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle size direction and the upstream finite difference method along the spatial direction are alternately used in each time interval. Numerical experiments are taken to show the effectiveness of our developed methods.
Eliciting adaptive emotion in conversations with parents of children receiving therapy for leukemia.
Tremolada, Marta; Bonichini, Sabrina; Pillon, Marta; Schiavo, Simone; Carli, Modesto
2011-01-01
Clinician-parent communication may often be difficult, especially soon after the diagnosis. The aims of this article are to identify the communication strategies associated with expressions of adaptive emotions in parents and to explore the effect of the type of leukemia and of parent gender on parents' expressions of emotions. The data are obtained from 4,622 conversational turns of 20 videotaped interviews with 10 mothers and 10 fathers of children at their first hospitalization for leukemia. A coding scheme for parent emotional expressions was reliably applied by two independent judges. An original self-report questionnaire on parents' emotional states was used before and after the interview. Positive politeness of the interviewer elicits adaptive emotional expressions in parents. Mothers of children with acute myeloid leukemia and fathers of children with acute lymphoblastic leukemia appear more distressed during the interview. This interview can be identified as an innovative technique of communication with parents of children with cancer.
In situ 3D nanoprinting of free-form coupling elements for hybrid photonic integration
NASA Astrophysics Data System (ADS)
Dietrich, P.-I.; Blaicher, M.; Reuter, I.; Billah, M.; Hoose, T.; Hofmann, A.; Caer, C.; Dangel, R.; Offrein, B.; Troppenz, U.; Moehrle, M.; Freude, W.; Koos, C.
2018-04-01
Hybrid photonic integration combines complementary advantages of different material platforms, offering superior performance and flexibility compared with monolithic approaches. This applies in particular to multi-chip concepts, where components can be individually optimized and tested. The assembly of such systems, however, requires expensive high-precision alignment and adaptation of optical mode profiles. We show that these challenges can be overcome by in situ printing of facet-attached beam-shaping elements. Our approach allows precise adaptation of vastly dissimilar mode profiles and permits alignment tolerances compatible with cost-efficient passive assembly techniques. We demonstrate a selection of beam-shaping elements at chip and fibre facets, achieving coupling efficiencies of up to 88% between edge-emitting lasers and single-mode fibres. We also realize printed free-form mirrors that simultaneously adapt beam shape and propagation direction, and we explore multi-lens systems for beam expansion. The concept paves the way to automated assembly of photonic multi-chip systems with unprecedented performance and versatility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Zhu, Feng; Ukkusuri, Satish V.
Here, this research applies an R-Markov Average Reward Technique based reinforcement learning (RL) algorithm, namely RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variants of the RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q-learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from the test networks show significant reductions in emissions (CO, CO2, NOx, VOC, PM10) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
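As a rough illustration of the average-reward family of updates behind RMART, the sketch below implements a generic tabular R-learning step. It is not the authors' RMART formulation: the states, actions, rewards, and the toy transition loop are hypothetical placeholders standing in for a traffic simulator and the connected-vehicle information sharing.

```python
import random
from collections import defaultdict

alpha, beta, eps = 0.1, 0.01, 0.1      # learning rates and exploration rate
Q = defaultdict(float)                  # Q[(state, action)]
rho = 0.0                               # running estimate of the average reward

def choose_action(state, actions):
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """One average-reward (R-learning style) update step."""
    global rho
    greedy = (action == max(actions, key=lambda a: Q[(state, a)]))
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward - rho + best_next - Q[(state, action)])
    if greedy:  # update the average-reward estimate only on greedy actions
        best_here = max(Q[(state, a)] for a in actions)
        rho += beta * (reward - rho + best_next - best_here)

# Toy demonstration on random transitions (a traffic simulator would go here)
actions = ["NS_green", "EW_green"]
state = 0
for _ in range(1000):
    a = choose_action(state, actions)
    next_state = random.randint(0, 4)
    reward = -random.random()           # e.g. negative total queue length
    update(state, a, reward, next_state, actions)
    state = next_state
print("average reward estimate:", round(rho, 3))
```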
Sadjadi, Firooz A; Mahalanobis, Abhijit
2006-05-01
We report the development of a technique for adaptive selection of the polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in phase and quadrature phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross sections of the radar scatterers are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height filter, we derive a target-versus-clutter distance measure as a function of the four transmit and receive polarization state angles. The results of applying this method to real synthetic aperture radar imagery indicate a set of four transmit and receive angles that lead to maximum target-versus-clutter discrimination. These optimum angles are different for different targets. Hence, by adaptive control of the state of polarization of a polarimetric radar, one can noticeably improve the discrimination of targets from clutter.
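A simplified polarization-synthesis sketch is given below: given a 2x2 complex scattering matrix, it evaluates the synthesized received power for chosen transmit/receive tilt and ellipticity angles (one common convention; the Mueller-matrix route, the MACH filter, and the clutter statistics of the paper are not reproduced). The scattering matrix values are hypothetical.

```python
import numpy as np

def jones(tilt, ellipticity):
    """Unit polarization vector of an ellipse with given tilt/ellipticity (radians)."""
    return np.array([np.cos(tilt) * np.cos(ellipticity) - 1j * np.sin(tilt) * np.sin(ellipticity),
                     np.sin(tilt) * np.cos(ellipticity) + 1j * np.cos(tilt) * np.sin(ellipticity)])

S = np.array([[1.0 + 0.2j, 0.1 - 0.05j],
              [0.1 - 0.05j, 0.8 + 0.1j]])        # placeholder scattering matrix

def synthesized_power(tilt_t, chi_t, tilt_r, chi_r):
    # "Voltage" h_r^T S h_t, squared magnitude ~ synthesized radar cross section
    return np.abs(jones(tilt_r, chi_r) @ S @ jones(tilt_t, chi_t)) ** 2

# Scan the transmit/receive tilt (co-polarized case) to look for a maximum response
for tilt in np.linspace(0, np.pi, 7):
    print(f"tilt={tilt:5.2f} rad  power={synthesized_power(tilt, 0.0, tilt, 0.0):.3f}")
```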
A proposed study of multiple scattering through clouds up to 1 THz
NASA Technical Reports Server (NTRS)
Gerace, G. C.; Smith, E. K.
1992-01-01
A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.
Dual adaptive dynamic control of mobile robots using neural networks.
Bugeja, Marvin K; Fabri, Simon G; Camilleri, Liberato
2009-02-01
This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.
HyFIS: adaptive neuro-fuzzy inference systems and their application to nonlinear dynamical systems.
Kim, J; Kasabov, N
1999-11-01
This paper proposes an adaptive neuro-fuzzy system, HyFIS (Hybrid neural Fuzzy Inference System), for building and optimising fuzzy models. The proposed model introduces the learning power of neural networks into fuzzy logic systems and provides linguistic meaning to the connectionist architectures. Heuristic fuzzy logic rules and input-output fuzzy membership functions can be optimally tuned from training examples by a hybrid learning scheme comprising two phases: a rule generation phase from data, and a rule tuning phase using an error backpropagation learning scheme for a neural fuzzy system. To illustrate the performance and applicability of the proposed neuro-fuzzy hybrid model, extensive simulation studies of nonlinear complex dynamic systems are carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction and control of nonlinear dynamical systems. Two benchmark case studies are used to demonstrate that the proposed HyFIS system is a superior neuro-fuzzy modelling technique.
Mistry, Pankaj; Dunn, Janet A; Marshall, Andrea
2017-07-18
The application of adaptive design methodology within a clinical trial setting is becoming increasingly popular. However, these methods are often not reported as adaptive designs, making it more difficult to capture their emerging use. Within this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or have to be inferred, and to identify whether these methods are applied prospectively or concurrently. Three databases (Embase, Ovid and PubMed) were chosen to conduct the literature search. The inclusion criteria for the review were phase II, phase III and phase II/III randomised controlled trials within the field of oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. A total of 734 results were identified; after screening, 54 were eligible. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis, which included some sort of stopping criteria. Additionally, only two papers explicitly stated the term 'adaptive design', and therefore for most of the papers it had to be inferred that adaptive methods were applied. Sixty-five applications of adaptive design methods were identified, of which the most common was adaptation using group sequential methods. This review indicates that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods and furthermore provide an essential aid to those involved with clinical trials.
1064 nm FT-Raman spectroscopy for investigations of plant cell walls and other biomass materials
Agarwal, Umesh P.
2014-01-01
Raman spectroscopy with its various special techniques and methods has been applied to study plant biomass for about 30 years. Such investigations have been performed at both macro- and micro-levels. However, with the availability of the Near Infrared (NIR) (1064 nm) Fourier Transform (FT)-Raman instruments where, in most materials, successful fluorescence suppression can be achieved, the utility of the Raman investigations has increased significantly. Moreover, the development of several new capabilities such as estimation of cellulose-crystallinity, ability to analyze changes in cellulose conformation at the local and molecular level, and examination of water-cellulose interactions have made this technique essential for research in the field of plant science. The FT-Raman method has also been applied to research studies in the arenas of biofuels and nanocelluloses. Moreover, the ability to investigate plant lignins has been further refined with the availability of near-IR Raman. In this paper, we present 1064-nm FT-Raman spectroscopy methodology to investigate various compositional and structural properties of plant material. It is hoped that the described studies will motivate the research community in the plant biomass field to adapt this technique to investigate their specific research needs. PMID:25295049
Kashiwayanagi, M; Shimano, K; Kurihara, K
1996-11-04
The responses of single bullfrog olfactory neurons to various odorants were measured with the whole-cell patch clamp, which offers direct information on cellular events, and with the ciliary recording technique, to obtain stable quantitative data from many neurons. A large portion of single olfactory neurons (about 64% and 79% in the whole-cell recording and the ciliary recording, respectively) responded to many odorants with quite diverse molecular structures, including both odorants previously indicated to be cAMP-dependent (increasing) and independent odorants. A single odorant elicited a response in many cells; e.g., hedione and citralva elicited responses in 100% and 92% of the total neurons examined with the ciliary recording technique. To confirm that a single neuron carries different receptors or transduction pathways, the cross-adaptation technique was applied to single neurons. Application of hedione to a single neuron after desensitization of the current in response to lyral or citralva induced an inward current of similar magnitude to that induced when hedione was applied alone. It was suggested that most single olfactory neurons carry multiple receptors and at least dual transduction pathways.
NASA Astrophysics Data System (ADS)
Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo
2017-11-01
The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and correlation analysis between the conditioning factors and landslides were applied. In the third step, we used the three methods, namely ANFIS-FR, GAM, and SVM, for landslide susceptibility modeling. Subsequently, their accuracies were validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, while the SVM model has the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced for the study area can be applied for the management of hazards and risks in landslide-prone Hanyuan County.
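A minimal sketch of the SVM-plus-ROC validation step follows, with random synthetic data standing in for the twelve conditioning factors and the landslide inventory; it is not the study's dataset or tuned model, only the shape of the workflow using scikit-learn.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                 # 12 conditioning factors per mapping unit
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel='rbf', probability=True).fit(scaler.transform(X_tr), y_tr)
scores = clf.predict_proba(scaler.transform(X_te))[:, 1]   # susceptibility scores
print("AUC (prediction rate):", round(roc_auc_score(y_te, scores), 3))
```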
Study of adaptive methods for data compression of scanner data
NASA Technical Reports Server (NTRS)
1977-01-01
The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
A novel bit-wise adaptable entropy coding technique
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
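The sketch below is not the authors' coder; it only illustrates the idea of a bit-wise adaptive probability model, using simple add-half counts over the history of previously encoded bits and reporting the ideal code length that such estimates imply.

```python
import math

def ideal_code_length(bits):
    """Estimate p(next bit = 1) from previously seen bits and sum -log2 p(bit)."""
    ones, zeros, length = 0, 0, 0.0
    for b in bits:
        p1 = (ones + 0.5) / (ones + zeros + 1.0)    # adaptive estimate from history
        p = p1 if b == 1 else 1.0 - p1
        length += -math.log2(p)
        ones += b
        zeros += 1 - b
    return length

bits = [1, 1, 0, 1, 1, 1, 0, 1] * 16
print(f"{ideal_code_length(bits):.1f} bits of ideal code for {len(bits)} input bits")
```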
NASA Technical Reports Server (NTRS)
Dufrene, Warren R., Jr.
2004-01-01
This paper describes the development of a planned approach for autonomous operation of an Unmanned Aerial Vehicle (UAV). A hybrid approach seeks to provide knowledge generation through the application of Artificial Intelligence (AI) and Intelligent Agents (IA) for UAV control. The applications of several different types of AI techniques for flight are explored during this research effort. The research concentration is directed to the application of different AI methods within the UAV arena. By evaluating AI and biological system approaches, which include Expert Systems, Neural Networks, Intelligent Agents, Fuzzy Logic, and Complex Adaptive Systems, new insight may be gained into the benefits of AI and CAS techniques applied to achieving true autonomous operation of these systems. Although flight systems were explored, the benefits should apply to many unmanned vehicles such as rovers, ocean explorers, robots, and autonomous operation systems. A portion of the flight system is broken down into control agents that represent the intelligent agent approach used in AI. After the completion of a successful approach, a framework for applying an intelligent agent is presented. The initial results from simulation of a security agent for communication are presented.
Artificial intelligence in sports on the example of weight training.
Novatchkov, Hristo; Baca, Arnold
2013-01-01
The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of performed exercises on training machines. The data acquisition was carried out using way and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for the investigation of the quality of the execution, the assistance of athletes but also coaches, the training optimization and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video-recorded executions. The modeling results obtained so far showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in assessing performances on weight training equipment automatically and providing sportsmen with prompt advice. Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates.
Geophysical monitoring in a hydrocarbon reservoir
NASA Astrophysics Data System (ADS)
Caffagni, Enrico; Bokelmann, Goetz
2016-04-01
Extraction of hydrocarbons from reservoirs demands ever-increasing technological effort, and there is a need for geophysical monitoring to better understand phenomena occurring within the reservoir. Significant deformation processes take place when man-made stimulation is performed, in combination with effects deriving from the existing natural conditions, such as the in situ stress regime or pre-existing fracturing. Keeping track of such changes in the reservoir is important, on the one hand for improving the recovery of hydrocarbons, and on the other hand to assure a safe and proper mode of operation. Monitoring becomes particularly important when hydraulic fracturing (HF) is used, especially in the form of the much-discussed "fracking". HF is a sophisticated technique that is widely applied in low-porosity geological formations to enhance the production of natural hydrocarbons. In principle, similar HF techniques have been applied in Europe for a long time in conventional reservoirs, and they will probably be intensified in the near future; this suggests an increasing demand for technological development, including updating and adapting the existing monitoring techniques in applied geophysics. We review currently available geophysical techniques for reservoir monitoring across the different fields of reservoir analysis. First, the properties of the hydrocarbon reservoir are identified; here we consider geophysical monitoring exclusively. The second step is to define the quantities that can be monitored, associated with those properties. We then describe the geophysical monitoring techniques, including the oldest ones, namely those in practical use for 40-50 years, and the most recent technological developments, in distinct groups according to the field of reservoir analysis to which they apply. This work is performed as part of the FracRisk consortium (www.fracrisk.eu); this project, funded by the Horizon2020 research programme, aims at helping to minimize the environmental footprint of shale-gas exploration and exploitation.
Application of simple adaptive control to water hydraulic servo cylinder system
NASA Astrophysics Data System (ADS)
Ito, Kazuhisa; Yamada, Tsuyoshi; Ikeo, Shigeru; Takahashi, Koji
2012-09-01
Although conventional model reference adaptive control (MRAC) achieves good tracking performance for cylinder control, the controller structure is much more complicated and has less robustness to disturbance in real applications. This paper discusses the use of simple adaptive control (SAC) for positioning a water hydraulic servo cylinder system. Compared with MRAC, SAC has a simpler and lower-order structure, i.e., higher feasibility. The control performance of SAC is examined and evaluated on a water hydraulic servo cylinder system. With the recent increased concern over global environmental problems, the water hydraulic technique, which uses pure tap water as the pressure medium, has become a new drive source comparable to electric, oil hydraulic, and pneumatic drive systems. This technique is also preferred because of its high power density, high safety against fire hazards in production plants, and easy availability. However, the main problems for precise control in a water hydraulic system are steady-state errors and overshoot due to its large friction torque and considerable leakage flow. MRAC has already been applied to compensate for these effects, and better control performance has been obtained. However, there have been no reports on the application of SAC to water hydraulics. To make clear the merits of SAC, the tracking control performance and robustness are discussed based on experimental results. SAC is confirmed to give better tracking performance compared with PI control, a control precision comparable to MRAC (within 10 μm of the reference position), and higher robustness to parameter change, despite the simpler controller. The research results support a wider application of simple adaptive control in real mechanical systems.
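For orientation only, here is a generic first-order MRAC sketch using a simplified MIT rule; it is not the paper's SAC law, and the plant coefficients, reference model, and adaptation gain are hypothetical placeholders rather than the water hydraulic servo cylinder model.

```python
dt, T = 1e-3, 50.0
a, b   = 1.0, 0.5          # "unknown" plant: dy/dt = -a*y + b*u
am, bm = 2.0, 2.0          # reference model: dym/dt = -am*ym + bm*r
gamma  = 1.0               # adaptation gain
y = ym = theta1 = theta2 = 0.0

for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0     # square-wave reference
    u = theta1 * r - theta2 * y              # adjustable controller
    e = y - ym                               # model-following error
    theta1 += dt * (-gamma * e * r)          # simplified MIT-rule updates
    theta2 += dt * ( gamma * e * y)          # (sensitivity filters omitted)
    y  += dt * (-a * y + b * u)              # forward-Euler plant step
    ym += dt * (-am * ym + bm * r)           # forward-Euler reference model step

# For small adaptation gains the gains should drift toward the matching values
print(f"adapted gains {theta1:.2f}, {theta2:.2f}; ideal gains {bm/b:.1f}, {(am-a)/b:.1f}")
```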
Electrochemical Processes Enhanced by Acoustic Liquid Manipulation
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.
2004-01-01
Acoustic liquid manipulation is a family of techniques that employ the nonlinear acoustic effects of acoustic radiation pressure and acoustic streaming to manipulate the behavior of liquids. Researchers at the NASA Glenn Research Center are exploring new methods of manipulating liquids for a variety of space applications, and we have found that acoustic techniques may also be used in the normal Earth gravity environment to enhance the performance of existing fluid processes. Working in concert with the NASA Commercial Technology Office, the Great Lakes Industrial Technology Center, and Alchemitron Corporation (Elgin, IL), researchers at Glenn have applied nonlinear acoustic principles to industrial applications. Collaborating with Alchemitron Corporation, we have adapted the devices to create acoustic streaming in a conventional electroplating process.
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Swain, P. H. (Principal Investigator); Vardeman, S. B.
1981-01-01
A key input to a statistical classification algorithm, which exploits the tendency of certain ground cover classes to occur more frequently in some spatial context than in others, is a statistical characterization of the context: the context distribution. An unbiased estimator of the context distribution is discussed which, besides having the advantage of statistical unbiasedness, has the additional advantage over other estimation techniques of being amenable to an adaptive implementation in which the context distribution estimate varies according to local contextual information. Results from applying the unbiased estimator to the contextual classification of three real LANDSAT data sets are presented and contrasted with results from non-contextual classifications and from contextual classifications utilizing other context distribution estimation techniques.
Report on the ESO Workshop ''Astronomy at High Angular Resolution''
NASA Astrophysics Data System (ADS)
Boffin, H.; Schmidtobreick, L.; Hussain, G.; Berger, J.-Ph.
2015-03-01
A workshop took place in Brussels in 2000 on astrotomography, a generic term for indirect mapping techniques that can be applied to a huge variety of astrophysical systems, ranging from planets, single stars and binaries to active galactic nuclei. It appeared to be timely to revisit the topic given the many past, recent and forthcoming improvements in telescopes and instrumentation. We therefore decided to repeat the astrotomography workshop, but to put it into the much broader context of high angular resolution astronomy. Many techniques, from lucky and speckle imaging, adaptive optics to interferometry, are now widely employed to achieve high angular resolution and they have led to an amazing number of new discoveries. A summary of the workshop themes is presented.
DNAism: exploring genomic datasets on the web with Horizon Charts.
Rio Deiros, David; Gibbs, Richard A; Rogers, Jeffrey
2016-01-27
Computational biologists daily face the need to explore massive amounts of genomic data. New visualization techniques can help researchers navigate and understand these big data. Horizon Charts are a relatively new visualization method that, under the right circumstances, maximizes data density without losing graphical perception. Horizon Charts have been successfully applied to understand multi-metric time series data. We have adapted an existing JavaScript library (Cubism) that implements Horizon Charts for the time series domain so that it works effectively with genomic datasets. We call this new library DNAism. Horizon Charts can be an effective visual tool to explore complex and large genomic datasets. Researchers can use our library to leverage these techniques to extract additional insights from their own datasets.
Data-adaptive harmonic analysis and prediction of sea level change in North Atlantic region
NASA Astrophysics Data System (ADS)
Kondrashov, D. A.; Chekroun, M.
2017-12-01
This study aims to characterize North Atlantic sea level variability across temporal and spatial scales. We apply the recently developed data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Model (MSLM) stochastic modeling techniques [Chekroun and Kondrashov, 2017] to the monthly 1993-2017 dataset of combined TOPEX/Poseidon, Jason-1 and Jason-2/OSTM altimetry fields over the North Atlantic region. The key numerical feature of the DAH relies on the eigendecomposition of a matrix constructed from time-lagged spatial cross-correlations. In particular, the eigenmodes form an orthogonal set of oscillating data-adaptive harmonic modes (DAHMs) that come in pairs and in exact phase quadrature for a given temporal frequency. Furthermore, the pairs of data-adaptive harmonic coefficients (DAHCs), obtained by projecting the dataset onto the associated DAHMs, can be very efficiently modeled by a universal parametric family of simple nonlinear stochastic models - coupled Stuart-Landau oscillators stacked per frequency and synchronized across different frequencies by the stochastic forcing. Despite the short altimetry record, the developed DAH-MSLM model provides skillful prediction of key dynamical and statistical features of sea level variability. References: M. D. Chekroun and D. Kondrashov, Data-adaptive harmonic spectra and multilayer Stuart-Landau models. HAL preprint, 2017, https://hal.archives-ouvertes.fr/hal-01537797
Sarhadi, Pouria; Noei, Abolfazl Ranjbar; Khosravi, Alireza
2016-11-01
Input saturation and uncertain dynamics are among the practical challenges in the control of autonomous vehicles. Adaptive control is known as a proper method to deal with the uncertain dynamics of these systems. Therefore, incorporating the ability to cope with input saturation into adaptive controllers can be valuable. In this paper, an adaptive autopilot is presented for the pitch and yaw channels of an autonomous underwater vehicle (AUV) in the presence of input saturation. This is achieved by combining model reference adaptive control (MRAC) with integral state feedback and a modern anti-windup (AW) compensator. MRAC with integral state feedback is commonly used in autonomous vehicles. However, some modifications need to be made in order to cope with the saturation problem. To this end, a Riccati-based anti-windup (AW) compensator is employed. The presented technique is applied to the nonlinear six degrees of freedom (DOF) model of an AUV, and the obtained results are compared with those of the baseline method. Several simulation scenarios are executed in the pitch and yaw channels to evaluate the controller performance. Moreover, the effectiveness of the proposed adaptive controller is comprehensively investigated by implementing Monte Carlo simulations. The obtained results verify the performance of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A New Multi-Agent Approach to Adaptive E-Education
NASA Astrophysics Data System (ADS)
Chen, Jing; Cheng, Peng
Improving the degree of customer satisfaction is important in e-Education. This paper describes a new approach to adaptive e-Education taking into account the full spectrum of Web service techniques and activities. It presents a multi-agent architecture based on artificial psychology techniques, which makes the e-Education process both adaptable and dynamic, and hence up-to-date. Knowledge base techniques are used to support the e-Education process, and artificial psychology techniques to deal with user psychology, which makes the e-Education system more effective and satisfying.
Bang, Yoonsik; Kim, Jiyoung; Yu, Kiyun
2016-01-01
Wearable and smartphone technology innovations have propelled the growth of Pedestrian Navigation Services (PNS). PNS need a map-matching process to project a user’s locations onto maps. Many map-matching techniques have been developed for vehicle navigation services. These techniques are inappropriate for PNS because pedestrians move, stop, and turn in different ways compared to vehicles. In addition, the base map data for pedestrians are more complicated than for vehicles. This article proposes a new map-matching method for locating Global Positioning System (GPS) trajectories of pedestrians onto road network datasets. The theory underlying this approach is based on the Fréchet distance, one of the measures of geometric similarity between two curves. The Fréchet distance approach can provide reasonable matching results because two linear trajectories are parameterized with the time variable. Then we improved the method to be adaptive to the positional error of the GPS signal. We used an adaptation coefficient to adjust the search range for every input signal, based on the assumption of auto-correlation between consecutive GPS points. To reduce errors in matching, the reliability index was evaluated in real time for each match. To test the proposed map-matching method, we applied it to GPS trajectories of pedestrians and the road network data. We then assessed the performance by comparing the results with reference datasets. Our proposed method performed better with test data when compared to a conventional map-matching technique for vehicles. PMID:27782091
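As background for the similarity measure named above, the following is a small sketch of the classic discrete Fréchet distance between a GPS trace and a road polyline (both given as coordinate lists). It is only the distance computation; the adaptive search range and reliability index of the proposed method are not reproduced, and the coordinates are invented.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Standard dynamic-programming recurrence for the discrete Fréchet distance."""
    d = lambda i, j: math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

gps_trace = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 0.0)]
road_edge = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0)]
print(f"discrete Frechet distance: {discrete_frechet(gps_trace, road_edge):.3f}")
```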
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
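A bare-bones shift-invariant wavelet fusion sketch is shown below, using the PyWavelets stationary wavelet transform with a pixel-based maximum-absolute-coefficient selection rule; the paper's local edge-correlation refinement and IHS handling are omitted, and synthetic random arrays stand in for the NIR and Pan bands.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
nir = rng.random((64, 64))      # placeholder low-resolution NIR band (already upsampled)
pan = rng.random((64, 64))      # placeholder high-resolution panchromatic band

cA1, (cH1, cV1, cD1) = pywt.swt2(nir, 'db2', level=1)[0]   # shift-invariant DWT, 1 level
cA2, (cH2, cV2, cD2) = pywt.swt2(pan, 'db2', level=1)[0]

def select(a, b):
    """Pixel-based rule: keep the coefficient with the larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

# Keep the NIR approximation (spectral content), fuse the detail (spatial) bands
fused_coeffs = [(cA1, (select(cH1, cH2), select(cV1, cV2), select(cD1, cD2)))]
fused = pywt.iswt2(fused_coeffs, 'db2')
print("fused image shape:", fused.shape)
```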
Yi, Chenju; Teillon, Jérémy; Koulakoff, Annette; Berry, Hugues; Giaume, Christian
2018-06-01
Intercellular communication through gap junction channels plays a key role in cellular homeostasis and in synchronizing physiological functions, a feature that is modified in a number of pathological situations. In the brain, astrocytes are the cell population that expresses the highest amount of gap junction proteins, named connexins. Several techniques have been used to assess the level of gap junctional communication in astrocytes, but so far they remain very difficult to apply in adult brain tissue. Here, using specific loading of astrocytes with sulforhodamine 101, we adapted the gap-FRAP (Fluorescence Recovery After Photobleaching) technique to acute hippocampal slices from 9 month-old adult mice. We show that gap junctional communication monitored in astrocytes with this technique was inhibited either by pharmacological treatment with a gap junctional blocker or in mice lacking the two main astroglial connexins, while a partial inhibition was measured when only one connexin was knocked out. We validate this approach using a mathematical model of sulforhodamine 101 diffusion in an elementary astroglial network and a quantitative analysis of the exponential fits to the fluorescence recovery curves. Consequently, we consider that the adaptation of the gap-FRAP technique to acute brain slices from adult mice provides a straightforward and valuable approach that overcomes this age-dependent obstacle and will facilitate the investigation of gap junctional communication in the adult healthy or pathological brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Automated segmentation of geographic atrophy using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Hu, Zhihong; Wang, Ziyuan; Sadda, SriniVas R.
2018-02-01
Geographic atrophy (GA) is an end-stage manifestation of advanced age-related macular degeneration (AMD), the leading cause of blindness and visual impairment in developed nations. Techniques to rapidly and precisely detect and quantify GA would appear to be of critical importance in advancing the understanding of its pathogenesis. In this study, we develop an automated supervised classification system using deep convolutional neural networks (CNNs) for segmenting GA in fundus autofluorescence (FAF) images. More specifically, to enhance the contrast of GA relative to the background, we apply contrast limited adaptive histogram equalization. Blood vessels may cause GA segmentation errors because their intensity level is similar to that of GA. A tensor-voting technique is performed to identify the blood vessels, and a vessel inpainting technique is applied to suppress the GA segmentation errors due to the blood vessels. To handle the large variation of GA lesion sizes, three deep CNNs with three different input image patch sizes are applied. Fifty randomly chosen FAF images were obtained from fifty subjects with GA. The algorithm-defined GA regions are compared with manual delineation by a certified grader. A two-fold cross-validation is applied to evaluate the algorithm performance. The mean segmentation accuracy, true positive rate (i.e. sensitivity), true negative rate (i.e. specificity), positive predictive value, false discovery rate, and overlap ratio between the algorithm- and manually-defined GA regions are 0.97 +/- 0.02, 0.89 +/- 0.08, 0.98 +/- 0.02, 0.87 +/- 0.12, 0.13 +/- 0.12, and 0.79 +/- 0.12, respectively, demonstrating a high level of agreement.
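A sketch of the contrast-enhancement step only (CLAHE) is given below, using scikit-image on a synthetic array in place of a fundus autofluorescence frame; the CNNs, tensor voting, and vessel inpainting stages of the pipeline are not reproduced, and the clip limit is an arbitrary placeholder.

```python
import numpy as np
from skimage import exposure

rng = np.random.default_rng(0)
faf = rng.random((256, 256))                                # placeholder FAF image in [0, 1]
faf_eq = exposure.equalize_adapthist(faf, clip_limit=0.02)  # contrast limited adaptive hist. eq.
print("enhanced range:", round(faf_eq.min(), 3), "to", round(faf_eq.max(), 3))
```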
Networked Airborne Communications Using Adaptive Multi Beam Directional Links
2016-03-05
Networked Airborne Communications Using Adaptive Multi-Beam Directional Links. R. Bruce MacLeod, Member, IEEE, and Adam Margetts, Member, IEEE, MIT... provide new techniques for increasing throughput in airborne adaptive directional networks. By adaptive directional linking, we mean systems that can... techniques can dramatically increase the capacity in airborne networks. Advances in digital array technology are beginning to put these gains within reach.
NASA Technical Reports Server (NTRS)
1975-01-01
Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System are described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by an intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard-transformed components of this last vector. Mathematical models and data reliability analyses are also provided for the above video data compression techniques transmitted over a channel-encoded Gaussian channel. It is shown that substantial gains can be achieved by the combination of video source and channel coding.
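Purely to illustrate the domain in which the report's motion detection operates, here is a toy 2D Hadamard transform of a single image block; the block values and size are arbitrary, and none of the compressor's buffering or coding details are reproduced.

```python
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N)                                       # +/-1 Sylvester-type Hadamard matrix
block = np.arange(N * N, dtype=float).reshape(N, N)   # placeholder 8x8 pixel block

coeffs = H @ block @ H / N        # forward 2D Hadamard transform
recovered = H @ coeffs @ H / N    # the same operation inverts it (H @ H = N * I)
print("perfect reconstruction:", np.allclose(recovered, block))
```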
[Application of microbial fuel cell (MFC) in solid waste composting].
Cui, Jinxin; Wang, Xin; Tang, Jingchun
2012-03-01
Microbial fuel cells (MFCs) are a new technology that can recover energy from biomass with simultaneous waste treatment. The technique has developed rapidly in recent years in combination with environmental techniques such as wastewater treatment, degradation of toxic pollutants, and desalination. With the increase in solid waste, applying MFCs to composting is promising because of their capacity for waste disposal with simultaneous energy generation. In this paper, the microbial community of MFCs during composting was summarized. Four major factors influencing the performance of composting MFCs, namely electrodes, separators, oxygen supply, and configuration, were discussed. The characteristics of the composting MFC as a new technique for reducing solid waste were as follows: high microbial biomass resulting in high current density; adaptability to different environmental conditions; self-adjusting temperature with high energy efficiency; and proton transport from anode to cathode limited by the different solid substrates.
Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.
2011-04-01
Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
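A compact diffusion-map sketch on synthetic points follows (the standard unbiased construction, not the paper's extension to umbrella-sampling trajectories): Gaussian kernel, Markov normalization, and the leading non-trivial eigenvectors used as low-dimensional coordinates. The data, bandwidth heuristic, and number of retained modes are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                   # placeholder configurations / features

eps = np.median(cdist(X, X)) ** 2                # kernel bandwidth (a common heuristic)
K = np.exp(-cdist(X, X) ** 2 / eps)              # Gaussian kernel matrix
P = K / K.sum(axis=1, keepdims=True)             # row-normalize -> Markov transition matrix

evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
# Skip the trivial constant eigenvector; use the next few as diffusion coordinates
dmap = evecs.real[:, order[1:4]] * evals.real[order[1:4]]
print("diffusion coordinates shape:", dmap.shape)
```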
NASA Technical Reports Server (NTRS)
Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.
1991-01-01
An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.
An ANN-Based Smart Tomographic Reconstructor in a Dynamic Environment
de Cos Juez, Francisco J.; Lasheras, Fernando Sánchez; Roqueñí, Nieves; Osborn, James
2012-01-01
In astronomy, the light emitted by an object travels through the vacuum of space and then through the turbulent atmosphere before arriving at a ground-based telescope. In passing through the atmosphere, a series of turbulent layers modifies the light's wave-front in such a way that adaptive optics reconstruction techniques are needed to improve the image quality. A novel reconstruction technique based on Artificial Neural Networks (ANN) is proposed. The network is designed to use the local tilts of the wave-front measured by a Shack-Hartmann Wave-front Sensor (SHWFS) as inputs and to estimate the turbulence in terms of Zernike coefficients. The ANN used is a Multi-Layer Perceptron (MLP) trained with simulated data with one turbulent layer changing in altitude. The reconstructor was tested using three different atmospheric profiles and compared with two existing reconstruction techniques: Least Squares type Matrix Vector Multiplication (LS) and Learn and Apply (L + A). PMID:23012524
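A schematic stand-in for such a reconstructor is sketched below: an MLP trained to map synthetic Shack-Hartmann slope measurements to Zernike coefficients. The subaperture count, Zernike order, and the random linear slope model are hypothetical placeholders; the paper's training data came from simulated turbulence layers, not from this toy model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_slopes, n_zernike = 72, 15                     # e.g. 6x6 subapertures (x and y slopes)
A = rng.normal(size=(n_slopes, n_zernike))       # hypothetical Zernike-to-slope geometry

Z_train = rng.normal(size=(2000, n_zernike))     # random turbulent phase coefficients
S_train = Z_train @ A.T + 0.01 * rng.normal(size=(2000, n_slopes))  # noisy slope vectors

mlp = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
mlp.fit(S_train, Z_train)                        # slopes in, Zernike coefficients out
print("training R^2:", round(mlp.score(S_train, Z_train), 3))
```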
Applying projective techniques to formative research in health communication development.
Wiehagen, Theresa; Caito, Nicole M; Thompson, Vetta Sanders; Casey, Christopher M; Weaver, Nancy L; Jupka, Keri; Kreuter, Matthew W
2007-04-01
This article describes a new approach to formative research in which projective techniques commonly used in psychological assessment were adapted for use in focus groups to help design colorectal-cancer screening materials for African American men and women. Participants (N = 20) were divided into six "design teams." Each team was given a selection of design supplies and asked to create and discuss a visual layout for screening materials. Participants chose design elements that reflected visual preferences that they felt would connect meaningfully with other African Americans. The dynamics within the design teams were different than in traditional focus groups, with participants having more control over the group's direction. Using projective techniques helped draw out unique information from participants by allowing them to "project" their opinions onto objects. This approach may be a valuable tool for health-promotion and health-communication practitioners seeking insight on the implicit values of a priority population.
Simulation and Modeling in High Entropy Alloys
NASA Astrophysics Data System (ADS)
Toda-Caraballo, I.; Wróbel, J. S.; Nguyen-Manh, D.; Pérez, P.; Rivera-Díaz-del-Castillo, P. E. J.
2017-11-01
High entropy alloys (HEAs) are a fascinating field of research, with an increasing number of new alloys discovered. This would hardly be conceivable without the aid of materials modeling and computational alloy design to investigate the immense compositional space. The simplicity of the microstructure achieved contrasts with the enormous complexity of its composition, which, in turn, increases the variety of property behavior observed. Simulation and modeling techniques are of paramount importance in the understanding of such material performance. There are numerous examples of how different models have explained observed experimental results; yet there are theories and approaches developed for conventional alloys, where the presence of one element is predominant, that need to be adapted or re-developed. In this paper, we review the current state of the art of the modeling techniques applied to explain HEA properties, identifying potential new areas of research to improve the predictive power of these techniques.
An integrated approach to improving noisy speech perception
NASA Astrophysics Data System (ADS)
Koval, Serguei; Stolbov, Mikhail; Smirnova, Natalia; Khitrov, Mikhail
2002-05-01
For a number of practical purposes and tasks, experts have to decode speech recordings of very poor quality. A combination of techniques is proposed to improve the intelligibility and quality of distorted speech messages and thus facilitate their comprehension. Along with the application of noise cancellation and speech signal enhancement techniques that remove and/or reduce various kinds of distortion and interference (primarily unmasking and normalization in the time and frequency domains), the approach incorporates optimal listener expert tactics based on selective listening, nonstandard binaural listening, accounting for short-term and long-term human ear adaptation to noisy speech, as well as some methods of speech signal enhancement to support speech decoding during listening. The approach integrating the suggested techniques ensures high-quality final results and has successfully been applied by Speech Technology Center experts and by numerous other users, mainly forensic institutions, to perform noisy speech record decoding for courts, law enforcement and emergency services, accident investigation bodies, etc.
In vitro evaluation of marginal adaptation in five ceramic restoration fabricating techniques.
Ural, Cağri; Burgaz, Yavuz; Saraç, Duygu
2010-01-01
To compare in vitro the marginal adaptation of crowns manufactured using different ceramic restoration fabricating techniques. Fifty standardized master steel dies simulating molars were produced and divided into five groups, each containing 10 specimens. Test specimens were fabricated with CAD/CAM, heat-press, glass-infiltration, and conventional lost-wax techniques according to manufacturer instructions. Marginal adaptation of the test specimens was measured vertically before and after cementation using SEM. Data were statistically analyzed by one-way ANOVA with Tukey HSD tests (α = .05). Marginal adaptation of ceramic crowns was affected by fabrication technique and cementation process (P < .001). The lowest marginal opening values were obtained with Cerec-3 crowns before and after cementation (P < .001). The highest marginal discrepancy values were obtained with PFM crowns before and after cementation. Marginal adaptation values obtained for the compared systems were within clinically acceptable limits. Cementation causes a significant increase in the vertical marginal discrepancies of the test specimens.
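For readers unfamiliar with the reported statistics, the sketch below runs a one-way ANOVA with Tukey HSD at α = .05 on fabricated placeholder measurements (generic groups A-E with invented means); the real marginal-gap data came from the SEM measurements described above.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
group_means = {"A": 40, "B": 60, "C": 70, "D": 80, "E": 90}    # hypothetical gaps (micrometers)
data = {g: rng.normal(mu, 10, size=10) for g, mu in group_means.items()}

print(stats.f_oneway(*data.values()))                          # overall one-way ANOVA
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))           # pairwise Tukey HSD comparisons
```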
Lin, Amy H; Breger, Tiffany L; Barnhart, Matthew; Kim, Ann; Vangsgaard, Charlotte; Harris, Emily
2014-01-01
In planning for the introduction of vaginal microbicides and other new antiretroviral (ARV)-based prevention products for women, an in-depth understanding of potential end-users will be critically important to inform strategies to optimize uptake and long-term adherence. User-centred private sector companies have contributed to the successful launch of many different types of products, employing methods drawn from behavioural and social sciences to shape product designs, marketing messages and communication channels. Examples of how the private sector has adapted and applied these techniques to make decisions around product messaging and targeting may be instructive for adaptation to microbicide introduction. In preparing to introduce a product, user-centred private sector companies employ diverse methods to understand the target population and their lifestyles, values and motivations. ReD Associates' observational research on user behaviours in the packaged food and diabetes fields illustrates how 'tag along' or 'shadowing' techniques can identify sources of non-adherence. Another open-ended method is self-documentation, and IDEO's mammography research utilized this to uncover user motivations that extended beyond health. Mapping the user journey is a quantitative approach for outlining critical decision-making stages, and Monitor Inclusive Markets applied this framework to identify toilet design opportunities for the rural poor. Through an iterative process, these various techniques can generate hypotheses on user drop-off points, quantify where drop-off is highest and prioritize areas of further research to uncover usage barriers. Although research constraints exist, these types of user-centred techniques have helped create effective messaging, product positioning and packaging of health products as well as family planning information. These methods can be applied to microbicide acceptability testing outside of clinical trials to design microbicide marketing that enhances product usage. The introduction of microbicide products presents an ideal opportunity to draw on the insights from user-centred private sector companies' approaches, which can complement other methods that have been more commonly utilized in microbicide research to date. As microbicides move from clinical trials to real-world implementation, there will be more opportunities to combine a variety of approaches to understand end-users, which can lead to a more effective product launch and ultimately greater impact on preventing HIV infections.
Streamflow characterization using functional data analysis of the Potomac River
NASA Astrophysics Data System (ADS)
Zelmanow, A.; Maslova, I.; Ticlavilca, A. M.; McKee, M.
2013-12-01
Flooding and droughts are extreme hydrological events that affect the United States economically and socially. The severity and unpredictability of flooding have caused billions of dollars in damage and the loss of lives in the eastern United States. In this context, there is an urgent need to build a firm scientific basis for adaptation by developing and applying new modeling techniques for accurate streamflow characterization and reliable hydrological forecasting. The goal of this analysis is to use numerical streamflow characteristics in order to classify, model, and estimate the likelihood of extreme events in the eastern United States, mainly on the Potomac River. Functional data analysis techniques are used to study yearly streamflow patterns, with the extreme streamflow events characterized via functional principal component analysis. These methods are merged with more classical techniques such as cluster analysis, classification analysis, and time series modeling. The developed functional data analysis approach is used to model continuous streamflow hydrographs. The forecasting potential of this technique is explored by incorporating climate factors to produce a yearly streamflow outlook.
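A minimal functional-PCA-flavoured sketch follows: each year's daily hydrograph is treated as a curve and the leading modes of variability are extracted with an SVD of the centered curves. The synthetic spring-peak climatology and the 30 invented years stand in for the Potomac River record; this is not the study's smoothing or basis-expansion setup.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)
base = 100 + 60 * np.exp(-((days - 90) / 30.0) ** 2)        # a spring-peak "climatology"
years = np.array([base * rng.uniform(0.7, 1.4) +
                  rng.normal(scale=5, size=365) for _ in range(30)])

anom = years - years.mean(axis=0)                 # center the yearly curves
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)                   # fraction of variance per mode
scores = U[:, :2] * s[:2]                         # each year's loading on the first 2 modes
print("variance explained by first two modes:", np.round(explained[:2], 3))
print("score matrix shape (years x modes):", scores.shape)
```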
Rutherford, Alexandra
2003-11-01
Behaviorist B.F. Skinner is not typically associated with the fields of personality assessment or projective testing. However, early in his career Skinner developed an instrument he named the verbal summator, which, at one point, he referred to as a device for "snaring out complexes," much like an auditory analogue of the Rorschach inkblots. Skinner's interest in the projective potential of his technique was relatively short lived, but whereas he used the verbal summator to generate experimental data for his theory of verbal behavior, several other clinicians and researchers exploited this potential and adapted the verbal summator technique for both research and applied purposes. The idea of an auditory inkblot struck many as a useful innovation, and the verbal summator spawned the tautophone test, the auditory apperception test, and the Azzageddi test, among others. This article traces the origin, development, and eventual demise of the verbal summator as an auditory projective technique.
Campana, Gianluca; Camilleri, Rebecca; Moret, Beatrice; Ghin, Filippo; Pavan, Andrea
2016-01-01
Transcranial random noise stimulation (tRNS) is a recent neuro-modulation technique whose effects at both the behavioural and neural levels are still debated. Here we employed the well-known phenomenon of motion after-effect (MAE) in order to investigate the effects of high- vs. low-frequency tRNS on motion adaptation and recovery. Participants were asked to estimate the MAE duration following prolonged adaptation (20 s) to a complex moving pattern, while being stimulated with either sham or tRNS across different blocks. Different groups were administered either high- or low-frequency tRNS. Stimulation sites were either the bilateral human MT complex (hMT+) or frontal areas. The results showed that, whereas no effects on MAE duration were induced by stimulating frontal areas, when applied to the bilateral hMT+, high-frequency tRNS caused a significant decrease in MAE duration whereas low-frequency tRNS caused a corresponding significant increase in MAE duration. These findings indicate that high- and low-frequency tRNS have opposing effects on the adaptation-dependent imbalance between neurons tuned to opposite motion directions, and thus on neuronal excitability. PMID:27934947
Conflict adaptation in positive and negative mood: Applying a success-failure manipulation.
Schuch, Stefanie; Zweerings, Jana; Hirsch, Patricia; Koch, Iring
2017-05-01
Conflict adaptation is a cognitive mechanism denoting increased cognitive control upon detection of conflict. This mechanism can be measured by the congruency sequence effect, indicating the reduction of congruency effects after incongruent trials (where response conflict occurs) relative to congruent trials (without response conflict). Several studies have reported increased conflict adaptation under negative, as compared to positive, mood. In these studies, sustained mood states were induced by film clips or music combined with imagination techniques; these kinds of mood manipulations are highly obvious, possibly distorting the actual mood states experienced by the participants. Here, we report two experiments where mood states were induced in a less obvious way, and with higher ecological validity. Participants received success or failure feedback on their performance in a bogus intelligence test, and this mood manipulation proved highly effective. We largely replicated previous findings of larger conflict adaptation under negative mood than under positive mood, both with a Flanker interference paradigm (Experiment 1) and a Stroop-like interference paradigm (Experiment 2). Results are discussed with respect to current theories on affective influences on cognitive control. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptive slit beam shaping for direct laser written waveguides.
Salter, P S; Jesacher, A; Spring, J B; Metcalf, B J; Thomas-Peter, N; Simmonds, R D; Langford, N K; Walmsley, I A; Booth, M J
2012-02-15
We demonstrate an improved method for fabricating optical waveguides in bulk materials by means of femtosecond laser writing. We use an LC spatial light modulator (SLM) to shape the beam focus by generating adaptive slit illumination in the pupil of the objective lens. A diffraction grating is applied in a strip across the SLM to simulate a slit, with the first diffracted order mapped onto the pupil plane of the objective lens while the zeroth order is blocked. This technique enables real-time control of the beam-shaping parameters during writing, facilitating the fabrication of more complicated structures than is possible using nonadaptive methods. Waveguides are demonstrated in fused silica with a coupling loss to single-mode fibers in the range of 0.2 to 0.5 dB and propagation loss <0.4 dB/cm.
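A minimal Python sketch of the kind of phase pattern described above, i.e. a blazed diffraction grating written into a strip of an SLM so that only the strip diffracts light into the first order; the SLM resolution, strip width, and grating period are hypothetical values, not parameters from the paper.

```python
import numpy as np

# Hypothetical SLM resolution and slit parameters.
ny, nx = 600, 800          # SLM pixels
slit_width = 80            # width of the grating strip in pixels
period = 10                # grating period in pixels

phase = np.zeros((ny, nx))                 # flat phase elsewhere: that light
                                           # stays in the blocked zeroth order
y0 = ny // 2 - slit_width // 2
x = np.arange(nx)

# Blazed (sawtooth) grating inside the strip: this region diffracts light into
# the first order, which is mapped onto the objective pupil, so the strip acts
# as a reconfigurable slit whose width and position can be updated in real time.
phase[y0:y0 + slit_width, :] = (2 * np.pi * x / period) % (2 * np.pi)
```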
User oriented ERTS-1 images. [vegetation identification in Canada through image enhancement
NASA Technical Reports Server (NTRS)
Shlien, S.; Goodenough, D.
1974-01-01
Photographic reproductions of ERTS-1 images are capable of displaying only a portion of the total information available from the multispectral scanner. Methods are being developed to generate ERTS-1 images oriented towards special users such as agriculturists, foresters, and hydrologists by applying image enhancement techniques and interactive statistical classification schemes. Spatial boundaries and linear features can be emphasized and delineated using simple filters. Linear and nonlinear transformations can be applied to the spectral data to emphasize certain ground information. An automatic classification scheme was developed to identify particular ground cover classes such as fallow, grain, rapeseed or various vegetation covers. The scheme applies the maximum likelihood decision rule to the spectral information and classifies the ERTS-1 image on a pixel by pixel basis. Preliminary results indicate that the classifier has limited success in distinguishing crops, but is well adapted for identifying different types of vegetation.
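A small Python sketch of a per-pixel Gaussian maximum likelihood classifier of the general kind described above (class statistics estimated from labelled training pixels, each pixel assigned to the most likely class); array names and shapes are illustrative only.

```python
import numpy as np

def train_ml_classifier(samples_per_class):
    """Estimate mean vector, inverse covariance, and log-determinant for each
    ground-cover class from labelled training pixels
    (each entry: array of shape (n_samples, n_bands))."""
    params = []
    for X in samples_per_class:
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        params.append((mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]))
    return params

def classify(pixels, params):
    """Assign each pixel (n_pixels, n_bands) to the class with maximum
    Gaussian likelihood (minimum Mahalanobis distance plus log-determinant)."""
    scores = []
    for mu, cov_inv, logdet in params:
        d = pixels - mu
        maha = np.einsum('ij,jk,ik->i', d, cov_inv, d)
        scores.append(maha + logdet)       # -2*log-likelihood up to a constant
    return np.argmin(np.stack(scores, axis=1), axis=1)
```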
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bueno, G.; Ruiz, M.; Sanchez, S
Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. By using CAD systems, radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of a B-spline method based on a gradient scheme, compared with wavelet and adaptive filtering, has been investigated for calcification lesion detection and as part of CAD systems. The technique has been applied to tissues of different densities. A qualitative validation shows the success of the method.
NASA Technical Reports Server (NTRS)
Rodriquez, Jose M.; Hu, Wenjie; Ko, Malcolm K. W.
1995-01-01
We proposed model-data intercomparison studies for UARS data. In the past three months, we have been working on constructing analysis tools to diagnose the UARS data. The 'trajectory mapping' technique, which was developed by Morris (1994), can be adapted to generate synoptic maps of trace gas data from asynoptic observations. An in-house trajectory model (kinematic methods following Merrill et al., 1986 and Pickering et al., 1994) has been developed at AER under contract with NASA/ACMAP, and the trajectory mapping tool has been applied to analyze UARS measurements.
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
Stochastic Game Analysis and Latency Awareness for Self-Adaptation
2014-01-01
In this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify [...]. The contribution of this paper is twofold: (1) a novel analysis technique based on model checking of stochastic multiplayer games (SMGs) [...]. Additional key words and phrases: proactive adaptation, stochastic multiplayer games, latency.
Model and experiments to optimize co-adaptation in a simplified myoelectric control system
NASA Astrophysics Data System (ADS)
Couraud, M.; Cattaert, D.; Paclet, F.; Oudeyer, P. Y.; de Rugy, A.
2018-04-01
Objective. To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that were developed in the field of brain-machine interfaces and that are beginning to be used in myoelectric controls. Approach. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. Results. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. Significance. The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and raised important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
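A toy trial-by-trial simulation, in Python, of the interplay the abstract describes: the human corrects a fraction of each directional error while the machine simultaneously co-adapts on the same error with its own gain. It is only a hedged illustration of the gain trade-off (low gain: slow, incomplete compensation; high gain: faster but noisier), not the authors' model; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
rotation = np.deg2rad(30)      # perturbation applied to the pulling vectors
human_rate = 0.1               # fraction of directional error corrected per trial
machine_gain = 0.05            # machine co-adaptation gain (toy value)
noise_sd = np.deg2rad(3)       # execution/measurement noise

human = 0.0                    # cumulative human compensation (rad)
machine = 0.0                  # cumulative machine correction (rad)
errors = []
for t in range(n_trials):
    error = rotation - human - machine + rng.normal(0.0, noise_sd)
    errors.append(error)
    human += human_rate * error            # error-driven human adaptation
    machine += machine_gain * error        # machine co-adaptation on the same error

print("mean directional error over last 20 trials (deg):",
      np.rad2deg(np.mean(errors[-20:])))
```

Re-running this with a larger `machine_gain` shows faster convergence but larger trial-to-trial error variability, which is the qualitative behaviour the abstract reports.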
An adaptive front tracking technique for three-dimensional transient flows
NASA Astrophysics Data System (ADS)
Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.
2000-01-01
An adaptive technique, based on both surface stretching and surface curvature analysis for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although for both techniques roughly the same structures are found, the resolution for the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation for the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm significantly benefits from parallelization of the code.
Unstructured mesh generation and adaptivity
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1995-01-01
An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.
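One of the computational-geometry building blocks mentioned above, Delaunay triangulation, is directly available in SciPy; the short Python sketch below triangulates a random planar point set and inspects the connectivity that mesh generators build on. The point set is arbitrary, used purely for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

# Random planar point set standing in for mesh vertices.
points = np.random.rand(200, 2)
tri = Delaunay(points)

print("number of triangles:", len(tri.simplices))
# tri.simplices holds the vertex indices of each triangle; tri.neighbors lists
# the adjacent triangles, which is the kind of connectivity information that
# refinement- and advancing-front-based generators operate on.
```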
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.
Kropacheva, Marya; Melgunov, Mikhail; Makarova, Irina
2017-02-01
The study of migration pathways of artificial isotopes in the flood-plain biogeocoenoses, impacted by the nuclear fuel cycle plants, requires determination of isotope speciations in the biomass of higher terrestrial plants. The optimal method for their determination is the sequential elution technique (SET). The technique was originally developed to study atmospheric pollution by metals and has been applied to lichens, terrestrial and aquatic bryophytes. Due to morphological and physiological differences, it was necessary to adapt SET for new objects: coastal macrophytes growing on the banks of the Yenisei flood-plain islands in the near impact zone of Krasnoyarsk Mining and Chemical Combine (KMCC). In the first version of SET, 20 mM Na2EDTA was used as a reagent at the first stage; in the second version of SET, it was 1 M CH3COONH4. Four fractions were extracted. Fraction I included elements from the intercellular space and those connected with the outer side of the cell wall. Fraction II contained intracellular elements; fraction III contained elements firmly bound in the cell wall and associated structures; fraction IV contained insoluble residue. Adaptation of SET has shown that the first stage should be performed immediately after sampling. Separation of fractions III and IV can be neglected, since the output of isotopes into fraction IV is at the level of the detection error. The most adequate version of SET for terrestrial vascular plants is the version using 20 mM Na2EDTA at the first stage. The isotope 90Sr is most sensitive to changes in the technique. Its distribution depends strongly on both the extractant used at stage 1 and the duration of the first stage. Distribution of artificial radionuclides in the biomass of terrestrial vascular plants can vary from year to year and depends significantly on the age of the plant. Copyright © 2016 Elsevier Ltd. All rights reserved.
Reduction mammoplasty operative techniques for improved outcomes in the treatment of gigantomastia.
Degeorge, Brent R; Colen, David L; Mericli, Alexander F; Drake, David B
2013-01-01
Gigantomastia, or excessive breast hypertrophy, which is broadly defined as macromastia requiring a surgical reduction of more than 1500 g of breast tissue per breast, poses a unique problem to the reconstructive surgeon. Various procedures have been described for reduction mammoplasty with specific skin incisions, patterns of breast parenchymal resection, and blood supply to the nipple-areolar complex; however, not all of these techniques can be directly applied in the setting of gigantomastia. We outline a simplified method for preoperative evaluation and operative technique, which has been optimized for the management of gigantomastia. A retrospective chart review of patients who have undergone reduction mammoplasty from 2006 to 2011 by a single surgeon at the University of Virginia was performed. Patients were subdivided based on weight of breast tissue resection into 2 groups: macromastia (<1500 g resection per breast) and gigantomastia (>1500 g resection per breast). Endpoints including patient demographics, operative techniques, and complication rates were recorded. The mean resection weights in the macromastia and gigantomastia groups, respectively, were 681 g ± 283 g and 2554 g ± 421 g. There were no differences in major complications between the 2 groups. The rate of free nipple graft utilization was not significantly different between the 2 groups. Our surgical approach to gigantomastia has advantages when applied to extremely large-volume breast reduction and provides both esthetic and reproducible results. The preoperative assessment and operative techniques described herein have been adapted to the management of gigantomastia to reduce the rates of surgical complications.
'Enzyme Test Bench': A biochemical application of the multi-rate modeling
NASA Astrophysics Data System (ADS)
Rachinskiy, K.; Schultze, H.; Boy, M.; Büchs, J.
2008-11-01
In the expanding field of 'white biotechnology' enzymes are frequently applied to catalyze the biochemical conversion of a resource material into a valuable product. Evolutionarily designed to catalyze metabolism in every life form, they selectively accelerate complex reactions under physiological conditions. Modern techniques, such as directed evolution, have been developed to satisfy the increasing demand for enzymes. Applying these techniques together with rational protein design, we aim at improving enzymes' activity, selectivity and stability. To tap the full potential of these techniques, it is essential to combine them with adequate screening methods. Nowadays a great number of high throughput colorimetric and fluorescent enzyme assays are applied to measure the initial enzyme activity with high throughput. However, the prediction of enzyme long term stability within short experiments is still a challenge. A new high throughput technique for enzyme characterization with specific attention to long term stability, called 'Enzyme Test Bench', is presented. The concept of the Enzyme Test Bench consists of short term enzyme tests conducted under partly extreme conditions to predict the enzyme long term stability under moderate conditions. The technique is based on the mathematical modeling of temperature dependent enzyme activation and deactivation. Adapting the temperature profiles in sequential experiments by optimum non-linear experimental design, the long term deactivation effects can be purposefully accelerated and detected within hours. During the experiment the enzyme activity is measured online to estimate the model parameters from the obtained data. Thus, the enzyme activity and long term stability can be calculated as a function of temperature. The results of the characterization, based on microliter-scale experiments lasting hours, are in good agreement with the results of long term experiments at the 1 L scale. Thus, the new technique allows for both enzyme screening with regard to long term stability and the choice of the optimal process temperature. The presented article gives a successful example for the application of multi-rate modeling, experimental design and parameter estimation within biochemical engineering. At the same time, it shows the limitations of the state-of-the-art methods and brings the current problems to the attention of the applied mathematics community.
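To make the modeling idea concrete, here is a hedged Python sketch of a two-state Arrhenius model (active enzyme producing product while deactivating), integrated over a stepped temperature profile; this is a generic illustration of temperature-dependent activation/deactivation, not the authors' model, and every rate constant, activation energy, and temperature value is hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol K)

def enzyme_model(t, y, temp_profile, k_cat0, Ea_cat, k_d0, Ea_d):
    """y[0]: fraction of active enzyme; y[1]: accumulated product (a.u.).
    Both the catalytic rate and the deactivation rate follow Arrhenius laws,
    so high-temperature segments purposefully accelerate deactivation and
    expose long-term stability within a short experiment."""
    T = temp_profile(t)
    k_cat = k_cat0 * np.exp(-Ea_cat / (R * T))
    k_d = k_d0 * np.exp(-Ea_d / (R * T))
    active, _ = y
    return [-k_d * active, k_cat * active]

# Hypothetical stepped temperature profile (K) and parameters; time in hours.
temp_profile = lambda t: 310.0 if t < 2.0 else 330.0
sol = solve_ivp(enzyme_model, (0.0, 4.0), [1.0, 0.0],
                args=(temp_profile, 1e6, 5.0e4, 1e12, 8.0e4),
                dense_output=True)
print("active fraction after 4 h:", sol.y[0, -1])
```

In the real workflow, such a model would be fitted to online activity measurements (parameter estimation), and the temperature profile of the next experiment chosen by nonlinear experimental design.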
Advanced Software V&V for Civil Aviation and Autonomy
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.
2017-01-01
With the advances in high-computing platform (e.g., advanced graphical processing units or multi-core processors), computationally-intensive software techniques such as the ones used in artificial intelligence or formal methods have provided us with an opportunity to further increase safety in the aviation industry. Some of these techniques have facilitated building safety at design time, like in aircraft engines or software verification and validation, and others can introduce safety benefits during operations as long as we adapt our processes. In this talk, I will present how NASA is taking advantage of these new software techniques to build in safety at design time through advanced software verification and validation, which can be applied earlier and earlier in the design life cycle and thus help also reduce the cost of aviation assurance. I will then show how run-time techniques (such as runtime assurance or data analytics) offer us a chance to catch even more complex problems, even in the face of changing and unpredictable environments. These new techniques will be extremely useful as our aviation systems become more complex and more autonomous.
Improving microstructural quantification in FIB/SEM nanotomography.
Taillon, Joshua A; Pellegrinelli, Christopher; Huang, Yi-Lin; Wachsman, Eric D; Salamanca-Riba, Lourdes G
2018-01-01
FIB/SEM nanotomography (FIB-nt) is a powerful technique for the determination and quantification of the three-dimensional microstructure in subsurface features. Oftentimes, the microstructure of a sample ultimately determines the overall performance of a system, and a detailed understanding of its properties is crucial in advancing the materials engineering of a resulting device. While the FIB-nt technique has developed significantly in the 15 years since its introduction, advanced nanotomographic analysis is still far from routine, and a number of challenges remain in data acquisition and post-processing. In this work, we present a number of techniques to improve the quality of the acquired data, together with easy-to-implement methods to obtain "advanced" microstructural quantifications. The techniques are applied to a solid oxide fuel cell cathode of interest to the electrochemistry community, but the methodologies are easily adaptable to a wide range of material systems. Finally, results from an analyzed sample are presented as a practical example of how these techniques can be implemented. Copyright © 2017 Elsevier B.V. All rights reserved.
Li, Bai; Lin, Mu; Liu, Qiao; Li, Ya; Zhou, Changjun
2015-10-01
Protein folding is a fundamental topic in molecular biology. Conventional experimental techniques for protein structure identification or protein folding recognition require strict laboratory conditions and heavy operating burdens, which have largely limited their applications. Alternatively, computer-aided techniques have been developed to optimize protein structures or to predict the protein folding process. In this paper, we utilize a 3D off-lattice model to describe the original protein folding scheme as a simplified energy-optimal numerical problem, where all types of amino acid residues are binarized into hydrophobic and hydrophilic ones. We apply a balance-evolution artificial bee colony (BE-ABC) algorithm as the minimization solver, which is featured by the adaptive adjustment of search intensity to cater for the varying needs during the entire optimization process. In this work, we establish a benchmark case set with 13 real protein sequences from the Protein Data Bank database and evaluate the convergence performance of the BE-ABC algorithm through strict comparisons with several state-of-the-art ABC variants in short-term numerical experiments. Besides that, our obtained best-so-far protein structures are compared to the ones in comprehensive previous literature. This study also provides preliminary insights into how artificial intelligence techniques can be applied to reveal the dynamics of protein folding. Graphical abstract: protein folding optimization using a 3D off-lattice model and advanced optimization techniques.
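For readers unfamiliar with the objective being minimized, the Python sketch below evaluates a Stillinger-type AB off-lattice energy (bending term plus Lennard-Jones-like pair term whose sign depends on the hydrophobic/hydrophilic pairing). It is a hedged illustration of this class of model, assuming unit bond lengths; the exact functional form and constants used in the paper may differ.

```python
import numpy as np

def ab_energy(coords, seq):
    """Energy of an AB off-lattice model, assuming unit bond lengths between
    consecutive residues.
    coords: (N, 3) residue positions; seq: string of 'A' (hydrophobic) and
    'B' (hydrophilic) residues."""
    N = len(seq)
    # Bond-angle (bending) term over consecutive bond vectors.
    bend = 0.0
    for i in range(1, N - 1):
        u = coords[i] - coords[i - 1]
        v = coords[i + 1] - coords[i]
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        bend += 0.25 * (1.0 - cos_t)
    # Lennard-Jones-like term between non-bonded residues; the coefficient
    # depends on whether the pair is AA (attractive), BB (weakly attractive)
    # or AB (weakly repulsive).
    coeff = {('A', 'A'): 1.0, ('B', 'B'): 0.5, ('A', 'B'): -0.5, ('B', 'A'): -0.5}
    lj = 0.0
    for i in range(N - 2):
        for j in range(i + 2, N):
            r = np.linalg.norm(coords[i] - coords[j])
            c = coeff[(seq[i], seq[j])]
            lj += 4.0 * (r**-12 - c * r**-6)
    return bend + lj
```

A metaheuristic such as an ABC variant then searches over conformations (e.g. parameterized by bond and torsion angles) for the coordinates that minimize this energy.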
Generation of realistic scene using illuminant estimation and mixed chromatic adaptation
NASA Astrophysics Data System (ADS)
Kim, Jae-Chul; Hong, Sang-Gi; Kim, Dong-Ho; Park, Jong-Hyun
2003-12-01
An algorithm for combining a real image with a virtual model is proposed to increase the realism of synthesized images. Current approaches to synthesizing a real image with a virtual model rely on surface reflection models and various geometric techniques. In these methods, the characteristics of the various illuminants in the real image are not sufficiently considered. In addition, although chromatic adaptation plays a vital role in accommodating different illuminants across the two viewing conditions, it is not taken into account in existing methods. Thus, it is hard to obtain high-quality synthesized images. In this paper, we propose a two-phase image synthesis algorithm. First, the surface reflectance of the maximum high-light region (MHR) was estimated using the three eigenvectors obtained from principal component analysis (PCA) applied to the surface reflectances of 1269 Munsell samples. The combined spectral value of MHR, i.e., the product of surface reflectance and the spectral power distribution (SPD) of an illuminant, was then estimated using the three eigenvectors obtained from PCA applied to the products of the surface reflectances of the 1269 Munsell samples and the SPDs of four CIE Standard Illuminants (A, C, D50, D65). By dividing the average combined spectral values of MHR by the average surface reflectances of MHR, we could estimate the illuminant of a real image. Second, the mixed chromatic adaptation (S-LMS) using the estimated and an external illuminant was applied to the virtual-model image. For evaluating the proposed algorithm, experiments with synthetic and real scenes were performed. It was shown that the proposed method was effective in synthesizing the real and the virtual scenes under various illuminants.
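A heavily simplified Python sketch of the first phase (PCA bases for reflectances and reflectance-times-illuminant products, followed by the division that leaves an illuminant estimate). The input arrays are hypothetical, and the projection of MHR spectra onto both bases is a simplification of the procedure summarized above rather than a faithful reimplementation.

```python
import numpy as np

# Hypothetical inputs, all sampled on a common wavelength grid:
# R_munsell: (1269, n_wl) surface reflectances of the Munsell chips
# spd_std:   (4, n_wl)    SPDs of CIE illuminants A, C, D50, D65
# mhr:       (n_pix, n_wl) spectra observed in the maximum high-light region
def top3_basis(data):
    """Return the three leading principal-component vectors of the data."""
    centered = data - data.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:3]

def estimate_illuminant(R_munsell, spd_std, mhr):
    # Basis for plain reflectances and for reflectance-times-illuminant products.
    basis_refl = top3_basis(R_munsell)
    products = np.concatenate([R_munsell * spd for spd in spd_std], axis=0)
    basis_prod = top3_basis(products)

    # Least-squares projection of the MHR spectra onto each basis, then average.
    fit = lambda basis, X: np.linalg.lstsq(basis.T, X.T, rcond=None)[0].T @ basis
    refl_est = fit(basis_refl, mhr).mean(axis=0)   # estimated mean reflectance
    comb_est = fit(basis_prod, mhr).mean(axis=0)   # estimated mean combined spectrum

    # Dividing the combined spectrum by the reflectance leaves the illuminant SPD.
    return comb_est / np.clip(refl_est, 1e-6, None)
```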
NASA Astrophysics Data System (ADS)
Drapeau, L.; Mangiarotti, S.; Le Jean, F.; Gascoin, S.; Jarlan, L.
2014-12-01
The global modeling technique provides a way to obtain ordinary differential equations from a single time series [1]. This technique, initiated in the 1990s, has been applied successfully to numerous theoretical and experimental systems. More recently it has been applied to environmental systems [2,3]. Here this technique is applied to seasonal snow cover area in the Pyrenees Mountains (Europe) and Mount Lebanon (Mediterranean region). The snowpack evolution is complex because it results from a combination of processes driven by physiography (elevation, slope, land cover...) and meteorological variables (precipitation, temperature, wind speed...), which are highly heterogeneous in such regions. Satellite observations in visible bands offer a powerful tool to monitor snow cover areas at global scale, over a large range of resolutions. Although this observable does not directly measure snow water equivalent, its dynamical behavior strongly depends on it. Therefore, snow cover area is likely to be a good proxy of the global dynamics and the global modeling technique a well-adapted approach. The MOD10A2 product (500 m) generated from MODIS by NASA is used after a pretreatment is applied to minimize cloud effects. The global modeling technique is then applied using two packages [4,5]. The analysis is performed with two time series for the whole period (2000-2012) and year by year. Low-dimensional chaotic models are obtained in many cases. Such models provide a strong argument for chaos since they involve the two necessary conditions in a synthetic way: determinism and strong sensitivity to initial conditions. The comparison of models suggests important non-stationarities at the interannual scale, which prevent the detection of long-term changes. [1] Letellier et al. 2009. Frequently asked questions about global modeling. Chaos, 19, 023103. [2] Maquet et al. 2007. Global models from the Canadian lynx cycles as a direct evidence for chaos in real ecosystems. J. of Mathematical Biology, 55(1), 21-39. [3] Mangiarotti et al. 2014. Two chaotic global models for cereal crops cycles observed from satellite in Northern Morocco. Chaos, 24, 023130. [4] Mangiarotti et al. 2012. Polynomial search and Global modelling: two algorithms for modeling chaos. Physical Review E, 86(4), 046205. [5] http://cran.r-project.org/web/packages/PoMoS/index.html.
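To give a flavour of what "obtaining ODEs from a time series" involves, the Python sketch below fits a polynomial vector field to numerically estimated derivatives by least squares. This is a generic illustration only, not the PoMoS/GPoM algorithms cited above, which additionally reconstruct the state from derivatives of a single observable and search over model structures.

```python
import numpy as np
from itertools import combinations_with_replacement

def polynomial_library(X, degree=2):
    """Monomials of the state variables up to the given degree (plus a constant)."""
    n_samples, n_vars = X.shape
    cols = [np.ones(n_samples)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_vars), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def fit_global_model(X, dt, degree=2):
    """Least-squares fit of dx/dt = P(x), with P polynomial, from a
    multivariate time series X (n_samples, n_vars) sampled at step dt."""
    dXdt = np.gradient(X, dt, axis=0)          # numerical derivative estimates
    theta = polynomial_library(X, degree)
    coeffs, *_ = np.linalg.lstsq(theta, dXdt, rcond=None)
    return coeffs                              # one coefficient column per state variable
```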
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
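A minimal Python sketch of the core computation: obtaining the MVDR weight vector for one frequency bin by solving R w ∝ d with a hand-written conjugate gradient loop, so that no explicit or implicit matrix inversion is needed. This is a hedged, generic illustration of CG-based MVDR rather than the MICCG/SICCG algorithms themselves; R is assumed Hermitian positive definite and d is the steering vector for the bin.

```python
import numpy as np

def mvdr_weights_cg(R, d, n_iter=20, tol=1e-10):
    """MVDR weights w = R^{-1} d / (d^H R^{-1} d), with the inner solve
    R x = d carried out by conjugate gradients."""
    x = np.zeros_like(d)
    r = d - R @ x
    p = r.copy()
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Rp = R @ p
        alpha = rs_old / np.vdot(p, Rp).real
        x = x + alpha * p
        r = r - alpha * Rp
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x / np.vdot(d, x)       # normalize by d^H R^{-1} d (distortionless constraint)
```

In a frequency-domain beamformer this solve would be repeated (or updated recursively) per bin as new snapshots refresh the covariance estimate R.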
Situated Agents and Humans in Social Interaction for Elderly Healthcare: From Coaalas to AVICENA.
Gómez-Sebastià, Ignasi; Moreno, Jonathan; Álvarez-Napagao, Sergio; Garcia-Gasulla, Dario; Barrué, Cristian; Cortés, Ulises
2016-02-01
Assistive Technologies (AT) are an application area where several Artificial Intelligence techniques and tools have been successfully applied to support elderly or impaired people in their daily activities. However, approaches to AT tend to center on the user-tool interaction, neglecting the user's connection with their social environment (such as caretakers, relatives and health professionals) and the possibility of monitoring undesired behaviour to provide both adaptation to a dynamic environment and early response to potentially dangerous situations. In previous work we have presented COAALAS, an intelligent social and norm-aware device for elderly people that is able to autonomously organize, reorganize and interact with the different actors involved in elderly care, either human actors or other devices. In this paper we put our work into context by first examining the desirable properties of such a system, analysing the state of the art on the relevant topics, and verifying the validity of our proposal in a larger context that we call AVICENA. AVICENA's aim is to develop a semi-autonomous (collaborative) tool to promote monitored, intensive, extended and personalized therapeutic regime adherence at home based on adaptation techniques.
NASA Astrophysics Data System (ADS)
Bellier, Joseph; Bontron, Guillaume; Zin, Isabella
2017-12-01
Meteorological ensemble forecasts are nowadays widely used as input to hydrological models for probabilistic streamflow forecasting. These forcings are frequently biased and have to be statistically postprocessed, most often using univariate techniques that apply independently to individual locations, lead times and weather variables. Postprocessed ensemble forecasts therefore need to be reordered so as to reconstruct suitable multivariate dependence structures. The Schaake shuffle and ensemble copula coupling are the two most popular methods for this purpose. This paper proposes two adaptations of them that make use of meteorological analogues for reconstructing spatiotemporal dependence structures of precipitation forecasts. Performances of the original and adapted techniques are compared through a multistep verification experiment using real forecasts from the European Centre for Medium-Range Weather Forecasts. This experiment evaluates not only multivariate precipitation forecasts but also the corresponding streamflow forecasts that derive from hydrological modeling. Results show that the relative performances of the different reordering methods vary depending on the verification step. In particular, the standard Schaake shuffle is found to perform poorly when evaluated on streamflow. This emphasizes the crucial role of the precipitation spatiotemporal dependence structure in hydrological ensemble forecasting.
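For reference, a short Python sketch of the standard Schaake shuffle for one variable stacked across locations and lead times: ensemble members are reordered so that their rank structure matches that of a set of historical trajectories. The analogue-based adaptations proposed in the paper would replace the historical trajectories with meteorological analogues; that substitution is not shown here.

```python
import numpy as np
from scipy.stats import rankdata

def schaake_shuffle(forecasts, historical):
    """Reorder ensemble members so their rank structure across dimensions
    (locations / lead times) matches that of historical trajectories.
    forecasts, historical: arrays of shape (n_members, n_dims)."""
    shuffled = np.empty_like(forecasts)
    for j in range(forecasts.shape[1]):
        # rank of each historical trajectory in dimension j (0-based)
        ranks = rankdata(historical[:, j], method='ordinal').astype(int) - 1
        sorted_fc = np.sort(forecasts[:, j])
        # member i receives the forecast order statistic matching its historical rank
        shuffled[:, j] = sorted_fc[ranks]
    return shuffled
```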
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world computer and multimedia applications, such as augmented reality and environmental imaging, require an accurate elastic contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique. We construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics. In particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
NASA Astrophysics Data System (ADS)
Kirk, R. L.; Shepherd, M.; Sides, S. C.
2018-04-01
We use simulated images to demonstrate a novel technique for mitigating geometric distortions caused by platform motion ("jitter") as two-dimensional image sensors are exposed and read out line by line ("rolling shutter"). The results indicate that the Europa Imaging System (EIS) on NASA's Europa Clipper can likely meet its scientific goals requiring 0.1-pixel precision. We are therefore adapting the software used to demonstrate and test rolling shutter jitter correction to become part of the standard processing pipeline for EIS. The correction method will also apply to other rolling-shutter cameras, provided they have the operational flexibility to read out selected "check lines" at chosen times during the systematic readout of the frame area.
NASA Astrophysics Data System (ADS)
DeBardelaben, James A.; Miller, Jeremy K.; Myrick, Wilbur L.; Miller, Joel B.; Gilbreath, G. Charmaine; Bajramaj, Blerta
2012-06-01
Nuclear quadrupole resonance (NQR) is a radio frequency (RF) magnetic spectroscopic technique that has been shown to detect and identify a wide range of explosive materials containing quadrupolar nuclei. The NQR response signal provides a unique signature of the material of interest. The signal is, however, very weak and can be masked by non-stationary RF interference (RFI) and thermal noise, limiting detection distance. In this paper, we investigate the bounds on the NQR detection range for ammonium nitrate. We leverage a low-cost RFI data acquisition system composed of inexpensive B-field sensing and commercial-off-the-shelf (COTS) software-defined radios (SDR). Using collected data as RFI reference signals, we apply adaptive filtering algorithms to mitigate RFI and enable NQR detection techniques to approach theoretical range bounds in tactical environments.
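As an illustration of the adaptive-filtering step, here is a hedged Python sketch of a classical LMS adaptive noise canceller: the reference channel (an RFI-only B-field sensor) is filtered to predict the interference in the primary channel, and the residual retains the weak NQR response. The tap count and step size are arbitrary placeholder values, and the specific algorithms used in the paper may differ.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=1e-3):
    """Adaptive noise cancellation with the LMS algorithm.
    primary: samples from the NQR receive coil (signal + RFI + noise)
    reference: samples from an auxiliary sensor observing mainly the RFI."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]      # most recent reference samples first
        y = np.dot(w, x)                       # interference estimate
        e = primary[n] - y                     # error = cleaned sample
        w += 2 * mu * e * x                    # LMS weight update
        out[n] = e
    return out
```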
Use of PZT's for adaptive control of Fabry-Perot etalon plate figure
NASA Technical Reports Server (NTRS)
Skinner, Wilbert; Niciejewski, R.
2005-01-01
A Fabry Perot etalon, consisting of two spaced and reflective glass flats, provides the mechanism by which high resolution spectroscopy may be performed over narrow spectral regions. Space based applications include direct measurements of Doppler shifts of airglow absorption and emission features and the Doppler broadening of spectral lines. The technique requires a high degree of parallelism between the two flats to be maintained through harsh launch conditions. Monitoring and adjusting the plate figure by illuminating the Fabry Perot interferometer with a suitable monochromatic source may be performed on orbit to actively control the parallelism of the flats. This report describes the use of such a technique in a laboratory environment applied to a piezo-electric stack attached to the center of a Fabry Perot etalon.
Olivier Chesneau's Work on Low Mass Stars
NASA Astrophysics Data System (ADS)
Lagadec, E.
2015-12-01
During his too short career, Olivier Chesneau pioneered the study of the circumstellar environments of low mass evolved stars using very high angular resolution techniques. He applied state-of-the-art high angular resolution techniques, such as optical interferometry and adaptive optics imaging, to the study of a variety of objects, from AGB stars to Planetary Nebulae, via e.g. Born Again stars, RCB stars and Novae. I present here an overview of this work and its most important results, focusing on the paths he followed and the key encounters he made to reach them. Olivier liked to work in teams and was very strong at linking people with complementary expertise, to whom he would communicate his enthusiasm and sharp ideas. His legacy will live on through the many people he inspired.
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach
NASA Astrophysics Data System (ADS)
Katkovnik, Vladimir; Bioucas-Dias, José
2010-04-01
Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piece-wise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended ambiguity (periodized), compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and the adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations where all other algorithms fail.
Computer Based Porosity Design by Multi Phase Topology Optimization
NASA Astrophysics Data System (ADS)
Burblies, Andreas; Busse, Matthias
2008-02-01
A numerical simulation technique called Multi Phase Topology Optimization (MPTO) based on the finite element method has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage that mass changes continuously through growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. Now it is possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity-designed structures can be manufactured with new Rapid Prototyping technologies. Fraunhofer IFAM has successfully applied 3D printing and Selective Laser Sintering methods to produce very stiff, lightweight components with graded porosities calculated by MPTO.
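The following Python sketch illustrates the key idea of optimizing while conserving material concentrations: element material assignments are swapped pairwise and a swap is kept only if it lowers the elastic energy. This is a heavily hedged caricature, not Fraunhofer IFAM's algorithm; `elastic_energy` is a hypothetical placeholder callable (in practice a finite element compliance evaluation).

```python
import numpy as np

def swap_optimize(assignment, elastic_energy, n_steps=10000, rng=None):
    """assignment: integer material index per finite element (counts fixed);
    elastic_energy: callable returning the elastic energy of an assignment.
    Pairwise swaps keep every material concentration constant, mimicking the
    mass-conserving behaviour described for MPTO."""
    rng = rng or np.random.default_rng()
    energy = elastic_energy(assignment)
    for _ in range(n_steps):
        i, j = rng.integers(0, len(assignment), size=2)
        if assignment[i] == assignment[j]:
            continue
        assignment[i], assignment[j] = assignment[j], assignment[i]
        new_energy = elastic_energy(assignment)
        if new_energy < energy:
            energy = new_energy                # keep the improving swap
        else:
            assignment[i], assignment[j] = assignment[j], assignment[i]   # revert
    return assignment, energy
```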
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
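To make the frequency-monitoring idea concrete, here is a short Python sketch of a sliding (moving) DFT recursion, which tracks a single DFT bin of the most recent N samples with constant work per new sample. It is offered as a generic illustration of this class of recursion under the stated assumptions, not as the exact MDFT formulation used in the paper.

```python
import numpy as np

class SlidingDFT:
    """Tracks DFT bin k of the most recent N samples:
    X_k(n) = (X_k(n-1) + x(n) - x(n-N)) * exp(j*2*pi*k/N)."""
    def __init__(self, N, k):
        self.N, self.k = N, k
        self.twiddle = np.exp(2j * np.pi * k / N)
        self.buffer = np.zeros(N)      # circular buffer of the last N samples
        self.X = 0.0 + 0.0j
        self.pos = 0

    def update(self, sample):
        oldest = self.buffer[self.pos]
        self.buffer[self.pos] = sample
        self.pos = (self.pos + 1) % self.N
        self.X = (self.X + sample - oldest) * self.twiddle
        return self.X
```

One such tracker per designer-chosen frequency provides the input/output spectral samples from which the reduced-order transfer function is matched.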
NASA Astrophysics Data System (ADS)
Zhang, Yan; Sun, JinWei; Rolfe, Peter
2010-12-01
Near-infrared spectroscopy (NIRS) can be used as the basis of non-invasive neuroimaging that may allow the measurement of haemodynamic changes in the human brain evoked by applied stimuli. Since this technique is very sensitive, physiological interference arising from the cardiac cycle and breathing can significantly affect the signal quality. Such interference is difficult to remove by conventional techniques because it occurs not only in the extracerebral layer but also in the brain tissue itself. Previous work on this problem employing temporal filtering, spatial filtering, and adaptive filtering have exhibited good performance for recovering brain activity data in evoked response studies. However, in this study, we present a time-frequency adaptive method for physiological interference reduction based on the combination of empirical mode decomposition (EMD) and Hilbert spectral analysis (HSA). Monte Carlo simulations based on a five-layered slab model of a human adult head were implemented to evaluate our methodology. We applied an EMD algorithm to decompose the NIRS time series derived from Monte Carlo simulations into a series of intrinsic mode functions (IMFs). In order to identify the IMFs associated with symmetric interference, the extracted components were then Hilbert transformed from which the instantaneous frequencies could be acquired. By reconstructing the NIRS signal by properly selecting IMFs, we determined that the evoked brain response is effectively filtered out with even higher signal-to-noise ratio (SNR). The results obtained demonstrated that EMD, combined with HSA, can effectively separate, identify and remove the contamination from the evoked brain response obtained with NIRS using a simple single source-detector pair.
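A hedged Python sketch of the EMD-plus-Hilbert filtering idea: decompose the time series into IMFs, estimate each IMF's mean instantaneous frequency, and rebuild the signal without IMFs falling into interference bands. It assumes the third-party PyEMD package (`EMD-signal` on PyPI) and uses made-up cardiac/respiratory band limits; the paper's exact IMF selection criteria are not reproduced.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # third-party package "EMD-signal", imported as PyEMD

def remove_physiological_imfs(signal, fs, bands=((0.8, 2.0), (0.15, 0.5))):
    """Decompose an NIRS time series into IMFs, estimate each IMF's mean
    instantaneous frequency via the Hilbert transform, and discard IMFs whose
    frequency falls within the (hypothetical) cardiac or respiratory bands."""
    imfs = EMD().emd(signal)
    cleaned = np.zeros(len(signal))
    for imf in imfs:
        analytic = hilbert(imf)
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        f_mean = np.mean(np.abs(inst_freq))
        if not any(lo <= f_mean <= hi for lo, hi in bands):
            cleaned += imf          # keep IMFs outside the interference bands
    return cleaned
```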
Product Recommendation System Based on Personal Preference Model Using CAM
NASA Astrophysics Data System (ADS)
Murakami, Tomoko; Yoshioka, Nobukazu; Orihara, Ryohei; Furukawa, Koichi
A product recommendation system can be realized by applying business rules acquired through data mining techniques. Business rules, such as demographic purchase patterns, can cover groups of users with a tendency to purchase products, but it is difficult to recommend products adapted to various personal preferences using these rules alone. In addition, it is very costly to gather the large volume of high-quality survey data necessary for good recommendation based on a personal preference model. A method for collecting kansei information automatically, without questionnaire surveys, is required. Constructing a personal preference model from little preference data is also necessary, since it is costly for the user to input such data. In this paper, we propose a product recommendation system based on kansei information extracted by text mining and a user preference model constructed by Category-guided Adaptive Modeling (CAM). CAM is a feature construction method that generates new features defining a space in which examples with the same label are close together and examples with different labels are far apart. CAM makes it possible to construct a personal preference model even from limited information on liked and disliked categories. In the system, a retrieval agent gathers product specifications and a user agent manages the preference model and the user's likes and dislikes. Kansei information about the products is obtained by applying text mining techniques to review documents about the products on web sites. We carry out experimental studies to confirm that the preference model obtained by our method performs effectively.
Swarm Intelligence: New Techniques for Adaptive Systems to Provide Learning Support
ERIC Educational Resources Information Center
Wong, Lung-Hsiang; Looi, Chee-Kit
2012-01-01
The notion of a system adapting itself to provide support for learning has always been an important issue of research for technology-enabled learning. One approach to provide adaptivity is to use social navigation approaches and techniques which involve analysing data of what was previously selected by a cluster of users or what worked for…
Linear combination reading program for capture gamma rays
Tanner, Allan B.
1971-01-01
This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
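The original program was written in BASIC for the HP 2000A; the Python sketch below is a hedged modern illustration of the underlying idea, solving for channel weights whose dot product is one on the desired element's spectrum and as close to zero as possible on interfering spectra. Function and variable names are invented for the example.

```python
import numpy as np

def linear_combination_weights(target_spectrum, interferent_spectra):
    """Least-squares weights w such that w @ target ~= 1 and w @ s ~= 0 for
    every interfering spectrum s (each spectrum: counts per channel)."""
    A = np.vstack([target_spectrum] + list(interferent_spectra))   # (m, n_channels)
    b = np.zeros(A.shape[0])
    b[0] = 1.0                          # unit response to the desired element
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Applying w to an unknown spectrum then yields a value near zero when the
# desired element is absent and a value proportional to its amount otherwise.
```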
NASA Astrophysics Data System (ADS)
Kolyakov, Sergei; Afanasyeva, Natalia; Bruch, Reinhard
1998-05-01
The new method of fiber optical evanescent wave Fourier transform infrared (FEW-FTIR) spectroscopy has been applied to the diagnostics of normal skin tissue, as well as precancerous and cancerous conditions. The FEW-FTIR technique is nondestructive and sensitive to changes in vibrational spectra in the IR region, without heating or damaging human and animal skin tissue. Therefore this technique is an ideal diagnostic tool for tumor and cancer characterization at an early stage of development at the molecular level. The application of fiber optic technology in the middle infrared (MIR) region is relatively inexpensive and can be adapted easily to any commercially available tabletop FTIR spectrometer. This diagnostic method is fast (several seconds) and can be applied in many fields. Noninvasive medical diagnostics of skin cancer and other skin diseases in vivo, ex vivo, and in vitro allow for the development of convenient, remote clinical applications in dermatology and related fields. The spectral variations from normal to pathological skin tissue, as well as environmental influences on skin, have been measured.
Park, Hee-Won; In, Gyo; Kim, Jeong-Han; Cho, Byung-Goo; Han, Gyeong-Ho; Chang, Il-Moo
2013-01-01
Discriminating between two herbal medicines (Panax ginseng and Panax quinquefolius), with similar chemical and physical properties but different therapeutic effects, is a very serious and difficult problem. Differentiation between two processed ginseng genera is even more difficult because the characteristics of their appearance are very similar. An ultraperformance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOF MS)-based metabolomic technique was applied for the metabolite profiling of 40 processed P. ginseng and processed P. quinquefolius. Currently known biomarkers such as ginsenoside Rf and F11 have been used for the analysis using the UPLC-photodiode array detector. However, this method was not able to fully discriminate between the two processed ginseng genera. Thus, an optimized UPLC-QTOF-based metabolic profiling method was adapted for the analysis and evaluation of two processed ginseng genera. As a result, all known biomarkers were identified by the proposed metabolomics, and additional potential biomarkers were extracted from the huge amounts of global analysis data. Therefore, it is expected that such metabolomics techniques would be widely applied to the ginseng research field. PMID:24558312
Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie
2014-02-01
Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Some innovative strategies including subspace projection, the bottom-up sparsity adaptive approach, and backtracking technique are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness for FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias of less than 1 mm; the method is much faster than mainstream reconstruction methods; and the approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of the practical FMT application with the SASP method.
Detection of antipersonnel (AP) mines using mechatronics approach
NASA Astrophysics Data System (ADS)
Shahri, Ali M.; Naghdy, Fazel
1998-09-01
At present there are approximately 110 million land-mines scattered around the world in 64 countries. The clearance of these mines takes place manually. Unfortunately, on average for every 5000 mines cleared one mine clearer is killed. A Mine Detector Arm (MDA) using mechatronics approach is under development in this work. The robot arm imitates manual hand- prodding technique for mine detection. It inserts a bayonet into the soil and models the dynamics of the manipulator and environment parameters, such as stiffness variation in the soil to control the impact caused by contacting a stiff object. An explicit impact control scheme is applied as the main control scheme, while two different intelligent control methods are designed to deal with uncertainties and varying environmental parameters. Firstly, a neuro-fuzzy adaptive gain controller (NFAGC) is designed to adapt the force gain control according to the estimated environment stiffness. Then, an adaptive neuro-fuzzy plus PID controller is employed to switch from a conventional PID controller to neuro-fuzzy impact control (NFIC), when an impact is detected. The developed control schemes are validated through computer simulation and experimental work.
Transfrontal orbitotomy in the dog: an adaptable three-step approach to the orbit.
Håkansson, Nils Wallin; Håkansson, Berit Wallin
2010-11-01
To describe an adaptable and extensive method for orbitotomy in the dog. An adaptable three-step technique for orbitotomy was developed and applied in nine consecutive cases. The steps are zygomatic arch resection laterally, temporalis muscle elevation medially and zygomatic process osteotomy anteriorly-dorsally. The entire orbit is accessed with excellent exposure and room for surgical manipulation. Facial nerve, lacrimal nerve and lacrimal gland function are preserved. The procedure can easily be converted into an orbital exenteration. Exposure of the orbit was excellent in all cases and anatomically correct closure was achieved. Signs of postoperative discomfort were limited, with moderate, reversible swelling in two cases and mild in seven. Wound infection or emphysema did not occur, nor did any other complication attributable to the operative procedure. Blinking ability and lacrimal function were preserved over follow-up times ranging from 1 to 4 years. Transfrontal orbitotomy in the dog offers excellent exposure and room for manipulation. Anatomically correct closure is easily accomplished, postoperative discomfort is limited and complications are mild and temporary. © 2010 American College of Veterinary Ophthalmologists.
Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi
2017-01-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals could be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error between the estimated breaths per minute (bpm) and the reference waveform bpm for the enrolled patients was as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution for rendering markerless breathing signals from CBCT projections for thoracic and abdominal patients. PMID:27008349
Jaeger, Daniel; Pilger, Christian; Hachmeister, Henning; Oberländer, Elina; Wördenweber, Robin; Wichmann, Julian; Mussgnug, Jan H; Huser, Thomas; Kruse, Olaf
2016-10-21
Oleaginous photosynthetic microalgae hold great promise as non-food feedstocks for the sustainable production of bio-commodities. Algal lipid quality can be analysed by Raman micro-spectroscopy, and the lipid content can be imaged in vivo in a label-free and non-destructive manner by coherent anti-Stokes Raman scattering (CARS) microscopy. In this study, both techniques were applied to the oleaginous microalga Monoraphidium neglectum, a biotechnologically promising microalga resistant to commonly applied lipid staining techniques. The lipid-specific CARS signal was successfully separated from the interfering two-photon excited fluorescence of chlorophyll and, for the first time, lipid droplet formation during nitrogen starvation could be analysed directly. We found that the neutral lipid content deduced from CARS image analysis strongly correlated with the neutral lipid content measured gravimetrically and, furthermore, that the relative degree of unsaturation of fatty acids stored in lipid droplets remained similar. Interestingly, the lipid profile during cellular adaptation to nitrogen starvation showed a two-phase characteristic, with initial fatty acid recycling and subsequent de novo lipid synthesis. This work demonstrates the potential of quantitative CARS microscopy as a label-free lipid analysis technique for any microalgal species, which is highly relevant for future biotechnological applications and for elucidating the process of microalgal lipid accumulation.
Acute and chronic neuromuscular adaptations to local vibration training.
Souron, Robin; Besson, Thibault; Millet, Guillaume Y; Lapole, Thomas
2017-10-01
Vibratory stimuli are thought to have the potential to promote neural and/or muscular (re)conditioning. This has been well described for whole-body vibration (WBV), which is commonly used as a training method to improve strength and/or functional abilities. Yet, this technique may present some limitations, especially in clinical settings where patients are unable to maintain an active position during the vibration exposure. Thus, a local vibration (LV) technique, which consists of applying portable vibrators directly over the tendon or muscle belly without active contribution from the participant, may present an alternative to WBV. The purpose of this narrative review is (1) to provide a comprehensive overview of the literature related to the acute and chronic neuromuscular changes associated with LV, and (2) to show that LV training may be an innovative and efficient alternative method to 'classic' training programs, including in the context of muscle deconditioning prevention or rehabilitation. An acute LV application (one bout of 20-60 min) may be considered a significant neuromuscular workload, as demonstrated by an impairment of force-generating capacity and LV-induced neural changes. Accordingly, it has been reported that a period of LV training is efficient in improving muscular performance over a wide range of training (duration, number of sessions) and vibration (frequency, amplitude, site of application) parameters. The functional improvements are principally triggered by adaptations within the central nervous system. A model illustrating the current research on LV-induced adaptations is provided.
Beam shaping as an enabler for new applications
NASA Astrophysics Data System (ADS)
Guertler, Yvonne; Kahmann, Max; Havrilla, David
2017-02-01
For many years, laser beam shaping has enabled users to achieve optimized process results as well as manage challenging applications. The latest advancements in industrial lasers and processing optics have taken this a step further, as users are able to adapt the beam shape to meet specific application requirements in a very flexible way. TRUMPF has developed a wide range of experience in creating beam profiles at the work piece for optimized material processing. This technology is based on the physical model of wave optics and can be used with ultra-short pulse lasers as well as multi-kW cw lasers. Basically, the beam shape can be adapted in all three dimensions in space, which allows maximum flexibility. Besides adaptation of the intensity profile, even multi-spot geometries can be produced. This approach is very cost-efficient, because a standard laser source and (in the case of cw lasers) a standard fiber can be used without any special modifications. Based on this innovative beam shaping technology, TRUMPF has developed new and optimized processes. Two of the most recent application developments using these techniques are cutting glass and synthetic sapphire with ultra-short pulse lasers and enhanced brazing of hot-dip zinc-coated steel for automotive applications. Both developments lead to more efficient and flexible production processes, enabled by laser technology, and open the door to new opportunities. They also indicate the potential of beam shaping techniques, since they can be applied to both single-mode laser sources (TOP Cleave) and multi-mode laser sources (brazing).
Equating Scores from Adaptive to Linear Tests
ERIC Educational Resources Information Center
van der Linden, Wim J.
2006-01-01
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
SU-F-J-197: A Novel Intra-Beam Range Detection and Adaptation Strategy for Particle Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Shao, Y
2016-06-15
Purpose: In-vivo range detection/verification is crucial in particle therapy for effective and safe delivery. State-of-the-art techniques are not sufficient for in-vivo on-line range verification due to conflicts among patient dose, signal statistics and imaging time. We propose a novel intra-beam range detection and adaptation strategy for particle therapy. Methods: This strategy uses the planned mid-range spots as probing beams without adding extra radiation to patients. Such a choice of probing beams ensures that the Bragg peaks remain inside the tumor even with significant range variation from the plan. It offers sufficient signal statistics for in-beam positron emission tomography (PET) due to the high positron activity of the therapeutic dose. The probing beam signal can be acquired and reconstructed using in-beam PET, which allows for delineation of the Bragg peaks and detection of range shift, with ease of detection enabled by single-layered spots. If the detected range shift is within a pre-defined tolerance, the remaining spots will be delivered as in the original plan. Otherwise, a fast re-optimization using range-shifted beamlets and accounting for the probing beam dose is applied to consider the tradeoffs posed by the online anatomy. Simulated planning and delivery studies were used to demonstrate the effectiveness of the proposed techniques. Results: Simulations with online range variations due to shifts of various foreign objects into the beam path showed successful delineation of the Bragg peaks as a result of delivering probing beams. Without on-line delivery adaptation, the dose distribution was significantly distorted. In contrast, delivery adaptation incorporating the detected range shift recovered the planned dose well. Conclusion: The proposed intra-beam range detection and adaptation utilizing the planned mid-range spots as probing beams, which illuminate the beam range with strong and accurate PET signals, is a safe, practical, yet effective approach to address range uncertainty issues in particle therapy.
Finishing of additively manufactured titanium alloy by shape adaptive grinding (SAG)
NASA Astrophysics Data System (ADS)
Beaucamp, Anthony T.; Namba, Yoshiharu; Charlton, Phillip; Jain, Samyak; Graziano, Arthur A.
2015-06-01
In recent years, rapid prototyping of titanium alloy components for medical and aeronautical applications has become viable thanks to advances in technologies such as electron beam melting (EBM) and selective laser sintering (SLS). However, for many applications the high surface roughness generated by additive manufacturing techniques demands a post-finishing operation to improve the surface quality prior to usage. In this paper, the novel shape adaptive grinding process has been applied to finishing titanium alloy (Ti6Al4V) additively manufactured by EBM and SLS. It is shown that the micro-structured surface layer resulting from the melting process can be removed, and the surface can then be smoothed down to less than 10 nm Ra (starting from 4-5 μm Ra) using only three different diamond grit sizes. This paper also demonstrates application of the technology to freeform shapes, and documents the dimensional accuracy of finished artifacts.
Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions
NASA Astrophysics Data System (ADS)
Huang, Zhi; Fan, Baozheng; Song, Xiaolin
2018-03-01
As one of the essential components of environment perception techniques for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, the linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region of interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method for complicated disturbances, illumination, and stochastic lane shapes.
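The abstract above mentions a Kalman filter whose noise covariance is adapted on-line to keep lane tracking robust. One common way to do this is innovation-based adaptation of the measurement noise covariance; the sketch below applies it to a toy lane-offset state. The state model, window length and adaptation rule are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def adaptive_kalman_step(x, P, z, F, H, Q, R, innovations, window=20):
        """One Kalman predict/update step with innovation-based adaptation of R."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        nu = z - H @ x_pred                              # innovation
        innovations.append(np.outer(nu, nu))
        if len(innovations) > window:
            innovations.pop(0)
        if len(innovations) == window:
            # match the predicted innovation covariance to the empirical one
            C_nu = np.mean(innovations, axis=0)
            R = np.maximum(C_nu - H @ P_pred @ H.T, 1e-6 * np.eye(len(z)))
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ nu
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new, R

    # toy example: track lateral lane offset and its rate from noisy offset measurements
    F = np.array([[1.0, 0.1], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 1e-3 * np.eye(2)
    R = np.array([[0.5]])
    x, P = np.zeros(2), np.eye(2)
    innovations = []
    rng = np.random.default_rng(1)
    for t in range(100):
        z = np.array([0.5 * np.sin(0.05 * t)]) + rng.normal(0, 0.2, 1)
        x, P, R = adaptive_kalman_step(x, P, z, F, H, Q, R, innovations)
    print("state estimate:", x, "adapted R:", R)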
Researching a local heroin market as a complex adaptive system.
Hoffer, Lee D; Bobashev, Georgiy; Morris, Robert J
2009-12-01
This project applies agent-based modeling (ABM) techniques to better understand the operation, organization, and structure of a local heroin market. The simulation detailed was developed using data from an 18-month ethnographic case study. The original research, collected in Denver, CO during the 1990s, represents the historic account of users and dealers who operated in the Larimer area heroin market. Working together, the authors studied the behaviors of customers, private dealers, street-sellers, brokers, and the police, reflecting the core elements pertaining to how the market operated. After evaluating the logical consistency between the data and agent behaviors, simulations scaled-up interactions to observe their aggregated outcomes. While the concept and findings from this study remain experimental, these methods represent a novel way in which to understand illicit drug markets and the dynamic adaptations and outcomes they generate. Extensions of this research perspective, as well as its strengths and limitations, are discussed.
Discussion of the enabling environments for decentralised water systems.
Moglia, M; Alexander, K S; Sharma, A
2011-01-01
Decentralised water supply systems are becoming increasingly affordable and commonplace in Australia and have the potential to alleviate urban water shortages and reduce pollution into natural receiving marine and freshwater streams. Learning processes are necessary to support the efficient implementation of decentralised systems. These processes reveal the complex socio-technical and institutional factors to be considered when developing an enabling environment supporting decentralised water and wastewater servicing solutions. Critical to the technological transition towards established decentralised systems is the ability to create strategic and adaptive capacity to promote learning and dialogue. Learning processes require institutional mechanisms to ensure the lessons are incorporated into the formulation of policy and regulation, through constructive involvement of key government institutions. Engagement of stakeholders is essential to the enabling environment. Collaborative learning environments using systems analysis with communities (social learning) and adaptive management techniques are useful in refining and applying scientists' and managers' knowledge (knowledge management).
Rendezvous with connectivity preservation for multi-robot systems with an unknown leader
NASA Astrophysics Data System (ADS)
Dong, Yi
2018-02-01
This paper studies the leader-following rendezvous problem with connectivity preservation for multi-agent systems composed of uncertain multi-robot systems subject to external disturbances and an unknown leader, both of which are generated by a so-called exosystem with parametric uncertainty. By combining internal model design, a potential function technique and adaptive control, two distributed control strategies are proposed to maintain the connectivity of the communication network, to achieve asymptotic tracking of the output of the unknown leader system by all the followers, and to reject unknown external disturbances. It is also worth mentioning that the uncertain parameters in the multi-robot systems and the exosystem are further allowed to belong to unknown and unbounded sets when applying the second, fully distributed control law, which contains a dynamic gain inspired by high-gain adaptive control or self-tuning regulators.
Motamedi, Shervin; Roy, Chandrabhushan; Shamshirband, Shahaboddin; Hashim, Roslan; Petković, Dalibor; Song, Ki-Il
2015-08-01
Ultrasonic pulse velocity is affected by defects in material structure. This study applied soft computing techniques to predict the ultrasonic pulse velocity for various peat and cement content mixtures over several curing periods. First, this investigation constructed a process to simulate the ultrasonic pulse velocity with an adaptive neuro-fuzzy inference system (ANFIS). Then, an ANFIS network was developed whose input and output layers consisted of four and one neurons, respectively. The four inputs were cement, peat and sand content (%) and curing period (days). The simulation results showed efficient performance of the proposed system. The ANFIS and experimental results were compared through the coefficient of determination and root-mean-square error. In conclusion, use of the ANFIS network enhances the prediction of strength. The simulation results confirmed the effectiveness of the suggested strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
Intelligent data management for real-time spacecraft monitoring
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M.; Gasser, Les; Abramson, Bruce
1992-01-01
Real-time AI systems have begun to address the challenge of restructuring problem solving to meet real-time constraints by making key trade-offs that pursue less than optimal strategies with minimal impact on system goals. Several approaches for adapting to dynamic changes in system operating conditions are known. However, simultaneously adapting system decision criteria in a principled way has been difficult. Towards this end, a general technique for dynamically making such trade-offs using a combination of decision theory and domain knowledge has been developed. Multi-attribute utility theory (MAUT), a decision-theoretic approach for making one-time decisions, is discussed, and dynamic trade-off evaluation is described as a knowledge-based extension of MAUT that is suitable for highly dynamic real-time environments. An example of dynamic trade-off evaluation applied to a specific data management trade-off in a real-world spacecraft monitoring application is provided.
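In additive MAUT, each alternative is scored as a weighted sum of per-attribute utilities; dynamic trade-off evaluation then uses domain knowledge to adjust the weights as operating conditions change and re-scores the alternatives. A minimal sketch follows; the attributes, weights, alternatives and adjustment rule are made up for illustration and are not from the spacecraft monitoring application.

    # Minimal additive MAUT: score alternatives and re-weight under a changed condition.
    def maut_score(utilities, weights):
        """Additive multi-attribute utility: sum of w_i * u_i, normalized by total weight."""
        total = sum(weights.values())
        return sum(weights[a] * utilities[a] for a in utilities) / total

    # hypothetical attributes for a data-management trade-off (utilities scaled to 0..1)
    alternatives = {
        "full_downlink":     {"timeliness": 0.3, "completeness": 1.0, "cpu_load": 0.4},
        "summarize_onboard": {"timeliness": 0.9, "completeness": 0.6, "cpu_load": 0.7},
    }
    weights = {"timeliness": 0.3, "completeness": 0.5, "cpu_load": 0.2}

    # nominal conditions
    ranked = sorted(alternatives, key=lambda a: maut_score(alternatives[a], weights), reverse=True)
    print("nominal choice:", ranked[0])

    # dynamic trade-off evaluation: an anomaly makes timeliness dominant, so domain
    # knowledge raises its weight before the alternatives are re-scored
    weights_anomaly = dict(weights, timeliness=0.7, completeness=0.2, cpu_load=0.1)
    ranked = sorted(alternatives, key=lambda a: maut_score(alternatives[a], weights_anomaly), reverse=True)
    print("anomaly choice:", ranked[0])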
NASA Technical Reports Server (NTRS)
Pazdera, J. S.
1972-01-01
To brake in minimum distance, the tire slip must be controlled to ride the peak of the mu-slip curve so that maximum ground force is developed between tire and pavement. The resulting control system differs from antiskid systems, which react to impending wheel lockup. A simplified model is presented to permit development of a sound control strategy. Liapunov techniques are used to derive a peak-riding adaptive controller applicable to each wheel of a braking vehicle. The controller is applied to a more sophisticated model of a braking airplane with strut bending dynamics included. Simulation results verify the peak-riding property of the controller and the rapid adaptation of the controller to extreme runway conditions. The effects of actuator dynamics, perturbation frequency, type and location of sensors, absence of a free wheel, and a method by which the pilot's braking commands can be interfaced with the peak-riding system are also considered.
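The report derives its peak-riding controller with Liapunov techniques; as a much simpler illustration of the same goal (holding slip at the peak of the mu-slip curve), the sketch below uses a dither-based hill-climbing rule on a toy single-peak friction model. The friction model, gains and noise level are assumptions, not the report's controller.

    import numpy as np

    def mu_slip(s, mu_p=0.8, s_p=0.15):
        """Toy single-peak mu-slip curve (peak friction mu_p at slip s_p)."""
        return 2 * mu_p * s_p * s / (s_p ** 2 + s ** 2)

    # hill-climbing peak seeker: dither the slip setpoint, estimate the local slope of
    # the mu-slip curve from (noisy) measured braking force, and step toward the peak
    rng = np.random.default_rng(5)
    s_cmd, dither, gain = 0.05, 0.01, 0.4
    for k in range(200):
        mu_hi = mu_slip(s_cmd + dither) + 0.005 * rng.standard_normal()
        mu_lo = mu_slip(s_cmd - dither) + 0.005 * rng.standard_normal()
        slope = (mu_hi - mu_lo) / (2 * dither)
        s_cmd = np.clip(s_cmd + gain * slope * 0.01, 0.01, 0.5)  # climb toward zero slope
    print("converged slip setpoint:", round(float(s_cmd), 3), "(true peak at 0.15)")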
My Solar System: A Developmentally Adapted Eco-Mapping Technique for Children
ERIC Educational Resources Information Center
Curry, Jennifer R.; Fazio-Griffith, Laura J.; Rohr, Shannon N.
2008-01-01
Counseling children requires specific skills and techniques, such as play therapy and expressive arts, to address developmental manifestations and to facilitate the understanding of presenting problems. This article outlines an adapted eco-mapping activity that can be used as a creative counseling technique with children in order to promote…
Multibeam satellite EIRP adaptability for aeronautical communications.
NASA Technical Reports Server (NTRS)
Kinal, G. V.; Bisaga, J. J.
1973-01-01
EIRP enhancement and management techniques, emphasizing aeronautical communications and adaptable multibeam concepts, are classified and characterized. User requirement and demand characteristics that exploit the improvement available from each technique are identified, and the relative performance improvement of each is discussed. It is concluded that aeronautical satellite communications could benefit greatly by the employment of these techniques.
Flight control with adaptive critic neural network
NASA Astrophysics Data System (ADS)
Han, Dongchen
2001-10-01
In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are agile missile control for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading reversal in a vertical plane under the following conditions: a system without constraint, a system with a control inequality constraint, and a system with a state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test-bed for a multi-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving the two-point boundary value problem with a shooting method. All of the results showed that the adaptive critic neural network can solve complex nonlinear system control problems.
NASA Technical Reports Server (NTRS)
Keel, Byron M.
1989-01-01
An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low altitudes for the detection of windshear in an airport terminal area, where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square-root normalized recursive least squares lattice estimation algorithm, which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers possible fixed-point implementation. A 10th-order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter returns, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter-induced bias in pulse-pair estimates of windspeed.
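The pipeline described above fits an autoregressive (AR) model to the clutter return and uses the corresponding prediction-error (whitening) filter as an FIR clutter-rejection filter, after which wind speed is estimated with the pulse-pair method. The sketch below reproduces that pipeline on synthetic I/Q data, but fits the AR model by ordinary least squares instead of the normalized lattice algorithm of the abstract; the radar parameters and signal model are illustrative assumptions.

    import numpy as np

    def fit_ar_prediction_error_filter(x, order):
        """Fit AR coefficients by least squares and return the FIR whitening filter."""
        X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
        y = x[order:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.concatenate(([1.0], -a))        # e[n] = x[n] - sum_k a_k x[n-k]

    def pulse_pair_velocity(iq, prf, wavelength):
        """Pulse-pair mean velocity estimate (sign convention depends on geometry)."""
        r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
        return wavelength * prf / (4 * np.pi) * np.angle(r1)

    # synthetic example: strong near-zero-Doppler clutter plus a weather echo
    rng = np.random.default_rng(2)
    prf, wavelength, n = 3000.0, 0.0566, 256
    t = np.arange(n) / prf
    clutter = 50 * np.exp(2j * np.pi * 5.0 * t) * (1 + 0.05 * rng.standard_normal(n))
    weather = np.exp(2j * np.pi * 600.0 * t)      # Doppler shift of roughly 17 m/s
    noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    x = clutter + weather + noise

    h = fit_ar_prediction_error_filter(clutter, order=10)   # designed on clutter-only data
    filtered = np.convolve(x, h, mode="valid")
    print("velocity before filtering:", pulse_pair_velocity(x, prf, wavelength))
    print("velocity after filtering :", pulse_pair_velocity(filtered, prf, wavelength))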
Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin
2017-10-01
Digital signal processing techniques commonly employ fixed-length window filters to process signal contents. DNA signals differ in characteristics from common digital signals since they carry nucleotides as contents. Nucleotides carry genetic code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions exhibit 3-base periodicity caused by an unbalanced nucleotide distribution and a relatively high bias in nucleotide usage, this fundamental characteristic has been exploited in FAWMF to suppress the signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with conventional fixed-length window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e. 40% to 125% compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions in contrast to conventional fixed-length window filters. Copyright © 2017 Elsevier B.V. All rights reserved.
The research and development of the adaptive optics in ophthalmology
NASA Astrophysics Data System (ADS)
Wu, Chuhan; Zhang, Xiaofang; Chen, Weilin
2015-08-01
Recently the combination of adaptive optics and ophthalmology has made great progress and become highly effective. Retinal disease is diagnosed with retinal imaging techniques based on scanning optical systems, so scanning the eye requires an optical system with a strong ability to compensate for eye motion and to correct optical aberrations. Adaptive optics is highly adaptable and supports real-time imaging, which meets the requirement of medical retinal examination for accurate images. The Scanning Laser Ophthalmoscope and Optical Coherence Tomography are now widely used and are the core techniques in medical retinal examination. Based on these techniques, several adaptive optics systems for medical eye scanning have been designed in China by researchers from the Institute of Optics and Electronics of the Chinese Academy of Sciences (CAS); some foreign research institutions have adopted other methods to eliminate the interference of eye motion and optical aberration; and there are many relevant patents both in China and abroad. In this paper, the principles and relevant technical details of the Scanning Laser Ophthalmoscope and Optical Coherence Tomography are described, and the recent development and progress of adaptive optics in the field of retinal imaging are analyzed and summarized.
Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavel Kus; Pavel Solin; David Andrs
2014-11-01
In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables the use of highly irregular meshes, and it greatly simplifies the design of adaptive algorithms as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
Adaptive control applied to Space Station attitude control system
NASA Technical Reports Server (NTRS)
Lam, Quang M.; Chipman, Richard; Hu, Tsay-Hsin G.; Holmes, Eric B.; Sunkel, John
1992-01-01
This paper presents an adaptive control approach to enhance the performance of the attitude control system currently used by Space Station Freedom. The proposed control law was developed based on the direct adaptive control, or model reference adaptive control, scheme. Performance comparisons, subject to inertia variation, of the adaptive controller and the fixed-gain linear quadratic regulator currently implemented for the Space Station are conducted. Both the fixed-gain and the adaptive-gain controllers are able to maintain Station stability for inertia variations of up to 35 percent. However, when a 50 percent inertia variation is applied to the Station, only the adaptive controller is able to maintain the Station attitude.
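Model reference adaptive control adjusts controller gains on-line so that the closed loop tracks a chosen reference model despite plant parameter (e.g., inertia) variation. The sketch below is a minimal scalar example using the simplified MIT rule (sensitivity filters omitted); the plant, reference model and gains are illustrative and are not the Space Station controller.

    import numpy as np

    # plant: dx/dt = a*x + b*u   (a, b unknown to the controller; sign of b assumed known)
    # reference model: dx_m/dt = a_m*x_m + b_m*r
    a, b = -0.5, 2.0
    a_m, b_m = -2.0, 2.0
    gamma = 0.5                 # adaptation gain
    dt, T = 0.01, 100.0

    x = x_m = 0.0
    theta_r = theta_x = 0.0     # adaptive feedforward / feedback gains
    for k in range(int(T / dt)):
        t = k * dt
        r = 1.0 if (t % 10) < 5 else -1.0          # square-wave attitude command
        u = theta_r * r + theta_x * x              # control law with adaptive gains
        e = x - x_m                                # tracking error
        # simplified MIT rule: step gains along the negative gradient of e**2
        theta_r += -gamma * e * r * dt
        theta_x += -gamma * e * x * dt
        # integrate plant and reference model (forward Euler)
        x += (a * x + b * u) * dt
        x_m += (a_m * x_m + b_m * r) * dt

    print(f"theta_r={theta_r:.2f}, theta_x={theta_x:.2f}  (ideal values 1.00 and -0.75)")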
Lang, Jonas W B; Bliese, Paul D
2009-03-01
The present research provides new insights into the relationship between general mental ability (GMA) and adaptive performance by applying a discontinuous growth modeling framework to a study of unforeseen change on a complex decision-making task. The proposed framework provides a way to distinguish 2 types of adaptation (transition adaptation and reacquisition adaptation) from 2 common performance components (skill acquisition and basal task performance). Transition adaptation refers to an immediate loss of performance following a change, whereas reacquisition adaptation refers to the ability to relearn a changed task over time. Analyses revealed that GMA was negatively related to transition adaptation and found no evidence for a relationship between GMA and reacquisition adaptation. The results are integrated within the context of adaptability research, and implications of using the described discontinuous growth modeling framework to study adaptability are discussed. (c) 2009 APA, all rights reserved.
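A discontinuous growth model separates basal performance, skill acquisition (the pre-change slope), transition adaptation (an intercept shift at the change point) and reacquisition adaptation (a post-change slope) by coding three time-varying predictors and letting ability moderate each of them. Below is a minimal sketch of that coding on simulated trial data with a random-intercept mixed model; the data, effect sizes and column names are invented for illustration (the simulated GMA-transition effect merely mirrors the sign of the relation reported in the abstract).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_subj, n_trials, change = 40, 12, 8          # unforeseen task change after trial 8

    rows = []
    for s in range(n_subj):
        gma = rng.normal()                        # standardized ability score
        for trial in range(1, n_trials + 1):
            time = trial - 1                                  # skill acquisition slope
            trans = 1 if trial > change else 0                # transition adaptation (intercept drop)
            recov = max(trial - change, 0)                    # reacquisition adaptation (relearning slope)
            perf = (50 + 2 * gma + 3 * time                   # basal level and acquisition
                    - (8 + 2 * gma) * trans                   # larger immediate drop for higher GMA (illustrative)
                    + 2.5 * recov + rng.normal(0, 3))
            rows.append(dict(subject=s, gma=gma, time=time, trans=trans, recov=recov, perf=perf))
    df = pd.DataFrame(rows)

    # random-intercept discontinuous growth model with GMA moderating each component
    model = smf.mixedlm("perf ~ (time + trans + recov) * gma", df, groups=df["subject"]).fit()
    print(model.summary())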
Fidan, Barış; Umay, Ilknur
2015-01-01
Accurate signal-source and signal-reflector target localization tasks via mobile sensory units and wireless sensor networks (WSNs), including those for environmental monitoring via sensory UAVs, require precise knowledge of specific signal propagation properties of the environment, which for the electromagnetic signal case are the permittivity and path loss coefficients. Thus, accurate estimation of these coefficients is of significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated into a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The proposed adaptive localization and tracking techniques are both mathematically analysed and verified via simulation experiments. PMID:26690441
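RSS-based ranging relies on a log-distance path-loss model whose coefficients must be estimated on-line, and a recursive least squares (RLS) update is the standard tool for that. The sketch below estimates the path-loss parameters from streaming RSS measurements; it is a generic RLS illustration under an assumed log-distance model, not the geometric cooperative scheme of the paper.

    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.99):
        """One recursive-least-squares update for the linear model y ≈ phi @ theta."""
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
        return theta, P

    # estimate the log-distance path-loss model  RSS = P0 - 10*n*log10(d)  on-line
    rng = np.random.default_rng(6)
    P0_true, n_true = -40.0, 2.7
    theta = np.zeros(2)                 # parameter vector [P0, n]
    P = 1e3 * np.eye(2)
    for _ in range(300):
        d = rng.uniform(1.0, 50.0)                        # anchor-target distance (m)
        rss = P0_true - 10 * n_true * np.log10(d) + rng.normal(0, 2.0)
        phi = np.array([1.0, -10 * np.log10(d)])
        theta, P = rls_update(theta, P, phi, rss)
    print("estimated P0, n:", np.round(theta, 2))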
Simulation of violent free surface flow by AMR method
NASA Astrophysics Data System (ADS)
Hu, Changhong; Liu, Cheng
2018-05-01
A novel CFD approach based on adaptive mesh refinement (AMR) technique is being developed for numerical simulation of violent free surface flows. CIP method is applied to the flow solver and tangent of hyperbola for interface capturing with slope weighting (THINC/SW) scheme is implemented as the free surface capturing scheme. The PETSc library is adopted to solve the linear system. The linear solver is redesigned and modified to satisfy the requirement of the AMR mesh topology. In this paper, our CFD method is outlined and newly obtained results on numerical simulation of violent free surface flows are presented.
Conflicts in developing countries: a case study from Rio de Janeiro
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bredariol, Celso Simoes; Magrini, Alessandra
In developing countries, environmental conflicts are resolved mainly in the political arena. In the developed nations, approaches favoring structured negotiation support techniques are more common, with methodologies and studies designed especially for this purpose, deriving from Group Communications and Decision Theory. This paper analyzes an environmental dispute in the City of Rio de Janeiro, applying conflict analysis methods and simulating its settlement. It concludes that the use of these methodologies in the developing countries may be undertaken with adaptations, designed to train community groups in negotiating while fostering the democratization of the settlement of these disputes.
Frequency-resolved Monte Carlo.
López Carreño, Juan Camilo; Del Valle, Elena; Laussy, Fabrice P
2018-05-03
We adapt the Quantum Monte Carlo method to the cascaded formalism of quantum optics, allowing us to simulate the emission of photons of known energy. Statistical processing of the photon clicks thus collected agrees with the theory of frequency-resolved photon correlations, extending the range of applications based on correlations of photons of prescribed energy, in particular those of a photon-counting character. We apply the technique to autocorrelations of photon streams from a two-level system under coherent and incoherent pumping, including the Mollow triplet regime where we demonstrate the direct manifestation of leapfrog processes in producing an increased rate of two-photon emission events.
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Feldman, Mark
1995-01-01
Experimental studies were performed to determine the effects of stress and physical aging on the matrix-dominated time-dependent properties of IM7/8320 composite. Isothermal tensile creep/aging test techniques developed for polymers were adapted for testing of the composite material. Time-dependent transverse and shear compliances for an orthotropic plate were found from short-term creep compliance measurements at constant, sub-Tg temperatures. These compliance terms were shown to be affected by physical aging. Aging time shift factors and shift rates were found to be a function of temperature and applied stress.
The effects of stress and physical aging on the creep compliance of a polymeric composite
NASA Technical Reports Server (NTRS)
Gates, Thomas E.; Feldman, Mark
1993-01-01
An experimental study was performed to determine the effects of stress and physical aging on the matrix-dominated viscoelastic properties of IM7/8320, a high-temperature fiber-reinforced thermoplastic composite. Established creep/aging test techniques developed for polymers were adapted for testing of the composite material. The transverse and shear compliances for an orthotropic plate were found from creep compliance measurements at constant, sub-Tg temperatures. These compliance terms were shown to be affected by physical aging. Aging time shift factors and shift rates were found to be a function of applied stress.
NASA Technical Reports Server (NTRS)
1991-01-01
A collaboration between NASA Lewis Research Center (LRC) and Gladwin Engineering resulted in the adaptation of aerospace high-temperature metal technology to the continuous casting of steel. The continuous process is more efficient because it takes less time and labor. A high-temperature material, once used on the X-15 research plane, was applied to metal rollers by an LRC-developed spraying technique. Lewis Research Center also supplied a prototype mold of metal composites, reducing erosion and promoting thermal conductivity. Rollers that previously cracked due to thermal fatigue lasted longer. Gladwin's sales have increased, and additional NASA-developed innovations are anticipated.
Formation of supermassive black holes through fragmentation of toroidal supermassive stars.
Zink, Burkhard; Stergioulas, Nikolaos; Hawke, Ian; Ott, Christian D; Schnetter, Erik; Müller, Ewald
2006-04-28
We investigate new paths to supermassive black hole formation by considering the general relativistic evolution of a differentially rotating polytrope with a toroidal shape. We find that this polytrope is unstable to nonaxisymmetric modes, which leads to a fragmentation into self-gravitating, collapsing components. In the case of one such fragment, we apply a simplified adaptive mesh refinement technique to follow the evolution to the formation of an apparent horizon centered on the fragment. This is the first study of the onset of nonaxisymmetric dynamical instabilities of supermassive stars in full general relativity.
Amino acids as a source of organic nitrogen in Antarctic endolithic microbial communities
NASA Technical Reports Server (NTRS)
McDonald, G.; Sun, H.
2002-01-01
In the Antarctic Dry Valleys, cryptoendolithic microbial communities occur within porous sandstone rocks. Current understanding of the mechanisms of physiological adaptation of these communities to the harsh Antarctic environment is limited, because traditional methods of studying microbial physiology are very difficult to apply to organisms with extremely low levels of metabolic activity. In order to fully understand carbon and nitrogen cycling and nutrient uptake in cryptoendolithic communities, and the metabolic costs that the organisms incur in order to survive, it is necessary to employ molecular geochemical techniques such as amino acid analysis in addition to physiological methods.
B-Spline Filtering for Automatic Detection of Calcification Lesions in Mammograms
NASA Astrophysics Data System (ADS)
Bueno, G.; Sánchez, S.; Ruiz, M.
2006-10-01
Breast cancer continues to be an important health problem among women. Early detection is the only way to improve breast cancer prognosis and significantly reduce mortality. It is by using CAD systems that radiologists can improve their ability to detect and classify lesions in mammograms. In this study, the usefulness of a B-spline filter based on a gradient scheme, compared to wavelet and adaptive filtering, has been investigated for calcification lesion detection as part of a CAD system. The technique has been applied to tissues of different densities. A qualitative validation shows the success of the method.
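A B-spline filter combined with a gradient scheme can be sketched as follows: fit a smoothing cubic B-spline surface to the image, evaluate its gradient analytically, and flag pixels with unusually strong gradients as calcification candidates. The scipy class used here is real, but the toy image, smoothing factor and threshold are assumptions; this is only an illustration of the general idea, not the authors' detector.

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    # toy "mammogram": smooth background plus a few small bright calcification-like spots
    rng = np.random.default_rng(4)
    n = 128
    yy, xx = np.mgrid[0:n, 0:n]
    img = 0.4 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 3000.0) + 0.02 * rng.standard_normal((n, n))
    for cy, cx in [(30, 40), (90, 70), (60, 100)]:
        img[cy - 1:cy + 2, cx - 1:cx + 2] += 0.5

    # fit a smoothing cubic B-spline surface and evaluate its gradient analytically
    coords = np.arange(n, dtype=float)
    spline = RectBivariateSpline(coords, coords, img, s=n * n * 0.0004)
    gx = spline.ev(yy.ravel(), xx.ravel(), dy=1).reshape(n, n)   # derivative along columns
    gy = spline.ev(yy.ravel(), xx.ravel(), dx=1).reshape(n, n)   # derivative along rows
    grad = np.hypot(gx, gy)

    # candidate calcification pixels: unusually strong local gradients
    mask = grad > grad.mean() + 4 * grad.std()
    print("flagged pixels:", int(mask.sum()))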
Building Science-Relevant Literacy with Technical Writing in High School
DOE Office of Scientific and Technical Information (OSTI.GOV)
Girill, T R
2006-06-02
By drawing on the in-class work of an on-going literacy outreach project, this paper explains how well-chosen technical writing activities can earn time in high-school science courses by enabling underperforming students (including ESL students) to learn science more effectively. We adapted basic research-based text-design and usability techniques into age-appropriate exercises and cases using the cognitive apprenticeship approach. This enabled high-school students, aided by explicit guidelines, to build their cognitive maturity, learn how to craft good instructions and descriptions, and apply those skills to better note taking and technical talks in their science classes.
A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health
Zafra-Cabeza, Ascensión; Rivera, Daniel E.; Collins, Linda M.; Ridao, Miguel A.; Camacho, Eduardo F.
2010-01-01
This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm. PMID:21643450
Elucidating Microbial Adaptation Dynamics via Autonomous Exposure and Sampling
NASA Technical Reports Server (NTRS)
Grace, Joseph M.; Verseux, Cyprien; Gentry, Diana; Moffet, Amy; Thayabaran, Ramanen; Wong, Nathan; Rothschild, Lynn
2013-01-01
The adaptation of micro-organisms to their environments is a complex process of interaction between the pressures of the environment and of competition. Reducing this multifactorial process to environmental exposure in the laboratory is a common tool for elucidating individual mechanisms of evolution, such as mutation rates. Although such studies inform fundamental questions about the way adaptation and even speciation occur, they are often limited by labor-intensive manual techniques. Current methods for controlled study of microbial adaptation limit the length of time, the depth of collected data, and the breadth of applied environmental conditions. Small idiosyncrasies in manual techniques can have large effects on outcomes; for example, there are significant variations in induced radiation resistances following similar repeated exposure protocols. We describe here a project under development to allow rapid cycling of multiple types of microbial environmental exposure. The system allows continuous autonomous monitoring and data collection of both single species and sampled communities, independently and concurrently providing multiple types of controlled environmental pressure (temperature, radiation, chemical presence or absence, and so on) to a microbial community in dynamic response to the ecosystem's current status. When combined with DNA sequencing and extraction, such a controlled environment can cast light on microbial functional development, population dynamics, inter- and intra-species competition, and microbe-environment interaction. The project's goal is to allow rapid, repeatable iteration of studies of both natural and artificial microbial adaptation. As an example, the same system can be used both to increase the pH of a wet soil aliquot over time while periodically sampling it for genetic activity analysis, or to repeatedly expose a culture of bacteria to the presence of a toxic metal, automatically adjusting the level of toxicity based on the number or growth rate of surviving cells. We are on our second prototype iteration, with demonstrated functions of microbial growth monitoring and dynamic exposure to UV-C radiation and temperature. We plan to add functionality for general chemical presence or absence by Nov. 2013. By making the project low-cost and open-source, we hope to encourage others to use it as a basis for future development of a common microbial environmental adaptation testbed.
Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure
NASA Astrophysics Data System (ADS)
Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.
2014-08-01
Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques include sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.
FURAU, Cristian; FURAU, Gheorghe; DASCAU, Voicu; CIOBANU, Gheorghe; ONEL, Cristina; STANESCU, Casiana
2013-01-01
Objectives: Cesarean section has recently become the first choice for delivery in many clinics in Romania and worldwide. The purpose of our study is to assess the benefits of introducing the adapted Vejnovic uterine suture technique into daily practice. Material and Methods: A total of 1703 out of the 1776 cesarean sections performed between January 2012 and March 2013 in the Obstetric Department of the Emergency Clinical County Hospital of Arad were retrospectively analyzed based on the cesarean section registries, birth registries and patients' personal medical records. We compared results between the group of patients undergoing the adapted Vejnovic cesarean section technique and the group of patients operated on in the classic manner. Outcomes: The cesarean section rate in the studied period was 56.48%. The adapted Vejnovic cesarean section technique was performed in 548 cases (30.86% of the cases); furthermore, in the last 3 months studied it reached 57.27%. The mean APGAR score was better in the adapted Vejnovic cesarean section group (8.43) compared with the reference group (8.34). No significant differences were seen between the two groups regarding maternal age, gestation, weeks of gestation, newborn weight, anesthesia and indications for cesarean section. Exteriorizing the uterus helped the incidental diagnosis of 35 uterine myomas, 22 adnexal masses and 13 uterine malformations. Conclusion: In a society with constant growth of the cesarean rate, the adapted Vejnovic cesarean section technique is becoming popular amongst clinicians for its advantages, but further studies need to be developed for its standardization. PMID:24371494
Adaptive management of rangeland systems
Allen, Craig R.; Angeler, David G.; Fontaine, Joseph J.; Garmestani, Ahjond S.; Hart, Noelle M.; Pope, Kevin L.; Twidwell, Dirac
2017-01-01
Adaptive management is an approach to natural resource management that uses structured learning to reduce uncertainties for the improvement of management over time. The origins of adaptive management are linked to ideas of resilience theory and complex systems. Rangeland management is particularly well suited for the application of adaptive management, having sufficient controllability and reducible uncertainties. Adaptive management applies the tools of structured decision making and requires monitoring, evaluation, and adjustment of management. Adaptive governance, involving sharing of power and knowledge among relevant stakeholders, is often required to address conflict situations. Natural resource laws and regulations can present a barrier to adaptive management when requirements for legal certainty are met with environmental uncertainty. However, adaptive management is possible, as illustrated by two cases presented in this chapter. Despite challenges and limitations, when applied appropriately adaptive management leads to improved management through structured learning, and rangeland management is an area in which adaptive management shows promise and should be further explored.
NASA Technical Reports Server (NTRS)
Hunter, H. E.; Amato, R. A.
1972-01-01
The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.
Investigation of finite element: ABC methods for electromagnetic field simulation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chatterjee, A.; Volakis, John L.; Nguyen, J.
1994-01-01
The mechanics of wave propagation in the presence of obstacles is of great interest in many branches of engineering and applied mathematics like electromagnetics, fluid dynamics, geophysics, seismology, etc. Such problems can be broadly classified into two categories: the bounded domain or the closed problem and the unbounded domain or the open problem. Analytical techniques have been derived for the simpler problems; however, the need to model complicated geometrical features, complex material coatings and fillings, and to adapt the model to changing design parameters have inevitably tilted the balance in favor of numerical techniques. The modeling of closed problems presents difficulties primarily in proper meshing of the interior region. However, problems in unbounded domains pose a unique challenge to computation, since the exterior region is inappropriate for direct implementation of numerical techniques. A large number of solutions have been proposed but only a few have stood the test of time and experiment. The goal of this thesis is to develop an efficient and reliable partial differential equation technique to model large three dimensional scattering problems in electromagnetics.
Nonlinear secret image sharing scheme.
Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young
2014-01-01
Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear combination polynomial arithmetic. Although secret image sharing schemes based on Shamir's technique are efficient and scalable for various environments, there exists a security threat known as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear combination polynomial arithmetic in order to address this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. As a result, the average PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively. PMID:25140334
Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.
2011-01-01
We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788
Application of Taguchi methods to infrared window design
NASA Astrophysics Data System (ADS)
Osmer, Kurt A.; Pruszynski, Charles J.
1990-10-01
Dr. Genichi Taguchi, a prominent quality consultant, reduced a branch of statistics known as "Design of Experiments" to a cookbook methodology that can be employed by any competent engineer. This technique has been extensively employed by Japanese manufacturers, and is widely credited with helping them attain their current level of success in low-cost, high-quality product design and fabrication. Although this technique was originally put forth as a tool to streamline the determination of improved production processes, it can also be applied to a wide range of engineering problems. As part of an internal research project, this method of experimental design has been adapted to window trade studies and materials research. Two of these analyses are presented herein, and have been chosen to illustrate the breadth of applications to which the Taguchi method can be applied.
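Taguchi's method assigns factors to an orthogonal array, runs the matrix of experiments, and ranks factor influence through signal-to-noise ratios and main effects. Below is a minimal sketch with three two-level factors on a full 2^3 factorial (the combination obtained from columns 1, 2 and 4 of the standard L8 array); the window-design factors and response values are invented for illustration, not taken from the paper's trade studies.

    import numpy as np

    # three two-level factors on a 2^3 full factorial (columns 1, 2 and 4 of the L8 array)
    design = np.array([
        [1, 1, 1], [1, 1, 2], [1, 2, 1], [1, 2, 2],
        [2, 1, 1], [2, 1, 2], [2, 2, 1], [2, 2, 2],
    ])
    factors = ["coating", "thickness", "material"]

    # hypothetical measured responses (e.g., transmittance %) for two replicates per run
    y = np.array([
        [88, 87], [84, 85], [90, 91], [86, 85],
        [80, 82], [78, 77], [83, 84], [81, 80],
    ], dtype=float)

    # larger-the-better signal-to-noise ratio for each experimental run
    sn = -10 * np.log10(np.mean(1.0 / y ** 2, axis=1))

    # main effect of each factor = mean S/N at level 2 minus mean S/N at level 1
    for j, name in enumerate(factors):
        eff = sn[design[:, j] == 2].mean() - sn[design[:, j] == 1].mean()
        print(f"{name:10s} S/N effect = {eff:+.2f} dB")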
NASA Astrophysics Data System (ADS)
Aonishi, Toru; Mimura, Kazushi; Utsunomiya, Shoko; Okada, Masato; Yamamoto, Yoshihisa
2017-10-01
The coherent Ising machine (CIM) has attracted attention as one of the most effective Ising computing architectures for solving large scale optimization problems because of its scalability and high-speed computational ability. However, it is difficult to implement the Ising computation in the CIM because the theories and techniques of classical thermodynamic equilibrium Ising spin systems cannot be directly applied to the CIM. This means we have to adapt these theories and techniques to the CIM. Here we focus on a ferromagnetic model and a finite loading Hopfield model, which are canonical models sharing a common mathematical structure with almost all other Ising models. We derive macroscopic equations to capture nonequilibrium phase transitions in these models. The statistical mechanical methods developed here constitute a basis for constructing evaluation methods for other Ising computation models.
Ar+ and CuBr laser-assisted chemical bleaching of teeth: estimation of whiteness degree
NASA Astrophysics Data System (ADS)
Dimitrov, S.; Todorovska, Roumyana; Gizbrecht, Alexander I.; Raychev, L.; Petrov, Lyubomir P.
2003-11-01
This work presents the results of adapting objective color-determination methods to develop techniques for estimating the whiteness degree of human teeth that are handy enough for common use in clinical practice. To validate and illustrate the techniques, tooth color standards were used, as well as model and naturally discolored human teeth treated with two chemical bleaching compositions, each activated by three light sources: Ar+ and CuBr lasers and a standard halogen photopolymerization lamp. Typical reflection and fluorescence spectra of some samples are presented; the sample colors were estimated by standard computer processing in RGB and B coordinates. The results of the applied spectral and colorimetric techniques agree well with those from standard computer processing of the corresponding digital photographs and comply with the visually estimated degree of tooth whiteness judged against the standard reference scale commonly used in aesthetic dentistry.
DataView: a computational visualisation system for multidisciplinary design and analysis
NASA Astrophysics Data System (ADS)
Wang, Chengen
2016-01-01
Rapidly processing raw data and effectively extracting underlying information from huge volumes of multivariate data have become essential to all decision-making processes in sectors like finance, government, medical care, climate analysis, industries, science, etc. Remarkably, visualisation is recognised as a fundamental technology that underpins human comprehension, cognition and utilisation of burgeoning amounts of heterogeneous data. This paper presents a computational visualisation system, named DataView, which has been developed for graphically displaying and capturing outcomes of multiphysics problem-solvers widely used in engineering fields. DataView is functionally composed of techniques for table/diagram representation, and graphical illustration of scalar, vector and tensor fields. The field visualisation techniques are implemented on the basis of a range of linear and non-linear meshes, which flexibly adapt to the disparate data representation schemas adopted by a variety of disciplinary problem-solvers. The visualisation system has been successfully applied to a number of engineering problems, of which some illustrations are presented to demonstrate the effectiveness of the visualisation techniques.
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically while improving accuracy. System reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, and error in finite element modeling. These uncertainties further increased the computational cost of MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, a bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
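The Monte Carlo reliability estimate and the bootstrap error bar mentioned above can be illustrated with a few lines of Python; the limit-state function, input distributions and sample sizes below are placeholders rather than the dissertation's ITPS model.

# Crude Monte Carlo probability-of-failure estimate with a bootstrap error bar.
# The limit state and distributions are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
load = rng.normal(100.0, 15.0, n)        # assumed loading uncertainty
strength = rng.normal(150.0, 10.0, n)    # assumed capacity uncertainty
failed = (strength - load) < 0.0         # failure when demand exceeds capacity
pf = failed.mean()

# Bootstrap the indicator samples to quantify the sampling error of pf.
boot = np.array([rng.choice(failed, n, replace=True).mean() for _ in range(200)])
print(f"P_f ~ {pf:.4f}  (bootstrap std ~ {boot.std():.4f})")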
NASA Astrophysics Data System (ADS)
Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie
2017-03-01
Cerenkov luminescence tomography (CLT), as a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid is still a vital factor restricting the accuracy of the CLT reconstruction result. In this paper, we proposed a multi-grid finite element method framework, which was able to improve the accuracy of reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, was applied to improve the reconstruction accuracy. In numerical simulation experiments, the feasibility of our proposed method was evaluated. Results showed that the multi-grid strategy could obtain 3D spatial information of the Cerenkov source more accurately compared with the traditional single-grid FEM.
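For orientation, the classical algebraic reconstruction technique (the Kaczmarz iteration) that MLS-AART extends updates the image estimate one measurement row at a time. The Python sketch below shows only that baseline update on a small synthetic system; the multilevel scheduling and multi-grid FEM coupling of the paper are not reproduced.

# Baseline ART (Kaczmarz) sweep: x <- x + lam * (b_i - a_i.x) / ||a_i||^2 * a_i
# on a hypothetical small linear system standing in for the forward model.
import numpy as np

def art(A, b, sweeps=200, lam=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x = x + lam * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(1)
A = rng.random((30, 10))           # stand-in forward model
x_true = rng.random(10)            # stand-in source distribution
x_rec = art(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true))   # small residual for this consistent system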
[Cost of therapy for neurodegenerative diseases. Applying an activity-based costing system].
Sánchez-Rebull, María-Victoria; Terceño Gómez, Antonio; Travé Bautista, Angeles
2013-01-01
To apply the activity based costing (ABC) model to calculate the cost of therapy for neurodegenerative disorders in order to improve hospital management and allocate resources more efficiently. We used the case study method in the Francolí long-term care day center. We applied all phases of an ABC system to quantify the cost of the activities developed in the center. We identified 60 activities; the information was collected in June 2009. The ABC system allowed us to calculate the average cost per patient with respect to the therapies received. The most costly and commonly applied technique was psycho-stimulation therapy. Focusing on this therapy and on others related to the admissions process could lead to significant cost savings. ABC costing is a viable method for costing activities and therapies in long-term day care centers because it can be adapted to their structure and standard practice. This type of costing allows the costs of each activity and therapy, or combination of therapies, to be determined and aids measures to improve management. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.
Climate change and health effects in Northwest Alaska.
Brubaker, Michael; Berner, James; Chavan, Raj; Warren, John
2011-01-01
This article provides examples of adverse health effects, including weather-related injury, food insecurity, mental health issues, and water infrastructure damage, and the responses to these effects that are currently being applied in two Northwest Alaska communities. In Northwest Alaska, warming is resulting in a broad range of unusual weather and environmental conditions, including delayed freeze-up, earlier breakup, storm surge, coastal erosion, and thawing permafrost. These are just some of the climate impacts that are driving concerns about weather-related injury, the spread of disease, mental health issues, infrastructure damage, and food and water security. Local leaders are challenged to identify appropriate adaptation strategies to address climate impacts and related health effects. The tribal health system is combining local observations, traditional knowledge, and western science to perform community-specific climate change health impact assessments. Local leaders are applying this information to develop adaptation responses. The Alaska Native Tribal Health Consortium will describe relationships between climate impacts and health effects and provide examples of community-scaled adaptation actions currently being applied in Northwest Alaska. Climate change is increasing vulnerability to injury, disease, mental stress, food insecurity, and water insecurity. Northwest communities are applying adaptation approaches that are both specific and appropriate. The health impact assessment process is effective in raising awareness, encouraging discussion, engaging partners, and implementing adaptation planning. With community-specific information, local leaders are applying health protective adaptation measures.
Climate change and health effects in Northwest Alaska
Brubaker, Michael; Berner, James; Chavan, Raj; Warren, John
2011-01-01
This article provides examples of adverse health effects, including weather-related injury, food insecurity, mental health issues, and water infrastructure damage, and the responses to these effects that are currently being applied in two Northwest Alaska communities. Background: In Northwest Alaska, warming is resulting in a broad range of unusual weather and environmental conditions, including delayed freeze-up, earlier breakup, storm surge, coastal erosion, and thawing permafrost. These are just some of the climate impacts that are driving concerns about weather-related injury, the spread of disease, mental health issues, infrastructure damage, and food and water security. Local leaders are challenged to identify appropriate adaptation strategies to address climate impacts and related health effects. Implementation process: The tribal health system is combining local observations, traditional knowledge, and western science to perform community-specific climate change health impact assessments. Local leaders are applying this information to develop adaptation responses. Objective: The Alaska Native Tribal Health Consortium will describe relationships between climate impacts and health effects and provide examples of community-scaled adaptation actions currently being applied in Northwest Alaska. Findings: Climate change is increasing vulnerability to injury, disease, mental stress, food insecurity, and water insecurity. Northwest communities are applying adaptation approaches that are both specific and appropriate. Conclusion: The health impact assessment process is effective in raising awareness, encouraging discussion, engaging partners, and implementing adaptation planning. With community-specific information, local leaders are applying health protective adaptation measures. PMID:22022304
Weigand, Hannah; Weiss, Martina; Cai, Huimin; Li, Yongping; Yu, Lili; Zhang, Christine; Leese, Florian
2018-08-15
Local adaptation is of fundamental importance for populations to cope with fast, human-mediated environmental changes. In the past, analyses of local adaptation were restricted to few model species. Nowadays, due to the increased affordability of high-throughput sequencing, local adaptation can be studied much easier by searching for patterns of positive selection using genomic data. In the present study, we analysed effects of wastewater treatment plant and ore mining effluents on stream invertebrate populations. The two different anthropogenic stressors have impacted on stream ecosystems over different time scales and with different potencies. As target organisms we selected two macroinvertebrate species with different life histories and dispersal capacities: the caddisfly Glossosoma conformis and the flatworm Dugesia gonocephala. We applied a genome-wide genetic marker technique, termed ddRAD (double digest restriction site associated DNA) sequencing, to identify local adaptation. Ten and 18% of all loci were identified as candidate loci for local adaptation in D. gonocephala and G. conformis, respectively. However, after stringent re-evaluation of the genomic data, strong evidence for local adaptation remained only for one population of the flatworm D. gonocephala affected by high copper concentration from ore mining. One of the corresponding candidate loci is arnt, a gene associated with the response to xenobiotics and potentially involved in metal detoxification. Our results support the hypotheses that local adaptation is more likely to play a central role in environments impacted by a stronger stressor for a longer time and that it is more likely to occur in species with lower migration rates. However, these findings have to be interpreted cautiously, as several confounding factors may have limited the possibility to detect local adaptation. Our study highlights how genomic tools can be used to study the adaptability and thus resistance of natural populations to changing environments and we discuss prospects and limitations of the methods. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
St-Georges-Robillard, A.; Masse, M.; Kendall-Dupont, J.; Strupler, M.; Patra, B.; Jermyn, M.; Mes-Masson, A.-M.; Leblond, F.; Gervais, T.
2016-02-01
There is a growing effort in the biomicrosystems community to develop a personalized treatment response assay for cancer patients using primary cells, patient-derived spheroids, or live tissues on-chip. Recently, our group has developed a technique to cut tumors into 350 μm diameter microtissues and keep them alive on-chip, enabling multiplexed in vitro drug assays on primary tumor tissue. Two-photon microscopy, confocal microscopy and flow cytometry are the current standard to assay tissue chemosensitivity on-chip. While these techniques provide microscopic and molecular information, they are not adapted for high-throughput analysis of microtissues. We present a spectroscopic imaging system that allows rapid quantitative measurements of multiple fluorescent viability markers simultaneously by using a liquid crystal tunable filter to record fluorescence and transmittance spectra. As a proof of concept, 24 spheroids composed of ovarian cancer cell line OV90 were formed in a microfluidic chip, stained with two live cell markers (CellTracker™ Green and Orange), and imaged. Fluorescence images acquired were normalized to the acquisition time and gain of the camera, dark noise was removed, spectral calibration was applied, and spatial uniformity was corrected. Spectral un-mixing was applied to separate each fluorophore's contribution. We have demonstrated that rapid and simultaneous viability measurements on multiple spheroids can be achieved, which will have a significant impact on the prediction of a tumor's response to multiple treatment options. This technique may be applied as well in drug discovery to assess the potential of a drug candidate directly on human primary tissue.
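The spectral un-mixing step amounts to expressing each measured spectrum as a non-negative combination of known fluorophore reference spectra. A minimal non-negative least-squares version is sketched below in Python; the Gaussian reference spectra and mixing weights are synthetic placeholders, not the CellTracker data.

# Linear spectral unmixing of one pixel spectrum by non-negative least squares.
# Synthetic endmember spectra stand in for measured fluorophore references.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(500, 650, 31)                    # nm, illustrative
green = np.exp(-0.5 * ((wavelengths - 530) / 15.0) ** 2)   # endmember 1
orange = np.exp(-0.5 * ((wavelengths - 580) / 18.0) ** 2)  # endmember 2
A = np.column_stack([green, orange])

true_abundance = np.array([0.7, 0.3])
measured = A @ true_abundance + 0.01 * np.random.default_rng(0).normal(size=31)

abundance, _ = nnls(A, measured)
print(abundance)    # recovers roughly [0.7, 0.3]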
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of RMP is obtained by applying the trace minimization technique and the singular value decomposition with matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performances of proposed STAP approaches with respect to both the ideal and practical scenarios, involving Doppler-ambiguous clutter ridges, spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain a near-ideal STAP performance; whereas the spatio-temporal sparsity based approach needs a considerably small number of secondary data.
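A much-simplified view of the first approach is to shrink the eigenvalues of the sample covariance matrix and keep the dominant subspace as the low-rank clutter part, with the residual diagonal handled separately. The Python sketch below does exactly that on synthetic snapshots; it is a schematic of the idea, not the paper's trace-minimization RMP solver, and the threshold choice is an assumption.

# Schematic low-rank (clutter) plus diagonal split of a sample covariance matrix
# via eigenvalue soft-thresholding. Synthetic data, not a STAP datacube.
import numpy as np

rng = np.random.default_rng(0)
dof, rank, snaps = 32, 4, 200
V = rng.normal(size=(dof, rank)) + 1j * rng.normal(size=(dof, rank))
noise = 0.1 * (rng.normal(size=(dof, snaps)) + 1j * rng.normal(size=(dof, snaps)))
snapshots = V @ rng.normal(size=(rank, snaps)) + noise
R = snapshots @ snapshots.conj().T / snaps            # sample covariance

w, U = np.linalg.eigh(R)                               # eigenvalues (ascending)
tau = 10 * np.median(w)                                # assumed shrinkage threshold
w_shrunk = np.maximum(w - tau, 0.0)                    # soft-threshold / shrinkage
R_lowrank = (U * w_shrunk) @ U.conj().T                # PSD low-rank clutter estimate
D = np.diag(np.real(np.diag(R - R_lowrank)))           # residual diagonal part

print(int(np.sum(w_shrunk > 0)), "retained eigenvalues out of", dof)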
Gurocak, Serhat; De Gier, Robert P E; Feitz, Wouter
2007-03-01
We evaluated the long-term results of autoaugmentation in the pediatric age group and summarized technical adaptations, experimental options and future perspectives for treating these patients. A directed MEDLINE literature review was performed to assess different techniques and alternative options in autoaugmentation procedures. Of 150 studies, the 49 in the subgroup with the longest duration of follow-up were chosen for this review to show the long-term outcome of autoaugmentation procedures. Information gained from these data was reviewed and new perspectives were summarized. Enterocystoplasty is an effective mode of therapy with acceptable morbidity and satisfactory clinical results, although it is major intraperitoneal surgery with various complications, and patients need prolonged convalescence to adapt to these surgical procedures. On the other hand, patient selection seems to be the most crucial step for the success of autoaugmentation procedures because the clinical outcome does not appear to be durable. The improvement in compliance achieved after autoaugmentation procedures seems to be less pronounced and of shorter duration than that of conventional enterocystoplasty. On the other hand, the low morbidity and lack of side effects of bowel integration into the urinary tract are the definite advantages of this technique. It is the responsibility of the physician to weigh the limited efficacy of the procedures against their definite advantages. Although functionally improved parameters are obtained in tissue engineered autologous bladders, there is an absolute need for additional studies before this challenging technique can be applied widely.
The Direct-Indirect Technique for Composite Restorations Revisited.
Ritter, André V; Fahl, Newton; Vargas, Marcos; Maia, Rodrigo R
2017-06-01
In the direct-indirect composite technique, composite is applied to a nonretentive tooth preparation (eg, a noncarious cervical lesion or a veneer/inlay/onlay preparation) without any bonding agent, sculpted to a primary anatomic form, and light-cured. The partially polymerized restoration is then removed from the preparation and finished and tempered extraorally chairside. The finished inlay is bonded to the preparation using a resin-based luting agent. Advantages of this technique include enhanced physical and mechanical properties afforded by the extraoral chairside tempering process because of increased monomer conversion, and greater operator control over the final marginal adaptation, surface finishing and polishing, and anatomy of the restoration, given that these elements are defined outside of the patient's mouth. The direct-indirect approach also affords enhanced gingival health and patient comfort. This article presents a clinical case in which the direct-indirect composite technique was used to restore three noncarious cervical lesions on the same quadrant on an adult patient. Clinical steps and tips for success are offered. The authors also present scanning electron microscope and atomic force microscope images showing the excellent marginal fit obtained with the direct-indirect composite technique.
Educating Physicists for the 21st Century Industrial Arena
NASA Astrophysics Data System (ADS)
Levine, Alaina G.
2001-03-01
At the University of Arizona, a new Professional Master's Degree in Applied and Industrial Physics has been initiated to meet the demands of a new industrial era. A 1995 report by the National Academy of Sciences, et al, concluded, "A world of work that has become more interdisciplinary, collaborative, and global requires that we produce young people who are adaptable and flexible, as well as technically proficient." To better prepare students for this new "world of work", a new degree was launched in 2000 sponsored by the Sloan Foundation as part of a national initiative. The Professional Master's Degree in Applied and Industrial Physics educates students to 1) work in interdisciplinary teams on complex problems involving rapidly changing science and technology, 2) gain proficiency in computational techniques, 3) effectively communicate their scientific mission at all levels, and 4) understand business and legal issues associated with their scientific projects. I will discuss these goals, the roles of our industrial partners, and Arizona's parallel programs in Applied Biosciences and Mathematical Sciences.
Zheng, Shiqi; Tang, Xiaoqi; Song, Bao; Lu, Shaowu; Ye, Bosheng
2013-07-01
In this paper, a stable adaptive PI control strategy based on the improved just-in-time learning (IJITL) technique is proposed for permanent magnet synchronous motor (PMSM) drive. Firstly, the traditional JITL technique is improved. The new IJITL technique has a lower computational burden than traditional JITL and is therefore better suited to online identification of the PMSM drive system, which has demanding real-time requirements. In this way, the PMSM drive system is identified by the IJITL technique, which provides information to an adaptive PI controller. Secondly, the adaptive PI controller is designed in the discrete-time domain and is composed of a PI controller and a supervisory controller. The PI controller automatically tunes the control gains online based on the gradient descent method, and the supervisory controller is developed to eliminate the effect of the approximation error introduced by the PI controller upon the system stability in the Lyapunov sense. Finally, experimental results on the PMSM drive system show accurate identification and favorable tracking performance. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
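A toy version of the gradient-descent gain update can be written in a few lines of Python. In the sketch below the plant is a simulated first-order system, the learning rate is arbitrary, and the plant gain is assumed positive so its sensitivity can be folded into the step size (an MIT-rule-style simplification); none of this reproduces the paper's PMSM identification or supervisory design.

# Toy adaptive PI loop: gains nudged along a sign-simplified gradient of the
# squared tracking error. Plant, rates and clipping limits are illustrative.
import numpy as np

dt, steps = 0.01, 2000
a, b = 2.0, 1.0                  # assumed stable first-order plant: dy/dt = -a*y + b*u
kp, ki = 0.5, 0.0                # initial gains
eta = 0.02                       # learning rate (assumed)
y, integ, r = 0.0, 0.0, 1.0      # plant output, error integral, step reference

for _ in range(steps):
    e = r - y
    integ += e * dt
    u = kp * e + ki * integ
    y += dt * (-a * y + b * u)                     # plant step (explicit Euler)
    # MIT-rule-like updates, valid only under the positive-plant-gain assumption.
    kp = np.clip(kp + eta * e * e, 0.0, 50.0)
    ki = np.clip(ki + eta * e * integ, 0.0, 50.0)

print(f"kp={kp:.2f}, ki={ki:.2f}, final error={r - y:.4f}")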
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
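The irreducible error of an optimal estimator analysis is the variance of the target about its conditional mean given the chosen input parameters. The Python sketch below estimates it with a one-dimensional histogram (binned conditional mean) on synthetic data, i.e. in the regime where the histogram technique is still adequate before multiple input parameters make it unreliable; the data and bin count are arbitrary.

# Optimal-estimator irreducible error on synthetic data:
# irreducible = E[(q - E[q | phi])^2], estimated with a 1-D histogram.
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 1.0, 100_000)                             # input parameter
q = np.sin(2 * np.pi * phi) + 0.2 * rng.normal(size=phi.size)    # target quantity

edges = np.linspace(0.0, 1.0, 65)                                # 64 bins
idx = np.digitize(phi, edges) - 1
cond_mean = np.array([q[idx == b].mean() for b in range(len(edges) - 1)])

irreducible = np.mean((q - cond_mean[idx]) ** 2)
print(f"irreducible error ~ {irreducible:.4f} (true noise variance 0.04)")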
Adaptations of advanced safety and reliability techniques to petroleum and other industries
NASA Technical Reports Server (NTRS)
Purser, P. E.
1974-01-01
The underlying philosophy of the general approach to failure reduction and control is presented. Safety and reliability management techniques developed in the industries which have participated in the U.S. space and defense programs are described along with adaptations to nonaerospace activities. The examples given illustrate the scope of applicability of these techniques. It is indicated that any activity treated as a 'system' is a potential user of aerospace safety and reliability management techniques.
The micro-mirror technology applied to astronomy: ANIS adaptive-slit near Infrared spectrograph
NASA Astrophysics Data System (ADS)
Burgarella, Denis; Buat, Veronique; Bely, Pierre; Grange, Robert
2018-04-01
This paper, "The micro-mirror technology applied to astronomy: ANIS adaptive-slit near Infrared spectrograph," was presented as part of International Conference on Space Optics—ICSO 1997, held in Toulouse, France.
Issues and Challenges in the Design of Culturally Adapted Evidence-Based Interventions
Castro, Felipe González; Barrera, Manuel; Holleran Steiker, Lori K.
2014-01-01
This article examines issues and challenges in the design of cultural adaptations that are developed from an original evidence-based intervention (EBI). Recently emerging multistep frameworks or stage models are examined, as these can systematically guide the development of culturally adapted EBIs. Critical issues are also presented regarding whether and how such adaptations may be conducted, and empirical evidence is presented regarding the effectiveness of such cultural adaptations. Recent evidence suggests that these cultural adaptations are effective when applied with certain subcultural groups, although they are less effective when applied with other subcultural groups. Generally, current evidence regarding the effectiveness of cultural adaptations is promising but mixed. Further research is needed to obtain more definitive conclusions regarding the efficacy and effectiveness of culturally adapted EBIs. Directions for future research and recommendations are presented to guide the development of a new generation of culturally adapted EBIs. PMID:20192800
Popescu, Anda; Barlow, Steven; Venkatesan, Lalit; Wang, Jingyan; Popescu, Mihai
2014-01-01
Cortical adaptation in the primary somatosensory cortex (SI) has been probed using different stimulation modalities and recording techniques, in both human and animal studies. In contrast, considerably less knowledge has been gained about the adaptation profiles in other areas of the cortical somatosensory network. Using magnetoencephalography, we examined the patterns of short-term adaptation for evoked responses in SI and somatosensory association areas during tactile stimulation applied to the glabrous skin of the right hand. Cutaneous stimuli were delivered as trains of serial pulses with a constant frequency of 2 Hz and 4 Hz in separate runs, and a constant inter-train interval of 5 s. The unilateral stimuli elicited transient responses to the serial pulses in the train, with several response components that were separated by Independent Component Analysis. Subsequent neuromagnetic source reconstruction identified regional generators in the contralateral SI and somatosensory association areas in the posterior parietal cortex (PPC). Activity in the bilateral secondary somatosensory cortex (i.e. SII/PV) was also identified, although less consistently across subjects. The dynamics of the evoked activity in each area and the frequency-dependent adaptation effects were assessed from the changes in the relative amplitude of serial responses in each train. We show that the adaptation profiles in SI and PPC can be quantitatively characterized from neuromagnetic recordings using tactile stimulation, with the sensitivity to repetitive stimulation increasing from SI to PPC. A similar approach for SII/PV has proven less straightforward, potentially due to the selective nature of these areas to respond predominantly to certain stimuli. PMID:22331631
Chang, H.-C.; Kopaska-Merkel, D. C.; Chen, H.-C.; Durrans, S. Rocky
2000-01-01
Lithofacies identification supplies qualitative information about rocks. Lithofacies represent rock textures and are important components of hydrocarbon reservoir description. Traditional techniques of lithofacies identification from core data are costly and different geologists may provide different interpretations. In this paper, we present a low-cost intelligent system consisting of three adaptive resonance theory neural networks and a rule-based expert system to consistently and objectively identify lithofacies from well-log data. The input data are altered into different forms representing different perspectives of observation of lithofacies. Each form of input is processed by a different adaptive resonance theory neural network. Among these three adaptive resonance theory neural networks, one neural network processes the raw continuous data, another processes categorical data, and the third processes fuzzy-set data. Outputs from these three networks are then combined by the expert system using fuzzy inference to determine to which facies the input data should be assigned. Rules are prioritized to emphasize the importance of firing order. This new approach combines the learning ability of neural networks, the adaptability of fuzzy logic, and the expertise of geologists to infer facies of the rocks. This approach is applied to the Appleton Field, an oil field located in Escambia County, Alabama. The hybrid intelligence system predicts lithofacies identity from log data with 87.6% accuracy. This prediction is more accurate than those of the single adaptive resonance theory networks (79.3%, 68.0% and 66.0% using raw, fuzzy-set, and categorical data, respectively) and than that of an error-backpropagation neural network (57.3%). (C) 2000 Published by Elsevier Science Ltd. All rights reserved.
Noel, Martin; Fortin, Karine; Bouyer, Laurent J
2009-01-01
Background: Adapting to external forces during walking has been proposed as a tool to improve locomotion after central nervous system injury. However, sensorimotor integration during walking varies according to the timing in the gait cycle, suggesting that adaptation may also depend on gait phases. In this study, an ElectroHydraulic AFO (EHO) was used to apply forces specifically during mid-stance and push-off to evaluate if feedforward movement control can be adapted in these 2 gait phases. Methods: Eleven healthy subjects walked on a treadmill before (3 min), during (5 min) and after (5 min) exposure to 2 force fields applied by the EHO (mid-stance/push-off; ~10 Nm, towards dorsiflexion). To evaluate modifications in feedforward control, strides with no force field ('catch strides') were unexpectedly inserted during the force field walking period. Results: When initially exposed to a mid-stance force field (FF20%), subjects showed a significant increase in ankle dorsiflexion velocity. Catches applied early into the FF20% were similar to baseline (P > 0.99). Subjects gradually adapted by returning ankle velocity to baseline over ~50 strides. Catches applied thereafter showed decreased ankle velocity where the force field was normally applied, indicating the presence of feedforward adaptation. When initially exposed to a push-off force field (FF50%), plantarflexion velocity was reduced in the zone of force field application. No adaptation occurred over the 5 min exposure. Catch strides kinematics remained similar to control at all times, suggesting no feedforward adaptation. As a control, force fields assisting plantarflexion (-3.5 to -9.5 Nm) were applied and increased ankle plantarflexion during push-off, confirming that the lack of kinematic changes during FF50% catch strides were not simply due to a large ankle impedance. Conclusion: Together these results show that ankle exoskeletons such as the EHO can be used to study phase-specific adaptive control of the ankle during locomotion. Our data suggest that, for short duration exposure, a feedforward modification in torque output occurs during mid-stance but not during push-off. These findings are important for the design of novel rehabilitation methods, as they suggest that the ability to use resistive force fields for training may depend on targeted gait phases. PMID:19493356
Noel, Martin; Fortin, Karine; Bouyer, Laurent J
2009-06-03
Adapting to external forces during walking has been proposed as a tool to improve locomotion after central nervous system injury. However, sensorimotor integration during walking varies according to the timing in the gait cycle, suggesting that adaptation may also depend on gait phases. In this study, an ElectroHydraulic AFO (EHO) was used to apply forces specifically during mid-stance and push-off to evaluate if feedforward movement control can be adapted in these 2 gait phases. Eleven healthy subjects walked on a treadmill before (3 min), during (5 min) and after (5 min) exposure to 2 force fields applied by the EHO (mid-stance/push-off; approximately 10 Nm, towards dorsiflexion). To evaluate modifications in feedforward control, strides with no force field ('catch strides') were unexpectedly inserted during the force field walking period. When initially exposed to a mid-stance force field (FF 20%), subjects showed a significant increase in ankle dorsiflexion velocity. Catches applied early into the FF 20% were similar to baseline (P > 0.99). Subjects gradually adapted by returning ankle velocity to baseline over approximately 50 strides. Catches applied thereafter showed decreased ankle velocity where the force field was normally applied, indicating the presence of feedforward adaptation. When initially exposed to a push-off force field (FF 50%), plantarflexion velocity was reduced in the zone of force field application. No adaptation occurred over the 5 min exposure. Catch strides kinematics remained similar to control at all times, suggesting no feedforward adaptation. As a control, force fields assisting plantarflexion (-3.5 to -9.5 Nm) were applied and increased ankle plantarflexion during push-off, confirming that the lack of kinematic changes during FF 50% catch strides were not simply due to a large ankle impedance. Together these results show that ankle exoskeletons such as the EHO can be used to study phase-specific adaptive control of the ankle during locomotion. Our data suggest that, for short duration exposure, a feedforward modification in torque output occurs during mid-stance but not during push-off. These findings are important for the design of novel rehabilitation methods, as they suggest that the ability to use resistive force fields for training may depend on targeted gait phases.
Adaptive automation of human-machine system information-processing functions.
Kaber, David B; Wright, Melanie C; Prinzel, Lawrence J; Clamann, Michael P
2005-01-01
The goal of this research was to describe the ability of human operators to interact with adaptive automation (AA) applied to various stages of complex systems information processing, defined in a model of human-automation interaction. Forty participants operated a simulation of an air traffic control task. Automated assistance was adaptively applied to information acquisition, information analysis, decision making, and action implementation aspects of the task based on operator workload states, which were measured using a secondary task. The differential effects of the forms of automation were determined and compared with a manual control condition. Results of two 20-min trials of AA or manual control revealed a significant effect of the type of automation on performance, particularly during manual control periods as part of the adaptive conditions. Humans appear to better adapt to AA applied to sensory and psychomotor information-processing functions (action implementation) than to AA applied to cognitive functions (information analysis and decision making), and AA is superior to completely manual control. Potential applications of this research include the design of automation to support air traffic controller information processing.
Mihelcic, James R; Zimmerman, Julie B; Ramaswami, Anu
2007-05-15
Sustainable development in both the developed and developing world has the common fundamental themes of advancing economic and social prosperity while protecting and restoring natural systems. While many recent efforts have been undertaken to transfer knowledge from the developed to the developing world to achieve a more sustainable future, indigenous knowledge that often originates in developing nations also can contribute significantly to this global dialogue. Selected case studies are presented to describe important knowledge, methodologies, techniques, principles, and practices for sustainable development emerging from developing countries in two critical challenge areas to sustainability: water and energy. These, with additional analysis and quantification, can be adapted and expanded for transfer throughout the developed and developing world in advancing sustainability. A common theme in all of the case studies presented is the integration of natural processes and material flows into the anthropogenic system. Some of these techniques, originating in rural settings, have recently been adapted for use in cities, which is especially important as the global trend of urban population growth accelerates. Innovations in science and technology, specifically applied to two critical issues of today, water and energy, are expected to fundamentally shift the type and efficiency of energy and materials utilized to advance prosperity while protecting and restoring natural systems.
Applying Behavior-Based Robotics Concepts to Telerobotic Use of Power Tooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noakes, Mark W; Hamel, Dr. William R.
While it has long been recognized that telerobotics has potential advantages to reduce operator fatigue, to permit lower skilled operators to function as if they had higher skill levels, and to protect tools and manipulators from excessive forces during operation, relatively little laboratory research in telerobotics has actually been implemented in fielded systems. Much of this has to do with the complexity of the implementation and its lack of ability to operate in complex unstructured remote systems environments. One possible solution is to approach the tooling task using an adaptation of behavior-based techniques to facilitate task decomposition to a simpler perspective and to provide sensor registration to the task target object in the field. An approach derived from behavior-based concepts has been implemented to provide automated tool operation for a teleoperated manipulator system. The generic approach is adaptable to a wide range of typical remote tools used in hot-cell and decontamination and dismantlement-type operations. Two tasks are used in this work to test the validity of the concept. First, a reciprocating saw is used to cut a pipe. The second task is bolt removal from mockup process equipment. This paper explains the technique, its implementation, and covers experimental data, analysis of results, and suggestions for implementation on fielded systems.
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is firstly given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding unit and transform unit. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test condition during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.
A new adaptive mesh refinement strategy for numerically solving evolutionary PDE's
NASA Astrophysics Data System (ADS)
Burgarelli, Denise; Kischinhevsky, Mauricio; Biezuner, Rodney Josue
2006-11-01
A graph-based implementation of quadtree meshes for dealing with adaptive mesh refinement (AMR) in the numerical solution of evolutionary partial differential equations is discussed using finite volume methods. The technique displays a plug-in feature that allows replacement of a group of cells in any region of interest for another one with arbitrary refinement, and with only local changes occurring in the data structure. The data structure is also specially designed to minimize the number of operations needed in the AMR. Implementation of the new scheme allows flexibility in the levels of refinement of adjacent regions. Moreover, storage requirements and computational cost compare competitively with mesh refinement schemes based on hierarchical trees. Low storage is achieved because only the child nodes are stored when a refinement takes place. These nodes become part of a graph structure, thus motivating the denomination autonomous leaves graph (ALG) for the new scheme. Neighbors can then be reached without accessing their parent nodes. Additionally, linear-system solvers based on the minimization of functionals can be easily employed. ALG was not conceived with any particular problem or geometry in mind and can thus be applied to the study of several phenomena. Some test problems are used to illustrate the effectiveness of the technique.
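A stripped-down flavour of the plug-in refinement is sketched below in Python: a leaf cell is swapped for its four children in a flat leaf list, so refinement stays local. The neighbour links and graph bookkeeping that give ALG its low-storage traversal, as well as any solver, are deliberately omitted, and the refinement criterion is a placeholder.

# Minimal quadtree-style local refinement: a leaf is replaced by its four
# children in a flat leaf list. ALG's neighbour graph and solver are omitted.
from dataclasses import dataclass

@dataclass
class Cell:
    x: float        # centre x
    y: float        # centre y
    h: float        # cell width
    level: int

def refine(cell):
    q = cell.h / 4.0
    return [Cell(cell.x + dx, cell.y + dy, cell.h / 2.0, cell.level + 1)
            for dx in (-q, q) for dy in (-q, q)]

def adapt(leaves, needs_refinement, max_level=4):
    out = []
    for c in leaves:
        if c.level < max_level and needs_refinement(c):
            out.extend(refine(c))       # local change only: parent replaced by children
        else:
            out.append(c)
    return out

leaves = refine(Cell(0.5, 0.5, 1.0, 0))               # 2 x 2 base mesh
for _ in range(3):                                     # keep refining near a corner
    leaves = adapt(leaves, lambda c: c.x + c.y < 0.75)
print(len(leaves), "leaf cells")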
Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P
2018-01-01
The advancement in medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study is targeted at estimating the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram signal. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. The discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and relative wave energy is then computed from the approximate and detail coefficients of the sub-band signals. Relative wave energy is extracted to find out the degree of importance of different electroencephalogram frequency bands associated with the different anaesthetic phases: awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate the awake, light anaesthesia, moderate anaesthesia and deep anaesthesia states. A novel depth of anaesthesia index is generated by implementing an adaptive neuro-fuzzy inference system based fuzzy c-means clustering algorithm which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
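The relative wave energy feature is simply the fraction of total signal energy carried by each wavelet sub-band after the four-level decomposition. A minimal computation is sketched below, assuming the PyWavelets package, a Daubechies-4 mother wavelet and a synthetic EEG-like signal, all of which are choices made for illustration rather than details stated in the abstract.

# Relative wavelet energy: energy of each sub-band divided by total energy.
# Wavelet family, sampling rate and test signal are assumed, not stated.
import numpy as np
import pywt

fs = 128.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 8, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t)            # alpha-band component
       + 0.5 * np.sin(2 * np.pi * 3 * t)     # delta/theta component
       + 0.2 * np.random.default_rng(0).normal(size=t.size))

coeffs = pywt.wavedec(eeg, "db4", level=4)   # [cA4, cD4, cD3, cD2, cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
relative_energy = energies / energies.sum()

for name, rwe in zip(["A4", "D4", "D3", "D2", "D1"], relative_energy):
    print(f"{name}: {rwe:.3f}")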
Drought in the Horn of Africa: attribution of a damaging and repeating extreme event
NASA Astrophysics Data System (ADS)
Marthews, Toby; Otto, Friederike; Mitchell, Daniel; Dadson, Simon; Jones, Richard
2015-04-01
We have applied detection and attribution techniques to the severe drought that hit the Horn of Africa in 2014. The short rains failed in late 2013 in Kenya, South Sudan, Somalia and southern Ethiopia, leading to a very dry growing season January to March 2014, and subsequently to the current drought in many agricultural areas of the sub-region. We have made use of the weather@home project, which uses publicly-volunteered distributed computing to provide a large ensemble of simulations sufficient to sample regional climate uncertainty. Based on this, we have estimated the occurrence rates of the kinds of the rare and extreme events implicated in this large-scale drought. From land surface model runs based on these ensemble simulations, we have estimated the impacts of climate anomalies during this period and therefore we can reliably identify some factors of the ongoing drought as attributable to human-induced climate change. The UNFCCC's Adaptation Fund is attempting to support projects that bring about an adaptation to "the adverse effects of climate change", but in order to formulate such projects we need a much clearer way to assess how much climate change is human-induced and how much is a consequence of climate anomalies and large-scale teleconnections, which can only be provided by robust attribution techniques.
Planning Coverage Campaigns for Mission Design and Analysis: CLASP for DESDynI
NASA Technical Reports Server (NTRS)
Knight, Russell L.; McLaren, David A.; Hu, Steven
2013-01-01
Mission design and analysis presents challenges in that almost all variables are in constant flux, yet the goal is to achieve an acceptable level of performance against a concept of operations, which might also be in flux. To increase responsiveness, automated planning tools are used that allow for the continual modification of spacecraft, ground system, staffing, and concept of operations, while returning metrics that are important to mission evaluation, such as area covered, peak memory usage, and peak data throughput. This approach was applied to the DESDynI mission design using the CLASP planning system, but since this adaptation, many techniques have changed under the hood for CLASP, and the DESDynI mission concept has undergone drastic changes. The software produces mission evaluation products, such as memory high-water marks and coverage percentages, given a mission design in the form of coverage targets, concept of operations, spacecraft parameters, and orbital parameters. It tries to overcome the lack of fidelity and timeliness of mission requirements coverage analysis during mission design. Previous techniques primarily used Excel in an ad hoc fashion to approximate key factors in mission performance, often falling victim to the overgeneralizations necessary in such an adaptation. The new program allows designers to faithfully represent their mission designs quickly, and get more accurate results just as quickly.
Laborda, Francisco; Bolea, Eduardo; Cepriá, Gemma; Gómez, María T; Jiménez, María S; Pérez-Arantegui, Josefina; Castillo, Juan R
2016-01-21
The increasing demand for analytical information related to inorganic engineered nanomaterials requires the adaptation of existing techniques and methods, or the development of new ones. The challenge for the analytical sciences has been to consider the nanoparticles as a new sort of analytes, involving both chemical (composition, mass and number concentration) and physical information (e.g. size, shape, aggregation). Moreover, information about the species derived from the nanoparticles themselves and their transformations must also be supplied. Whereas techniques commonly used for nanoparticle characterization, such as light scattering techniques, show serious limitations when applied to complex samples, other well-established techniques, like electron microscopy and atomic spectrometry, can provide useful information in most cases. Furthermore, separation techniques, including flow field flow fractionation, capillary electrophoresis and hydrodynamic chromatography, are moving to the nano domain, mostly hyphenated to inductively coupled plasma mass spectrometry as element specific detector. Emerging techniques based on the detection of single nanoparticles by using ICP-MS, but also coulometry, are on their way to gaining a foothold. Chemical sensors selective to nanoparticles are in their early stages, but they are very promising considering their portability and simplicity. Although the field is in continuous evolution, at this moment it is moving from proofs-of-concept in simple matrices to methods dealing with matrices of higher complexity and relevant analyte concentrations. To achieve this goal, sample preparation methods are essential to manage such complex situations. Apart from size fractionation methods, matrix digestion, extraction and concentration methods capable of preserving the nature of the nanoparticles are being developed. This review presents and discusses the state-of-the-art analytical techniques and sample preparation methods suitable for dealing with complex samples. Single- and multi-method approaches applied to solve the nanometrological challenges posed by a variety of stakeholders are also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
Structural optimization of dental restorations using the principle of adaptive growth.
Couegnat, Guillaume; Fok, Siu L; Cooper, Jonathan E; Qualtrough, Alison J E
2006-01-01
In a restored tooth, the stresses that occur at the tooth-restoration interface during loading could become large enough to fracture the tooth and/or restoration and it has been estimated that 92% of fractured teeth have been previously restored. The tooth preparation process for a dental restoration is a classical optimization problem: tooth reduction must be minimized to preserve tooth tissue whilst stress levels must be kept low to avoid fracture of the restored unit. The objective of the present study was to derive alternative optimized designs for a second upper premolar cavity preparation by means of structural shape optimization based on the finite element method and biological adaptive growth. Three models of cavity preparations were investigated: an inlay design for preparation of a premolar tooth, an undercut cavity design and an onlay preparation. Three restorative materials and several tooth/restoration contact conditions were utilized to replicate the in vitro situation as closely as possible. The optimization process was run for each cavity geometry. Mathematical shape optimization based on biological adaptive growth process was successfully applied to tooth preparations for dental restorations. Significant reduction in stress levels at the tooth-restoration interface where bonding is imperfect was achieved using optimized cavity or restoration shapes. In the best case, the maximum stress value was reduced by more than 50%. Shape optimization techniques can provide an efficient and effective means of reducing the stresses in restored teeth and hence has the potential of prolonging their service lives. The technique can easily be adopted for optimizing other dental restorations.
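The adaptive-growth idea can be caricatured as 'add material where stress is high, remove it where stress is low'. The Python sketch below applies such a growth rule to a vector of element thicknesses driven by a crude stress proxy, purely to show the structure of the update; the growth law, constants and stress values are assumptions, and the study's actual optimization runs inside a finite element loop on the cavity geometry.

# Schematic adaptive-growth update: thickness grows where local stress exceeds a
# reference level and shrinks where it is below. All numbers are made up.
import numpy as np

def growth_step(thickness, stress, ref_stress, k=0.05, t_min=0.2, t_max=3.0):
    """One growth iteration: change proportional to (sigma_i - sigma_ref)."""
    new_t = thickness + k * (stress - ref_stress) / ref_stress
    return np.clip(new_t, t_min, t_max)

base_stress = np.array([40, 55, 90, 120, 150, 95, 60, 35], dtype=float)  # fake FE output
thickness = np.full(base_stress.size, 1.0)
for _ in range(10):
    stress = base_stress / thickness       # crude proxy for re-running the FE analysis
    thickness = growth_step(thickness, stress, ref_stress=80.0)

print(np.round(thickness, 2))              # thicker where the fake stresses were high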
NASA Astrophysics Data System (ADS)
Conallin, John; McLoughlin, Craig A.; Campbell, Josh; Knight, Roger; Bright, Troy; Fisher, Ian
2018-03-01
The complex nature of freshwater systems provides challenges for incorporating evidence-based techniques into management. This paper investigates the potential of participatory evidence-based techniques to involve local stakeholders and make decisions based on different "knowledge" sources within adaptive management programs. It focuses on the application of thresholds of potential concern (TPC) within strategic adaptive management (SAM) for facilitating inclusive decision-making. The study is based on the case of the Edward-Wakool (E-W) "Fish and Flows" SAM project in the Murray-Darling River Basin, Australia. We demonstrate the application of TPCs for improving collaborative decision-making within the E-W, associated with environmental watering requirements, and other natural resource management programs such as fish stocking. The development of TPCs in the E-W fish and flows SAM project helped improve stakeholder involvement and understanding of the system, and also the effectiveness of the implemented management interventions. TPCs ultimately helped inform environmental flow management activities. The TPC process complemented monitoring that was already occurring in the system and provided a mechanism for linking formal and informal knowledge to form explicit and measurable endpoints from objectives. The TPC process faced challenges due to the perceived reduction in scientific rigor within initial TPC development and use. However, TPCs must remain tangible to managers and other stakeholders, in order to aid in the implementation of adaptive management. Once accepted by stakeholders, over time TPCs should be reviewed and refined in order to increase their scientific rigor, as new information is generated.
Underwater Acoustic Propagation and Communications: A Coupled Research Program
2015-06-15
coding technique suitable for both SIMO and MIMO systems. 4. an adaptive OFDM modulation technique, whereby the transmitter acts in response to...timate based adaptation for SIMO and MIMO systems in a interactive turbo-equalization framework were developed and analyzed. MIMO and SISO
Advances in Adaptive Control Methods
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2009-01-01
This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.
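As a rough illustration of what a modification-type adaptive law looks like, the scalar sketch below simulates a first-order plant under model-reference adaptive control with a regressor-scaled damping (leakage-like) modification term. This is a simplified illustration of the general idea, not NASA's published Optimal Control Modification derivation; the plant parameters, adaptation gain gamma, and modification weight nu are assumed values chosen only so the toy simulation runs.

```python
# Toy scalar plant  x_dot = a*x + b*(u + theta_star*x)  with unknown theta_star.
a, b, theta_star = -1.0, 1.0, 2.0
a_m, b_m = -4.0, 4.0           # reference model  x_m_dot = a_m*x_m + b_m*r
gamma, nu = 50.0, 0.1          # adaptation gain and modification weight (assumed)
dt, T = 0.001, 10.0

x, x_m, theta = 0.0, 0.0, 0.0  # theta is the adaptive estimate of theta_star
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0               # square-wave reference
    e = x - x_m                                         # tracking error
    u = (a_m - a) / b * x + b_m / b * r - theta * x     # certainty-equivalence control
    # Gradient adaptation plus a regressor-scaled damping term that trades a
    # little tracking accuracy for robustness -- sketch only.
    theta_dot = gamma * (x * e - nu * x * x * theta / abs(a_m))
    x += dt * (a * x + b * (u + theta_star * x))
    x_m += dt * (a_m * x_m + b_m * r)
    theta += dt * theta_dot

print(f"final tracking error {x - x_m:+.4f}, theta estimate {theta:.3f} (true {theta_star})")
```

Setting nu to zero recovers the plain gradient adaptive law; the extra term damps the parameter estimate, which is the general mechanism by which modification-type laws buy robustness at a small cost in asymptotic tracking.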
Liu, Zongcheng; Dong, Xinmin; Xue, Jianping; Li, Hongbo; Chen, Yong
2016-09-01
This brief addresses the adaptive control problem for a class of pure-feedback systems whose nonaffine functions may be nondifferentiable. Without using the mean value theorem, the difficulty of the control design for pure-feedback systems is overcome by modeling the nonaffine functions appropriately. With the help of neural network approximators, an adaptive neural controller is developed by combining the dynamic surface control (DSC) and minimal learning parameter (MLP) techniques. The key features of our approach are that, first, the restrictive assumptions on the partial derivatives of the nonaffine functions are removed; second, the DSC technique is used to avoid "the explosion of complexity" in the backstepping design, and the number of adaptive parameters is reduced significantly using the MLP technique; and third, smooth robust compensators are employed to circumvent the influence of approximation errors and disturbances. Furthermore, it is proved that all the signals in the closed-loop system are semiglobally uniformly ultimately bounded. Finally, simulation results are provided to demonstrate the effectiveness of the designed method.
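The "explosion of complexity" mentioned in the abstract comes from repeatedly differentiating the virtual controls in backstepping; DSC sidesteps it by passing each virtual control through a first-order filter and using the filter state and its derivative in the next design step. Below is a minimal sketch of that filtering step; the time constant and the virtual-control signal are assumed, purely illustrative values.

```python
import numpy as np

def dsc_filter_step(z_f, alpha_virtual, tau, dt):
    """One Euler step of the DSC first-order filter  tau*z_f_dot + z_f = alpha_virtual.
    The filter state z_f and its derivative z_f_dot replace the analytic
    derivative of the virtual control in the next backstepping step."""
    z_f_dot = (alpha_virtual - z_f) / tau
    return z_f + dt * z_f_dot, z_f_dot

dt, tau = 0.001, 0.05                      # assumed step size and filter time constant
t = np.arange(0.0, 2.0, dt)
alpha = np.sin(3.0 * t) + 0.5 * t          # hypothetical virtual-control signal
z_f, derivs = alpha[0], []
for a_k in alpha:
    z_f, z_f_dot = dsc_filter_step(z_f, a_k, tau, dt)
    derivs.append(z_f_dot)

# The filter derivative approximates d(alpha)/dt = 3*cos(3t) + 0.5 without
# any symbolic differentiation of the virtual control.
print("filtered derivative at t = 1 s:", round(derivs[int(1.0 / dt)], 3))
print("analytic  derivative at t = 1 s:", round(3.0 * np.cos(3.0) + 0.5, 3))
```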
An airborne low SWaP-C UAS sense and avoid system
NASA Astrophysics Data System (ADS)
Wang, Zhonghai; Lin, Xingping; Xiang, Xingyu; Blasch, Erik; Pham, Khanh; Chen, Genshe; Shen, Dan; Jia, Bin; Wang, Gang
2016-05-01
This paper presents a low size, weight, and power - cost (SWaP-C) airborne sense and avoid (ABSAA) system, which is based on a linear frequency modulated continuous wave (LFMCW) radar and can be mounted on a small unmanned aircraft system (UAS). The system satisfies the resource constraints of group 2/3 UAS. To obtain the desired sense and avoid range, a narrowband frequency (or range) scanning technique is applied to reduce the receiver's noise floor and improve its sensitivity, and digital signal integration with the fast Fourier transform (FFT) is applied to enhance the signal-to-noise ratio (SNR). The gate length and chirp rate are intelligently adapted not only to accommodate different object distances, speeds, and approach angles, but also to optimize the detection speed, resolution, and coverage range. To minimize the radar blind zone, a higher chirp rate and a narrowband intermediate frequency (IF) filter are applied in the near region, with a single antenna signal used for target detection. The offset IF frequency between the transmitter (TX) and receiver (RX) is designed to mitigate TX leakage into the receiver, especially at close distances. Adaptive antenna gain and beamwidth are utilized for searching at far distances and for fast 360-degree coverage at middle range. To speed up the system update rate, lower chirp rates and wider IF and baseband filters are applied to obtain a larger range scanning step length outside the near region. To make the system work with a low-power transmitter, multiple-antenna beamforming, digital signal integration with FFT, and a much narrower receiver bandwidth are applied in the far region. The ABSAA system working range is 2 miles with a 1 W transmitter and single-antenna signal detection, and 5 miles when a 5 W transmitter and 4-antenna beamforming (BF) are applied.
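The range measurement in an LFMCW radar of this kind comes from the beat frequency between the transmitted and received chirps, f_b = 2RS/c for chirp slope S, which is why raising the chirp rate in the near region raises the beat frequency produced by a given range. The Python sketch below shows the basic FFT-based range estimate with integration over several chirps; the chirp parameters, noise level, and target range are illustrative assumptions, not the system's actual design values.

```python
import numpy as np

c = 3.0e8                          # speed of light (m/s)
B, T_chirp = 50e6, 1e-3            # assumed sweep bandwidth (Hz) and chirp duration (s)
S = B / T_chirp                    # chirp slope (Hz/s)
fs, R_true = 2e6, 1500.0           # assumed ADC rate (Hz) and target range (m)
n_chirps = 16                      # chirps integrated to raise the SNR

t = np.arange(0.0, T_chirp, 1.0 / fs)
f_beat = 2.0 * R_true * S / c      # beat frequency produced by the target

rng = np.random.default_rng(0)
spectrum = np.zeros(t.size // 2 + 1)
for _ in range(n_chirps):
    beat = np.cos(2 * np.pi * f_beat * t) + rng.normal(0.0, 2.0, t.size)  # noisy beat signal
    spectrum += np.abs(np.fft.rfft(beat))      # integrate FFT magnitudes across chirps

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_est = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
print(f"estimated range: {f_est * c / (2.0 * S):.1f} m (true {R_true:.0f} m)")
```

Integrating the spectra of many chirps pulls the target peak well above the noise floor even though each individual chirp is noisy, which is the point of the FFT-based signal integration described above.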
Papadiochou, Sofia; Pissiotis, Argirios L
2018-04-01
The comparative assessment of computer-aided design and computer-aided manufacturing (CAD-CAM) technology and other fabrication techniques pertaining to marginal adaptation should be documented. Limited evidence exists on the effect of restorative material on the performance of a CAD-CAM system relative to marginal adaptation. The purpose of this systematic review was to investigate whether the marginal adaptation of CAD-CAM single crowns, fixed dental prostheses, and implant-retained fixed dental prostheses or their infrastructures differs from that obtained by other fabrication techniques using a similar restorative material and whether it depends on the type of restorative material. An electronic search of English-language literature published between January 1, 2000, and June 30, 2016, was conducted of the Medline/PubMed database. Of the 55 included comparative studies, 28 compared CAD-CAM technology with conventional fabrication techniques, 12 contrasted CAD-CAM technology and copy milling, 4 compared CAD-CAM milling with direct metal laser sintering (DMLS), and 22 investigated the performance of a CAD-CAM system regarding marginal adaptation in restorations/infrastructures produced with different restorative materials. Most of the CAD-CAM restorations/infrastructures were within the clinically acceptable marginal discrepancy (MD) range. The performance of a CAD-CAM system relative to marginal adaptation is influenced by the restorative material. Compared with CAD-CAM, most of the heat-pressed lithium disilicate crowns displayed equal or smaller MD values. Slip-casting crowns exhibited similar or better marginal accuracy than those fabricated with CAD-CAM. Cobalt-chromium and titanium implant infrastructures produced using a CAD-CAM system elicited smaller MD values than zirconia. The majority of cobalt-chromium restorations/infrastructures produced by DMLS displayed better marginal accuracy than those fabricated with the casting technique. Compared with copy milling, the majority of zirconia restorations/infrastructures produced by CAD-CAM milling exhibited better marginal adaptation. No clear conclusions can be drawn about the superiority of CAD-CAM milling over the casting technique and DMLS regarding marginal adaptation.
Neural Networks for Flight Control
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1996-01-01
Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.
Conventional vs Biomimetic Approaches to the Exploration of Mars
NASA Astrophysics Data System (ADS)
Ellery, A.
It is not usual to speak of convention in planetary exploration missions, given the innovation required for such projects. The term conventional refers to the methodologies, tools and approaches typically adopted in engineering that are applied to such missions. Presented is a "conventional" Mars rover mission in which the author was involved - ExoMars - into which are interspersed references to examples where biomimetic approaches may yield superior capabilities. Biomimetics is a relatively young but active area of research which seeks to examine how biological systems solve the problem of survival in the natural environment. Biological organisms are autonomous entities that must survive in a hostile world, exhibiting both adaptivity and robustness. It is not then surprising that biomimetics is particularly useful when applied to robotic elements of a Mars exploration mission. I present a number of areas in which biomimetics may yield new solutions to the problem of Mars exploration - optic flow navigation, potential field navigation, genetically-evolved neuro-controllers, legged locomotion, electric motors implementing muscular behaviour, and a biomimetic drill based on the wood wasp ovipositor. Each of these techniques offers an alternative approach to conventional ones. However, the perceptive hurdles are likely to dwarf the technical hurdles in implementing many of these methods in the near future.
Colom, Roberto; Hua, Xue; Martínez, Kenia; Burgaleta, Miguel; Román, Francisco J.; Gunter, Jeffrey L.; Carmona, Susanna; Jaeggi, Susanne M.; Thompson, Paul M.
2016-01-01
Tensor-Based Morphometry (TBM) allows the automatic mapping of brain changes across time by building 3D deformation maps. This technique has been applied for tracking brain degeneration in Alzheimer's and other neurodegenerative diseases with high sensitivity and reliability. Here we applied TBM to quantify changes in brain structure after completing a challenging adaptive cognitive training program based on the n-back task. Twenty-six young women completed twenty-four training sessions across twelve weeks and showed, on average, large cognitive improvements. High-resolution MRI scans were obtained before and after training. The computed longitudinal deformation maps were analyzed to answer three questions: (a) Are there differential brain structural changes in the training group as compared with a matched control group? (b) Are these changes related to performance differences in the training program? (c) Are standardized changes in a set of psychological factors (fluid and crystallized intelligence, working memory, and attention control), measured before and after training, related to structural changes in the brain? Results showed (a) greater structural changes for the training group in the temporal lobe, (b) a negative correlation between these changes and performance across training sessions (the greater the structural change, the lower the cognitive performance improvements), and (c) negligible effects regarding the psychological factors measured before and after training. PMID:27477628
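The deformation maps referred to above are typically summarized voxel-wise through the determinant of the Jacobian of the deformation: values above 1 indicate local expansion and values below 1 indicate local contraction. A minimal 2D sketch of that computation is shown below; the displacement field is synthetic and purely illustrative, standing in for the registration output a TBM pipeline would produce.

```python
import numpy as np

def jacobian_det_map(disp_y, disp_x, spacing=1.0):
    """Voxel-wise volume-change map from a 2D displacement field:
    det(I + grad(u)); values > 1 mean local expansion, < 1 contraction."""
    duy_dy, duy_dx = np.gradient(disp_y, spacing)
    dux_dy, dux_dx = np.gradient(disp_x, spacing)
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

# Synthetic deformation: a uniform 5% isotropic expansion (illustrative only).
yy, xx = np.mgrid[-32:32, -32:32].astype(float)
jac = jacobian_det_map(0.05 * yy, 0.05 * xx)
print("mean local volume change:", round(jac.mean(), 4))   # ~1.05**2 = 1.1025
```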
User Interface Design in Medical Distributed Web Applications.
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
2016-01-01
User interfaces are important for facilitating easy learning of and operation with an IT application, especially in the medical world. An easy-to-use interface has to be simple and tailored to the user's needs and mode of operation, and the underlying technology is an important tool for accomplishing this. The present work aims to create a web interface using a specific technology (HTML table design combined with CSS3) to provide an optimized, responsive interface for a complex web application. In the first phase, the layout of the current icMED web medical application is analyzed and its structure is designed using specific tools on source files. In the second phase, a new graphical interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for the layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at a time. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared with the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, such as HTML table design, resulting in an interface that is simpler to learn and use and suitable for healthcare services.
Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan
2013-06-01
There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.
NASA Astrophysics Data System (ADS)
Hansen, Christian; Schlichting, Stefan; Zidowitz, Stephan; Köhn, Alexander; Hindennach, Milo; Kleemann, Markus; Peitgen, Heinz-Otto
2008-03-01
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are not visible in preoperative data and their existence may require changes to the resection strategy. We propose a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated ultrasound-system. A fast communication protocol enables our application to exchange crucial data with this navigation system during an intervention. A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within a complex 3D planning model including vascular systems, tumors, and organ surfaces. In case the ultrasound plane is located inside the liver, occlusion of the ultrasound plane by the planning model is an inevitable problem for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while perceiving context-relevant planning information. To improve orientation ability and distance perception, we include additional depth cues by applying new illustrative visualization algorithms. Preliminary evaluations confirm that in case of intraoperatively detected tumors a risk analysis adaptation is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion problems.
Genomic screening for blood-borne viruses in transfusion settings.
Allain, J P
2000-02-01
The residual risk of post-transfusion human immunodeficiency virus (HIV) infection is low, and slightly higher for hepatitis B virus (HBV) and hepatitis C virus (HCV); the main reason is viraemia during the window period preceding antibody or antigen detection by enzyme immunoassays. Immunosilently infected individuals and carriers of distant viral variants also play an unquantifiable role. Multiple techniques, e.g. reverse transcription-polymerase chain reaction (RT-PCR), PCR, ligase chain reaction, nucleic acid sequence-based amplification (NASBA), and transcription-mediated amplification (TMA), have been developed to amplify and detect viral genomes as single or multiplex assays. Equipment providing various degrees of automation has been adapted to these techniques. In applying nucleic acid amplification techniques (NAT) to blood screening, two main approaches have been advocated: plasma-pool and single-donation testing. Pool testing has the advantages of lower cost and readily available equipment, although it is prone to false-negative and false-positive reactions; the time required to identify infected donations is incompatible with blood component release and may lead to product waste. Single-unit testing, although appealing, is not yet fully automated and is potentially very costly unless a systematic multiplex approach is taken. Although technically feasible, NAT applied to the blood supply needs to be clinically evaluated and its cost efficiency assessed in the general public health context. However, pool NAT is currently being implemented in continental Europe and the USA.
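The cost argument for pool testing can be made concrete with the classic two-stage (Dorfman-style) arithmetic: pools of k donations are tested once, and only reactive pools are retested donation by donation. The sketch below computes the expected number of NAT tests per donation under that scheme; the prevalence and pool sizes are purely illustrative assumptions, not transfusion-service figures.

```python
def expected_tests_per_donation(prevalence: float, pool_size: int) -> float:
    """Two-stage pooled NAT: one test per pool of k donations, plus k individual
    retests whenever the pool is reactive (contains at least one positive)."""
    p_pool_reactive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_reactive

# Illustrative numbers only: a low-prevalence marker and three candidate pool sizes.
for k in (8, 16, 24):
    print(f"pool size {k:2d}: {expected_tests_per_donation(1e-4, k):.4f} tests per donation")
```

At low prevalence the expected workload per donation falls far below one test, which is the cost advantage of pooling; the trade-off, as noted above, is dilution-related sensitivity loss and the delay in resolving which donation in a reactive pool is infected.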
Measurement of dissolved oxygen during red wines tank aging with chips and micro-oxygenation.
Nevares, I; del Alamo, M
2008-07-21
Nowadays, micro-oxygenation is a very important technique used in aging wines in order to improve their characteristics. Wine tank-aging techniques involve the use of small doses of oxygen and the addition of pieces of oak wood to the wine. Considering the low dissolved oxygen (DO) levels used by the micro-oxygenation technique, it is necessary to choose the appropriate measurement principle to apply the precise oxygen dosage to the wine at any time, in order to ensure its correct assimilation. This knowledge will allow the oenologist to control and run the wine aging correctly. This work is a thorough review of the main DO measurement technologies applied to oenology. It describes the strengths and weaknesses of each of them and compares their performance in wine measurement. Both the traditional electrochemical probes and the newest photoluminescence-based probes have been used, and these probes, adapted to the study of red wine aging, are then compared. This paper also details the first results on the evolution of dissolved oxygen content in red wines during traditional and alternative tank aging. Samples were treated by three different aging systems: oak barrels, stainless-steel tanks with small oak wood pieces (chips), and stainless-steel tanks with bigger oak pieces (staves) with low micro-oxygenation levels. French and American oak barrels manufactured by the same cooperage were used.
Jaeger, Daniel; Pilger, Christian; Hachmeister, Henning; Oberländer, Elina; Wördenweber, Robin; Wichmann, Julian; Mussgnug, Jan H.; Huser, Thomas; Kruse, Olaf
2016-01-01
Oleaginous photosynthetic microalgae hold great promise as non-food feedstocks for the sustainable production of bio-commodities. The algal lipid quality can be analysed by Raman micro-spectroscopy, and the lipid content can be imaged in vivo in a label-free and non-destructive manner by coherent anti-Stokes Raman scattering (CARS) microscopy. In this study, both techniques were applied to the oleaginous microalga Monoraphidium neglectum, a biotechnologically promising microalga resistant to commonly applied lipid staining techniques. The lipid-specific CARS signal was successfully separated from the interfering two-photon excited fluorescence of chlorophyll and, for the first time, lipid droplet formation during nitrogen starvation could be directly analysed. We found that the neutral lipid content deduced from CARS image analysis strongly correlated with the neutral lipid content measured gravimetrically and, furthermore, that the relative degree of unsaturation of fatty acids stored in lipid droplets remained similar. Interestingly, the lipid profile during cellular adaption to nitrogen starvation showed a two-phase characteristic, with initial fatty acid recycling and subsequent de novo lipid synthesis. This work demonstrates the potential of quantitative CARS microscopy as a label-free lipid analysis technique for any microalgal species, which is highly relevant for future biotechnological applications and for elucidating the process of microalgal lipid accumulation. PMID:27767024
Axial range of conjugate adaptive optics in two-photon microscopy
Paudel, Hari P.; Taranto, John; Mertz, Jerome; Bifano, Thomas
2015-01-01
We describe an adaptive optics technique for two-photon microscopy in which the deformable mirror used for aberration compensation is positioned in a plane conjugate to the plane of the aberration. We demonstrate in a proof-of-principle experiment that this technique yields a large field of view advantage in comparison to standard pupil-conjugate adaptive optics. Further, we show that the extended field of view in conjugate AO is maintained over a relatively large axial translation of the deformable mirror with respect to the conjugate plane. We conclude with a discussion of limitations and prospects for the conjugate AO technique in two-photon biological microscopy. PMID:26367938