Visual Perception Based Rate Control Algorithm for HEVC
NASA Astrophysics Data System (ADS)
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology that alleviates the contradiction between video quality and limited encoding resources during video communication. However, the HEVC rate control benchmark algorithm ignores subjective visual perception: for key focus regions, LCU-level bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is aimed at improving subjective video quality across various video applications.
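The abstract does not give the exact weighting model, so the following is only a rough sketch, in Python, of how a perceptual weight derived from per-LCU luminance and motion cues could scale an R-lambda style bit allocation and the corresponding λ/QP values; the cue normalization, the weight formula and the α/β constants are assumptions, while the QP-λ relation is the commonly cited HM one.

```python
import numpy as np

def perceptual_bit_allocation(frame_bits, luminance, motion, base_lambda,
                              alpha=0.5, beta=0.5):
    """Distribute a frame-level bit budget over LCUs using a perceptual weight.

    luminance, motion: 1-D arrays with one saliency cue per LCU (hypothetical).
    Salient (bright / high-motion) regions receive a larger share of the bits.
    """
    lum = (luminance - luminance.min()) / (np.ptp(luminance) + 1e-9)
    mot = (motion - motion.min()) / (np.ptp(motion) + 1e-9)
    weight = 1.0 + alpha * lum + beta * mot      # perceptual weight per LCU
    weight /= weight.sum()                       # normalize to a distribution

    lcu_bits = frame_bits * weight               # weighted bit allocation
    # More bits -> smaller lambda; QP then follows the usual HM relation
    # QP = 4.2005*ln(lambda) + 13.7122, clipped to the valid range.
    lcu_lambda = base_lambda * (weight.mean() / weight)
    lcu_qp = np.clip(4.2005 * np.log(lcu_lambda) + 13.7122, 0, 51).round()
    return lcu_bits, lcu_lambda, lcu_qp
```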
A semi-active suspension control algorithm for vehicle comprehensive vertical dynamics performance
NASA Astrophysics Data System (ADS)
Nie, Shida; Zhuang, Ye; Liu, Weiping; Chen, Fan
2017-08-01
Comprehensive vehicle performance, including ride quality and road-holding, is of great practical value. Many up-to-date semi-active control algorithms improve vehicle dynamics performance effectively. However, it is hard to improve comprehensive performance because of the conflict between ride quality and road-holding around the second-order resonance. Hence, a new control algorithm is proposed to achieve a good trade-off between ride quality and road-holding. In this paper, the properties of the invariant points are analysed, which gives insight into the performance conflict around the second-order resonance. Based on this analysis, a new control algorithm is proposed. The algorithm employs a novel frequency selector to balance suspension ride and handling performance by adopting a medium damping level around the second-order resonance. The results of this study show that the proposed control algorithm can improve ride quality and suspension working space by up to 18.3% and 8.2%, respectively, with little loss of road-holding compared to the passive suspension. Consequently, the comprehensive performance can be improved by 6.6%. Hence, the proposed algorithm has great potential to be implemented in practice.
NASA Technical Reports Server (NTRS)
Robinson, Michael; Steiner, Matthias; Wolff, David B.; Ferrier, Brad S.; Kessinger, Cathy; Einaudi, Franco (Technical Monitor)
2000-01-01
The primary function of the TRMM Ground Validation (GV) Program is to create GV rainfall products that provide basic validation of satellite-derived precipitation measurements for select primary sites. A fundamental and extremely important step in creating high-quality GV products is radar data quality control. Quality control (QC) processing of TRMM GV radar data is based on some automated procedures, but the current QC algorithm is not fully operational and requires significant human interaction to assure satisfactory results. Moreover, the TRMM GV QC algorithm, even with continuous manual tuning, still cannot completely remove all types of spurious echoes. In an attempt to improve the current operational radar data QC procedures of the TRMM GV effort, an intercomparison of several QC algorithms has been conducted. This presentation will demonstrate how various radar data QC algorithms affect accumulated radar rainfall products. In all, six different QC algorithms will be applied to two months of WSR-88D radar data from Melbourne, Florida. Daily, five-day, and monthly accumulated radar rainfall maps will be produced for each quality-controlled data set. The QC algorithms will be evaluated and compared based on their ability to remove spurious echoes without removing significant precipitation. Strengths and weaknesses of each algorithm will be assessed based on their ability to mitigate both erroneous additions and reductions in rainfall accumulation from spurious echo contamination and true precipitation removal, respectively. Contamination from individual spurious echo categories will be quantified to further diagnose the abilities of each radar QC algorithm. Finally, a cost-benefit analysis will be conducted to determine if a more automated QC algorithm is a viable alternative to the current, labor-intensive QC algorithm employed by TRMM GV.
Quality control algorithms for rainfall measurements
NASA Astrophysics Data System (ADS)
Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs
2005-09-01
One of the basic requirements for scientific use of rain data from raingauges and from ground and space radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.) if the achievable data quality could be improved. This depends on the data quality delivered by the measuring devices and on the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review and a literature pool have been produced. The diverse algorithms have been evaluated against the VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool has been established in which the software is collected. A large part of the work presented here has been carried out within the scope of the EU project VOLTAIRE (Validation of multisensors precipitation fields and numerical modeling in Mediterranean test sites).
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2004-01-01
TRMM has been an eminently successful mission from an engineering standpoint but even more so from a science standpoint. An important part of this science success has been the careful quality control of the TRMM standard products. This paper will present the quality monitoring efforts that the TRMM Science Data and Information System (TSDIS) conducts on a routine basis. The paper will detail parameter trending, geolocation quality control and the procedures that support the preparation of the next versions of the algorithms used for reprocessing.
An analysis of a candidate control algorithm for a ride quality augmentation system
NASA Technical Reports Server (NTRS)
Suikat, Reiner; Donaldson, Kent; Downing, David R.
1987-01-01
This paper presents a detailed analysis of a candidate algorithm for a ride quality augmentation system. The algorithm consists of a full-state feedback control law based on optimal control output weighting, estimators for angle of attack and sideslip, and a maneuvering algorithm. The control law is shown to perform well by both frequency and time domain analysis. The rms vertical acceleration is reduced by about 40 percent over the whole mission flight envelope. The estimators for the angle of attack and sideslip avoid the often inaccurate or costly direct measurement of those angles. The maneuvering algorithm will allow the augmented airplane to respond to pilot inputs. The design characteristics and performance are documented by the closed-loop eigenvalues; rms levels of vertical, lateral, and longitudinal acceleration; and representative time histories and frequency response.
A multiple objective optimization approach to quality control
NASA Technical Reports Server (NTRS)
Seaman, Christopher Michael
1991-01-01
The use of product quality as the performance criteria for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. If no tradeoff has to be made to move to a new operating point, then the process is not operating at an optimal point in any sense. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
PSO Algorithm for an Optimal Power Controller in a Microgrid
NASA Astrophysics Data System (ADS)
Al-Saedi, W.; Lachowicz, S.; Habibi, D.; Bass, O.
2017-07-01
This paper presents a Particle Swarm Optimization (PSO) algorithm to improve the quality of the power supply in a microgrid. The algorithm provides a real-time self-tuning method used in a power controller for an inverter-based Distributed Generation (DG) unit. In such a system, voltage and frequency are the main control objectives, particularly when the microgrid is islanded or during load changes. In this work, the PSO algorithm is implemented to find the optimal controller parameters that satisfy the control objectives. The results show the high performance of the applied PSO algorithm in regulating the microgrid voltage and frequency.
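As an illustration of the tuning loop described above, here is a generic PSO sketch in Python that searches controller gains minimizing a user-supplied cost (standing in for the simulated voltage/frequency error); the cost function, gain bounds and PSO constants are placeholders, not the paper's values.

```python
import numpy as np

def pso_tune(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle Swarm Optimization over a vector of controller parameters."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    x = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, bounds[:, 0], bounds[:, 1])              # position update
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Toy usage: tune PI gains (kp, ki) against a stand-in quadratic cost.
best_gains, best_cost = pso_tune(lambda p: (p[0] - 2.0)**2 + (p[1] - 0.5)**2,
                                 bounds=[[0, 10], [0, 5]])
```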
Real-time motion-based H.263+ frame rate control
NASA Astrophysics Data System (ADS)
Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay
1998-12-01
Most existing H.263+ rate control algorithms, e.g. the one adopted in the near-term test model (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports a variable bit rate (VBR) channel through joint adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
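A minimal sketch of the frame-rate side of such a scheme: the encoding frame rate is chosen from the average motion activity inside a sliding window and then capped by the currently available channel rate. The thresholds, candidate rates, and bits-per-frame estimate are illustrative assumptions, not values from the paper.

```python
from collections import deque

class FrameRateController:
    """Choose an encoding frame rate from recent motion activity and channel rate.

    Motion activity could be the mean macroblock motion-vector magnitude over
    the last `window` frames; thresholds and candidate rates are illustrative.
    """
    def __init__(self, window=10, rates=(7.5, 15.0, 30.0)):
        self.motion = deque(maxlen=window)
        self.rates = rates

    def update(self, motion_activity, channel_kbps):
        self.motion.append(motion_activity)
        avg_motion = sum(self.motion) / len(self.motion)
        # High motion favours temporal resolution (higher frame rate);
        # low motion lets bits go to spatial quality at a lower frame rate.
        if avg_motion > 8.0:
            fps = self.rates[2]
        elif avg_motion > 3.0:
            fps = self.rates[1]
        else:
            fps = self.rates[0]
        # Cap the rate when the channel cannot sustain it, assuming roughly
        # 24 kbit per frame at medium quality (an arbitrary working figure).
        return max(self.rates[0], min(fps, channel_kbps / 24.0))
```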
Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow
NASA Astrophysics Data System (ADS)
Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun
2018-04-01
The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can accurately estimate the transmission quality of a link when a connection request arrives, but it also causes problems: the probe flow data introduced by the P-IA-RWA algorithm results in competition for wavelength resources. In order to reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the time for which network resources are held by the probe flow. It segments the candidate path suitably for data transmission, and the transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment. The transmission quality of the data can also be monitored, so that appropriate action can be taken to avoid unnecessary probe flows. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It provides a better solution to the competition for resources between the probe flow and the main data to be transferred, and it is more suitable for scheduling control in large-scale networks.
Observability-Based Guidance and Sensor Placement
NASA Astrophysics Data System (ADS)
Hinson, Brian T.
Control system performance is highly dependent on the quality of sensor information available. In a growing number of applications, however, the control task must be accomplished with limited sensing capabilities. This thesis addresses these types of problems from a control-theoretic point-of-view, leveraging system nonlinearities to improve sensing performance. Using measures of observability as an information quality metric, guidance trajectories and sensor distributions are designed to improve the quality of sensor information. An observability-based sensor placement algorithm is developed to compute optimal sensor configurations for a general nonlinear system. The algorithm utilizes a simulation of the nonlinear system as the source of input data, and convex optimization provides a scalable solution method. The sensor placement algorithm is applied to a study of gyroscopic sensing in insect wings. The sensor placement algorithm reveals information-rich areas on flexible insect wings, and a comparison to biological data suggests that insect wings are capable of acting as gyroscopic sensors. An observability-based guidance framework is developed for robotic navigation with limited inertial sensing. Guidance trajectories and algorithms are developed for range-only and bearing-only navigation that improve navigation accuracy. Simulations and experiments with an underwater vehicle demonstrate that the observability measure allows tuning of the navigation uncertainty.
Information Hiding: an Annotated Bibliography
1999-04-13
parameters needed for reconstruction are enciphered using DES. The encrypted image is hidden in a cover image. [153] 074115, ‘Watermarking algorithm ...authors present a block-based watermarking algorithm for digital images. The D.C.T. of the block is increased by a certain value. Quality control is...includes evaluation of the watermark robustness and the subjective visual image quality. Two algorithms use the frequency domain while the two others use
IDMA-Based MAC Protocol for Satellite Networks with Consideration on Channel Quality
2014-01-01
In order to overcome the shortcomings of existing medium access control (MAC) protocols based on TDMA or CDMA in satellite networks, the interleave division multiple access (IDMA) technique is introduced into satellite communication networks. A novel wide-band IDMA MAC protocol based on channel quality is proposed in this paper, consisting of a dynamic power allocation algorithm, a rate adaptation algorithm, and a call admission control (CAC) scheme. Firstly, the power allocation algorithm, combining the technique of IDMA SINR evolution and channel quality prediction, is developed to guarantee high power efficiency even in poor channel conditions. Secondly, an effective rate adaptation algorithm, based on accurate per-timeslot channel information and on rate degradation, is realized. Moreover, based on channel quality prediction, the CAC scheme, combining the new power allocation algorithm, rate scheduling, and buffering strategies, is proposed for the emerging IDMA systems; it can support a variety of traffic types and satisfy quality of service (QoS) requirements corresponding to different priority levels. Simulation results show that the new wide-band IDMA MAC protocol can make an accurate estimation of available resources considering the effect of multiuser detection (MUD) and the QoS requirements of multimedia traffic, leading to low outage probability as well as high overall system throughput. PMID:25126592
On-Line Point Positioning with Single Frame Camera Data
1992-03-15
tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data The project has been defined as "On-line point...development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation...Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile
Projection pursuit water quality evaluation model based on chicken swarm algorithm
NASA Astrophysics Data System (ADS)
Hu, Zhe
2018-03-01
In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility of evaluation results across individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function that reflects the water quality condition is constructed; the chicken swarm algorithm (CSA) is introduced to optimize this function and find its best projection direction, and the corresponding best projection value is obtained to realize the water quality evaluation. The comparison between this method and other methods shows that it is reasonable and feasible and can provide a decision-making basis for water pollution control in the basin.
NASA Astrophysics Data System (ADS)
Nayar, Priya; Singh, Bhim; Mishra, Sukumar
2017-08-01
An artificial intelligence based control algorithm is used to solve power quality problems in a standalone system based on a diesel engine driven synchronous generator with an automatic voltage regulator and governor. A voltage source converter integrated with a battery energy storage system is employed to mitigate the power quality problems. An adaptive neural network based signed regressor control algorithm is used to estimate the fundamental component of the load currents for control of the standalone system, with load leveling as an integral feature. The developed model of the system performs accurately under varying load conditions and provides a good dynamic response to step changes in load. Real-time performance is achieved using MATLAB along with the Simulink/SimPowerSystems toolboxes, and the results adhere to the IEEE-519 standard for power quality enhancement.
[Application of genetic algorithm in blending technology for extractions of Cortex Fraxini].
Yang, Ming; Zhou, Yinmin; Chen, Jialei; Yu, Minying; Shi, Xiufeng; Gu, Xijun
2009-10-01
To explore the feasibility of a genetic algorithm (GA) for multiple-objective blending of Cortex Fraxini extracts. Taking as the optimization objective a combination of fingerprint similarity and the root-mean-square error of several key constituents, a new multiple-objective optimization model for blending 10 batches of Cortex Fraxini extracts was built, and the blending coefficients were obtained by the genetic algorithm. The quality of the 10 blended batches was evaluated with fingerprint similarity and root-mean-square error as indexes. The quality of the 10 batches of Cortex Fraxini extracts was clearly improved after blending: compared with the fingerprint of the control sample, the similarity increased while the degree of variation decreased, and the relative deviation of the key constituents was less than 10%. It is concluded that the genetic algorithm works well for multiple-objective blending of Cortex Fraxini extracts. This method can serve as a reference for controlling the quality of Cortex Fraxini extracts, and genetic algorithms are advisable for blending extracts of Chinese medicines.
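A small GA sketch of the blending idea: search non-negative blending coefficients (summing to 1) for the extract batches so that the blended chromatographic fingerprint matches a reference while the RMSE of key constituents stays low. The fitness weights, GA settings and data layout are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def ga_blend(fingerprints, reference, key_idx, key_target,
             pop=40, gens=200, mut=0.1, w_sim=1.0, w_rmse=1.0, seed=0):
    """Genetic search for blending coefficients of extract batches.

    fingerprints: (n_batches, n_peaks) fingerprints; reference: (n_peaks,).
    key_idx / key_target: indices and target contents of key constituents.
    Fitness = w_sim * cosine similarity - w_rmse * RMSE of key constituents.
    """
    rng = np.random.default_rng(seed)
    n = fingerprints.shape[0]

    def fitness(c):
        blend = c @ fingerprints
        sim = blend @ reference / (np.linalg.norm(blend) * np.linalg.norm(reference) + 1e-12)
        rmse = np.sqrt(np.mean((blend[key_idx] - key_target) ** 2))
        return w_sim * sim - w_rmse * rmse

    popn = rng.dirichlet(np.ones(n), size=pop)            # random valid blends
    for _ in range(gens):
        fit = np.array([fitness(c) for c in popn])
        parents = popn[np.argsort(fit)[-pop // 2:]]       # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.clip(0.5 * (a + b) + mut * rng.normal(size=n), 0, None)
            children.append(child / (child.sum() + 1e-12))   # keep coefficients summing to 1
        popn = np.vstack([parents, np.array(children)])
    fit = np.array([fitness(c) for c in popn])
    return popn[fit.argmax()]
```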
Job-shop scheduling applied to computer vision
NASA Astrophysics Data System (ADS)
Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David
1997-09-01
This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This condition is very important in computer vision applications in which the time to finish the whole process is particularly critical: quality control in industrial inspection, real-time computer vision, and guided robots. The scheduling algorithm is based on two matrices obtained from the precedence relationships between tasks and on the data derived from them. The developed scheduling algorithm has been tested in a quality control application using computer vision, and the results obtained have been satisfactory when applying different image processing algorithms.
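The two-matrix procedure itself is not reproduced in the abstract; the sketch below is a generic greedy list-scheduling heuristic for the same setting (n precedence-constrained tasks on m parallel processors, reducing makespan and idle time), with a toy inspection pipeline as the usage example. It assumes an acyclic precedence graph and is not the paper's exact algorithm.

```python
def list_schedule(durations, preds, m):
    """Greedy list scheduling of tasks onto m identical processors.

    durations: {task: processing time}
    preds:     {task: set of predecessor tasks} (precedence relations)
    Returns {task: (processor, start, finish)} and the makespan.
    """
    finish = {}
    proc_free = [0.0] * m
    schedule = {}
    remaining = set(durations)
    while remaining:
        # Tasks whose predecessors have all finished are ready to run.
        ready = [t for t in remaining if preds.get(t, set()) <= finish.keys()]
        # Schedule ready tasks in order of earliest possible start (less idle time).
        for t in sorted(ready, key=lambda u: max([finish[p] for p in preds.get(u, set())], default=0.0)):
            p = min(range(m), key=lambda i: proc_free[i])          # earliest-free processor
            start = max(proc_free[p], max([finish[q] for q in preds.get(t, set())], default=0.0))
            end = start + durations[t]
            schedule[t] = (p, start, end)
            proc_free[p], finish[t] = end, end
            remaining.remove(t)
    return schedule, max(finish.values())

# Tiny example: a capture -> (filter, segment) -> measure inspection pipeline on 2 processors.
sched, makespan = list_schedule(
    {"capture": 2, "filter": 3, "segment": 4, "measure": 2},
    {"filter": {"capture"}, "segment": {"capture"}, "measure": {"filter", "segment"}},
    m=2)
```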
The application of immune genetic algorithm in main steam temperature of PID control of BP network
NASA Astrophysics Data System (ADS)
Li, Han; Zhen-yu, Zhang
In order to overcome the uncertainty, large delay, large inertia and nonlinearity of the main steam temperature plant in a power station, a neural network intelligent PID control system based on an immune genetic algorithm and a BP neural network is designed. The global search ability and good convergence of the immune genetic algorithm are used to optimize the weights of the neural network, while the PID parameters are adjusted using the BP network. The simulation results show that the system is superior to a conventional PID control system in control quality and robustness.
Jakob, J; Marenda, D; Sold, M; Schlüter, M; Post, S; Kienle, P
2014-08-01
Complications after cholecystectomy are continuously documented in a nationwide database in Germany. Recent studies demonstrated a lack of reliability of these data. The aim of the study was to evaluate the impact of a control algorithm on documentation quality and the use of routine diagnosis coding as an additional validation instrument. Completeness and correctness of the documentation of complications after cholecystectomy was compared over a time interval of 12 months before and after implementation of an algorithm for faster and more accurate documentation. Furthermore, the coding of all diagnoses was screened to identify intraoperative and postoperative complications. The sensitivity of the documentation for complications improved from 46 % to 70 % (p = 0.05, specificity 98 % in both time intervals). A prolonged time interval of more than 6 weeks between patient discharge and documentation was associated with inferior data quality (incorrect documentation in 1.5 % versus 15 %, p < 0.05). The rate of case documentation within the 6 weeks after hospital discharge was clearly improved after implementation of the control algorithm. Sensitivity and specificity of screening for complications by evaluating routine diagnoses coding were 70 % and 85 %, respectively. The quality of documentation was improved by implementation of a simple memory algorithm.
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality
NASA Astrophysics Data System (ADS)
Liu, Li; Zhuang, Xinhua
2009-01-01
It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newer H.264/AVC encoder because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
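A sketch of the frame-level bit-targeting step: given residual statistics from the constant-QP first stage, a closed-form R(D) approximation is inverted to obtain the bits needed to hold distortion near a target. The exponential model and its constants below are illustrative stand-ins, not the paper's exact closed form.

```python
import math

def frame_bits_for_target_distortion(residual_mad, target_distortion, a=1.0, b=0.12):
    """Closed-form bit target from a simple exponential R-D model.

    Assumes D(R) = a * MAD * exp(-b * R), so the bits needed to reach a target
    distortion are R = ln(a * MAD / D_target) / b. The constants a and b would
    be fit from previously coded frames; the values here are illustrative.
    """
    ratio = max(a * residual_mad / target_distortion, 1.0)
    return math.log(ratio) / b

# Stage 1 (constant-QP RDO pass) yields the frame's mean absolute prediction
# residual (MAD); stage 2 encodes the frame against this bit budget using
# basic-unit rate control.
bits = frame_bits_for_target_distortion(residual_mad=6.5, target_distortion=4.0)
```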
NASA Astrophysics Data System (ADS)
Keene, Samuel T.; Cerussi, Albert E.; Warren, Robert V.; Hill, Brian; Roblyer, Darren; Leproux, Anaïs; Durkin, Amanda F.; O'Sullivan, Thomas D.; Haghany, Hosain; Mantulin, William W.; Tromberg, Bruce J.
2013-03-01
Instrument equivalence and quality control are critical elements of multi-center clinical trials. We currently have five identical Diffuse Optical Spectroscopic Imaging (DOSI) instruments enrolled in the American College of Radiology Imaging Network (ACRIN, #6691) trial located at five academic clinical research sites in the US. The goal of the study is to predict the response of breast tumors to neoadjuvant chemotherapy in 60 patients. In order to reliably compare DOSI measurements across different instruments, operators and sites, we must be confident that the data quality is comparable. We require objective and reliable methods for identifying, correcting, and rejecting low quality data. To achieve this goal, we developed and tested an automated quality control algorithm that rejects data points below the instrument noise floor, improves tissue optical property recovery, and outputs a detailed data quality report. Using a new protocol for obtaining dark-noise data, we applied the algorithm to ACRIN patient data and successfully improved the quality of recovered physiological data in some cases.
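A minimal sketch, assuming a simple amplitude-versus-dark-noise representation of the DOSI data, of rejecting points that sit too close to the instrument noise floor and summarizing the decision in a small report; the 6 dB margin is an arbitrary illustrative threshold, not the algorithm's actual criterion.

```python
import numpy as np

def noise_floor_qc(amplitude, dark_noise, margin_db=6.0):
    """Flag measurement points whose amplitude is too close to the dark-noise floor.

    amplitude, dark_noise: arrays over wavelengths/frequencies (same shape).
    margin_db: required headroom above the noise floor (illustrative value).
    Returns a boolean keep-mask and a short data-quality report.
    """
    snr_db = 20.0 * np.log10(np.maximum(amplitude, 1e-30) /
                             np.maximum(dark_noise, 1e-30))
    keep = snr_db >= margin_db
    report = {
        "n_total": int(amplitude.size),
        "n_rejected": int((~keep).sum()),
        "fraction_rejected": float((~keep).mean()),
        "min_snr_db": float(snr_db.min()),
    }
    return keep, report
```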
An Adaptive Buddy Check for Observational Quality Control
NASA Technical Reports Server (NTRS)
Dee, Dick P.; Rukhovets, Leonid; Todling, Ricardo; DaSilva, Arlindo M.; Larson, Jay W.; Einaudi, Franco (Technical Monitor)
2000-01-01
An adaptive buddy check algorithm is presented that adjusts tolerances for outlier observations based on the variability of surrounding data. The algorithm derives from a statistical hypothesis test combined with maximum-likelihood covariance estimation. Its stability is shown to depend on the initial identification of outliers by a simple background check. The adaptive feature ensures that the final quality control decisions are not very sensitive to prescribed statistics of first-guess and observation errors, nor to other approximations introduced into the algorithm. The implementation of the algorithm in a global atmospheric data assimilation system is described. Its performance is contrasted with that of a non-adaptive buddy check for the surface analysis of an extreme storm that took place in Europe on 27 December 1999. The adaptive algorithm allowed the inclusion of many important observations that differed greatly from the first guess and that would have been excluded on the basis of prescribed statistics. The analysis of the storm development was much improved as a result of these additional observations.
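A simplified sketch of the buddy-check idea: each observation's background departure is compared against nearby "buddies", and the rejection tolerance is scaled by the local variability of those buddy departures, which is the adaptive element. The search radius, tolerance factor and the absence of any covariance model are placeholders, not the operational scheme described in the paper.

```python
import numpy as np

def adaptive_buddy_check(obs, background, coords, radius=300.0,
                         base_tol=3.0, min_buddies=3):
    """Flag observations whose background departure disagrees with nearby buddies.

    obs, background: 1-D arrays of observed and first-guess values.
    coords: (n, 2) positions in km. The tolerance scales with the local
    standard deviation of buddy departures, which is the 'adaptive' part.
    """
    departure = obs - background
    n = len(obs)
    flagged = np.zeros(n, dtype=bool)
    for i in range(n):
        dist = np.hypot(*(coords - coords[i]).T)
        buddies = np.where((dist > 0) & (dist <= radius))[0]
        if len(buddies) < min_buddies:
            continue                              # not enough evidence to reject
        local_mean = departure[buddies].mean()
        local_std = departure[buddies].std(ddof=1)
        tol = base_tol * max(local_std, 1e-6)     # adaptive tolerance
        flagged[i] = abs(departure[i] - local_mean) > tol
    return flagged
```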
Optimizing construction quality management of pavements using mechanistic performance analysis.
DOT National Transportation Integrated Search
2004-08-01
This report presents a statistical-based algorithm that was developed to reconcile the results from several pavement performance models used in the state of practice with systematic process control techniques. These algorithms identify project-specif...
The research of automatic speed control algorithm based on Green CBTC
NASA Astrophysics Data System (ADS)
Lin, Ying; Xiong, Hui; Wang, Xiaoliang; Wu, Youyou; Zhang, Chuanqi
2017-06-01
Automatic speed control is one of the core technologies of train operation control systems. It is a typical multi-objective optimization control problem, covering train speed control for timing, comfort, energy saving and precise parking. At present, automatic train speed control technology is widely used in metro and inter-city railways. It has been found that automatic speed control can effectively reduce the driver's workload and improve operation quality. However, the currently used algorithms perform poorly in terms of energy saving, sometimes even worse than manual driving. In order to solve this problem, this paper proposes an automatic speed control algorithm based on the Green CBTC system. Building on the Green CBTC system, the algorithm adjusts the operating state of the train to improve the utilization rate of regenerative braking feedback energy while ensuring the timing, comfort and precise parking targets. For this reason, the energy consumption of the Green CBTC system is lower than that of a traditional CBTC system. The simulation results show that the algorithm based on the Green CBTC system can effectively reduce energy consumption by improving the utilization rate of regenerative braking feedback energy.
Adaptive controller for a strength testbed for aircraft structures
NASA Astrophysics Data System (ADS)
Laperdin, A. I.; Yurkevich, V. D.
2017-07-01
The problem of control system design for a strength testbed of aircraft structures is considered. A method for calculating the parameters of a proportional-integral controller (control algorithm) using the time-scale separation method for the testbed taking into account the dead time effect in the control loop is presented. An adaptive control algorithm structure is proposed which limits the amplitude of high-frequency oscillations in the control system with a change in the direction of motion of the rod of the hydraulic cylinders and provides the desired accuracy and quality of transients at all stages of structural loading history. The results of tests of the developed control system with the adaptive control algorithm on an experimental strength testbed for aircraft structures are given.
Minimum airflow reset of single-duct VAV terminal boxes
NASA Astrophysics Data System (ADS)
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures. The vertical difference in the room air temperature was lower than the comfort value. Measurements of room CO2 levels indicate that when the minimum airflow set point was reduced it did not adversely affect the indoor air quality. According to the measured energy results, optimal control algorithms give a lower rate of reheating energy consumption than conventional controls.
Automated pharmaceutical tablet coating layer evaluation of optical coherence tomography images
NASA Astrophysics Data System (ADS)
Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Khinast, Johannes G.; Buchsbaum, Andreas
2015-03-01
Film coating of pharmaceutical tablets is often applied to influence the drug release behaviour. Coating characteristics such as thickness and uniformity are critical quality parameters, which need to be precisely controlled. Optical coherence tomography (OCT) shows high potential not only for off-line quality control of film-coated tablets but also for in-line monitoring of coating processes. However, an in-line quality control tool must be able to determine coating thickness automatically and in real time. This study proposes an automatic thickness evaluation algorithm for bi-convex tablets, which provides about 1000 thickness measurements within 1 s. Besides the segmentation of the coating layer, optical distortions due to refraction of the beam at the air/coating interface are corrected. Moreover, during in-line monitoring the tablets might be in oblique orientation, which needs to be considered in the algorithm design. Experiments were conducted in which the tablet was rotated to specified angles. Manual and automatic thickness measurements were compared for varying coating thicknesses, angles of rotation, and beam displacements (i.e. lateral displacement between successive depth scans). The automatic thickness determination algorithm provides highly accurate results up to an angle of rotation of 30°. The computation time was reduced to 0.53 s for 700 thickness measurements by introducing feasibility constraints in the algorithm.
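A sketch of the geometric correction described above: the optical path difference between the air/coating and coating/tablet interfaces is converted to a geometric thickness using the coating's refractive index, and oblique incidence is handled via Snell's law. The refractive index and angle in the example are made up, not calibrated values, and the real algorithm also performs segmentation and orientation estimation not shown here.

```python
import math

def coating_thickness(optical_path_diff_um, n_coating=1.5, incidence_deg=0.0):
    """Geometric coating thickness from an OCT optical path difference.

    optical_path_diff_um: optical distance between the air/coating and
    coating/tablet interfaces along the (possibly oblique) beam.
    The axial optical path scales with the refractive index, and for an
    oblique beam the depth along the surface normal is shortened by the
    refracted angle (Snell's law). Values are illustrative, not calibrated.
    """
    theta_i = math.radians(incidence_deg)
    theta_t = math.asin(math.sin(theta_i) / n_coating)    # refraction into coating
    path_in_coating = optical_path_diff_um / n_coating    # optical -> geometric path
    return path_in_coating * math.cos(theta_t)            # project onto surface normal

# Example: 150 um optical path at 20 degrees off-normal incidence.
t = coating_thickness(150.0, n_coating=1.5, incidence_deg=20.0)
```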
Flow-rate control for managing communications in tracking and surveillance networks
NASA Astrophysics Data System (ADS)
Miller, Scott A.; Chong, Edwin K. P.
2007-09-01
This paper describes a primal-dual distributed algorithm for managing communications in a bandwidth-limited sensor network for tracking and surveillance. The algorithm possesses some scale-invariance properties and adaptive gains that make it more practical for applications such as tracking where the conditions change over time. A simulation study comparing this algorithm with a priority-queue-based approach in a network tracking scenario shows significant improvement in the resulting track quality when using flow control to manage communications.
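A minimal primal-dual sketch for a single shared link: each source picks its rate from a logarithmic utility against a congestion price, and the price (dual variable) integrates the excess of total offered rate over capacity; the weights stand in for track-quality importance. The utilities, step size and single-link topology are simplifying assumptions, not the paper's distributed algorithm.

```python
import numpy as np

def primal_dual_rates(utility_weight, capacity, steps=500, gamma=0.01):
    """Distributed primal-dual flow control on a single shared link.

    Each source i maximizes w_i*log(x_i) - price*x_i, giving x_i = w_i / price;
    the link price (dual variable) rises when total demand exceeds capacity.
    """
    price = 1.0
    w = np.asarray(utility_weight, float)
    for _ in range(steps):
        x = w / price                                              # primal update (closed form)
        price = max(price + gamma * (x.sum() - capacity), 1e-6)    # dual ascent on the price
    return x, price

rates, price = primal_dual_rates(utility_weight=[3.0, 1.0, 1.0], capacity=10.0)
# Sources with higher weight (more valuable tracks) end up with larger rates.
```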
On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
NASA Astrophysics Data System (ADS)
Zhu, Zhongjie; Wang, Yuer; Bai, Yongqiang; Jiang, Gangyi
2010-12-01
The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and masking effects. Experimental results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and effectively improve the rate control precision.
Fast-kick-off monotonically convergent algorithm for searching optimal control fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel
2011-09-15
This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in the state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.
2011-11-18
the aerosol at the coincident time and location of the satellite SST retrievals. This information is available in the daytime for the anti-solar...are of the same form, such as probabilities or standard normal deviates. A quality control decision-making algorithm in use at the U.S. Navy oceano
Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results demonstrate the algorithm's ability to achieve an accurate bit rate with good perceptual quality and to maintain a stable buffer level.
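For the special case of utilities linear in allocated bits, the Nash Bargaining Solution has a closed form: each macroblock first receives its disagreement-point bits, and the remaining surplus is split in proportion to the bargaining weights. The sketch below uses that simplified utility model, not the paper's full formulation with quantization-scale utilities.

```python
import numpy as np

def nash_bargaining_bits(total_bits, min_bits, weights=None):
    """Nash Bargaining Solution for a frame's bit budget (linear utilities).

    min_bits acts as each macroblock's disagreement point; maximizing the
    (weighted) product of utility gains then splits the surplus proportionally
    to the weights, e.g. a perceptual-importance factor per macroblock.
    """
    min_bits = np.asarray(min_bits, float)
    surplus = total_bits - min_bits.sum()
    if surplus < 0:
        raise ValueError("budget below the disagreement point")
    if weights is None:
        weights = np.ones_like(min_bits)
    weights = np.asarray(weights, float)
    return min_bits + surplus * weights / weights.sum()

# Example: 1200 bits shared by four macroblocks, the first twice as important.
alloc = nash_bargaining_bits(total_bits=1200, min_bits=[50, 80, 40, 30],
                             weights=[2.0, 1.0, 1.0, 1.0])
```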
Real-time robot deliberation by compilation and monitoring of anytime algorithms
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo
1994-01-01
Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.
NASA Astrophysics Data System (ADS)
Lukyanov, A. A.; Grigoriev, S. N.; Bobrovskij, I. N.; Melnikov, P. A.; Bobrovskij, N. M.
2017-05-01
As new technology becomes more complex and its reliability requirements increase, the labor intensity of control operations in industrial quality control systems grows significantly. Quality management control is important because it promotes the correct use of production conditions and ensures that the relevant requirements are met. Digital image processing allows production to reach a new technological level. The automated interpretation of information, the most complicated step, is the basis for decision-making in the management of production processes. In the case of surface analysis of tools used for processing with metalworking fluids (MWF), the task is even more complicated. The authors suggest a new algorithm for optical inspection of the wear of the cylinder tool for burnishing, which is used in surface plastic deformation without MWF. The main advantage of the proposed algorithm is the possibility of automatic recognition of images of the burnishing tool with subsequent allocation of its boundaries, location of the working surface, and automatic identification of the defects and the wear area. Software that implements the algorithm was developed by the authors in the MATLAB programming environment, but it can be implemented using other programming languages.
NASA Astrophysics Data System (ADS)
Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.
2018-02-01
In order to meet the growing mobility requirements for wheeled vehicles on all types of terrain, engineers have to develop a large number of specialized control algorithms for the multi-axle wheeled vehicle (MWV) suspension, improving such qualities as ride comfort, handling and stability. The authors have developed an adaptive algorithm for dynamic damping of the MWV body oscillations. The algorithm provides high ride comfort and high mobility of the vehicle. The article discloses a method for the synthesis of an adaptive dynamic continuous algorithm for MWV body oscillation damping and provides simulation results proving the high efficiency of the developed control algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dekker, A.G.; Hoogenboom, H.J.; Rijkeboer, M.
1997-06-01
Deriving thematic maps of water quality parameters from a remote sensing image requires a number of processing steps, such as calibration, atmospheric correction, air/water interface correction, and application of water quality algorithms. A prototype software environment has recently been developed that enables the user to perform and control these processing steps. The main parts of this environment are: (i) access to the MODTRAN 3 radiative transfer code for removing atmospheric and air-water interface influences, (ii) a tool for analyzing algorithms for estimating water quality, and (iii) a spectral database containing apparent and inherent optical properties and associated water quality parameters. The use of the software is illustrated by applying the implemented algorithms for estimating chlorophyll to data from a spectral library of Dutch inland waters with CHL ranging from 1 to 500 μg l-1. The algorithms currently implemented in the Toolkit software are recommended for optically simple waters, but for optically complex waters the development of more advanced retrieval methods is required.
Enhancement of the Automated Quality Control Procedures for the International Soil Moisture Network
NASA Astrophysics Data System (ADS)
Heer, Elsa; Xaver, Angelika; Dorigo, Wouter; Messner, Romina
2017-04-01
In-situ soil moisture observations are still trusted to be the most reliable data to validate remotely sensed soil moisture products. Thus, the quality of in-situ soil moisture observations is of high importance. The International Soil Moisture Network (ISMN; http://ismn.geo.tuwien.ac.at/) provides in-situ soil moisture data from all around the world. The data is collected from individual networks and data providers, measured by different sensors at various depths. The data sets, which are delivered in different units, time zones and data formats, are transformed into homogeneous data sets. Erroneous behavior of soil moisture data is very difficult to detect, due to annual and daily changes and, most significantly, the strong influence of precipitation and snow melting processes. Only few of the network providers have a quality assessment for their data sets. Therefore, advanced quality control procedures have been developed for the ISMN (Dorigo et al. 2013). Three categories of quality checks were introduced: exceeding boundary values, geophysical consistency checks and a spectrum based approach. The spectrum based quality control algorithms aim to detect erroneous measurements which occur within plausible geophysical ranges, e.g. a sudden drop in soil moisture caused by a sensor malfunction. By defining several conditions which have to be met by the original soil moisture time series and their first and second derivatives, such error types can be detected. Since the development of these sophisticated methods, many more data providers have shared their data with the ISMN and new types of erroneous measurements have been identified. Thus, an enhancement of the automated quality control procedures became necessary. In the present work, we introduce enhancements of the existing quality control algorithms. Additionally, six completely new quality checks have been developed, e.g. detection of suspicious values before or after NaN values, of constant values, and of values that lie in a spectrum where a high majority of values before and after is flagged and therefore a sensor malfunction is certain. For the evaluation of the enhanced automated quality control system, many test data sets were chosen and manually validated in order to compare the new algorithms with the existing quality control procedures. Improvements will be shown that assure an appropriate assessment of the ISMN data sets, which are used for the validation of satellite-retrieved soil moisture data and are the foundation of many other scientific publications.
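A sketch of a few of the flag types described above applied to a single soil moisture series: plausibility bounds, constant-value runs, and spikes detected from first differences with a robust spread estimate. The thresholds are illustrative and not the ISMN operational settings.

```python
import numpy as np

def soil_moisture_flags(sm, lower=0.0, upper=0.6, const_len=24, spike_sigma=5.0):
    """Return a dict of boolean flag arrays for a 1-D soil moisture series.

    Flags: out-of-range values, runs of identical values (possible sensor
    freeze), and sudden jumps relative to the series' typical variability.
    Thresholds are illustrative, not the ISMN operational settings.
    """
    sm = np.asarray(sm, float)
    flags = {"range": (sm < lower) | (sm > upper)}

    # Constant-value runs of at least `const_len` consecutive samples.
    const = np.zeros_like(sm, dtype=bool)
    run_start = 0
    for i in range(1, len(sm) + 1):
        if i == len(sm) or sm[i] != sm[run_start]:
            if i - run_start >= const_len:
                const[run_start:i] = True
            run_start = i
    flags["constant"] = const

    # Spikes/drops: first difference far outside its robust spread (MAD-based).
    diff = np.diff(sm, prepend=sm[0])
    mad = np.median(np.abs(diff - np.median(diff))) + 1e-9
    flags["spike"] = np.abs(diff) > spike_sigma * 1.4826 * mad
    return flags
```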
Parameter Estimation for a Hybrid Adaptive Flight Controller
NASA Technical Reports Server (NTRS)
Campbell, Stefan F.; Nguyen, Nhan T.; Kaneshige, John; Krishnakumar, Kalmanje
2009-01-01
This paper expands on the hybrid control architecture developed at the NASA Ames Research Center by addressing issues related to indirect adaptation using the recursive least squares (RLS) algorithm. Specifically, the hybrid control architecture is an adaptive flight controller that features both direct and indirect adaptation techniques. This paper will focus almost exclusively on the modifications necessary to achieve quality indirect adaptive control. Additionally, this paper will present results that, using a full non-linear aircraft model, demonstrate the effectiveness of the hybrid control architecture given drastic changes in an aircraft's dynamics. Throughout the development of this topic, a thorough discussion of the RLS algorithm as a system identification technique will be provided along with results from seven well-known modifications to the popular RLS algorithm.
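For reference, a standard recursive least squares identifier with exponential forgetting, the kind of estimator the indirect adaptation path relies on; the regressor construction and the first-order example model are generic illustrations, not the NASA implementation or one of the seven modifications studied.

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting.

    Estimates theta in y = phi^T theta + noise, one sample at a time.
    """
    def __init__(self, dim, lam=0.98, p0=1e3):
        self.theta = np.zeros(dim)
        self.P = np.eye(dim) * p0      # parameter covariance
        self.lam = lam                 # forgetting factor (0 < lam <= 1)

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Example: identify a first-order model y[k] = a*y[k-1] + b*u[k].
rls = RLS(dim=2)
rng = np.random.default_rng(1)
y_prev, a_true, b_true = 0.0, 0.9, 0.5
for _ in range(200):
    u = rng.normal()
    y = a_true * y_prev + b_true * u + 0.01 * rng.normal()
    rls.update([y_prev, u], y)
    y_prev = y
# rls.theta is now close to [0.9, 0.5].
```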
Stochastic control approaches for sensor management in search and exploitation
NASA Astrophysics Data System (ADS)
Hitchings, Darin Chester
Recent improvements in the capabilities of autonomous vehicles have motivated their increased use in such applications as defense, homeland security, environmental monitoring, and surveillance. To enhance performance in these applications, new algorithms are required to control teams of robots autonomously and through limited interactions with human operators. In this dissertation we develop new algorithms for control of robots performing information-seeking missions in unknown environments. These missions require robots to control their sensors in order to discover the presence of objects, keep track of the objects, and learn what these objects are, given a fixed sensing budget. Initially, we investigate control of multiple sensors, with a finite set of sensing options and finite-valued measurements, to locate and classify objects given a limited resource budget. The control problem is formulated as a Partially Observed Markov Decision Problem (POMDP), but its exact solution requires excessive computation. Under the assumption that sensor error statistics are independent and time-invariant, we develop a class of algorithms using Lagrangian Relaxation techniques to obtain optimal mixed strategies using performance bounds developed in previous research. We investigate alternative Receding Horizon (RH) controllers to convert the mixed strategies to feasible adaptive-sensing strategies and evaluate the relative performance of these controllers in simulation. The resulting controllers provide superior performance to alternative algorithms proposed in the literature and obtain solutions to large-scale POMDP problems several orders of magnitude faster than optimal Dynamic Programming (DP) approaches with comparable performance quality. We extend our results for finite action, finite measurement sensor control to scenarios with moving objects. We use Hidden Markov Models (HMMs) for the evolution of objects, according to the dynamics of a birth-death process. We develop a new lower bound on the performance of adaptive controllers in these scenarios, develop algorithms for computing solutions to this lower bound, and use these algorithms as part of an RH controller for sensor allocation in the presence of moving objects. We also consider an adaptive search problem where sensing actions are continuous and the underlying measurement space is also continuous. We extend our previous hierarchical decomposition approach based on performance bounds to this problem and develop novel implementations of Stochastic Dynamic Programming (SDP) techniques to solve this problem. Our algorithms are nearly two orders of magnitude faster than previously proposed approaches and yield solutions of comparable quality. For supervisory control, we discuss how human operators can work with and augment robotic teams performing these tasks. Our focus is on how tasks are partitioned among teams of robots and how a human operator can make intelligent decisions for task partitioning. We explore these questions through the design of a game that involves robot automata controlled by our algorithms and a human supervisor that partitions tasks based on different levels of support information. This game can be used with human subject experiments to explore the effect of information on the quality of supervisory control.
NASA Astrophysics Data System (ADS)
Bliefernicht, Jan; Waongo, Moussa; Annor, Thompson; Laux, Patrick; Lorenz, Manuel; Salack, Seyni; Kunstmann, Harald
2017-04-01
West Africa is a data-sparse region. High-quality, long-term precipitation data are often not readily available for applications in hydrology, agriculture, meteorology and other fields. To close this gap, we use multiple data sources to develop a precipitation database with long-term daily and monthly time series. This database was compiled from 16 archives, including global databases (e.g., the Global Historical Climatology Network, GHCN), databases from research projects (e.g., the AMMA database) and databases of the national meteorological services of some West African countries. The collection consists of more than 2000 precipitation gauges with measurements dating from 1850 to 2015. Due to erroneous measurements (e.g., temporal offsets, unit conversion errors), missing values and inconsistent meta-data, the merging of this precipitation dataset is not straightforward and requires thorough quality control and harmonization. To this end, we developed geostatistics-based algorithms for quality control of the individual databases and for harmonization into a joint database. The algorithms are based on a pairwise comparison of the correspondence of precipitation time series as a function of the distance between stations. They were tested for precipitation time series from gauges located in a rectangular domain covering Burkina Faso, Ghana, Benin and Togo. This harmonized and quality-controlled precipitation database was recently used for several applications, such as the validation of a high-resolution regional climate model and the bias correction of precipitation projections provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX). In this presentation, we will give an overview of the novel daily and monthly precipitation database and the algorithms used for quality control and harmonization. We will also highlight the quality of global and regional archives (e.g., GHCN, GSOD, AMMA database) in comparison to the precipitation databases provided by the national meteorological services.
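A minimal sketch of the kind of pairwise, distance-aware screening described above: a station whose precipitation series correlates poorly with its nearby neighbours is flagged for inspection. The distance threshold, correlation floor and overlap requirement are illustrative assumptions, not the authors' geostatistical settings.

```python
import numpy as np

def flag_inconsistent_stations(series, coords, max_dist_km=100.0, min_corr=0.3):
    """Flag stations whose monthly series correlate poorly with nearby stations.

    series : array (n_stations, n_months), may contain NaN
    coords : array (n_stations, 2) of (lat, lon) in degrees
    """
    n = series.shape[0]
    flagged = []
    for i in range(n):
        # crude equirectangular distance (km) to all other stations
        dlat = np.radians(coords[:, 0] - coords[i, 0])
        dlon = np.radians(coords[:, 1] - coords[i, 1]) * np.cos(np.radians(coords[i, 0]))
        dist = 6371.0 * np.hypot(dlat, dlon)
        neighbours = np.where((dist > 0) & (dist <= max_dist_km))[0]
        corrs = []
        for j in neighbours:
            ok = ~np.isnan(series[i]) & ~np.isnan(series[j])
            if ok.sum() > 24:                       # require 2 years of overlap
                corrs.append(np.corrcoef(series[i, ok], series[j, ok])[0, 1])
        if corrs and np.nanmedian(corrs) < min_corr:
            flagged.append(i)                       # suspicious station
    return flagged
```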
Random Forest Application for NEXRAD Radar Data Quality Control
NASA Astrophysics Data System (ADS)
Keem, M.; Seo, B. C.; Krajewski, W. F.
2017-12-01
Identification and elimination of non-meteorological radar echoes (e.g., returns from ground, wind turbines, and biological targets) are the basic data quality control steps before radar data use in quantitative applications (e.g., precipitation estimation). Although the WSR-88Ds' recent upgrade to dual-polarization has enhanced this quality control and echo classification, there are still challenges in detecting some non-meteorological echoes that show precipitation-like characteristics (e.g., wind turbine or anomalous propagation clutter embedded in rain). With this in mind, a new quality control method using Random Forest is proposed in this study. This classification algorithm is known to produce reliable results with less uncertainty. The method introduces randomness into sampling and feature selection and integrates the resulting multiple decision trees. The multidimensional structure of the trees can characterize the statistical interactions of the multiple features involved in complex situations. The authors explore the performance of the Random Forest method for NEXRAD radar data quality control. Training datasets are selected using several clear cases of precipitation and non-precipitation (but with some non-meteorological echoes). The model is structured using available candidate features (from the NEXRAD data) such as horizontal reflectivity, differential reflectivity, differential phase shift, copolar correlation coefficient, and their horizontal textures (e.g., local standard deviation). The influence of each feature on classification results is quantified by variable importance measures that are automatically estimated by the Random Forest algorithm. Therefore, the number and types of features in the final forest can be examined based on the classification accuracy. The authors demonstrate the capability of the proposed approach using several cases ranging from distinct to complex rain/no-rain events and compare the performance with existing algorithms (e.g., MRMS). They also discuss operational feasibility based on the observed strengths and weaknesses of the method.
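A minimal sketch of such a Random Forest echo classifier, assuming scikit-learn and a labelled table of per-gate dual-polarization features; the feature names mirror those listed in the abstract, but the training data, labels and hyperparameters here are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["Z_H", "ZDR", "PhiDP", "RhoHV", "Z_H_texture"]  # per the abstract

# Placeholder training set: rows are radar gates, labels 1 = precipitation echo,
# 0 = non-meteorological echo (ground clutter, turbines, biota, ...).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((5000, len(FEATURES)))
y_train = (X_train[:, 3] > 0).astype(int)        # stand-in labels

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X_train, y_train)

# Variable importance, as used in the study to prune the feature set
for name, imp in zip(FEATURES, clf.feature_importances_):
    print(f"{name:14s} {imp:.3f}")

# Classify new gates and keep only the ones labelled as precipitation
X_new = rng.standard_normal((10, len(FEATURES)))
is_precip = clf.predict(X_new).astype(bool)
```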
Honing process optimization algorithms
NASA Astrophysics Data System (ADS)
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are described, with emphasis on such important concepts as the optimization of honing operations, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of machining and its purpose. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide range and can be used to operate the CNC machine CC743.
Statistical Quality Control of Moisture Data in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D. P.; Rukhovets, L.; Todling, R.
1999-01-01
A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
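A minimal buddy check in the spirit described above: an observation is rejected when its departure from the mean of nearby data exceeds a multiple of the local spread. The GEOS DAS implementation is adaptive and more elaborate; the radius and tolerance below are illustrative assumptions.

```python
import numpy as np

def buddy_check(values, coords, radius_km=300.0, tol=3.0):
    """Reject observations that disagree with nearby 'buddies'.

    values : observation-minus-background departures, shape (n,)
    coords : (lat, lon) per observation, shape (n, 2)
    tol    : rejection threshold in units of the local spread
    """
    values = np.asarray(values, float)
    coords = np.asarray(coords, float)
    keep = np.ones(values.size, dtype=bool)
    for i in range(values.size):
        dlat = np.radians(coords[:, 0] - coords[i, 0])
        dlon = np.radians(coords[:, 1] - coords[i, 1]) * np.cos(np.radians(coords[i, 0]))
        dist = 6371.0 * np.hypot(dlat, dlon)
        buddies = np.where((dist > 0) & (dist < radius_km))[0]
        if len(buddies) < 3:
            continue                       # too few neighbours to judge
        spread = np.std(values[buddies]) + 1e-6
        if abs(values[i] - np.mean(values[buddies])) > tol * spread:
            keep[i] = False                # outlier relative to local data
    return keep
```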
Nonlinear Multiscale Transformations: From Synchronization to Error Control
2001-07-01
... transformation (plus the quantization step) has taken place, a lossless Lempel-Ziv compression algorithm is applied to reduce the size of the transformed ... compressed data are all very close; however, the visual quality of the reconstructed image is significantly better for the EC compression algorithm ... used in recent times in the first step of transform coding algorithms for image compression. Ideally, a multiscale transformation allows for an ...
Development of flying qualities criteria for single pilot instrument flight operations
NASA Technical Reports Server (NTRS)
Bar-Gill, A.; Nixon, W. B.; Miller, G. E.
1982-01-01
Flying qualities criteria for Single Pilot Instrument Flight Rule (SPIFR) operations were investigated. The ARA aircraft was modified and adapted for SPIFR operations. Aircraft configurations to be flight-tested were chosen and matched on the ARA in-flight simulator, implementing modern control theory algorithms. Mission planning and experimental matrix design were completed. Microprocessor software for the onboard data acquisition system was debugged and flight-tested. Flight-path reconstruction procedure and the associated FORTRAN program were developed. Algorithms associated with the statistical analysis of flight test results and the SPIFR flying qualities criteria deduction are discussed.
Adaptive mechanism-based congestion control for networked systems
NASA Astrophysics Data System (ADS)
Liu, Zhi; Zhang, Yun; Chen, C. L. Philip
2013-03-01
In order to assure communication quality in network systems with heavy traffic and limited bandwidth, a new ATRED (adaptive thresholds random early detection) congestion control algorithm is proposed for the congestion avoidance and resource management of network systems. Unlike traditional AQM (active queue management) algorithms, the control parameters of ATRED are not configured statically but are dynamically adjusted by the adaptive mechanism. By integrating the adaptive strategy, ATRED alleviates the tuning difficulty of RED (random early detection), gives better control of queue management, and achieves more robust performance than RED under varying network conditions. Furthermore, a dynamic transmission control protocol-AQM control system using the ATRED controller is introduced for systematic analysis. It is proved that the stability of the network system can be guaranteed when the adaptive mechanism is carefully designed. Simulation studies show that the proposed ATRED algorithm achieves good performance in varying network environments, outperforming the RED and Gentle-RED algorithms and providing more reliable service.
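For orientation, the sketch below shows a standard RED drop decision with a crude adaptive adjustment of the maximum drop probability; the adaptation rule is a stand-in assumption, not the ATRED threshold law of the paper.

```python
import random

class SimpleAdaptiveRED:
    """RED queue with a crude adaptive adjustment of max_p.

    The adaptation (nudging max_p to keep the average queue near a target) is
    only a stand-in for the ATRED mechanism described in the paper.
    """
    def __init__(self, min_th=20, max_th=60, max_p=0.1, wq=0.002, target=40):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.wq, self.target = wq, target
        self.avg = 0.0

    def on_packet_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped."""
        # Exponentially weighted moving average of the queue length
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        # Crude adaptation: raise max_p when congested, lower it when underused
        if self.avg > self.target:
            self.max_p = min(0.5, self.max_p * 1.01)
        else:
            self.max_p = max(0.01, self.max_p * 0.99)
        # Standard RED drop decision
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```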
Real-time control of combined surface water quantity and quality: polder flushing.
Xu, M; van Overloop, P J; van de Giesen, N C; Stelling, G S
2010-01-01
In open water systems, keeping both water depths and water quality at specified values is critical for maintaining a 'healthy' water system. Many systems still require manual operation, at least for water quality management. When applying real-time control, both quantity and quality standards need to be met. In this paper, an artificial polder flushing case is studied. Model Predictive Control (MPC) is developed to control the system. In addition to MPC, a 'forward estimation' procedure is used to acquire water quality predictions for the simplified model used in MPC optimization. In order to illustrate the advantages of MPC, classical control [Proportional-Integral control (PI)] has been developed for comparison in the test case. The results show that both algorithms are able to control the polder flushing process, but MPC is more effective in terms of functionality and control flexibility.
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.
2012-01-01
This paper presents the development and set-up of a cloud screening and data quality control algorithm for a star photometer that uses a CCD camera as detector. These algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, delta Ae(lambda), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of delta Ae(lambda) and W over different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable Ae(lambda) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; south-east Spain) for the measurements acquired between March 2007 and September 2009. The algorithm is evaluated with correlative measurements registered by a lidar system and also with all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions including Saharan dust intrusion, biomass burning and pollution events.
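A minimal sketch of the moving-average-plus-outlier idea described above, flagging points that jump away from a robust local estimate; the 30 min window echoes the averaging mentioned in the abstract, but the MAD-based threshold is an assumption.

```python
import numpy as np

def cloud_screen(times_min, aod, window_min=30.0, k=3.0):
    """Flag cloud-affected AOD points as outliers with respect to a moving window.

    times_min : measurement times in minutes
    aod       : aerosol optical depth values
    The window length and the k*MAD threshold are illustrative, not the
    settings used by the authors.
    """
    times_min = np.asarray(times_min, float)
    aod = np.asarray(aod, float)
    cloudy = np.zeros(aod.size, dtype=bool)
    for i, t in enumerate(times_min):
        in_win = np.abs(times_min - t) <= window_min / 2
        med = np.median(aod[in_win])
        mad = np.median(np.abs(aod[in_win] - med)) + 1e-6
        cloudy[i] = np.abs(aod[i] - med) > k * mad   # sudden jump => likely cloud
    return cloudy
```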
Print quality analysis for ink-saving algorithms
NASA Astrophysics Data System (ADS)
Ortiz Segovia, Maria V.; Bonnier, Nicolas; Allebach, Jan P.
2012-01-01
Ink-saving strategies for CMYK printers have evolved from their earlier stages where the 'draft' print mode was the main option available to control ink usage. The savings were achieved by printing alternate dots in an image at the expense of reducing print quality considerably. Nowadays, customers are not only unwilling to compromise quality but have higher expectations regarding both visual print quality and ink reduction solutions. Therefore, the need for more intricate ink-saving solutions with lower impact on print quality is evident. Printing-related factors such as the way the printer places the dots on the paper and the ink-substrate interaction play important and complex roles in the characterization and modeling of the printing process that make the ink reduction topic a challenging problem. In our study, we are interested in benchmarking ink-saving algorithms to find the connections between different ink reduction levels of a given ink-saving method and a set of print quality attributes. This study is mostly related to CMYK printers that use dispersed dot halftoning algorithms. The results of our efforts to develop such an evaluation scheme are presented in this paper.
Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry
NASA Astrophysics Data System (ADS)
Serafini, S.; Paone, N.; Castellini, P.
2013-12-01
A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited for testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour for optimizing the measurement. The system is conceived as a Quality Control Agent (QCA) and is part of a Multi Agent System that supervises the entire production line. The QCA behaviour is defined so as to minimize measurement uncertainty during on-line tests and to compensate target mis-positioning under the guidance of a vision system. Best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality) and consequently minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement achieved by the downhill algorithm (Nelder-Mead algorithm) and its effect on signal quality improvement are discussed. Tests on a washing machine in controlled operating conditions allow the efficacy of the method to be evaluated; a significant reduction of noise in the vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.
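A minimal sketch of the Nelder-Mead (downhill simplex) optimization step, here maximizing a synthetic stand-in for the Doppler beat amplitude as a function of two scanning-mirror angles; the objective function is invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def signal_quality(xy):
    """Stand-in for the measured Doppler beat amplitude at mirror angles (x, y).
    In the real station this value would come from the vibrometer hardware."""
    x, y = xy
    return np.exp(-((x - 1.2) ** 2 + (y + 0.4) ** 2))   # synthetic peak at (1.2, -0.4)

# Nelder-Mead maximizes the amplitude by minimizing its negative
res = minimize(lambda p: -signal_quality(p), x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-4})
print(res.x, signal_quality(res.x))
```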
Optimal Power Control in Wireless Powered Sensor Networks: A Dynamic Game-Based Approach
Xu, Haitao; Guo, Chao; Zhang, Long
2017-01-01
In wireless powered sensor networks (WPSN), it is essential to research uplink transmit power control in order to achieve throughput performance balancing and energy scheduling. Each sensor should have an optimal transmit power level for revenue maximization. In this paper, we discuss a dynamic game-based algorithm for optimal power control in WPSN. The main idea is to use the non-cooperative differential game to control the uplink transmit power of wireless sensors in WPSN, to extend their working hours and to meet QoS (Quality of Services) requirements. Subsequently, the Nash equilibrium solutions are obtained through Bellman dynamic programming. At the same time, an uplink power control algorithm is proposed in a distributed manner. Through numerical simulations, we demonstrate that our algorithm can obtain optimal power control and reach convergence for an infinite horizon. PMID:28282945
NASA Astrophysics Data System (ADS)
Ek, M. B.; Xia, Y.; Ford, T.; Wu, Y.; Quiring, S. M.
2015-12-01
The North American Soil Moisture Database (NASMD) was initiated in 2011 to provide support for developing climate forecasting tools, calibrating land surface models and validating satellite-derived soil moisture algorithms. The NASMD has collected data from over 30 soil moisture observation networks providing millions of in situ soil moisture observations in all 50 states as well as Canada and Mexico. It is recognized that the quality of measured soil moisture in NASMD is highly variable due to the diversity of climatological conditions, land cover, soil texture, and topographies of the stations and differences in measurement devices (e.g., sensors) and installation. It is also recognized that error, inaccuracy and imprecision in the data set can have significant impacts on practical operations and scientific studies. Therefore, developing an appropriate quality control procedure is essential to ensure the data is of the best quality. In this study, an automated quality control approach is developed using the North American Land Data Assimilation System phase 2 (NLDAS-2) Noah soil porosity, soil temperature, and fraction of liquid and total soil moisture to flag erroneous and/or spurious measurements. Overall results show that this approach is able to flag unreasonable values when the soil is partially frozen. A validation example using NLDAS-2 multiple model soil moisture products at the 20 cm soil layer showed that the quality control procedure had a significant positive impact in Alabama, North Carolina, and West Texas. It had a greater impact in colder regions, particularly during spring and autumn. Over 433 NASMD stations have been quality controlled using the methodology proposed in this study, and the algorithm will be implemented to control data quality from the other ~1,200 NASMD stations in the near future.
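A minimal sketch of the flagging logic described above, assuming modeled porosity, soil temperature and liquid-fraction fields are available per observation; the threshold values are illustrative, not those derived from NLDAS-2 Noah in the study.

```python
import numpy as np

def qc_soil_moisture(theta, soil_temp_c, porosity, liquid_frac):
    """Return QC flags for in situ volumetric soil moisture observations.

    theta       : observed volumetric soil moisture (m3/m3)
    soil_temp_c : modeled soil temperature at the sensor depth (deg C)
    porosity    : modeled soil porosity (m3/m3)
    liquid_frac : modeled fraction of liquid to total soil moisture (0-1)
    """
    theta = np.asarray(theta, float)
    flags = np.full(theta.shape, "good", dtype=object)
    # physically impossible values: negative or exceeding porosity
    flags[(theta < 0) | (theta > np.asarray(porosity, float))] = "out_of_range"
    # partially frozen soil makes the dielectric measurement unreliable
    frozen = (np.asarray(soil_temp_c, float) < 0) | (np.asarray(liquid_frac, float) < 0.95)
    flags[frozen & (flags == "good")] = "frozen_suspect"
    return flags

print(qc_soil_moisture([0.25, 0.60, 0.20], [5.0, 4.0, -2.0], 0.45, [1.0, 1.0, 0.6]))
```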
Recent Theoretical Advances in Analysis of AIRS/AMSU Sounding Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. This paper describes the AIRS Science Team Version 5.0 retrieval algorithm. Starting in early 2007, the Goddard DAAC will use this algorithm to analyze near real-time AIRS/AMSU observations. These products are then made available to the scientific community for research purposes. The products include twice-daily measurements of the Earth's three-dimensional global temperature, water vapor, and ozone distribution as well as cloud cover. In addition, accurate twice-daily measurements of the earth's land and ocean temperatures are derived and reported. Scientists use this important set of observations for two major applications. They provide important information for climate studies of global and regional variability and trends of different aspects of the earth's atmosphere. They also provide information for researchers to improve the skill of weather forecasting. A very important new product of the AIRS Version 5 algorithm is accurate case-by-case error estimates of the retrieved products. This heightens their utility for use in both weather and climate applications. These error estimates are also used directly for quality control of the retrieved products. Version 5 also allows for accurate quality controlled AIRS only retrievals, called "Version 5 AO retrievals", which can be used as a backup methodology if AMSU fails. Examples of the accuracy of error estimates and quality controlled retrieval products of the AIRS/AMSU Version 5 and Version 5 AO algorithms are given, and shown to be significantly better than those of the previously used Version 4 algorithm. Assimilation of Version 5 retrievals is also shown to significantly improve forecast skill, especially when the case-by-case error estimates are utilized in the data assimilation process.
NASA Astrophysics Data System (ADS)
Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai
2016-09-01
The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide the astronauts' operations and evaluate handling qualities more effectively. Therefore, this paper establishes MPC-IS for manual-controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with the artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm, and finally determined by the Levenberg-Marquardt algorithm. In order to validate MPC-IS and ANN-IS, manual-controlled RVD experiments were carried out on the simulator. The comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of ANN-IS.
A MPPT Algorithm Based PV System Connected to Single Phase Voltage Controlled Grid
NASA Astrophysics Data System (ADS)
Sreekanth, G.; Narender Reddy, N.; Durga Prasad, A.; Nagendrababu, V.
2012-10-01
Future ancillary services provided by photovoltaic (PV) systems could facilitate their penetration in power systems. In addition, low-power PV systems can be designed to improve the power quality. This paper presents a single-phase PV system that provides grid voltage support and compensation of harmonic distortion at the point of common coupling thanks to a repetitive controller. The power provided by the PV panels is controlled by a Maximum Power Point Tracking algorithm based on the incremental conductance method, specifically modified to control the phase of the PV inverter voltage. Simulation and experimental results validate the presented solution.
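For context, the sketch below is the textbook incremental-conductance update acting on a voltage reference; the paper's variant instead controls the phase of the inverter voltage, which is not reproduced here, and the step size is an assumption.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One textbook incremental-conductance MPPT update of the voltage reference.

    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V; the sign of the
    mismatch tells which way to move the operating voltage.
    """
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < 1e-9:                       # voltage essentially unchanged
        if di > 0:
            v_ref += step
        elif di < 0:
            v_ref -= step
    else:
        inc_cond = di / dv
        if inc_cond > -i / v:                # left of the MPP -> raise voltage
            v_ref += step
        elif inc_cond < -i / v:              # right of the MPP -> lower voltage
            v_ref -= step
    return v_ref

# Example: one update given two consecutive panel measurements
print(inc_cond_step(v=30.2, i=7.9, v_prev=30.0, i_prev=8.0, v_ref=30.0))
```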
NASA Astrophysics Data System (ADS)
Qiang, Jiang; Meng-wei, Liao; Ming-jie, Luo
2018-03-01
The control performance of a Permanent Magnet Synchronous Motor (PMSM) is affected by fluctuations or changes in mechanical parameters when the PMSM is applied as the driving motor in an actual electric vehicle, and external disturbances degrade control robustness. To improve the dynamic quality and robustness of the PMSM speed control system, a new second-order integral sliding mode control algorithm is introduced into PMSM vector control. The simulation results show that, compared with traditional PID control, the modified control scheme has better control precision and dynamic response, performs with stronger robustness against external disturbances, and effectively alleviates the chattering problem of traditional sliding mode variable structure control.
SASS Applied to Optimum Work Roll Profile Selection in the Hot Rolling of Wide Steel
NASA Astrophysics Data System (ADS)
Nolle, Lars
The quality of steel strip produced in a wide strip rolling mill depends heavily on the careful selection of initial ground work roll profiles for each of the mill stands in the finishing train. In the past, these profiles were determined by human experts, based on their knowledge and experience. In previous work, the profiles were successfully optimised using a self-organising migration algorithm (SOMA). In this research, SASS, a novel heuristic optimisation algorithm that has only one control parameter, has been used to find the optimum profiles for a simulated rolling mill. The resulting strip quality produced using the profiles found by SASS is compared with results from previous work and with the quality produced using the original profile specifications. The best set of profiles found by SASS clearly outperformed the original set and performed as well as SOMA, without the need to find a suitable set of control parameters.
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
Flight Evaluation of an Aircraft with Side and Center Stick Controllers and Rate-Limited Ailerons
NASA Technical Reports Server (NTRS)
Deppe, P. R.; Chalk, C. R.; Shafer, M. F.
1996-01-01
As part of an ongoing government and industry effort to study the flying qualities of aircraft with rate-limited control surface actuators, two studies were previously flown to examine an algorithm developed to reduce the tendency for pilot-induced oscillation when rate limiting occurs. This algorithm, when working properly, greatly improved the performance of the aircraft in the first study. In the second study, however, the algorithm did not initially offer as much improvement. The differences between the two studies caused concern. The study detailed in this paper was performed to determine whether the performance of the algorithm was affected by the characteristics of the cockpit controllers. Time delay and flight control system noise were also briefly evaluated. An in-flight simulator, the Calspan Learjet 25, was programmed with a low roll actuator rate limit, and the algorithm was programmed into the flight control system. Side- and center-stick controllers, force and position command signals, a rate-limited feel system, a low-frequency feel system, and a feel system damper were evaluated. The flight program consisted of four flights and 38 evaluations of test configurations. Performance of the algorithm was determined to be unaffected by using side- or center-stick controllers or force or position command signals. The rate-limited feel system performed as well as the rate-limiting algorithm but was disliked by the pilots. The low-frequency feel system and the feel system damper were ineffective. Time delay and noise were determined to degrade the performance of the algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopal, A; Xu, H; Chen, S
Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system – the “Hybrid” algorithm based on image intensities and anatomical information; and the “Biomechanical” algorithm based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients that underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score assigned by physicians, and a refinement quality score (0 ... > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR compared to the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.
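For reference, the Dice similarity coefficient used to score the propagated contours can be computed from binary masks as in the short sketch below (the example masks are synthetic).

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary contour masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Example: compare a propagated contour against the physician-drawn one
auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
manual = np.zeros((64, 64), bool); manual[12:42, 12:42] = True
print(round(dice_coefficient(auto, manual), 3))
```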
Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability
NASA Astrophysics Data System (ADS)
Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko
In this paper, we propose an algorithm to estimate sleep quality based on heart rate variability using chaos analysis. Polysomnography (PSG) is a conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect by estimating sleep quality from multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is hard work because the huge amount of sensed data must be evaluated manually. At the same time, attention has been drawn to people who make mistakes or cause accidents because of the loss of regular sleep and homeostasis. A simple home system for checking one's own sleep is therefore required, and an estimation algorithm for such a system needs to be developed. We therefore propose an algorithm that estimates sleep quality based only on heart rate variability, which can be measured in an uncontrolled environment by a simple sensor such as a pressure sensor or an infrared sensor, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform a user of the patterns and quality of his or her daily sleep, so that the user can arrange his or her schedule in advance, pay more attention based on the sleep results, and consult a doctor.
Kirwan, Jennifer A; Weber, Ralf J M; Broadhurst, David I; Viant, Mark R
2014-01-01
Direct-infusion mass spectrometry (DIMS) metabolomics is an important approach for characterising molecular responses of organisms to disease, drugs and the environment. Increasingly large-scale metabolomics studies are being conducted, necessitating improvements in both bioanalytical and computational workflows to maintain data quality. This dataset represents a systematic evaluation of the reproducibility of a multi-batch DIMS metabolomics study of cardiac tissue extracts. It comprises twenty biological samples (cow vs. sheep) that were analysed repeatedly, in 8 batches across 7 days, together with a concurrent set of quality control (QC) samples. Data are presented from each step of the workflow and are available in MetaboLights. The strength of the dataset is that intra- and inter-batch variation can be corrected using QC spectra and the quality of this correction assessed independently using the repeatedly-measured biological samples. Originally designed to test the efficacy of a batch-correction algorithm, it will enable others to evaluate novel data processing algorithms. Furthermore, this dataset serves as a benchmark for DIMS metabolomics, derived using best-practice workflows and rigorous quality assessment. PMID:25977770
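A generic QC-based correction, scaling each feature within each batch by the median of that batch's QC samples, is sketched below; it illustrates how the concurrent QC samples can be used, but it is not necessarily the batch-correction algorithm evaluated on this dataset.

```python
import numpy as np

def qc_batch_correct(intensities, batch_ids, is_qc):
    """Per-batch, per-feature scaling using the QC samples.

    intensities : array (n_samples, n_features) of peak intensities
    batch_ids   : array (n_samples,) of batch labels
    is_qc       : boolean array (n_samples,) marking QC injections
    Each feature in each batch is divided by the median QC intensity of that
    batch so that QC samples align across batches.
    """
    X = np.asarray(intensities, float)
    batch_ids = np.asarray(batch_ids)
    is_qc = np.asarray(is_qc, bool)
    out = X.copy()
    for b in np.unique(batch_ids):
        in_batch = batch_ids == b
        qc_median = np.nanmedian(X[in_batch & is_qc], axis=0)
        qc_median[qc_median == 0] = np.nan          # avoid division by zero
        out[in_batch] = X[in_batch] / qc_median
    return out
```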
Report of the international workshop on quality control of monthly climate data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-12-31
The National Climatic Data Center (NCDC), the US Department of Energy's Carbon Dioxide Information Analysis Center, and the World Meteorological Organization (WMO) cosponsored an international quality control workshop for monthly climate data, October 5-6, 1993, at NCDC. About 40 scientists from around the world participated. The purpose of the meeting was to discuss and compare various quality control methods and to draft recommendations concerning the most successful systems. The near-term goal to improve quality control of CLIMAT messages for the NCDC/WMO publication Monthly Climatic Data for the World was successfully met. An electronic bulletin board was established to post errors and corrections. Improved communications among Global Telecommunication System hubs will be implemented. Advanced quality control algorithms were discussed and improvements were suggested. Further data exchanges were arranged.
Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007 generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.
Computer graphics for quality control in the INAA of geological samples
Grossman, J.N.; Baedecker, P.A.
1987-01-01
A data reduction system for the routine instrumental activation analysis of samples is described, with particular emphasis on interactive graphics capabilities for evaluating analytical quality. Graphics procedures have been developed to interactively control the analysis of selected photopeaks during spectral analysis, and to evaluate detector performance during a given counting cycle. Graphics algorithms are also used to compare the data on reference samples with accepted values, to prepare quality control charts to evaluate long term precision and to search for systematic variations in data on reference samples as a function of time. © 1987 Akadémiai Kiadó.
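A minimal sketch of the reference-sample control-chart logic mentioned above, flagging measurements whose deviation from the accepted value exceeds warning or action limits; the 2-sigma/3-sigma limits are conventional choices, not necessarily those used in the system.

```python
import numpy as np

def control_chart_flags(measured, accepted, sigma, warn=2.0, action=3.0):
    """Classic QC chart logic for reference-sample results over time.

    measured : concentrations determined for a reference material
    accepted : accepted (certified) value
    sigma    : expected standard deviation of the measurement
    Returns a z-score and a status per measurement.
    """
    z = (np.asarray(measured, float) - accepted) / sigma
    status = np.where(np.abs(z) > action, "out_of_control",
                      np.where(np.abs(z) > warn, "warning", "in_control"))
    return z, status

z, status = control_chart_flags([10.1, 9.7, 10.9, 12.4], accepted=10.0, sigma=0.4)
print(list(zip(np.round(z, 2), status)))
```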
Comparison and Evaluation of Clustering Algorithms for Tandem Mass Spectra.
Rieder, Vera; Schork, Karin U; Kerschke, Laura; Blank-Landeshammer, Bernhard; Sickmann, Albert; Rahnenführer, Jörg
2017-11-03
In proteomics, liquid chromatography-tandem mass spectrometry (LC-MS/MS) is established for identifying peptides and proteins. Duplicated spectra, that is, multiple spectra of the same peptide, occur both in single MS/MS runs and in large spectral libraries. Clustering tandem mass spectra is used to find consensus spectra, with manifold applications. First, it speeds up database searches, as performed for instance by Mascot. Second, it helps to identify novel peptides across species. Third, it is used for quality control to detect wrongly annotated spectra. We compare different clustering algorithms based on the cosine distance between spectra. CAST, MS-Cluster, and PRIDE Cluster are popular algorithms to cluster tandem mass spectra. We add well-known algorithms for large data sets, hierarchical clustering, DBSCAN, and connected components of a graph, as well as the new method N-Cluster. All algorithms are evaluated on real data with varied parameter settings. Cluster results are compared with each other and with peptide annotations based on validation measures such as purity. Quality control, regarding the detection of wrongly (un)annotated spectra, is discussed for exemplary resulting clusters. N-Cluster proves to be highly competitive. All clustering results benefit from the so-called DISMS2 filter that integrates additional information, for example, on precursor mass.
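A minimal sketch of clustering binned spectra on cosine distance with average-linkage hierarchical clustering; this is a generic illustration of the distance measure used in the comparison, not a reimplementation of CAST, MS-Cluster, PRIDE Cluster or N-Cluster, and the distance cutoff is an assumption.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_spectra(binned_spectra, max_cos_dist=0.25):
    """Cluster tandem mass spectra on cosine distance.

    binned_spectra : array (n_spectra, n_bins) of binned fragment intensities
    """
    X = np.asarray(binned_spectra, float)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)   # unit length
    d = pdist(X, metric="cosine")              # pairwise cosine distances
    Z = linkage(d, method="average")           # hierarchical clustering
    return fcluster(Z, t=max_cos_dist, criterion="distance")

labels = cluster_spectra(np.random.default_rng(0).random((50, 200)))
print(len(set(labels)), "clusters")
```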
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S; Zhang, H; Zhang, B
2015-06-15
Purpose: To clinically evaluate the differences in volumetric modulated arc therapy (VMAT) treatment plan and delivery between two commercial treatment planning systems. Methods: Two commercial VMAT treatment planning systems with different VMAT optimization algorithms and delivery approaches were evaluated. This study included 16 clinical VMAT plans performed with the first system: 2 spine, 4 head and neck (HN), 2 brain, 4 pancreas, and 4 pelvis plans. These 16 plans were then re-optimized with the same number of arcs using the second treatment planning system. Planning goals were invariant between the two systems. Gantry speed, dose rate modulation, MLC modulation, plan quality, number of monitor units (MUs), VMAT quality assurance (QA) results, and treatment delivery time were compared between the 2 systems. VMAT QA measurements were performed using Mapcheck2 and analyzed with gamma analysis (3mm/3% and 2mm/2%). Results: Similar plan quality was achieved with each VMAT optimization algorithm, and the difference in delivery time was minimal. Algorithm 1 achieved planning goals by highly modulating the MLC (total distance traveled by leaves (TL) = 193 cm average over control points per plan), while maintaining a relatively constant dose rate (dose-rate change <100 MU/min). Algorithm 2 involved less MLC modulation (TL = 143 cm per plan), but greater dose-rate modulation (range = 0-600 MU/min). The average number of MUs was 20% less for algorithm 2 (ratio of MUs for algorithms 2 and 1 ranged from 0.5-1). VMAT QA results were similar for all disease sites except HN plans. For HN plans, the average gamma passing rates were 88.5% (2mm/2%) and 96.9% (3mm/3%) for algorithm 1 and 97.9% (2mm/2%) and 99.6% (3mm/3%) for algorithm 2. Conclusion: Both VMAT optimization algorithms achieved comparable plan quality; however, fewer MUs were needed and QA results were more robust for Algorithm 2, which more highly modulated dose rate.
NASA Astrophysics Data System (ADS)
Boughari, Yamina
New methodologies have been developed to optimize the integration, testing and certification of flight control systems, an expensive process in the aerospace industry. This thesis investigates the stability of the Cessna Citation X aircraft without control, and then optimizes two different flight controllers from design to validation. The aircraft's model was obtained from the data provided by the Research Aircraft Flight Simulator (RAFS) of the Cessna Citation business aircraft. To increase the stability and control of aircraft systems, optimizations of two different flight control designs were performed: 1) the Linear Quadratic Regulation and the Proportional Integral controllers were optimized using the Differential Evolution algorithm and the level 1 handling qualities as the objective function. The results were validated for the linear and nonlinear aircraft models, and some of the clearance criteria were investigated; and 2) the H-infinity control method was applied to the stability and control augmentation systems. To minimize the time required for flight control design and its validation, an optimization of the controller designs was performed using Differential Evolution (DE) and Genetic Algorithms (GA). The DE algorithm proved to be more efficient than the GA. New tools for visualization of the linear validation process were also developed to reduce the time required for flight controller assessment. Matlab software was used to validate the different optimization algorithms' results. Research platforms for the aircraft's linear and nonlinear models were developed and compared with the results of flight tests performed on the Research Aircraft Flight Simulator. Some of the clearance criteria of the optimized H-infinity flight controller were evaluated, including its linear stability, eigenvalues, and handling qualities criteria. Nonlinear simulations of the maneuver criteria were also investigated during this research to assess the Cessna Citation X's flight controller clearance and, therefore, its anticipated certification.
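For orientation, the sketch below tunes two controller gains with SciPy's differential evolution against an ITAE step-response cost; the plant, cost and bounds are generic placeholders standing in for the aircraft model and the level 1 handling-qualities objective used in the thesis.

```python
import numpy as np
from scipy.optimize import differential_evolution

def step_cost(gains, dt=0.01, t_end=10.0):
    """ITAE cost of a PI-controlled second-order plant's step response.

    The plant (x'' + x' = u) and the cost are placeholders, not the aircraft
    model or handling-qualities criteria from the thesis.
    """
    kp, ki = gains
    x = v = integ = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - x                     # unit step reference
        integ += e * dt
        u = kp * e + ki * integ         # PI control law
        a = u - 1.0 * v                 # plant acceleration
        v += a * dt
        x += v * dt
        cost += t * abs(e) * dt         # integral of time-weighted |error|
    return cost

result = differential_evolution(step_cost, bounds=[(0.1, 20.0), (0.0, 10.0)],
                                seed=1, maxiter=50, tol=1e-6)
print("optimized gains:", np.round(result.x, 3))
```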
Performance evaluation of power control algorithms in wireless cellular networks
NASA Astrophysics Data System (ADS)
Temaneh-Nyah, C.; Iita, V.
2014-10-01
Power control in a mobile communication network aims to set the transmission power levels in such a way that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network is simulated. The real environment is accounted for by defining the analysis area; the base stations and mobile stations are defined by their geographical coordinates, and the mobility of the mobile stations is accounted for. The simulation also allows a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
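As a point of reference for this kind of convergence study, the sketch below iterates a classic target-SIR-tracking power update on a small synthetic network; it is not one of the three algorithms compared in the paper, and the gain matrix, noise and SIR target are invented.

```python
import numpy as np

def sir_tracking_power_control(G, noise, target_sir, p0, iters=50):
    """Classic target-SIR-tracking uplink power control iteration:
        p_i(k+1) = (target_sir / sir_i(k)) * p_i(k)

    G     : channel gain matrix, G[i, j] = gain from mobile j to base of user i
    noise : receiver noise power per link
    """
    p = np.array(p0, float)
    history = [p.copy()]
    for _ in range(iters):
        signal = np.diag(G) * p                     # own received power
        interference = G @ p - signal + noise       # everyone else plus noise
        sir = signal / interference
        p = np.clip(p * target_sir / sir, 1e-6, 1.0)  # enforce power limits
        history.append(p.copy())
    return np.array(history)

G = np.array([[1.0, 0.1, 0.05], [0.12, 1.0, 0.08], [0.07, 0.09, 1.0]])
hist = sir_tracking_power_control(G, noise=0.01, target_sir=5.0, p0=[0.5, 0.5, 0.5])
print(np.round(hist[-1], 4))   # converged power vector
```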
Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A
2011-07-01
We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations by a woofer DM and small-amplitude higher-order aberrations by a tweeter DM. We measured the in vivo performance of high resolution retinal imaging with the dual DM AOSLO. We compared the simultaneous LM-based DLS dual DM controller with both a single DM controller and a successive dual DM controller. We evaluated performance using both wavefront (RMS) and image quality metrics, including brightness and power spectrum. The simultaneous LM-based dual DM AO can consistently provide near diffraction-limited in vivo routine imaging of the human retina.
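A minimal damped least-squares solve for stacked woofer and tweeter commands from a single slope vector is sketched below; the Tikhonov-style damping plays the role of the Lagrange-multiplier constraint, and the matrix sizes and damping value are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def dls_dual_dm_commands(A_woofer, A_tweeter, slopes, damping=1e-2):
    """Damped least-squares solve for woofer + tweeter commands from one
    Shack-Hartmann slope vector: c = (A^T A + damping*I)^-1 A^T s."""
    A = np.hstack([A_woofer, A_tweeter])        # stacked influence matrix
    n = A.shape[1]
    c = np.linalg.solve(A.T @ A + damping * np.eye(n), A.T @ slopes)
    n_woofer = A_woofer.shape[1]
    return c[:n_woofer], c[n_woofer:]           # split back into the two DMs

rng = np.random.default_rng(0)
Aw, At = rng.standard_normal((200, 12)), rng.standard_normal((200, 97))
s = rng.standard_normal(200)
cw, ct = dls_dual_dm_commands(Aw, At, s)
print(cw.shape, ct.shape)
```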
Latest processing status and quality assessment of the GOMOS, MIPAS and SCIAMACHY ESA dataset
NASA Astrophysics Data System (ADS)
Niro, F.; Brizzi, G.; Saavedra de Miguel, L.; Scarpino, G.; Dehn, A.; Fehr, T.; von Kuhlmann, R.
2011-12-01
The GOMOS, MIPAS and SCIAMACHY instruments have been successfully observing the changing Earth's atmosphere since the launch of the ENVISAT-ESA platform in March 2002. The measurements recorded by these instruments are relevant for the Atmospheric-Chemistry community both in terms of time extent and in the variety of observing geometries and techniques. In order to fully exploit these measurements, it is crucial to maintain good reliability in data processing and distribution and to continuously improve the scientific output. The goal is to meet the evolving needs of both near-real-time and research applications. Within this framework, the ESA operational processor remains the reference code, although many scientific algorithms are nowadays available to users. In fact, the ESA algorithm has a well-established calibration and validation scheme, a certified quality assessment process and the ability to reach a wide user community. Moreover, the ESA algorithm upgrade procedures and the re-processing performance have improved considerably during the last two years, thanks to the recent updates of the Ground Segment infrastructure and overall organization. The aim of this paper is to promote the usage and stress the quality of the ESA operational dataset for the GOMOS, MIPAS and SCIAMACHY missions. The recent upgrades in the ESA processor (GOMOS V6, MIPAS V5 and SCIAMACHY V5) will be presented, with detailed information on improvements in the scientific output and preliminary validation results. The planned algorithm evolution and on-going re-processing campaigns will also be mentioned, including the adoption of advanced set-ups such as the MIPAS V6 re-processing on a cloud-computing system. Finally, the quality control process that guarantees a standard of quality to the users will be illustrated. In fact, the operational ESA algorithm is carefully tested before switching into operations, and the near-real-time and off-line production is thoroughly verified via the implementation of automatic quality control procedures. The scientific validity of the ESA dataset will additionally be illustrated with examples of applications that can be supported, such as ozone-hole monitoring, volcanic ash detection and analysis of atmospheric composition changes over the past years.
NASA Astrophysics Data System (ADS)
Babakhanova, Kh A.; Varepo, L. G.; Nagornova, I. V.; Babluyk, E. B.; Kondratov, A. P.
2018-04-01
Paper is one of the key components of the printing system that determines the quality of printed products. Providing printing companies with paper of specified printing properties, while simultaneously increasing the range and volume of paper products by applying forecasting methods and evaluation during the production process, is certainly a relevant problem. The paper presents a printing quality control algorithm that takes into consideration the assessment of the paper's printing properties in relation to the technological features of manufacture and variations in composition. An information system is proposed that includes data on raw materials and paper properties and makes it possible for pulp and paper enterprises to select the optimal paper composition, taking into account the peculiarities of the printing process for paper manufactured with specified printing properties.
Ruan, Jujun; Zhang, Chao; Li, Ya; Li, Peiyi; Yang, Zaizhi; Chen, Xiaohong; Huang, Mingzhi; Zhang, Tao
2017-02-01
This work proposes an on-line hybrid intelligent control system based on a genetic algorithm (GA)-evolved fuzzy wavelet neural network software sensor to control dissolved oxygen (DO) in an anaerobic/anoxic/oxic process for treating papermaking wastewater. By combining the self-learning and memory abilities of the neural network, the uncertainty-handling capacity of fuzzy logic, the local-detail analysis of the wavelet transform and the global search of the GA, the proposed control system can extract the dynamic behavior of, and the complex interrelationships between, the various operating variables. The results indicate that reasonable forecasting and control performance was achieved with optimal DO, and the effluent quality was kept stable at or below the desired values in real time. Our proposed hybrid approach proved to be a robust and effective DO control tool, attaining adequate effluent quality while minimizing the demand for energy, and it is easily integrated into a global monitoring system for purposes of cost management. Copyright © 2016 Elsevier Ltd. All rights reserved.
Determination of an Optimal Control Strategy for a Generic Surface Vehicle
2014-06-18
... uses the numerical procedure in MATLAB's BVP solver (bvp4c) together with the continuation method. The goal is to find a solution to the set of ... solution. Solving the BVP problem using bvp4c requires an initial guess for the solution. Note that the algorithm is very sensitive to the particular ... form of the initial guess. The quality of the initial guess is paramount to the convergence speed of the BVP algorithm and often determines if the ...
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-01-01
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855
[The comprehensive approach to ensure the quality of forensic medical examination of a cadaver].
Mel'nikov, O V; Mal'tsev, A E; Petrov, S B; Petrov, B A
2015-01-01
The objective of the present work was to estimate the effectiveness of a comprehensive monitoring system designed to enhance the quality of forensic medical expertise for determining the cause of death in hanging cases. It was shown that the practical application of algorithmization and an automated quality control system improves the effectiveness of the forensic medical examination of cadavers in hanging cases. The system performs control, directing, and teaching functions. Moreover, it allows the completeness of the examination of the cadaver to be estimated.
Optimization of hydraulic turbine governor parameters based on WPA
NASA Astrophysics Data System (ADS)
Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao
2018-01-01
The parameters of the hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, thus affecting the regulation capacity and the power quality of the power grid. The governor of a conventional hydropower unit is mainly a PID governor with three adjustable parameters, which are difficult to tune. In order to optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, owing to its good global optimization capability. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and suppresses overshoot.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
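For reference, the spectral angle used as the quality assessment function can be computed per pixel as in the sketch below (the example spectra are synthetic).

```python
import numpy as np

def spectral_angle(ref, test):
    """Spectral angle (radians) between an original and a reconstructed pixel
    spectrum; a per-image quality measure is typically the mean or maximum
    angle over all pixels."""
    ref = np.asarray(ref, float)
    test = np.asarray(test, float)
    cosang = np.dot(ref, test) / (np.linalg.norm(ref) * np.linalg.norm(test) + 1e-12)
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

original = np.array([0.21, 0.35, 0.48, 0.52, 0.40])
decoded = np.array([0.22, 0.34, 0.47, 0.53, 0.41])
print(round(spectral_angle(original, decoded), 4))   # small angle = high fidelity
```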
NASA Astrophysics Data System (ADS)
Smits, K. M.; Drumheller, Z. W.; Lee, J. H.; Illangasekare, T. H.; Regnery, J.; Kitanidis, P. K.
2015-12-01
Aquifers around the world show troubling signs of irreversible depletion and seawater intrusion as climate change, population growth, and urbanization lead to reduced natural recharge rates and overuse. Scientists and engineers have begun to revisit the technology of managed aquifer recharge and recovery (MAR) as a means to increase the reliability of the diminishing and increasingly variable groundwater supply. Unfortunately, MAR systems remain fraught with operational challenges related to the quality and quantity of recharged and recovered water, stemming from a lack of data-driven, real-time control. This research seeks to develop and validate a general simulation-based control optimization algorithm that relies on real-time data collected through embedded sensors and can be used to ease the operational challenges of MAR facilities. Experiments to validate the control algorithm were conducted at the laboratory scale in a two-dimensional synthetic aquifer under both homogeneous and heterogeneous packing configurations. The synthetic aquifer used well-characterized technical sands and the electrical conductivity signal of an inorganic conservative tracer as a surrogate measure for water quality. The synthetic aquifer was outfitted with an array of sensors and an autonomous pumping system. Experimental results verified the feasibility of the approach and suggested that the system can improve the operation of MAR facilities. The dynamic parameter inversion reduced the average error between the simulated and observed pressures by 12.5 to 71.4%. The control optimization algorithm ran smoothly and generated optimal control decisions. Overall, results suggest that with some improvements to the inversion and interpolation algorithms, which can be further advanced through testing with laboratory experiments using sensors, the concept can successfully improve the operation of MAR facilities.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky
2009-01-01
This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R(sub i) for all channels. This new approach allows for the generation of more accurate values of R(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of AIRS Version 4, Version 5, and Version 5 AO for a single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.
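The threshold-based Quality Control described above can be illustrated with a minimal sketch; this is not the Version 5 code, and the error values and threshold below are assumptions for demonstration only.

```python
import numpy as np

def qc_flag(error_estimate, threshold):
    """Return True where a retrieved quantity passes QC (error estimate below threshold)."""
    return np.asarray(error_estimate, dtype=float) < threshold

# Hypothetical per-level temperature error estimates (K) and a 1.5 K acceptance threshold.
errors = np.array([0.6, 0.9, 1.2, 1.8, 2.4])
print(qc_flag(errors, threshold=1.5))   # [ True  True  True False False]
```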
NASA Astrophysics Data System (ADS)
Zalogin, Stanislav M.; Zalogin, M. S.
1997-02-01
The construction of a control algorithm for an OEST that follows the information track of an optical record carrier, based on the use of accelerations, is considered. Such control algorithms give the designed system adaptability and low sensitivity to changes in system parameters and to disturbing forces, which is an advantage for information carriers operated in harsh climatic conditions as well as under misadjustment, workpiece wear and changes of friction in the system. The dynamic characteristics of a closed-loop OEST are investigated, and it is shown that the designed stable system with the given quality indices is a high-precision one. The recommendations for the design of control algorithm parameters are validated and confirmed by the results of mathematical simulation of the controlled processes. The proposed methods for OEST synthesis on the basis of the control acceleration principle can be recommended for use in the industrial production of optical information record carriers.
[GIS and scenario analysis aid to water pollution control planning of river basin].
Wang, Shao-ping; Cheng, Sheng-tong; Jia, Hai-feng; Ou, Zhi-dan; Tan, Bin
2004-07-01
The forward and backward algorithms for watershed water pollution control planning are summarized in this paper, together with their advantages and shortcomings. The spatial databases of water environmental function regions, pollution sources, monitoring sections and sewer outlets were built with ARCGIS 8.1 as the platform in the case study of the Ganjiang valley, Jiangxi province. Based on the principles of the forward algorithm, four scenarios were designed for watershed pollution control. Under these scenarios, ten sets of planning schemes were generated to implement cascade pollution source control. The investment costs of sewage treatment for these schemes were estimated by means of a series of cost-effectiveness functions; with pollution source prediction, the water quality was modeled with a CSTR model for each planning scheme. The modeled results of the different planning schemes were visualized through GIS to aid decision-making. With the investment cost and water quality attainment results as decision-making criteria, and based on an analysis of the economically endurable capacity for water pollution control in the Ganjiang river basin, two optimized schemes were proposed. The research shows that GIS technology and scenario analysis can provide good guidance on the synthesis, integrity and sustainability aspects of river basin water quality planning.
Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2008-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains two significant improvements over Version 4: 1) Improved physics allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profile T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of cloud cleared radiances Ri. This approach allows for the generation of accurate values of Ri and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as channel-by-channel error estimates for Ri. These error estimates are used for quality control of the retrieved products. We have conducted forecast impact experiments assimilating AIRS temperature profiles with different levels of quality control using the NASA GEOS-5 data assimilation system. Assimilation of quality controlled T(p) resulted in significantly improved forecast skill compared to that obtained from analyses in which all data used operationally by NCEP, except for AIRS data, were assimilated. We also conducted an experiment assimilating AIRS radiances uncontaminated by clouds, as done operationally by ECMWF and NCEP. Forecasts resulting from assimilating AIRS radiances were of poorer quality than those obtained by assimilating AIRS temperatures.
Liu, Zhao; Zheng, Chaorong; Wu, Yue
2017-09-01
Wind profilers have been widely adopted to observe wind field information in the atmosphere for different purposes, but the accuracy of their observations is limited by various noises and disturbances and hence needs to be further improved. In this paper, data measured under strong wind conditions using a 1290-MHz boundary layer profiler (BLP) are quality controlled via a composite quality control (QC) procedure proposed by the authors. Then, through comparison with data measured by radiosonde flights (balloon observations), the critical thresholds in the composite QC procedure, including the consensus average threshold T1 and the vertical shear threshold T3, are systematically discussed, and the performance of the BLP operated under precipitation is also evaluated. It is found that, to ensure high accuracy and a high data collectable rate, the optimal range of subsets is 4 m/s. Although the number of data rejected by the combined algorithm of vertical shear examination and small median test is quite limited, the algorithm proves quite useful for recognizing outliers with large discrepancies, and the optimal wind shear threshold T3 is recommended as 5 m/s per 100 m. During patchy precipitation, the quality of data measured by the four oblique beams (using the DBS measuring technique) can still be ensured. After the BLP data are quality controlled by the composite QC procedure, the output shows good agreement with the balloon observations.
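A minimal sketch of a vertical-shear check in the spirit of the composite QC procedure follows; the 5 m/s per 100 m threshold is the value recommended above, while the gate spacing and wind values are made-up assumptions.

```python
import numpy as np

def shear_outliers(heights_m, wind_speed_ms, threshold_per_100m=5.0):
    """Flag range gates whose shear with the gate below exceeds the threshold."""
    heights_m = np.asarray(heights_m, dtype=float)
    wind_speed_ms = np.asarray(wind_speed_ms, dtype=float)
    shear = np.abs(np.diff(wind_speed_ms)) / (np.diff(heights_m) / 100.0)
    flags = np.zeros(wind_speed_ms.size, dtype=bool)
    flags[1:] = shear > threshold_per_100m   # flag the upper gate of each suspect pair
    return flags

heights = np.array([100.0, 200.0, 300.0, 400.0])
speeds = np.array([8.0, 9.5, 17.0, 18.0])     # a 7.5 m/s jump over 100 m
print(shear_outliers(heights, speeds))        # [False False  True False]
```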
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
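The jet-selection formulation described above can be sketched as a small linear program: minimize total firing time (a proxy for fuel) subject to reproducing the requested rate change. The sketch below uses SciPy's linprog rather than the flight revised-Simplex code, and the jet acceleration vectors and rate request are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Columns are per-jet angular acceleration vectors (rad/s^2) about roll, pitch, yaw.
# These numbers are illustrative, not Shuttle data.
A = np.array([[ 0.02, -0.02,  0.00,  0.00],
              [ 0.00,  0.00,  0.03, -0.03],
              [ 0.01,  0.01, -0.01, -0.01]])
rate_change_request = np.array([0.004, 0.006, 0.0])   # rad/s

# Minimize the sum of firing times t_j >= 0 such that A @ t equals the rate change request.
result = linprog(c=np.ones(A.shape[1]), A_eq=A, b_eq=rate_change_request,
                 bounds=[(0, None)] * A.shape[1], method="highs")
print(result.x)   # firing times per jet (seconds)
```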
New Features of the Collection 4 MODIS LAI and FPAR Product
NASA Astrophysics Data System (ADS)
Bin, T.; Yang, W.; Dong, H.; Shabanov, N.; Knyazikhin, Y.; Myneni, R.
2003-12-01
An algorithm based on the physics of radiative transfer in vegetation canopies for the retrieval of vegetation green leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) from MODIS surface reflectance data was developed, prototyped, and has been in operational production at NASA computing facilities since June 2000. This poster highlights recent changes in the operational MODIS LAI and FPAR algorithm introduced for collection 4 data reprocessing. The changes to the algorithm are targeted to improve agreement of retrieved LAI and FPAR with corresponding field measurements, improve consistency of Quality Control (QC) definitions, and fix miscellaneous bugs, as summarized below. * Improvement of LUTs for the main and back-up algorithms for biomes 1 and 3. Benefits: a) increase in quality of retrievals; b) non-physical peaks in the global LAI distribution have been removed; c) improved agreement with field measurements. * Improved QA scheme. Benefits: a) consistency between MODLAND and SCF quality flags has been achieved; b) ambiguity in QA has been resolved. * New 8-day compositing scheme. Benefits: a) compositing over best quality retrievals, instead of all retrievals; b) lower LAI values, decreased saturation, and fewer pixels generated by the back-up algorithm. * At-launch static IGBP land cover, input to the LAI/FPAR algorithm, was replaced with the MODIS land cover map. Benefits: a) crosswalking of the 17-class IGBP scheme to the 6-biome LC has been eliminated; b) uncertainties in the MODIS LAI/FPAR product due to uncertainties in the land cover map have been reduced.
Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection
NASA Astrophysics Data System (ADS)
Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan
2018-03-01
X-ray imaging is an important tool for quality control since it allows the interior of products to be inspected in a non-destructive way. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at a minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed where the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high quality reconstructions of 400 × 400 pixels within 200 ms, thereby meeting the high throughput criteria.
A new normalizing algorithm for BAC CGH arrays with quality control metrics.
Miecznikowski, Jeffrey C; Gaile, Daniel P; Liu, Song; Shepherd, Lori; Nowak, Norma
2011-01-01
The main focus in pin-tip (or print-tip) microarray analysis is determining which probes, genes, or oligonucleotides are differentially expressed. Specifically, in array comparative genomic hybridization (aCGH) experiments, researchers search for chromosomal imbalances in the genome. To model these data, scientists apply statistical methods to the structure of the experiment and assume that the data consist of the signal plus random noise. In this paper we propose "SmoothArray", a new method to preprocess comparative genomic hybridization (CGH) bacterial artificial chromosome (BAC) arrays, and we show the effects on a cancer dataset. As part of our R software package "aCGHplus," this freely available algorithm removes the variation due to intensity effects, pin/print-tip, the spatial location on the microarray chip, and the relative location from the well plate. Removal of this variation improves the downstream analysis and subsequent inferences made on the data. Further, we present measures to evaluate the quality of the dataset according to the arrayer pins, 384-well plates, plate rows, and plate columns. We compare our method against competing methods using several metrics to measure the biological signal. With this novel normalization algorithm and quality control measures, the user can improve their inferences on datasets and pinpoint problems that may arise in their BAC aCGH technology.
[A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].
Zhao, An; Wu, Baoming
2006-12-01
This paper presents an ECG compression algorithm based on wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside of the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are important coefficients and are kept. Otherwise, the energy loss of the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and then the threshold of the coefficients outside of the ROI is determined according to the loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients that are larger than the threshold outside of the ROI, are put into a linear quantifier. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding has been applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained.
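A minimal sketch of the outside-ROI thresholding idea, using the PyWavelets package, is given below. It is not the authors' coder: the wavelet choice, the ROI mask handling, and the simple keep-a-fraction rule (standing in for the PRDBE energy-budget rule) are simplifying assumptions.

```python
import numpy as np
import pywt

def threshold_outside_roi(signal, roi_mask, wavelet="db4", level=4, keep_fraction=0.05):
    """Keep all coefficients aligned with the ROI; outside the ROI keep only the
    largest-magnitude coefficients up to a crude per-band budget."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)   # mean removal
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]]                                            # keep the approximation band
    for detail in coeffs[1:]:
        # Resample the ROI mask to this detail band's length (rough alignment).
        band_roi = np.interp(np.linspace(0, 1, detail.size),
                             np.linspace(0, 1, roi_mask.size), roi_mask) > 0.5
        kept = np.where(band_roi, detail, 0.0)
        outside = np.where(~band_roi, detail, 0.0)
        n_keep = max(1, int(keep_fraction * detail.size))
        idx = np.argsort(np.abs(outside))[-n_keep:]              # largest outside-ROI coefficients
        kept[idx] = detail[idx]
        out.append(kept)
    return pywt.waverec(out, wavelet)

# Hypothetical usage: a synthetic beat-like signal with the QRS region marked as ROI.
x = np.sin(np.linspace(0, 6 * np.pi, 1024)) + 0.05 * np.random.randn(1024)
roi = np.zeros(1024); roi[480:560] = 1.0
reconstructed = threshold_outside_roi(x, roi)
```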
NASA Technical Reports Server (NTRS)
Sargent, Jeff Scott
1988-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
Production scheduling and rescheduling with genetic algorithms.
Bierwirth, C; Mattfeld, D C
1999-01-01
A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.
3D measurement by digital photogrammetry
NASA Astrophysics Data System (ADS)
Schneider, Carl T.
1993-12-01
Photogrammetry is well known in geodetic surveys as aerial photogrammetry, or in close-range applications such as architectural photogrammetry. Photogrammetric methods and algorithms, combined with digital cameras and digital image processing methods, are now being introduced for industrial applications such as automation and quality control. This paper describes the photogrammetric and digital image processing algorithms and the calibration methods. These algorithms and methods are demonstrated with application examples: a digital photogrammetric workstation as a mobile multi-purpose 3D measuring tool, and a tube measuring system as an example of a single-purpose tool.
2017-01-01
Computational scientists have designed many useful algorithms by exploring a biological process or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by the change of matter state, we propose a novel optimization algorithm called the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, the fluid stage and the solid stage. The algorithm carries out the strategy of integrating global exploration with local exploitation in the fluid stage, while local exploitation is carried out mainly in the solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters. Therefore, the data-driven mechanism is designed to obtain better control parameters and ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural networks are examined. The experimental results show that CPDD is competitive with respect to eight other state-of-the-art intelligent optimization algorithms. PMID:28761438
Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.
2013-01-01
New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
NASA Astrophysics Data System (ADS)
Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko
We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limitation on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. This makes it possible to evaluate and use only the reliable features among the trained ones during each authentication, achieving high recognition performance. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a constantly high recognition rate independent of face image quality, with an EER (Equal Error Rate) about four times lower under a variety of image conditions than a system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which shows worse but constant performance because of its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions but degrades to PCA-level performance under poor image conditions.
Grid Integration of Single Stage Solar PV System using Three-level Voltage Source Converter
NASA Astrophysics Data System (ADS)
Hussain, Ikhlaq; Kandpal, Maulik; Singh, Bhim
2016-08-01
This paper presents a single-stage solar PV (photovoltaic) grid-integrated power generating system using a three-level voltage source converter (VSC) operating at a low switching frequency of 900 Hz with a robust synchronizing phase-locked loop (RS-PLL) based control algorithm. To track the maximum power from the solar PV array, an incremental conductance algorithm is used, and this maximum power is fed to the grid via the three-level VSC. The use of a single-stage system with a three-level VSC offers the advantages of low switching losses and operation at high voltages and high power, which results in enhanced power quality in the proposed system. Simulation results validate the design and the control algorithm under steady-state and dynamic conditions.
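For reference, here is a minimal sketch of the textbook incremental conductance MPPT decision rule mentioned above; it is not the paper's controller, and the voltage step size and sample values are assumptions.

```python
def incremental_conductance_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """Return an updated PV voltage reference using the incremental conductance rule:
    at the maximum power point dI/dV = -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += step          # irradiance increased: move the reference up
        elif di < 0:
            v_ref -= step
    else:
        if di / dv > -i / v:       # left of the MPP: increase voltage
            v_ref += step
        elif di / dv < -i / v:     # right of the MPP: decrease voltage
            v_ref -= step
    return v_ref                   # unchanged when dI/dV == -I/V (at the MPP)

# Hypothetical sample: previous operating point (300 V, 8.0 A), new point (302 V, 7.9 A).
print(incremental_conductance_step(302.0, 7.9, 300.0, 8.0, v_ref=302.0))
```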
A model for rotorcraft flying qualities studies
NASA Technical Reports Server (NTRS)
Mittal, Manoj; Costello, Mark F.
1993-01-01
This paper outlines the development of a mathematical model that is expected to be useful for rotorcraft flying qualities research. A computer model is presented that can be applied to a range of different rotorcraft configurations. The algorithm computes vehicle trim and a linear state-space model of the aircraft. The trim algorithm uses nonlinear optimization theory to solve the nonlinear algebraic trim equations. The linear aircraft equations consist of an airframe model and a flight control system dynamic model. The airframe model includes coupled rotor and fuselage rigid body dynamics and aerodynamics. The aerodynamic model for the rotors utilizes blade element theory and a three-state dynamic inflow model. Aerodynamics of the fuselage and empennage are included. The linear state-space description for the flight control system is developed using standard block diagram data.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of the human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan-tilt angles at capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
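To illustrate the kind of capture-quality objective described above, here is a sketch with made-up weights and score terms; it is not the paper's exact function, only an example of combining distance, pan-tilt angle, face visibility, and per-subject fairness into one score.

```python
import numpy as np

def capture_score(distance_m, pan_deg, tilt_deg, face_visibility, n_previous_captures,
                  w=(0.4, 0.2, 0.2, 0.2), preferred_distance_m=8.0):
    """Higher is better: favors near-preferred distance, small pan/tilt angles,
    visible faces, and subjects that have been captured fewer times."""
    distance_term = np.exp(-abs(distance_m - preferred_distance_m) / preferred_distance_m)
    angle_term = np.cos(np.radians(pan_deg)) * np.cos(np.radians(tilt_deg))
    fairness_term = 1.0 / (1.0 + n_previous_captures)     # balances captures per subject
    return (w[0] * distance_term + w[1] * max(angle_term, 0.0)
            + w[2] * face_visibility + w[3] * fairness_term)

# Hypothetical comparison of two candidate subjects for one PTZ camera.
print(capture_score(9.0, 10.0, 5.0, face_visibility=0.9, n_previous_captures=0))
print(capture_score(20.0, 60.0, 15.0, face_visibility=0.4, n_previous_captures=3))
```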
An In-Process Surface Roughness Recognition System in End Milling Operations
ERIC Educational Resources Information Center
Yang, Lieh-Dai; Chen, Joseph C.
2004-01-01
To develop an in-process quality control system, a sensor technique and a decision-making algorithm need to be applied during machining operations. Several sensor techniques have been used in the in-process prediction of quality characteristics in machining operations. For example, an accelerometer sensor can be used to monitor the vibration of…
Multisensor benchmark data for riot control
NASA Astrophysics Data System (ADS)
Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter
2008-10-01
Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proved. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kegel, T.M.
Calibration laboratories are faced with the need to become accredited or registered to one or more quality standards. One requirement common to all of these standards is the need to have in place a measurement assurance program. What is a measurement assurance program? Brian Belanger, in Measurement Assurance Programs: Part 1, describes it as a "quality assurance program for a measurement process that quantifies the total uncertainty of the measurements (both random and systematic components of error) with respect to national or designated standards and demonstrates that the total uncertainty is sufficiently small to meet the user's requirements." Rolf Schumacher is more specific in Measurement Assurance in Your Own Laboratory. He states, "Measurement assurance is the application of broad quality control principles to measurements of calibrations." Here, the focus is on one important part of any measurement assurance program: implementation of statistical process control (SPC). Paraphrasing Juran's Quality Control Handbook, a process is in statistical control if the only observed variations are those that can be attributed to random causes. Conversely, a process that exhibits variations due to assignable causes is not in a state of statistical control. Finally, Carrol Croarkin states, "In the measurement assurance context the measurement algorithm including instrumentation, reference standards and operator interactions is the process that is to be controlled, and its direct product is the measurement per se. The measurements are assumed to be valid if the measurement algorithm is operating in a state of control." Implicit in this statement is the important fact that an out-of-control process cannot produce valid measurements. 7 figs.
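As a simple illustration of applying SPC to a calibration measurement process, the sketch below uses the usual textbook 3-sigma Shewhart limits; the check-standard values and the rule itself are generic conventions, not material taken from the article.

```python
import numpy as np

def control_limits(history):
    """Shewhart-style limits from an in-control history of check-standard measurements."""
    history = np.asarray(history, dtype=float)
    center = history.mean()
    sigma = history.std(ddof=1)
    return center - 3 * sigma, center, center + 3 * sigma

def in_control(value, history):
    lcl, _, ucl = control_limits(history)
    return lcl <= value <= ucl

# Hypothetical check-standard readings from a calibration process.
baseline = [10.01, 9.98, 10.02, 10.00, 9.99, 10.01, 10.00, 9.98]
print(in_control(10.01, baseline))   # True: variation attributable to random causes
print(in_control(10.20, baseline))   # False: an assignable cause is suspected
```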
A Machine Vision Quality Control System for Industrial Acrylic Fibre Production
NASA Astrophysics Data System (ADS)
Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João
2002-12-01
This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under a contract with a leading industrial manufacturer of acrylic fibres. It monitors several parameters of the acrylic production process. This paper presents, after a brief overview of the system, a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some of the results of online operation are also presented.
Modal control theory and application to aircraft lateral handling qualities design
NASA Technical Reports Server (NTRS)
Srinathkumar, S.
1978-01-01
A multivariable synthesis procedure based on eigenvalue/eigenvector assignment is reviewed and is employed to develop a systematic design procedure to meet the lateral handling qualities design objectives of a fighter aircraft over a wide range of flight conditions. The closed loop modal characterization developed provides significant insight into the design process and plays a pivotal role in the synthesis of robust feedback systems. The simplicity of the synthesis algorithm yields an efficient computer aided interactive design tool for flight control system synthesis.
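A minimal sketch of the eigenvalue-assignment part of such a synthesis is shown below using SciPy's pole placement; the state and input matrices and the desired modes are made up for illustration, and the eigenvector-assignment aspect of the procedure described above is not reproduced here.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative two-state, two-input lateral dynamics (not aircraft data): x_dot = A x + B u.
A = np.array([[-0.5, 1.0],
              [-2.0, -1.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])

desired_poles = np.array([-2.0, -3.0])       # closed-loop modes chosen for handling qualities
fsf = place_poles(A, B, desired_poles)
K = fsf.gain_matrix                          # state-feedback gain so that eig(A - B K) = desired poles
print(K)
print(np.linalg.eigvals(A - B @ K))
```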
NASA Astrophysics Data System (ADS)
Krishnaswami, Venkataraman; De Luca, Giulia M. R.; Breedijk, Ronald M. P.; Van Noorden, Cornelis J. F.; Manders, Erik M. M.; Hoebe, Ron A.
2017-02-01
Fluorescence microscopy is an important tool in biomedical imaging. An inherent trade-off lies between image quality and photodamage. Recently, we have introduced rescan confocal microscopy (RCM) that improves the lateral resolution of a confocal microscope down to 170 nm. Previously, we have demonstrated that with controlled-light exposure microscopy, spatial control of illumination reduces photodamage without compromising image quality. Here, we show that the combination of these two techniques leads to high resolution imaging with reduced photodamage without compromising image quality. Implementation of spatially-controlled illumination was carried out in RCM using a line scanning-based approach. Illumination is spatially-controlled for every line during imaging with the help of a prediction algorithm that estimates the spatial profile of the fluorescent specimen. The estimation is based on the information available from previously acquired line images. As a proof-of-principle, we show images of N1E-115 neuroblastoma cells, obtained by this new setup with reduced illumination dose, improved resolution and without compromising image quality.
SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Bri-Mathias; Palmintier, Bryan
This presentation provides an overview of full-scale, high-quality, synthetic distribution system data set(s) for testing distribution automation algorithms, distributed control approaches, ADMS capabilities, and other emerging distribution technologies.
Adaptive Neural Network Algorithm for Power Control in Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Masri Husam Fayiz, Al
2017-01-01
The aim of this paper is to design, test and evaluate a prototype of an adaptive neural network algorithm for the power controlling system of a nuclear power plant. The task of power control in nuclear reactors is one of the fundamental tasks in this field. Therefore, research is constantly being conducted to improve the reactor power control process. Currently, in the Department of Automation in the National Research Nuclear University (NRNU) MEPhI, numerous studies are utilizing various methodologies of artificial intelligence (expert systems, neural networks, fuzzy systems and genetic algorithms) to enhance the performance, safety, efficiency and reliability of nuclear power plants. In particular, a study of an adaptive artificial intelligent power regulator in the control systems of nuclear power reactors is being undertaken to enhance performance and to minimize the output error of the Automatic Power Controller (APC) on the basis of a multifunctional computer analyzer (simulator) of the Water-Water Energetic Reactor, known as Vodo-Vodyanoi Energetichesky Reaktor (VVER) in Russian. In this paper, a block diagram of an adaptive reactor power controller is built on the basis of an intelligent control algorithm. When implementing intelligent neural network principles, it is possible to improve the quality and dynamics of any control system in accordance with the principles of adaptive control. An adaptive control system permits adjustment of the controller's parameters according to changes in the characteristics of the controlled object or external disturbances. In this project, it is demonstrated that a propitious option for an automatic power controller in nuclear power plants is a control system constructed on intelligent neural network algorithms.
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
Integrated identification, modeling and control with applications
NASA Astrophysics Data System (ADS)
Shi, Guojun
This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather, a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed loop stability. A closed loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and the desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.
An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China.
Zou, Hui; Zou, Zhihong; Wang, Xiaojing
2015-11-12
The increasing volume and complexity of data caused by uncertain environments are today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means clustering algorithm to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids cases in which the iteration margin cannot be calculated. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied to water quality analysis of the Haihe River (China) using data obtained from the monitoring network over a period of eight years (2006-2013), with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality.
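A minimal sketch of K-means with per-indicator weights in the spirit of the approach described is given below; the weight values, indicator names, and data are illustrative, and the MIWAS weight-selection step itself is not reproduced.

```python
import numpy as np

def weighted_kmeans(X, weights, k, n_iter=100, seed=0):
    """K-means where each feature (water quality indicator) carries a fixed weight."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))   # weighting is equivalent to scaling features
    Xw = X * w
    centers = Xw[rng.choice(len(Xw), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((Xw[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new_centers = np.array([Xw[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels

# Hypothetical monitoring samples with four indicators (e.g. DO, COD, NH3-N, pH) and weights.
X = np.array([[7.5, 15.0, 0.4, 7.8], [2.1, 45.0, 3.2, 7.1],
              [6.9, 18.0, 0.6, 7.9], [1.8, 52.0, 4.0, 6.9]])
print(weighted_kmeans(X, weights=[0.3, 0.3, 0.3, 0.1], k=2))
```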
Extracting DNA words based on the sequence features: non-uniform distribution and integrity.
Li, Zhi; Cao, Hongyan; Cui, Yuehua; Zhang, Yanbo
2016-01-25
DNA sequence can be viewed as an unknown language with words as its functional units. Given that most sequence alignment algorithms, such as motif discovery algorithms, depend on the quality of background information about sequences, it is necessary to develop an ab initio algorithm for extracting the "words" based only on the DNA sequences. We considered non-uniform distribution and integrity to be two important features of a word, and on this basis we developed an ab initio algorithm to extract "DNA words" that have potential functional meaning. A Kolmogorov-Smirnov test was used to test the uniformity of the distribution of word positions along the DNA sequences, and integrity was judged by sequence and position alignment. Two random base sequences were adopted as negative controls, and an English book was used as a positive control to verify our algorithm. We applied our algorithm to the genomes of Saccharomyces cerevisiae and 10 strains of Escherichia coli to show the utility of the methods. The results provide strong evidence that the algorithm is a promising tool for building a DNA dictionary ab initio. Our method provides a fast way for large-scale screening of important DNA elements and offers potential insights into the understanding of a genome.
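The uniformity check can be sketched with a one-sample Kolmogorov-Smirnov test on the occurrence positions of a candidate word along a sequence, using SciPy's kstest; the positions, sequence length, and significance level below are made-up assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import kstest

def non_uniform(positions, seq_length, alpha=0.01):
    """Reject uniformity if the word's occurrence positions deviate from a uniform law."""
    scaled = np.sort(np.asarray(positions, dtype=float)) / seq_length
    statistic, p_value = kstest(scaled, "uniform")
    return p_value < alpha, p_value

# Hypothetical occurrence positions of a candidate word in a 100 kb sequence,
# clustered in two regions and therefore clearly non-uniform.
clustered = np.concatenate([np.random.default_rng(1).uniform(10_000, 20_000, 50),
                            np.random.default_rng(2).uniform(80_000, 85_000, 30)])
print(non_uniform(clustered, seq_length=100_000))
```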
Improving Forecast Skill by Assimilation of Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel; Reale, Oreste
2009-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains two significant improvements over Version 4: 1) Improved physics allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profile T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of cloud cleared radiances R(sub i). This approach allows for the generation of accurate values of R(sub i) and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as channel-by-channel error estimates for R(sub i). These error estimates are used for Quality Control of the retrieved products. We have conducted forecast impact experiments assimilating AIRS temperature profiles with different levels of Quality Control using the NASA GEOS-5 data assimilation system. Assimilation of Quality Controlled T(p) resulted in significantly improved forecast skill compared to that obtained from analyses in which all data used operationally by NCEP, except for AIRS data, were assimilated. We also conducted an experiment assimilating AIRS radiances uncontaminated by clouds, as done operationally by ECMWF and NCEP. Forecasts resulting from assimilating AIRS radiances were of poorer quality than those obtained by assimilating AIRS temperatures.
Diverse Planning for UAV Control and Remote Sensing
Tožička, Jan; Komenda, Antonín
2016-01-01
Unmanned aerial vehicles (UAVs) are suited to various remote sensing missions, such as measuring air quality. The conventional method of UAV control is by human operators. Such an approach is limited by the ability of the operators to cooperate when controlling larger fleets of UAVs in a shared area. The remedy for this is to increase the autonomy of the UAVs in planning their trajectories by considering other UAVs and their plans. To provide such improvement in autonomy, we need better algorithms for generating alternative trajectory variants that the UAV coordination algorithms can utilize. In this article, we define a novel family of multi-UAV sensing problems, solving the allocation of a huge number of tasks (tens of thousands) to a group of configurable UAVs carrying sensors of non-zero weight (including air quality measurement), together with two baseline solvers. To solve the problem efficiently, we use an algorithm for diverse trajectory generation and integrate it with a solver for the multi-UAV coordination problem. Finally, we experimentally evaluate the multi-UAV sensing problem solver. The evaluation is done on synthetic and real-world-inspired benchmarks in a multi-UAV simulator. Results show that diverse planning is a valuable method for remote sensing applications containing multiple UAVs. PMID:28009831
Towards Internet QoS Provisioning Based on Generic Distributed QoS Adaptive Routing Engine
Haikal, Amira Y.; Badawy, M.; Ali, Hesham A.
2014-01-01
Increasing efficiency and quality demands of modern Internet technologies drive today's network engineers to seek to provide quality of service (QoS). Internet QoS provisioning gives rise to several challenging issues. This paper introduces a generic distributed QoS adaptive routing engine (DQARE) architecture based on OSPFxQoS. The innovation of the proposed work is its independence from the underlying QoS architectures and, moreover, the separation of the control strategy from the data forwarding mechanisms, which guarantees a set of absolutely stable mechanisms on top of which Internet QoS can be built. The DQARE architecture is furnished with three relevant traffic control schemes, namely, service differentiation, QoS routing, and traffic engineering. The main objectives of this paper are to (i) provide a general configuration guideline for service differentiation, (ii) formalize the theoretical properties of different QoS routing algorithms and then introduce a QoS routing algorithm (QOPRA) based on a dynamic programming technique, and (iii) propose a QoS multipath forwarding (QMPF) model for path diversity exploitation. NS2-based simulations proved the superiority of DQARE in terms of delay, packet delivery ratio, throughput, and control overhead. Moreover, extensive simulations are used to compare the proposed QOPRA algorithm and QMPF model with their counterparts in the literature. PMID:25309955
Elaziz, Mohamed Abd; Hemdan, Ahmed Monem; Hassanien, AboulElla; Oliva, Diego; Xiong, Shengwu
2017-09-07
The current economics of the fish protein industry demand rapid, accurate and expressive prediction algorithms at every step of protein production, especially with the challenge of global climate change. This helps to predict and analyze functional and nutritional quality and consequently to control food allergies in hyper-allergic patients. It is quite expensive and time-consuming to determine these concentrations through laboratory experiments, especially in large-scale projects. Therefore, this paper introduces a new intelligent algorithm using an adaptive neuro-fuzzy inference system based on the whale optimization algorithm. This algorithm is used to predict the concentration levels of bioactive amino acids in fish protein hydrolysates at different times during the year. The whale optimization algorithm is used to determine the optimal parameters of the adaptive neuro-fuzzy inference system. The results of the proposed algorithm are compared with those of other methods and indicate the higher performance of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Drumheller, Z. W.; Regnery, J.; Lee, J. H.; Illangasekare, T. H.; Kitanidis, P. K.; Smits, K. M.
2014-12-01
Aquifers around the world show troubling signs of irreversible depletion and seawater intrusion as climate change, population growth, and urbanization lead to reduced natural recharge rates and overuse. Scientists and engineers have begun to re-investigate the technology of managed aquifer recharge and recovery (MAR) as a means to increase the reliability of the diminishing and increasingly variable groundwater supply. MAR systems offer the possibility of naturally increasing groundwater storage while improving the quality of impaired water used for recharge. Unfortunately, MAR systems remain fraught with operational challenges related to the quality and quantity of recharged and recovered water stemming from a lack of data-driven, real-time control. Our project seeks to ease the operational challenges of MAR facilities through the implementation of active sensor networks, adaptively calibrated flow and transport models, and simulation-based meta-heuristic control optimization methods. The developed system works by continually collecting hydraulic and water quality data from a sensor network embedded within the aquifer. The data are fed into an inversion algorithm, which calibrates the parameters and initial conditions of a predictive flow and transport model. The calibrated model is passed to a meta-heuristic control optimization algorithm (e.g. a genetic algorithm) to execute the simulations and determine the best course of action, i.e., the optimal pumping policy for current aquifer conditions. The optimal pumping policy is then applied manually or autonomously. During operation, sensor data are used to assess the accuracy of the optimal prediction and augment the pumping strategy as needed. At laboratory scale, a small (18"H x 46"L) and an intermediate (6'H x 16'L) two-dimensional synthetic aquifer were constructed and outfitted with sensor networks. Data collection and model inversion components were developed, and sensor data were validated by analytical measurements.
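The monitor-invert-optimize cycle described above can be outlined as a control loop skeleton. This is only a structural sketch with stub functions: calibrate_model, optimize_pumping, apply_policy, and the sensor payload are hypothetical names, not the project's code.

```python
def calibrate_model(model, sensor_data):
    """Stub: invert sensor data to update model parameters and initial conditions."""
    return model

def optimize_pumping(model, horizon_hours=24):
    """Stub: meta-heuristic search (e.g. a genetic algorithm) for the best pumping policy."""
    return {"recharge_rate": 1.0, "recovery_rate": 0.5}

def apply_policy(policy):
    """Stub: send set points to the autonomous pumping system."""
    print("applying", policy)

def mar_control_loop(model, read_sensors, n_cycles=3):
    for _ in range(n_cycles):
        data = read_sensors()                      # hydraulic heads + EC (water quality surrogate)
        model = calibrate_model(model, data)       # dynamic parameter inversion
        policy = optimize_pumping(model)           # simulation-based optimization
        apply_policy(policy)                       # manual or autonomous actuation

mar_control_loop(model={}, read_sensors=lambda: {"head": [1.2, 1.1], "ec": [450, 520]})
```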
Navigation Algorithms for the SeaWiFS Mission
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)
2002-01-01
The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy, a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking. Although further improvements are certainly possible, future evolution of the algorithms is expected to be limited to refinements of the methods presented here, and no substantial changes are anticipated.
Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems
NASA Technical Reports Server (NTRS)
Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy
2007-01-01
High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10(exp -10) for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region of the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.
A novel symbiotic organisms search algorithm for congestion management in deregulated environment
NASA Astrophysics Data System (ADS)
Verma, Sumit; Saha, Subhodip; Mukherjee, V.
2017-01-01
In today's competitive electricity market, managing transmission congestion in deregulated power systems has created challenges for independent system operators to operate the transmission lines reliably within their limits. This paper proposes a new meta-heuristic algorithm, called the symbiotic organisms search (SOS) algorithm, for the congestion management (CM) problem in a pool-based electricity market by real power rescheduling of generators. Inspired by interactions among organisms in an ecosystem, SOS is a recent population-based algorithm which, unlike other algorithms, does not require any algorithm-specific control parameters. Various security constraints such as load bus voltage and line loading are taken into account while dealing with the CM problem. In this paper, the proposed SOS algorithm is applied to modified IEEE 30- and 57-bus test power systems for the solution of the CM problem. The results thus obtained are compared to those reported in the recent state-of-the-art literature. The efficacy of the proposed SOS algorithm for obtaining higher quality solutions is also established.
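A minimal sketch of the mutualism phase that characterizes the SOS algorithm is given below in its generic published form; it is not the paper's CM-specific implementation, and the quadratic objective is a stand-in for the rescheduling cost with security constraints.

```python
import numpy as np

def mutualism_phase(population, fitness, objective, rng):
    """One SOS mutualism pass: pairs of organisms move toward the best using a mutual vector."""
    n, dim = population.shape
    best = population[np.argmin(fitness)]
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        mutual = (population[i] + population[j]) / 2.0
        bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)         # benefit factors in {1, 2}
        cand_i = population[i] + rng.random(dim) * (best - mutual * bf1)
        cand_j = population[j] + rng.random(dim) * (best - mutual * bf2)
        for idx, cand in ((i, cand_i), (j, cand_j)):
            f = objective(cand)
            if f < fitness[idx]:                                   # greedy acceptance
                population[idx], fitness[idx] = cand, f
    return population, fitness

# Hypothetical usage on a simple quadratic stand-in for the generator rescheduling cost.
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(10, 4))
fit = np.array([np.sum(x ** 2) for x in pop])
pop, fit = mutualism_phase(pop, fit, lambda x: np.sum(x ** 2), rng)
print(fit.min())
```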
3D reconstruction from multi-view VHR-satellite images in MicMac
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur
2018-05-01
This work addresses the generation of high quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution) satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.
Automated real-time search and analysis algorithms for a non-contact 3D profiling system
NASA Astrophysics Data System (ADS)
Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.
2013-04-01
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant cost-saving opportunities in both equipment protection and waste minimization.
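The downhill simplex plus geometrical-template idea can be illustrated with a small sketch that fits a hypothetical circular template to a 2D profile slice using SciPy's Nelder-Mead implementation; the actual wire-geometry templates and multi-layer search procedure of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def circle_template_error(params, points):
    """Mean squared radial error of measured 2D profile points against a
    circular template parameterized by center (cx, cy) and radius r."""
    cx, cy, r = params
    d = np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r
    return float(np.mean(d**2))

def fit_circle_template(points, guess=(0.0, 0.0, 1.0)):
    """Downhill-simplex (Nelder-Mead) fit of the template to the point set."""
    res = minimize(circle_template_error, guess, args=(points,),
                   method="Nelder-Mead")
    return res.x, res.fun

# Toy usage with synthetic noisy points on a circle of radius 2.5
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)
pts = np.column_stack([2.5 * np.cos(theta) + 0.4,
                       2.5 * np.sin(theta) - 0.1]) + rng.normal(0, 0.01, (500, 2))
params, err = fit_circle_template(pts, guess=(0, 0, 2))
print(params, err)
```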
Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu
2015-05-01
A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant, playing a key role in maintaining the safe, stable, and economical operation of hydro-electrical installations. At present, the conventional PID controller is widely applied in the HTRS for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e. the determination of PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm, named adaptive grid particle swarm optimization (AGPSO), is applied to solve the PID gain tuning problem of the HTRS. Unlike a traditional single-objective optimization method, this AGPSO-based method is designed to take care of settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e. a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example associated with the best compromise solution for parameter tuning of the nonlinear HTRS is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, as compared with two other prominent multi-objective algorithms, the Non-dominated Sorting Genetic Algorithm II (NSGAII) and the Strength Pareto Evolutionary Algorithm II (SPEAII), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO-based approach outperforms the compared methods, with higher efficiency and better solution quality, whether the HTRS operates under unloaded or loaded conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
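The fuzzy-based selection of a best compromise solution from a Pareto set is commonly formulated with linear membership functions; the sketch below follows that standard formulation and is not necessarily the exact variant used in the paper.

```python
import numpy as np

def best_compromise(pareto_objectives):
    """Pick the best compromise solution from a Pareto front using the
    standard fuzzy membership formulation (smaller objectives are better).
    pareto_objectives: array of shape (n_solutions, n_objectives)."""
    F = np.asarray(pareto_objectives, dtype=float)
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    span = np.where(fmax > fmin, fmax - fmin, 1.0)       # avoid divide-by-zero
    mu = np.clip((fmax - F) / span, 0.0, 1.0)            # per-objective membership
    score = mu.sum(axis=1) / mu.sum()                    # normalized membership
    return int(np.argmax(score)), score

# Toy Pareto set: (settling time, overshoot) pairs, hypothetical values
front = [(3.2, 12.0), (4.0, 6.0), (5.5, 2.0)]
idx, scores = best_compromise(front)
print("best compromise:", front[idx], scores)
```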
NASA Astrophysics Data System (ADS)
Suchacka, Grazyna
2005-02-01
The paper concerns a new research area, Quality of Web Service (QoWS). The need for QoWS is motivated by the still growing number of Internet users, by the steady development and diversification of Web services, and especially by the popularization of e-commerce applications. The goal of the paper is a critical analysis of the literature concerning scheduling algorithms for e-commerce Web servers. The paper characterizes factors affecting the load of Web servers and discusses ways of improving their efficiency. Crucial QoWS requirements of the business Web server are identified: serving requests before their individual deadlines, supporting user session integrity, supporting different classes of users, and minimizing the number of rejected requests. It is argued that meeting these requirements and implementing them in an admission control (AC) and scheduling algorithm for the business Web server is crucial to the functioning of e-commerce Web sites and the revenue they generate. The paper presents the results of the literature analysis and discusses algorithms that implement these important QoWS requirements. The analysis showed that very few algorithms take into consideration the above-mentioned factors and that there is a need for designing an algorithm that implements them.
An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China
Zou, Hui; Zou, Zhihong; Wang, Xiaojing
2015-01-01
The increase in the volume and complexity of data caused by uncertain environments is today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means clustering algorithm to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids the problem of the iteration margin not being calculated in some cases. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied to water quality analysis of the Haihe River (China) data obtained by the monitoring network over a period of eight years (2006–2013) with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality. PMID:26569283
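A minimal sketch of K-means clustering with per-indicator weights, the core mechanism described above, is given below; the MIWAS weight self-adjustment step itself is not reproduced, and the weights in the example are hypothetical.

```python
import numpy as np

def weighted_kmeans(X, k, w, iters=100, seed=0):
    """K-means with per-feature weights w (illustrative, not MIWAS-K-means).
    Distances are computed as sum_j w_j * (x_j - c_j)^2."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # weighted squared distances of every sample to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy usage: 4 indicators, with the first two indicators weighted more heavily
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
labels, centers = weighted_kmeans(data, k=2, w=[0.4, 0.3, 0.2, 0.1])
print(np.bincount(labels), centers.round(2))
```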
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty, with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate the lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets, including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as the lungs.
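The relative difference penalty referred to above has a simple closed form, (f_j - f_k)^2 / (f_j + f_k + gamma * |f_j - f_k|), summed over neighboring voxel pairs. The sketch below evaluates that penalty term only; the full PL objective and the optimizer are omitted, and the gamma value is hypothetical.

```python
import numpy as np

def relative_difference_penalty(img, gamma=2.0, eps=1e-12):
    """Sum of the relative difference penalty over horizontal and vertical
    neighbor pairs of a 2D image: (f_j - f_k)^2 / (f_j + f_k + gamma*|f_j - f_k|).
    gamma controls edge preservation; eps guards against division by zero."""
    total = 0.0
    for a, b in ((img[:, 1:], img[:, :-1]), (img[1:, :], img[:-1, :])):
        diff = a - b
        total += np.sum(diff**2 / (a + b + gamma * np.abs(diff) + eps))
    return float(total)

# Toy usage on a small nonnegative activity image
img = np.abs(np.random.default_rng(3).normal(1.0, 0.2, (8, 8)))
print(relative_difference_penalty(img, gamma=2.0))
```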
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Oliver, Emerson; Smith, Austin
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper provides an overview of the algorithms used for both fault detection and measurement down-selection.
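A much-simplified illustration of redundant-sensor screening and down-selection (threshold-based disqualification followed by mid-value selection) is sketched below; the actual SLS SDQ logic is considerably more elaborate, and the tolerance and health-flag handling here are assumptions.

```python
import numpy as np

def select_rate(measurements, healthy, tolerance=0.5):
    """Illustrative redundant-sensor screening and down-selection.
    measurements: angular-rate samples from redundant sensors (deg/s).
    healthy: boolean health flags carried over from previous frames.
    A sensor is disqualified if it deviates from the median of the healthy
    set by more than `tolerance`; the median of the remaining set is used."""
    m = np.asarray(measurements, dtype=float)
    healthy = np.asarray(healthy, dtype=bool).copy()
    ref = np.median(m[healthy])
    healthy &= np.abs(m - ref) <= tolerance     # disqualify outliers
    selected = np.median(m[healthy]) if healthy.any() else ref
    return selected, healthy

# Toy usage: the third sensor has a hard-over failure and is removed
rates = [1.02, 0.98, 25.0, 1.01]
value, flags = select_rate(rates, healthy=[True] * 4)
print(value, flags)
```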
Data Assimilation Experiments Using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2009-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains a number of significant improvements over Version 4. Two very significant improvements are described briefly below. 1) The AIRS Science Team Radiative Transfer Algorithm (RTA) has now been upgraded to accurately account for effects of non-local thermodynamic equilibrium on the AIRS observations. This allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval algorithm during both day and night. Following theoretical considerations, tropospheric temperature profile information is obtained almost exclusively from clear column radiances in the 4.3 micron CO2 band in the AIRS Version 5 temperature profile retrieval step. These clear column radiances are a derived product indicative of the radiances the AIRS channels would have seen if the field of view were completely clear. Clear column radiances for all channels are determined using tropospheric sounding 15 micron CO2 observations. This approach allows for the generation of accurate values of clear column radiances and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as for channel-by-channel clear column radiances. These error estimates are used for quality control of the retrieved products. Based on error estimate thresholds, each temperature profile is assigned a characteristic pressure, pg, down to which the profile is characterized as good for data assimilation purposes. We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM, at a spatial resolution of 0.5 deg by 0.5 deg. Assimilation of quality controlled AIRS temperature profiles down to pg resulted in significantly improved forecast skill compared to that obtained from experiments in which all data used operationally by NCEP, except for AIRS data, are assimilated. These forecasts were also significantly better than those obtained when AIRS radiances (rather than temperature profiles) are assimilated, which is the way AIRS data are used operationally by NCEP and ECMWF.
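The quality-control idea of assigning a characteristic pressure down to which a profile is accepted can be sketched as a simple threshold test on level-by-level error estimates; the thresholds and error values below are hypothetical, not the AIRS Version 5 values.

```python
import numpy as np

def characteristic_pressure(pressure_levels, error_estimates, thresholds):
    """Return the deepest pressure level p_g (hPa) down to which the profile
    passes quality control, i.e., the error estimate stays at or below the
    level-dependent threshold at every level above it.
    Levels are assumed ordered from the top of the atmosphere downward."""
    p = np.asarray(pressure_levels, dtype=float)
    ok = np.asarray(error_estimates, dtype=float) <= np.asarray(thresholds, dtype=float)
    if not ok[0]:
        return None                        # profile rejected entirely
    first_bad = np.argmin(ok) if not ok.all() else len(ok)
    return p[first_bad - 1]

# Toy usage with hypothetical error estimates (K) and thresholds
levels = [100, 300, 500, 700, 850, 1000]
errors = [0.6, 0.8, 1.0, 1.4, 2.5, 3.0]
limits = [1.0, 1.0, 1.5, 1.5, 2.0, 2.0]
print(characteristic_pressure(levels, errors, limits))   # 700
```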
NASA Technical Reports Server (NTRS)
Vila, Daniel; deGoncalves, Luis Gustavo; Toll, David L.; Rozante, Jose Roberto
2008-01-01
This paper describes a comprehensive assessment of a new high-resolution, high-quality gauge-satellite based analysis of daily precipitation over continental South America during 2004. The methodology is based on a combination of additive and multiplicative bias correction schemes in order to obtain the lowest bias when compared with the observed values. Intercomparison and cross-validation tests have been carried out for the control algorithm (the TMPA real-time algorithm) and different merging schemes: additive bias correction (ADD), ratio bias correction (RAT) and the TMPA research version, for months belonging to different seasons and for different network densities. All compared merging schemes produce better results than the control algorithm, but when finer temporal (daily) and spatial scale (regional network) gauge datasets are included in the analysis, the improvement is remarkable. The Combined Scheme (CoSch) consistently presents the best performance among the five techniques. This is also true when a degraded daily gauge network is used instead of the full dataset. This technique appears to be a suitable tool to produce real-time, high-resolution, high-quality gauge-satellite based analyses of daily precipitation over land in regional domains.
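The additive and ratio bias-correction schemes compared above have simple forms; the sketch below applies them to collocated satellite and gauge values, with the spatial interpolation of the bias fields (which the actual analysis requires) omitted.

```python
import numpy as np

def additive_correction(satellite, gauge):
    """Additive bias scheme: subtract the mean (satellite - gauge) bias."""
    bias = np.mean(satellite - gauge)
    return satellite - bias

def ratio_correction(satellite, gauge, eps=1e-6):
    """Ratio bias scheme: rescale by the gauge/satellite total ratio."""
    ratio = np.sum(gauge) / max(np.sum(satellite), eps)
    return satellite * ratio

# Toy usage with collocated daily precipitation values (mm)
sat = np.array([5.0, 0.0, 12.0, 3.0, 8.0])
obs = np.array([4.0, 0.5, 10.0, 2.0, 6.5])
print(additive_correction(sat, obs).round(2))
print(ratio_correction(sat, obs).round(2))
```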
NASA Astrophysics Data System (ADS)
Wang, Yunyun; Li, Hui; Liu, Yuze; Ji, Yuefeng; Li, Hongfa
2017-10-01
With the development of large video services and cloud computing, the network increasingly takes the form of services. In SDON, the SDN controller holds the underlying physical resource information and thus allocates the appropriate resources and bandwidth to the VON service. However, for some services that require extremely strict QoT (quality of transmission), the shortest-distance path algorithm is often unable to meet the requirements because it does not take the link spectrum resources into account. Moreover, always choosing the least occupied links may leave more spectrum fragments. We therefore propose a new RMLSA (routing, modulation level, and spectrum allocation) algorithm to reduce the blocking probability. The results show about 40% lower blocking probability than the shortest-distance algorithm and the minimum-spectrum-usage priority algorithm. The algorithm is thus able to satisfy demands with strict QoT requirements.
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
Comparison of predictive control methods for high consumption industrial furnace.
Stojanovski, Goran; Stankovski, Mile
2013-01-01
We describe several predictive control approaches for high consumption industrial furnace control. These furnaces are major energy consumers in production industries, and reducing their fuel consumption while optimizing product quality is one of the most important engineering tasks. In order to demonstrate the benefits of implementing advanced predictive control algorithms, we have compared several major criteria for furnace control. On the basis of this analysis, some important conclusions have been drawn.
NASA Astrophysics Data System (ADS)
Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.
2015-12-01
The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected, quality assured (Level 2.0) AOD are available after instrument field deployment (Smirnov et al., 2000). The need for NRT quality controlled aerosol data has become increasingly important. Applications of AERONET NRT data include satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergism (e.g., MPLNET), verification of aerosol forecast models and reanalyses (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with similar quality controls to those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud screened (Level 1.5) NRT AOD database can be significantly impacted by data anomalies. The most significant data anomalies include AOD diurnal dependence due to contamination or obstruction of the sensor head windows; anomalous AOD spectral dependence due to problems with filter degradation, instrument gains, or non-linear changes in calibration; and abnormal changes in temperature-sensitive wavelengths (e.g., 1020 nm) in response to anomalous sensor head temperatures. Other less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and various solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.
The Early Development and Evolution of the QARTOD for In-Situ Wave Measurements
NASA Astrophysics Data System (ADS)
Bouchard, R. H.; Thomas, J.; Teng, C. C.; Burnett, W.; Castel, D.
2017-12-01
In 2013 the US Integrated Ocean Observing System (IOOS) Program Office issued a manual for the Real-Time Quality Control of In-Situ Surface Wave Data. This was one of the first Quality Assurance/Quality Control of Real-Time Oceanographic Data (QARTOD) manuals, which now cover 11 different in situ measurements. This landmark document was the product of an effort initiated at a 2002 OCEANS.US workshop. The workshop identified and prioritized ocean variables, with directional waves ranked as one of the highest key variables for inclusion in a national backbone of observations. The workshop was the impetus that led to the first QARTOD meeting in 2003, which involved over 80 participants and developed minimum standards for real-time observations. Over three more QARTOD meetings, two ad hoc meetings, and many hours of coordination and review involving dozens of wave measurement users and providers, consensus was reached on a list of essential "must do's" for quality control tests that eventually formed the basis of the manual. The IOOS QARTOD manual established a standard format, provided greater detail, included codeable examples of the algorithms, and established a framework for periodic reviews and updates. While the bulk of the quality control procedures originated with those being used by the Coastal Data Information Program (CDIP) and by NOAA's National Data Buoy Center (NDBC), QARTOD also enlisted the expertise of federal, academic, and industry partners. CDIP and NDBC each operated more than 100 wave measuring buoys and had long histories of wave buoy measurements. In addition to buoy measurements, users and providers of fixed and bottom-mounted wave systems were also included. This paper examines the individual contributions, early developments, and evolution of the QARTOD wave measurement efforts that culminated in the US IOOS manual. These efforts serve as an example of how individuals with a common interest and dedication can achieve results for the common good. Quality control algorithms of value that were not included in the essential list, as well as further quality control advancements outside of QARTOD, will also be reviewed.
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology
NASA Astrophysics Data System (ADS)
Rajan, K.; Patnaik, L. M.; Ramakrishna, J.
1997-08-01
Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures than those produced by SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures than PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performance of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have also executed the algorithms on a 4-node IBM SP2 parallel computer. The results show that the execution time of the algorithm on an EH(3,1) is better than that of a 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
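As background, a SIRT-style simultaneous update, the algorithm family to which PBR belongs, can be sketched in a few lines; the specific PBR modifications and the DSP/extended-hypercube parallelization are not shown, and the projection matrix here is a random toy.

```python
import numpy as np

def sirt(A, b, iters=50, relax=1.0):
    """SIRT-style reconstruction: x <- x + relax * C A^T R (b - A x),
    where R and C are inverse row and column sums of the system matrix A.
    A: (n_rays, n_pixels) projection matrix, b: measured projections."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    row_sums = np.maximum(A.sum(axis=1), 1e-12)
    col_sums = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        residual = (b - A @ x) / row_sums
        x += relax * (A.T @ residual) / col_sums
        x = np.maximum(x, 0.0)          # enforce nonnegative image values
    return x

# Toy 1D "object" and a random nonnegative projection matrix
rng = np.random.default_rng(4)
x_true = rng.random(16)
A = rng.random((40, 16))
print(np.round(sirt(A, A @ x_true, iters=200), 2))
```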
NASA Astrophysics Data System (ADS)
Andriushin, A. V.; Zverkov, V. P.; Kuzishchin, V. F.; Ryzhkov, O. S.; Sabanin, V. R.
2017-11-01
The results of research on and tuning of the "Do itself" automatic control system (ACS) for steam pressure in the main steam collector, with high-speed feedback on steam pressure at the turbine regulating stage, are presented. The ACS tuning is performed on a simulation model of the controlled object developed for this purpose, with load-dependent static and dynamic characteristics and a nonlinear control algorithm with pulse control of the turbine main servomotor. A method for tuning the nonlinear ACS with a numerical algorithm for multiparametric optimization, and a procedure for separate dynamic adjustment of the control devices in a two-loop ACS, are proposed and implemented. It is shown that the nonlinear ACS tuned with the proposed method, using constant controller parameters, ensures reliable and high-quality operation without oscillations in the transient processes over the operating range of turbine loads.
Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System
NASA Technical Reports Server (NTRS)
Williams-Hayes, Peggy S.
2004-01-01
The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate neural network-based adaptive controller benefits, with the objective to develop and flight-test control systems using neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Examination of the flight data shows that the addition of flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.
Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms
NASA Astrophysics Data System (ADS)
Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
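A minimal sketch of the phase-only correlation (POC) function between two equally sized images is given below; band-limiting of the spectrum and the feature-based matching stage described in the paper are omitted.

```python
import numpy as np

def phase_only_correlation(f, g, eps=1e-12):
    """Return the POC surface of two equally sized grayscale images.
    The normalized cross-power spectrum keeps only Fourier phase, so the
    inverse transform yields a sharp peak at the relative displacement;
    the peak height can serve as a match score."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    r = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    return np.fft.fftshift(r)

# Toy usage: a shifted copy should give a high, sharp peak
rng = np.random.default_rng(5)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
poc = phase_only_correlation(img, shifted)
peak = np.unravel_index(np.argmax(poc), poc.shape)
print("peak height:", round(float(poc.max()), 3), "at", peak)
```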
Wood industrial application for quality control using image processing
NASA Astrophysics Data System (ADS)
Ferreira, M. J. O.; Neves, J. A. C.
1994-11-01
This paper describes an application of image processing for the furniture industry. It uses as input data images acquired directly from wood planks on which defects were previously marked by an operator. A set of image processing algorithms separates and codes each defect and derives a polygonal approximation of the line representing them. For this purpose we developed a pattern classification algorithm and a new technique for segmenting defects by carving the convex hull of the binary shape representing each isolated defect.
Speech coding at 4800 bps for mobile satellite communications
NASA Technical Reports Server (NTRS)
Gersho, Allen; Chan, Wai-Yip; Davidson, Grant; Chen, Juin-Hwey; Yong, Mei
1988-01-01
A speech compression project has recently been completed to develop a speech coding algorithm suitable for operation in a mobile satellite environment, aimed at providing telephone-quality natural speech at 4.8 kbps. The work has resulted in two alternative techniques which achieve reasonably good communications quality at 4.8 kbps while tolerating vehicle noise and rather severe channel impairments. The algorithms are embodied in a compact self-contained prototype consisting of two AT&T 32-bit floating-point DSP32 digital signal processors (DSPs). A Motorola 68HC11 microcomputer chip serves as the board controller and interface handler. On a wirewrapped card, the prototype's circuit footprint amounts to only 200 sq cm, and it consumes about 9 W of power.
A procedure for testing the quality of LANDSAT atmospheric correction algorithms
NASA Technical Reports Server (NTRS)
Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.
1982-01-01
There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is initially examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using the proposed procedure are presented.
Investigation of cloud/water vapor motion winds from geostationary satellite
NASA Technical Reports Server (NTRS)
1993-01-01
This report summarizes the research work accomplished on the NASA grant contract NAG8-892 during 1992. Research goals of this contract are the following: to complete upgrades to the Cooperative Institute for Meteorological Satellite Studies (CIMSS) wind system procedures for assigning heights and incorporating first guess information; to evaluate these modifications using simulated tracer fields; to add an automated quality control system to minimize the need for manual editing, while maintaining product quality; and to benchmark the upgraded algorithm in tests with NMC and/or MSFC. Work progressed on all these tasks and is detailed. This work was done in collaboration with CIMSS NOAA/NESDIS scientists working on the operational winds software, so that NASA funded research can benefit NESDIS operational algorithms.
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A
2016-08-01
Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with image and frame rate, the brightness of the image is an important parameter that influences image quality, which leads to the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light emitting diodes (LEDs) are normally used as sources, where modulated pulses are used to control the LEDs' brightness. In practice, instances of under- and over-illumination are very common in WCE, where the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system that is based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. The captured images are segmented into four equal regions and the brightness level of each region is calculated. Then an adaptive sigmoid function is used to find the optimized brightness level, and accordingly a new value of the duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules like Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm works well in controlling the brightness level according to the environmental conditions, and as a result, good quality images are captured at an average brightness level of 40%, which saves capsule power consumption.
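The described control flow (split the frame into four regions, compute their brightness, and map the result through a sigmoid to a new LED duty cycle) can be sketched as follows; the target brightness, sigmoid gain, and duty-cycle limits are hypothetical, not the capsule's calibrated values.

```python
import numpy as np

def next_duty_cycle(frame, current_duty, target=0.4, gain=8.0,
                    d_min=0.05, d_max=1.0):
    """Estimate the LED duty cycle for the next frame (illustrative only).
    frame: grayscale image scaled to [0, 1]; current_duty: present PWM duty.
    The frame is split into four equal regions, their mean brightness is
    compared with `target`, and the duty cycle is scaled by a sigmoid-shaped
    correction factor (all constants are hypothetical)."""
    h, w = frame.shape
    regions = [frame[:h // 2, :w // 2], frame[:h // 2, w // 2:],
               frame[h // 2:, :w // 2], frame[h // 2:, w // 2:]]
    brightness = float(np.mean([r.mean() for r in regions]))
    error = target - brightness                       # >0 means frame too dark
    correction = 2.0 / (1.0 + np.exp(-gain * error))  # sigmoid in (0, 2), =1 at zero error
    return float(np.clip(current_duty * correction, d_min, d_max))

# Toy usage: a dark frame raises the duty cycle, a bright one lowers it
dark = np.full((240, 240), 0.15)
bright = np.full((240, 240), 0.80)
print(next_duty_cycle(dark, current_duty=0.4), next_duty_cycle(bright, current_duty=0.4))
```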
Navigating a ship with a broken compass: evaluating standard algorithms to measure patient safety.
Hefner, Jennifer L; Huerta, Timothy R; McAlearney, Ann Scheck; Barash, Barbara; Latimer, Tina; Moffatt-Bruce, Susan D
2017-03-01
Agency for Healthcare Research and Quality (AHRQ) software applies standardized algorithms to hospital administrative data to identify patient safety indicators (PSIs). The objective of this study was to assess the validity of PSI flags and report reasons for invalid flagging. At a 6-hospital academic medical center, a retrospective analysis was conducted of all PSIs flagged in fiscal year 2014. A multidisciplinary PSI Quality Team reviewed each flagged PSI based on quarterly reports. The positive predictive value (PPV, the percent of clinically validated cases) was calculated for 12 PSI categories. The documentation for each reversed case was reviewed to determine the reasons for PSI reversal. Of 657 PSI flags, 185 were reversed. Seven PSI categories had a PPV below 75%. Four broad categories of reasons for reversal were AHRQ algorithm limitations (38%), coding misinterpretations (45%), present upon admission (10%), and documentation insufficiency (7%). AHRQ algorithm limitations included 2 subcategories: an "incident" was inherent to the procedure, or highly likely (eg, vascular tumor bleed), or an "incident" was nonsignificant, easily controlled, and/or no intervention was needed. These findings support previous research highlighting administrative data problems. Additionally, AHRQ algorithm limitations was an emergent category not considered in previous research. Herein we present potential solutions to address these issues. If, despite poor validity, US policy continues to rely on PSIs for incentive and penalty programs, improvements are needed in the quality of administrative data and the standardized PSI algorithms. These solutions require national motivation, research attention, and dissemination support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo
2017-02-01
In uncooled long-wave IR camera systems, the temperature of the focal plane array (FPA) varies with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly affected by the FPA temperature, changes noticeably as well, resulting in reduced image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are used to update the current offset parameters via a temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [the minimizing the sum of the squares of errors algorithm (MSSE) and the template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly trace the response drift of the detector units when the FPA temperature changes. The correction quality of the proposed algorithm is as good as that of MSSE, while its processing time is as short as that of TBS, which means the proposed algorithm is well suited for real-time operation while maintaining a high correction effect.
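The correction described above, fixed two-point gain coefficients plus offset tables interpolated by FPA temperature, can be sketched as follows; the calibration data and temperatures in the example are made up.

```python
import numpy as np

def correct_frame(raw, gain, offset_tables, table_temps, fpa_temp):
    """Apply linear nonuniformity correction y = gain * raw + offset(T).
    offset_tables: array (n_temps, H, W) of offsets calibrated at the FPA
    temperatures in table_temps (ascending); offsets at the current FPA
    temperature are obtained by linear interpolation between the two
    nearest tables (illustrative sketch)."""
    t = np.asarray(table_temps, dtype=float)
    T = float(np.clip(fpa_temp, t[0], t[-1]))
    i = int(np.searchsorted(t, T, side="right") - 1)
    i = min(i, len(t) - 2)
    w = (T - t[i]) / (t[i + 1] - t[i])
    offset = (1.0 - w) * offset_tables[i] + w * offset_tables[i + 1]
    return gain * raw + offset

# Toy usage with a 4x4 detector and two calibration temperatures
rng = np.random.default_rng(6)
gain = np.ones((4, 4))
tables = np.stack([rng.normal(0, 1, (4, 4)), rng.normal(2, 1, (4, 4))])
frame = rng.normal(100, 5, (4, 4))
print(correct_frame(frame, gain, tables, table_temps=[15.0, 35.0], fpa_temp=25.0).round(1))
```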
PhosSA: Fast and accurate phosphorylation site assignment algorithm for mass spectrometry data.
Saeed, Fahad; Pisitkun, Trairak; Hoffert, Jason D; Rashidian, Sara; Wang, Guanghui; Gucek, Marjan; Knepper, Mark A
2013-11-07
Phosphorylation site assignment of high throughput tandem mass spectrometry (LC-MS/MS) data is one of the most common and critical aspects of phosphoproteomics. Correctly assigning phosphorylated residues helps us understand their biological significance. The design of common search algorithms (such as Sequest, Mascot, etc.) does not incorporate site assignment; therefore additional algorithms are essential to assign phosphorylation sites for mass spectrometry data. The main contribution of this study is the design and implementation of a linear time and space dynamic programming strategy for phosphorylation site assignment, referred to as PhosSA. The proposed algorithm uses the summation of peak intensities associated with theoretical spectra as an objective function. Quality control of the assigned sites is achieved using a post-processing redundancy criterion that indicates the signal-to-noise ratio properties of the fragmented spectra. The quality of the algorithm was assessed using experimentally generated data sets based on synthetic peptides for which the phosphorylation sites were known. We report that PhosSA was able to achieve a high degree of accuracy and sensitivity with all the experimentally generated mass spectrometry data sets. The implemented algorithm is shown to be extremely fast and scalable with increasing numbers of spectra (we report up to 0.5 million spectra/hour on a moderate workstation). The algorithm is designed to accept results from both the Sequest and Mascot search engines. An executable is freely available at http://helixweb.nih.gov/ESBL/PhosSA/ for academic research purposes.
In-situ quality monitoring during laser brazing
NASA Astrophysics Data System (ADS)
Ungers, Michael; Fecker, Daniel; Frank, Sascha; Donst, Dmitri; Märgner, Volker; Abels, Peter; Kaierle, Stefan
Laser brazing of zinc coated steel is a widely established manufacturing process in the automotive sector, where high quality requirements must be fulfilled. The strength, impermeability and surface appearance of the joint are particularly important for judging its quality. The development of an on-line quality control system is highly desired by the industry. This paper presents recent work on the development of such a system, which consists of two cameras operating in different spectral ranges. For the evaluation of the system, seam imperfections are created artificially during experiments. Finally, image processing algorithms for monitoring process parameters based on the captured images are presented.
NASA Astrophysics Data System (ADS)
Puhan, Pratap Sekhar; Ray, Pravat Kumar; Panda, Gayadhar
2016-12-01
This paper presents the effectiveness of a 5/5 fuzzy rule implementation in a fuzzy logic controller, in conjunction with an indirect control technique, to enhance power quality in a single-phase system. An indirect current controller in conjunction with the fuzzy logic controller is applied to the proposed shunt active power filter to estimate the peak reference current and capacitor voltage. Current controller based pulse width modulation (CCPWM) is used to generate the switching signals of the voltage source inverter. Various simulation results are presented to verify the good behaviour of the shunt active power filter (SAPF) with the proposed two-level hysteresis current controller (HCC). For real-time verification of the shunt active power filter, the proposed control algorithm has been implemented on a laboratory-developed setup in the dSPACE platform.
Motion adaptive Kalman filter for super-resolution
NASA Astrophysics Data System (ADS)
Richter, Martin; Nasse, Fabian; Schröder, Hartmut
2011-01-01
Superresolution is a sophisticated strategy to enhance the image quality of both low and high resolution video, performing tasks like artifact reduction, scaling and sharpness enhancement in one algorithm, all of them reconstructing high frequency components (above the Nyquist frequency) in some way. Recursive superresolution algorithms in particular can fulfill high quality requirements because they control the video output using a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporal recursive methods are very hardware efficient and therefore attractive for real-time video processing. A very promising approach is the utilization of Kalman filters as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of superresolution. Therefore, robust global motion models are mainly used, but this also limits the applicability of superresolution algorithms. Thus, handling sequences with complex object motion is essential for a wider field of application. Hence, this paper proposes improvements by extending the Kalman filter approach using motion adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion and further compare its performance to state-of-the-art methods like trainable filters.
Medical Image Processing Server applied to Quality Control of Nuclear Medicine.
NASA Astrophysics Data System (ADS)
Vergara, C.; Graffigna, J. P.; Marino, E.; Omati, S.; Holleywell, P.
2016-04-01
This paper is framed within the area of medical image processing and aims to present the process of installation, configuration and implementation of a medical image processing server (MIPS) at the Fundación Escuela de Medicina Nuclear (FUESMEN), located in Mendoza, Argentina. It has been developed in the Gabinete de Tecnologia Médica (GA.TE.ME), Facultad de Ingeniería-Universidad Nacional de San Juan. MIPS is a software system that, using the DICOM standard, can receive medical imaging studies from different modalities or viewing stations, execute algorithms, and finally return the results to other devices. To achieve the objectives previously mentioned, preliminary tests were conducted in the laboratory. Moreover, the tools were remotely installed in a clinical environment. The appropriate protocols for setting up and using them in different services were established once the suitable algorithms were defined. Finally, the implementation and training provided at FUESMEN, using nuclear medicine quality control processes, are described. Results of the implementation are presented in this work.
Weights and topology: a study of the effects of graph construction on 3D image segmentation.
Grady, Leo; Jolly, Marie-Pierre
2008-01-01
Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
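One commonly studied intensity-based weighting in this context is the Gaussian weight on intensity differences, w_ij = exp(-beta * (I_i - I_j)^2); the sketch below builds 4-connected edge weights for a 2D image, while the paper also examines other connectivities and weighting functions.

```python
import numpy as np

def gaussian_edge_weights(img, beta=50.0, eps=1e-6):
    """Build 4-connected graph edges with Gaussian intensity weights
    w_ij = exp(-beta * (I_i - I_j)^2) + eps for a 2D image.
    Returns (edges, weights); node index = row * width + col."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    # horizontal and vertical neighbor pairs
    edges = np.vstack([
        np.column_stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()]),
        np.column_stack([idx[:-1, :].ravel(), idx[1:, :].ravel()]),
    ])
    diffs = np.concatenate([
        (img[:, :-1] - img[:, 1:]).ravel(),
        (img[:-1, :] - img[1:, :]).ravel(),
    ])
    weights = np.exp(-beta * diffs**2) + eps
    return edges, weights

# Toy usage: weights across a step edge should be much smaller
img = np.zeros((4, 4)); img[:, 2:] = 1.0
e, w = gaussian_edge_weights(img, beta=10.0)
print(w.min(), w.max())
```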
Quality of service routing in wireless ad hoc networks
NASA Astrophysics Data System (ADS)
Sane, Sachin J.; Patcha, Animesh; Mishra, Amitabh
2003-08-01
An efficient routing protocol is essential to guarantee application-level quality of service for applications running on wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints such as path lifetime and the availability of sufficient energy and buffer space in each of the nodes on the path between the source and destination. The algorithm chooses the best path from among the multiple paths that it computes between the two endpoints. We consider the use of control packets that run at a higher priority than the data packets in determining the multiple paths. The paper also examines the impact of different schedulers, such as weighted fair queuing and weighted random early detection among others, in preserving the QoS level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for real-time voice applications.
Internal quality control: planning and implementation strategies.
Westgard, James O
2003-11-01
The first essential in setting up internal quality control (IQC) of a test procedure in the clinical laboratory is to select the proper IQC procedure to implement, i.e. choosing the statistical criteria or control rules, and the number of control measurements, according to the quality required for the test and the observed performance of the method. Then the right IQC procedure must be properly implemented. This review focuses on strategies for planning and implementing IQC procedures in order to improve the quality of the IQC. A quantitative planning process is described that can be implemented with graphical tools such as power function or critical-error graphs and charts of operating specifications. Finally, a total QC strategy is formulated to minimize cost and maximize quality. A general strategy for IQC implementation is recommended that employs a three-stage design in which the first stage provides high error detection, the second stage low false rejection and the third stage prescribes the length of the analytical run, making use of an algorithm involving the average of normal patients' data.
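As a concrete example of the control rules referred to above, the sketch below evaluates two widely used Westgard rules (1_3s and 2_2s) against a run of control measurements; a laboratory's actual multirule design, determined by the planning process described here, will generally differ.

```python
def westgard_check(values, mean, sd):
    """Evaluate the 1_3s and 2_2s control rules for a run of QC measurements.
    values: control results in measurement units; mean, sd: the control
    material's established mean and standard deviation.
    Returns a dict of rule violations (illustrative subset of a multirule)."""
    z = [(v - mean) / sd for v in values]
    rule_1_3s = any(abs(x) > 3 for x in z)                       # one result beyond 3 SD
    rule_2_2s = any(z[i] > 2 and z[i + 1] > 2 or                 # two consecutive results
                    z[i] < -2 and z[i + 1] < -2                  # beyond 2 SD, same side
                    for i in range(len(z) - 1))
    return {"1_3s": rule_1_3s, "2_2s": rule_2_2s, "reject": rule_1_3s or rule_2_2s}

# Toy usage: two consecutive results beyond +2 SD trigger the 2_2s rule
print(westgard_check([101.0, 105.2, 105.5], mean=100.0, sd=2.5))
```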
NASA Astrophysics Data System (ADS)
Wilson, Eric Lee
Due to increased competition in a world economy, steel companies are currently interested in developing techniques that will allow for the improvement of the steelmaking process, either by increasing output efficiency or by improving the quality of their product, or both. Slag foaming is one practice that has been shown to contribute to both these goals. However, slag foaming is highly dynamic and difficult to model or control. This dissertation describes an effort to use artificial intelligence-based tools (genetic algorithms, fuzzy logic, and neural networks) to both model and control the slag foaming process. Specifically, a neural network is trained and tested on slag foaming data provided by a steel plant. This neural network model is then controlled by a fuzzy logic controller, which in turn is optimized by a genetic algorithm. This tuned controller is then installed at a steel plant and shown to be a more efficient slag foaming controller than the one previously used by the steel plant.
Feasibility of overnight closed-loop control based on hourly blood glucose measurements.
Patte, Caroline; Pleus, Stefan; Galley, Paul; Weinert, Stefan; Haug, Cornelia; Freckmann, Guido
2012-07-01
Safe and effective closed-loop control (artificial pancreas) is the ultimate goal of insulin delivery. In this study, we examined the performance of a closed-loop control algorithm used for the overnight time period to safely achieve a narrow target range of blood glucose (BG) concentrations prior to breakfast. The primary goal was to compare the quality of algorithm control during repeated overnight experiments. Twenty-three subjects with type 1 diabetes performed 2 overnight experiments on each of three visits at the study site, resulting in 138 overnight experiments. On the first evening, the subject's insulin therapy was applied; on the second, the insulin was delivered by an algorithm based on subcutaneous continuous glucose measurements (including meal control) until midnight. Overnight closed-loop control was applied between midnight and 6 a.m. based on hourly venous BG measurements during the first and second nights. The number of BG values within the target range (90-150 mg/dl) increased from 52.9% (219 out of 414 measurements) during the first nights to 72.2% (299 out of 414 measurements) during the second nights (p < .001, χ²-test). The occurrence of hypoglycemia interventions was reduced from 14 oral glucose interventions, the latest occurring at 2:36 a.m. during the first nights, to 1 intervention occurring at 1:02 a.m. during the second nights (p < .001, χ²-test). Overnight controller performance improved when optimized initial control was given; this was suggested by the better metabolic control during the second night. Adequate controller run-in time seems to be important for achieving good overnight control. In addition, the findings demonstrate that hourly BG data are sufficient for the closed-loop control algorithm tested to achieve appropriate glycemic control. © 2012 Diabetes Technology Society.
Karimi, Mohammad H; Asemani, Davud
2014-05-01
Ceramic and tile industries must include a grading stage to quantify the quality of products. In practice, manual inspection systems are often used for grading purposes. An automatic grading system is essential to enhance the quality control and marketing of the products. Since there generally exist six different types of defects, originating from various stages of tile manufacturing lines, with distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey has been made of the pattern recognition and image processing algorithms which have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction and texture/image classification. Methods such as the wavelet transform, filtering, morphology and the contourlet transform are more effective for pre-processing tasks. Others, including statistical methods, neural networks and model-based algorithms, can be applied to extract the surface defects. Although statistical methods are often appropriate for identification of large defects such as Spots, techniques such as wavelet processing provide an acceptable response for detection of small defects such as Pinhole. A thorough survey is made in this paper of the existing algorithms in each subgroup. Also, the evaluation parameters are discussed, including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
78 FR 57639 - Request for Comments on Pediatric Planned Procedure Algorithm
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... Comments on Pediatric Planned Procedure Algorithm AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Notice of request for comments on pediatric planned procedure algorithm from the members... Quality (AHRQ) is requesting comments from the public on an algorithm for identifying pediatric planned...
Kligerman, Seth; Mehta, Dhruv; Farnadesh, Mahmmoudreza; Jeudy, Jean; Olsen, Kathryn; White, Charles
2013-01-01
To determine whether an iterative reconstruction (IR) technique (iDose, Philips Healthcare) can reduce image noise and improve image quality in obese patients undergoing computed tomographic pulmonary angiography (CTPA). The study was Health Insurance Portability and Accountability Act compliant and approved by our institutional review board. A total of 33 obese patients (average body mass index: 42.7) underwent CTPA studies following standard departmental protocols. The data were reconstructed with filtered back projection (FBP) and 3 iDose strengths (iDoseL1, iDoseL3, and iDoseL5) for a total of 132 studies. FBP data were collected from 33 controls (average body mass index: 22) undergoing CTPA. Regions of interest were drawn at 6 identical levels in the pulmonary artery (PA), from the main PA to a subsegmental branch, in both the control group and study groups using each algorithm. Noise and attenuation were measured at all PA levels. Three thoracic radiologists graded each study on a scale of 1 (very poor) to 5 (ideal) by 4 categories: image quality, noise, PA enhancement, and "plastic" appearance. Statistical analysis was performed using an unpaired t test, 1-way analysis of variance, and linear weighted κ. Compared with the control group, there was significantly higher noise with FBP, iDoseL1, and iDoseL3 algorithms (P<0.001) in the study group. There was no significant difference between the noise in the control group and iDoseL5 algorithm in the study group. Analysis within the study group showed a significant and progressive decrease in noise and increase in the contrast-to-noise ratio as the level of IR was increased (P<0.001). Compared with FBP, readers graded overall image quality as being higher using iDoseL1 (P=0.0018), iDoseL3 (P<0.001), and iDoseL5 (P<0.001). Compared with FBP, there was subjective improvement in image noise and PA enhancement with increasing levels of iDose. The use of an IR technique leads to qualitative and quantitative improvements in image noise and image quality in obese patients undergoing CTPA.
NASA Technical Reports Server (NTRS)
Barbre, Robert E., Jr.
2012-01-01
This paper presents the process used by the Marshall Space Flight Center Natural Environments Branch (EV44) to quality control (QC) data from the Kennedy Space Center's 50-MHz Doppler Radar Wind Profiler for use in vehicle wind loads and steering commands. The database has been built to mitigate limitations of using the currently archived databases from weather balloons. The DRWP database contains wind measurements from approximately 2.7-18.6 km altitude at roughly five minute intervals for the August 1997 to December 2009 period of record, and the extensive QC process was designed to remove spurious data from various forms of atmospheric and non-atmospheric artifacts. The QC process is largely based on DRWP literature, but two new algorithms have been developed to remove data contaminated by convection and excessive first guess propagations from the Median Filter First Guess Algorithm. In addition to describing the automated and manual QC process in detail, this paper describes the extent of the data retained. Roughly 58% of all possible wind observations exist in the database, with approximately 100 times as many complete profile sets existing relative to the EV44 balloon databases. This increased sample of near-continuous wind profile measurements may help increase launch availability by reducing the uncertainty of wind changes during launch countdown.
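The following is a hedged sketch of the general idea behind a median-filter first-guess check, flagging samples that deviate too far from a temporal median; the window length and threshold are assumptions, not the values used for the EV44 database.

```python
# Hedged sketch: flag a wind sample if it deviates too far from the median of
# neighbouring profiles at the same range gate (a median-filter first guess).
import numpy as np

def flag_against_first_guess(wind_series, window=11, max_dev_ms=10.0):
    """wind_series: 1-D array of wind speed (m/s) at one gate over time."""
    half = window // 2
    flags = np.zeros(wind_series.shape, dtype=bool)
    for i in range(wind_series.size):
        lo, hi = max(0, i - half), min(wind_series.size, i + half + 1)
        first_guess = np.nanmedian(wind_series[lo:hi])
        flags[i] = abs(wind_series[i] - first_guess) > max_dev_ms
    return flags
```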
Prieto, Sandra P.; Lai, Keith K.; Laryea, Jonathan A.; Mizell, Jason S.; Muldoon, Timothy J.
2016-01-01
Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
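A minimal sketch of one QFEA-style feature: crypt circularity computed as 4·pi·area/perimeter² for each segmented crypt; the use of scikit-image and this particular circularity definition are assumptions for illustration.

```python
# Minimal sketch of a QFEA-style feature: per-crypt area and circularity
# (4*pi*area / perimeter**2) from a binary segmentation mask.
import numpy as np
from skimage import measure

def crypt_features(binary_crypt_mask):
    labels = measure.label(binary_crypt_mask)
    feats = []
    for region in measure.regionprops(labels):
        if region.perimeter > 0:
            circ = 4.0 * np.pi * region.area / region.perimeter ** 2
            feats.append({"area": region.area, "circularity": circ})
    return feats  # crypt count per image is simply len(feats)
```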
e-DMDAV: A new privacy preserving algorithm for wearable enterprise information systems
NASA Astrophysics Data System (ADS)
Zhang, Zhenjiang; Wang, Xiaoni; Uden, Lorna; Zhang, Peng; Zhao, Yingsi
2018-04-01
Wearable devices have been widely used in many fields to improve the quality of people's lives. More and more data on individuals and businesses are collected by statistical organizations through those devices. Almost all of this data holds confidential information. Statistical Disclosure Control (SDC) seeks to protect statistical data in such a way that it can be released without giving away confidential information that can be linked to specific individuals or entities. The MDAV (Maximum Distance to Average Vector) algorithm is an efficient micro-aggregation algorithm belonging to SDC. However, the MDAV algorithm cannot survive homogeneity and background knowledge attacks because it was designed for static numerical data. This paper proposes a systematic dynamic-updating anonymity algorithm based on MDAV called the e-DMDAV algorithm. This algorithm introduces a new parameter and a table to ensure that each cluster contains at least k records and that the range of distinct values within each cluster is no less than e, for both numerical and non-numerical datasets. This new algorithm has been evaluated and compared with the MDAV algorithm. The simulation results show that the new algorithm outperforms MDAV in terms of minimizing distortion and disclosure risk with a similar computational cost.
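For context, the hedged sketch below shows a simplified version of the classical MDAV micro-aggregation step that e-DMDAV builds on; the paper's dynamic updating, non-numerical handling, and e-range constraint are not shown.

```python
# Simplified sketch of classical MDAV micro-aggregation (numerical data only).
import numpy as np

def mdav_clusters(X, k):
    """Partition rows of X into groups of at least k records, MDAV-style."""
    remaining = list(range(len(X)))
    clusters = []
    while len(remaining) >= 3 * k:
        pts = X[remaining]
        centroid = pts.mean(axis=0)
        r = remaining[np.argmax(np.linalg.norm(pts - centroid, axis=1))]
        s = remaining[np.argmax(np.linalg.norm(X[remaining] - X[r], axis=1))]
        for seed in (r, s):
            if seed not in remaining:
                continue
            d = np.linalg.norm(X[remaining] - X[seed], axis=1)
            members = [remaining[i] for i in np.argsort(d)[:k]]
            clusters.append(members)
            remaining = [i for i in remaining if i not in members]
    clusters.append(remaining)  # last (possibly larger) group
    return clusters
```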
Automation of testing modules of controller ELSY-TMK
NASA Astrophysics Data System (ADS)
Dolotov, A. E.; Dolotova, R. G.; Petuhov, D. V.; Potapova, A. P.
2017-01-01
Modern means of process automation make it possible to maintain high quality standards for released products and to raise labour efficiency. In this paper, the automation of the test process for the ELSY-TMK controller [1] is presented. The ELSY-TMK programmable logic controller is an effective modular platform for building automation systems in small and medium-sized industrial production. Its modern communication standard and open environment make the logic controller a powerful tool for a wide spectrum of industrial automation applications. The algorithm tests controller modules by operating the switching system and external devices faster and with higher quality than manual testing.
Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura
2016-01-01
Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been proven able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel weighting factor-based re-optimization algorithm, which was enhanced as a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for the planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. Moreover, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans, and the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be satisfied when the TPS-QC tool generated re-optimized plans without sacrificing other dosimetric endpoints. In addition to its feasibility and accuracy, the proposed TPS-QC tool is also user-friendly and easy to operate, both of which are necessary characteristics for clinical use.
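A hedged sketch of the kind of pass/fail endpoint check such a QC report could contain; the structure names and dose limits below are hypothetical, not the paper's clinical criteria.

```python
# Hedged sketch of a plan-QC style check: compare dose-volume endpoints of an
# under-assessment plan against criteria and report pass/fail. The endpoints
# and limits below are hypothetical examples.
def check_plan(dvh_endpoints, criteria):
    """dvh_endpoints/criteria: dicts like {'PTV_D95_Gy': 45.0, 'Rectum_V40_pct': 60.0}."""
    report = {}
    for name, limit in criteria.items():
        value = dvh_endpoints[name]
        # Convention: PTV coverage endpoints must be >= limit, OAR endpoints <= limit.
        ok = value >= limit if name.startswith("PTV") else value <= limit
        report[name] = ("pass" if ok else "fail", value, limit)
    return report

plan = {"PTV_D95_Gy": 45.3, "Rectum_V40_pct": 55.0}
limits = {"PTV_D95_Gy": 45.0, "Rectum_V40_pct": 60.0}
print(check_plan(plan, limits))
```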
Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.
Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh
2016-12-01
Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. In this study, several fuzzy-logic-based image contrast enhancement algorithms were applied in Matlab2013b to liver ultrasound images in which the kidney is visible: contrast improvement using a fuzzy intensification operator, contrast improvement using fuzzy image histogram hyperbolization, and contrast improvement using fuzzy IF-THEN rules. Based on the Mean Squared Error and Peak Signal to Noise Ratio measured on different images, the fuzzy methods provided better results, and their implementation, compared with the histogram equalization method, improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm according to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and in other image processing and analysis applications.
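A sketch of contrast enhancement with a fuzzy intensification (INT) operator, the approach the study found strongest; the standard textbook formulation is used here, and the paper's exact membership function and parameters may differ.

```python
# Sketch of contrast enhancement with a fuzzy intensification (INT) operator:
# fuzzify intensities to [0, 1], apply the INT operator, then defuzzify.
import numpy as np

def fuzzy_int_enhance(gray, iterations=1):
    g = gray.astype(float)
    mu = (g - g.min()) / (g.max() - g.min() + 1e-12)   # fuzzify to [0, 1]
    for _ in range(iterations):                        # INT operator
        mu = np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)
    return (mu * 255).astype(np.uint8)                 # defuzzify
```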
Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)
Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther
2013-01-01
The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated. PMID:23948335
Sunglint Detection for Unmanned and Automated Platforms
Garaba, Shungudzemwoyo Pascal; Schulz, Jan; Wernand, Marcel Robert; Zielinski, Oliver
2012-01-01
We present an empirical quality control protocol for above-water radiometric sampling focussing on identifying sunglint situations. Using hyperspectral radiometers, measurements were taken on an automated and unmanned seaborne platform in northwest European shelf seas. In parallel, a camera system was used to capture sea surface and sky images of the investigated points. The quality control consists of meteorological flags, to mask dusk, dawn, precipitation and low light conditions, utilizing incoming solar irradiance (ES) spectra. Using 629 of a total of 3,121 spectral measurements that passed the test conditions of the meteorological flagging, a new sunglint flag was developed. To detect sunglint visible in the simultaneously available sea surface images, a sunglint image detection algorithm was developed and implemented. Applying this algorithm, two data sets were derived: one with sunglint (detectable white pixels) and one without sunglint (few or no detectable white pixels). To identify the most effective sunglint flagging criteria we evaluated the spectral characteristics of these two data sets using water leaving radiance (LW) and remote sensing reflectance (RRS). Spectral conditions satisfying 'mean LW (700–950 nm) < 2 mW·m⁻²·nm⁻¹·sr⁻¹' or alternatively 'minimum RRS (700–950 nm) < 0.010 sr⁻¹' mask most measurements affected by sunglint, providing an efficient empirical flagging of sunglint in automated quality control.
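A direct transcription of the quoted empirical thresholds into a flagging function; the wavelength grid and the interpretation of 'or alternatively' as either criterion marking a glint-free spectrum are assumptions.

```python
import numpy as np

def sunglint_flag(wavelength_nm, lw, rrs):
    """True if a spectrum should be flagged as sunglint-affected."""
    nir = (wavelength_nm >= 700) & (wavelength_nm <= 950)
    glint_free_by_lw = lw[nir].mean() < 2.0     # mW m-2 nm-1 sr-1
    glint_free_by_rrs = rrs[nir].min() < 0.010  # sr-1
    # Flag the spectrum when neither criterion indicates a glint-free case.
    return not (glint_free_by_lw or glint_free_by_rrs)
```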
NASA Astrophysics Data System (ADS)
Jough, Fooad Karimi Ghaleh; Şensoy, Serhan
2016-12-01
Different performance levels may be obtained for sideway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record-to-record (RTR) variations and cognitive uncertainties for moment resisting steel frames of various heights is discussed in detail. RTR uncertainty is handled through incremental dynamic analysis (IDA), modelling uncertainties are considered through component backbone curves and hysteresis loops, and cognitive uncertainty is represented by three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection for its time savings. Analytical equations of the response surface method are fitted to the IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves incorporating the various sources of uncertainty mentioned are derived from a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on the response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak is to account for cognitive uncertainties in the fragility curves and the mean annual frequency.
Quality of clinical brain tumor MR spectra judged by humans and machine learning tools.
Kyathanahally, Sreenath P; Mocioiu, Victor; Pedrosa de Barros, Nuno; Slotboom, Johannes; Wright, Alan J; Julià-Sapé, Margarida; Arús, Carles; Kreis, Roland
2018-05-01
To investigate and compare human judgment and machine learning tools for quality assessment of clinical MR spectra of brain tumors. A very large set of 2574 single voxel spectra with short and long echo time from the eTUMOUR and INTERPRET databases were used for this analysis. Original human quality ratings from these studies as well as new human guidelines were used to train different machine learning algorithms for automatic quality control (AQC) based on various feature extraction methods and classification tools. The performance was compared with variance in human judgment. AQC built using the RUSBoost classifier, which combats imbalanced training data, performed best. When furnished with a large range of spectral and derived features, from which the most crucial ones had been selected by the TreeBagger algorithm, it showed better specificity (98%) in judging spectra from an independent test set than previously published methods. Optimal performance was reached with a virtual three-class ranking system. Our results suggest that the feature space should be relatively large for the case of MR tumor spectra and that three-class labels may be beneficial for AQC. The best AQC algorithm showed a performance in rejecting spectra that was comparable to that of a panel of human expert spectroscopists. Magn Reson Med 79:2500-2510, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McClain, Charles R.; Darzi, Michael; Barnes, Robert A.; Eplee, Robert E.; Firestone, James K.; Patt, Frederick S.; Robinson, Wayne D.; Schieber, Brian D.;
1996-01-01
This document provides five brief reports that address several quality control procedures under the auspices of the Calibration and Validation Element (CVE) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. Chapter 1 describes analyses of the 32 sensor engineering telemetry streams. Anomalies in any of the values may impact sensor performance in direct or indirect ways. The analyses are primarily examinations of parameter time series combined with statistical methods such as auto- and cross-correlation functions. Chapter 2 describes how the various onboard (solar and lunar) and vicarious (in situ) calibration data will be analyzed to quantify sensor degradation, if present. The analyses also include methods for detecting the influence of charged particles on sensor performance such as might be expected in the South Atlantic Anomaly (SAA). Chapter 3 discusses the quality control of the ancillary environmental data that are routinely received from other agencies or projects which are used in the atmospheric correction algorithm (total ozone, surface wind velocity, and surface pressure; surface relative humidity is also obtained, but is not used in the initial operational algorithm). Chapter 4 explains the procedures for screening level-1, level-2, and level-3 products. These quality control operations incorporate both automated and interactive procedures which check for file format errors (all levels), navigation offsets (level-1), mask and flag performance (level-2), and product anomalies (all levels). Finally, Chapter 5 discusses the match-up data set development for comparing SeaWiFS level-2 derived products with in situ observations, as well as the subsequent outlier analyses that will be used for evaluating error sources.
Hansen, J H; Nandkumar, S
1995-01-01
The formulation of reliable signal processing algorithms for speech coding and synthesis requires the selection of a prior criterion of performance. Though coding efficiency (bits/second) or computational requirements can be used, a final performance measure must always include speech quality. In this paper, three objective speech quality measures are considered with respect to quality assessment for American English, noisy American English, and noise-free versions of seven languages. The purpose is to determine whether objective quality measures can be used to quantify changes in quality for a given voice coding method, with a known subjective performance level, as background noise or language conditions are changed. The speech coding algorithm chosen is regular-pulse excitation with long-term prediction (RPE-LTP), which has been chosen as the standard voice compression algorithm for the European Digital Mobile Radio system. Three areas are considered for objective quality assessment: (i) vocoder performance for American English in a noise-free environment, (ii) speech quality variation for three additive background noise sources, and (iii) noise-free performance for seven languages, which include English, Japanese, Finnish, German, Hindi, Spanish, and French. It is suggested that although existing objective quality measures will never replace subjective testing, they can be a useful means of assessing changes in performance, identifying areas for improvement in algorithm design, and augmenting subjective quality tests for voice coding/compression algorithms in noise-free, noisy, and/or non-English applications.
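As an illustration of the class of objective measures discussed, the sketch below computes segmental SNR between clean and coded speech; it is not necessarily one of the three measures evaluated in the paper.

```python
# Illustration of one widely used objective speech-quality measure: segmental SNR.
import numpy as np

def segmental_snr(clean, coded, frame_len=240, eps=1e-10):
    """Average frame-wise SNR (dB) between clean and coded speech signals."""
    n_frames = min(len(clean), len(coded)) // frame_len
    snrs = []
    for i in range(n_frames):
        s = clean[i * frame_len:(i + 1) * frame_len].astype(float)
        e = s - coded[i * frame_len:(i + 1) * frame_len].astype(float)
        snr = 10.0 * np.log10((np.sum(s**2) + eps) / (np.sum(e**2) + eps))
        snrs.append(np.clip(snr, -10.0, 35.0))   # usual limiting of frame SNR
    return float(np.mean(snrs))
```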
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Xiao, Nengwu; Chen, Chen; Yuan, Weicheng; Qiu, Kun
2016-02-01
We propose an energy-efficient orthogonal frequency division multiplexing-based passive optical network (OFDM-PON) using adaptive sleep-mode control and dynamic bandwidth allocation. In this scheme, a bidirectional-centralized algorithm named the receiver and transmitter accurate sleep control and dynamic bandwidth allocation (RTASC-DBA), which has an overall bandwidth scheduling policy, is employed to enhance the energy efficiency of the OFDM-PON. The RTASC-DBA algorithm is used in an optical line terminal (OLT) to control the sleep mode of each optical network unit (ONU) and guarantee the quality of service of the different services of the OFDM-PON. The obtained results show that, by using the proposed scheme, the average power consumption of the ONU is reduced by ˜40% when the normalized ONU load is less than 80%, compared with the average power consumption without using the proposed scheme.
NASA Technical Reports Server (NTRS)
Cheng, Rendy P.; Tischler, Mark B.; Celi, Roberto
2006-01-01
This research describes a new methodology for the extraction of a high-order, linear time invariant model, which allows the periodicity of the helicopter response to be accurately captured. This model provides the needed level of dynamic fidelity to permit an analysis and optimization of the AFCS and HHC algorithms. The key results of this study indicate that the closed-loop HHC system has little influence on the AFCS or on the vehicle handling qualities, which indicates that the AFCS does not need modification to work with the HHC system. However, the results show that the vibration response to maneuvers must be considered during the HHC design process, and this leads to much higher required HHC loop crossover frequencies. This research also demonstrates that the transient vibration responses during maneuvers can be reduced by optimizing the closed-loop higher harmonic control algorithm using conventional control system analyses.
Feedback control in deep drawing based on experimental datasets
NASA Astrophysics Data System (ADS)
Fischer, P.; Heingärtner, J.; Aichholzer, W.; Hortig, D.; Hora, P.
2017-09-01
In large-scale production of deep drawing parts, like in automotive industry, the effects of scattering material properties as well as warming of the tools have a significant impact on the drawing result. In the scope of the work, an approach is presented to minimize the influence of these effects on part quality by optically measuring the draw-in of each part and adjusting the settings of the press to keep the strain distribution, which is represented by the draw-in, inside a certain limit. For the design of the control algorithm, a design of experiments for in-line tests is used to quantify the influence of the blank holder force as well as the force distribution on the draw-in. The results of this experimental dataset are used to model the process behavior. Based on this model, a feedback control loop is designed. Finally, the performance of the control algorithm is validated in the production line.
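A hedged sketch of the feedback idea: adjust the blank holder force in proportion to the deviation of the measured draw-in from its reference; the gain, limits, and single-input structure are assumptions, since the paper derives its controller from an experimentally identified process model.

```python
# Hedged sketch of a proportional draw-in feedback: increase the blank holder
# force when the measured draw-in exceeds its reference, within press limits.
def update_blank_holder_force(f_current_kN, drawin_measured_mm,
                              drawin_reference_mm, gain_kN_per_mm=5.0,
                              f_min_kN=200.0, f_max_kN=600.0):
    error = drawin_measured_mm - drawin_reference_mm
    f_new = f_current_kN + gain_kN_per_mm * error
    return min(max(f_new, f_min_kN), f_max_kN)   # respect press limits
```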
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haan, J.F. de; Kokke, J.M.M.; Hoogenboom, H.J.
1997-06-01
Deriving thematic maps of water quality parameters from a remote sensing image requires a number of processing steps, such as calibration, atmospheric correction, air-water interface correction, and application of water quality algorithms. A prototype version of an integrated software environment has recently been developed that enables the user to perform and control these processing steps. Major parts of this environment are: (i) access to the MODTRAN 3 radiative transfer code, (ii) a database of water quality algorithms, and (iii) a spectral library of Dutch coastal and inland waters, containing subsurface irradiance reflectance spectra and associated water quality parameters. The atmospheric correction part of this environment is discussed here. It is shown that this part can be used to accurately retrieve spectral signatures of inland water for wavelengths between 450 and 750 nm, provided in situ measurements are used to determine atmospheric model parameters. Assessment of the usefulness of the completely integrated software system in an operational environment requires a revised version that is presently being developed.
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the need of efficiency and accuracy in embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm mainly has three parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of R, G and B components from raw data is used to initialize the following iterative method. After that, the bilinear interpolation algorithm is utilized to implement demosaicing procedure. Finally, an adaptive step adjustable scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on embedded vision system, a smart camera based on IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large amount of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing system.
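A minimal sketch in the spirit of the combined approach described above: gray-world statistics give initial channel gains, which are then refined iteratively with an adjustable step; this is not the paper's exact algorithm.

```python
# Minimal sketch: gray-world initialization of channel gains followed by an
# adjustable-step iterative refinement toward balanced channel means.
import numpy as np

def white_balance(rgb, steps=5, step_size=0.5):
    img = rgb.astype(float)
    gains = img.reshape(-1, 3).mean(axis=0)
    gains = gains.mean() / gains                  # initial gray-world gains
    for _ in range(steps):
        balanced = img * gains
        means = balanced.reshape(-1, 3).mean(axis=0)
        error = means.mean() / means - 1.0        # residual channel imbalance
        gains *= 1.0 + step_size * error          # adaptive-step refinement
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```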
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
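A compact sketch of one way to realize a randomized DMD, following the general recipe described (randomized range finding with power iterations, then exact DMD on the compressed snapshots); it is not the authors' single-pass variant, and rank plus oversampling is assumed to be no larger than the number of snapshots.

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, n_power=2):
    """X, Y: snapshot matrices with Y[:, k] the successor of X[:, k]."""
    # Randomized range finder (with power iterations) for the column space of X.
    omega = np.random.randn(X.shape[1], rank + oversample)
    Q, _ = np.linalg.qr(X @ omega)
    for _ in range(n_power):
        Q, _ = np.linalg.qr(X.T @ Q)
        Q, _ = np.linalg.qr(X @ Q)
    # Exact DMD on the compressed snapshots.
    Xc, Yc = Q.T @ X, Q.T @ Y
    U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Yc @ Vh.conj().T / s      # low-rank operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W   # DMD modes in full space
    return eigvals, modes
```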
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
A Comprehensive Framework for Use of NEXRAD Data in Hydrometeorology and Hydrology
NASA Astrophysics Data System (ADS)
Krajewski, W. F.; Bradley, A.; Kruger, A.; Lawrence, R. E.; Smith, J. A.; Steiner, M.; Ramamurthy, M. K.; del Greco, S. A.
2004-12-01
The overall objective of this project is to provide the broad science and engineering communities with ready access to the vast archives and real-time information collected by the national network of NEXRAD weather radars. The main focus is on radar-rainfall data for use in hydrology, hydrometeorology, and water resources. Currently, the NEXRAD data, which are archived at NOAA's National Climatic Data Center (NCDC), are converted to operational products and used by forecasters in real time. The scientific use of the full resolution NEXRAD information is presently limited because current methods of accessing this data require considerable expertise in weather radars, data quality control, formatting and handling, and radar-rainfall algorithms. The goal is to provide professionals in the scientific, engineering, education, and public policy sectors with on-demand NEXRAD data and custom products that are at high spatial and temporal resolutions. Furthermore, the data and custom products will be of a quality suitable for scientific discovery in hydrology and hydrometeorology and in data formats that are convenient to a wide spectrum of users. We are developing a framework and a set of tools for access, visualization, management, rainfall estimation algorithms, and scientific analysis of full resolution NEXRAD data. The framework will address the issues of data dissemination, format conversions and compression, management of terabyte-sized datasets, rapid browsing and visualization, metadata selection and calculation, relational and XML databases, integration with geographic information systems, data queries and knowledge mining, and Web Services. The tools will perform instantaneous comprehensive quality control and radar-rainfall estimation using a variety of algorithms. The algorithms that the user can select will range from "quick look" to complex, and computing-intensive and will include operational algorithms used by federal agencies as well as research grade experimental methods. Options available to the user will include user-specified spatial and temporal resolution, ancillary products such as storm advection velocity fields, estimation of uncertainty associated with rainfall maps, and mathematical synthesis of the products. The data and the developed tools will be provided to the community via the services and the infrastructure of Unidata and the NCDC.
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
"Updates to Model Algorithms & Inputs for the Biogenic ...
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.
NASA Astrophysics Data System (ADS)
Arya, Sabha Raj; Patel, Ashish; Giri, Ashutosh
2018-06-01
This paper deals with a wind energy based power generation system using a Permanent Magnet Synchronous Generator (PMSG). It is controlled using an advanced enhanced phase-locked loop for power quality features, with a distribution static compensator used to eliminate harmonics, provide kVAR compensation, and balance loads. It also maintains rated potential at the point of common interface under linear and non-linear loads. In order to achieve better efficiency and reliable operation of the PMSG driven by the wind turbine, it is necessary to analyze the governing equations of the wind turbine and the PMSG under fixed and variable wind speed. For handling power quality problems, a power-electronics-based shunt-connected custom power device is used in a three-wire system. Simulations in the MATLAB/Simulink environment have been carried out in order to demonstrate this model and the control approach used for power quality enhancement. The performance results show the adequate performance of the PMSG-based power generation system and control algorithm.
Solving TSP problem with improved genetic algorithm
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying
2018-05-01
The TSP is a typical NP-hard problem. The optimization of the vehicle routing problem (VRP) and of city pipeline layouts can be formulated as a TSP; therefore, solving the TSP efficiently is very important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has some limitations. Improving the selection operator of the genetic algorithm and introducing an elite retention strategy ensure the quality of the selection operation. In the mutation operation, adaptive selection of the mutation scheme improves the quality and variety of the search results. After the chromosomes have evolved, a one-way reverse-evolution operation is added, which gives the offspring more opportunities to inherit high-quality genes from their parents and improves the algorithm's ability to find the optimal solution.
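A hedged sketch of a GA for the TSP with elite retention and a simple adaptive mutation rate, in the spirit of the improvements described; the paper's exact operators, including the reverse-evolution step, are not reproduced.

```python
# Hedged sketch: GA for the TSP with elite retention, order crossover,
# truncation selection, and a simple adaptive mutation schedule.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def solve_tsp(dist, pop_size=100, generations=500, elite=2):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for g in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        next_pop = pop[:elite]                     # elite retention
        mut_rate = 0.02 + 0.2 * g / generations    # simple adaptive schedule
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)   # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < mut_rate:         # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=lambda t: tour_length(t, dist))
    return best, tour_length(best, dist)
```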
Study of efficient video compression algorithms for space shuttle applications
NASA Technical Reports Server (NTRS)
Poo, Z.
1975-01-01
Results are presented of a study on video data compression techniques applicable to space flight communication. This study is directed towards monochrome (black and white) picture communication with special emphasis on feasibility of hardware implementation. The primary factors for such a communication system in space flight application are: picture quality, system reliability, power consumption, and hardware weight. In terms of hardware implementation, these are directly related to hardware complexity, effectiveness of the hardware algorithm, immunity of the source code to channel noise, and data transmission rate (or transmission bandwidth). A system is recommended, and its hardware requirement summarized. Simulations of the study were performed on the improved LIM video controller which is computer-controlled by the META-4 CPU.
Water Quality Planning in Rivers: Assimilative Capacity and Dilution Flow.
Hashemi Monfared, Seyed Arman; Dehghani Darmian, Mohsen; Snyder, Shane A; Azizyan, Gholamreza; Pirzadeh, Bahareh; Azhdary Moghaddam, Mehdi
2017-11-01
Population growth, urbanization and industrial expansion are consequentially linked to increasing pollution around the world. The sources of pollution are vast and include both point and nonpoint sources, which poses intrinsic challenges for control and abatement. This paper focuses on pollutant concentration and the distance over which the pollution is in contact with the river water as objective functions to determine two main characteristics needed for water quality management in the river. These two characteristics are the assimilative capacity and the dilution flow. The mean area of unacceptable concentration [Formula: see text] and the affected distance (X) are considered as two objective functions to determine the dilution flow by a non-dominated sorting genetic algorithm II (NSGA-II) optimization algorithm. The results demonstrate that the variation of river flow discharge in different seasons can modify the assimilative capacity by up to 97%. Moreover, when using dilution flow as a water quality management tool, the results reveal that [Formula: see text] and X change by up to 97% and 93%, respectively.
Rain Rate and DSD Retrievals at Kwajalein Atoll
NASA Astrophysics Data System (ADS)
Wolff, David; Marks, David; Tokay, Ali
2010-05-01
The dual-polarization weather radar on Kwajalein Atoll in the Republic of the Marshall Islands (KPOL) is one of the only full-time (24/7) operational S-band dual-polarimetric (DP) radars in the tropics. Using the DP data from KPOL, as well as data from a Joss-Waldvogel disdrometer on Kwajalein Island, algorithms for quality control and for calibration of reflectivity and differential reflectivity have been developed and adapted for application in a near real-time operational environment. Observations during light rain and drizzle show that KPOL measurements (since 2006) meet or exceed quality thresholds for these applications (as determined by consensus of the radar community). While the methodology for development of such applications is well documented, tuning of specific algorithms to a particular regime and observed raindrop size distributions requires a comprehensive testing and adjustment period to ensure high quality products. Upon application of these data quality techniques to the KPOL data, we have tested and compared several different rain retrieval algorithms. These include conventional Z-R, DP hybrid techniques, as well as the polarimetrically tuned Z-R described by Bringi et al. 2004. One of the major benefits of the polarimetrically tuned Z-R technique is its ability to use the DP observations to retrieve key parameters of the drop size distribution (DSD), such as the median drop diameter, and the intercept and shape parameter of the assumed gamma DSD. We will show several such retrievals for different rain systems, as well as their distribution with height below the melting layer. From a physical validation perspective, such DSD parameter retrievals provide an important means to cross-validate microphysical parameterizations in GPM Dual-frequency Precipitation Radar (DPR) and GPM Microwave Imager (GMI) retrieval algorithms.
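For reference, the conventional Z-R retrieval mentioned above amounts to inverting Z = a·R^b after converting reflectivity from dBZ to linear units; the coefficients below are common textbook values, not KPOL's tuned ones.

```python
# Baseline conventional Z-R retrieval: invert Z = a * R**b after converting
# reflectivity from dBZ to linear units (mm^6 m^-3).
def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
    z_linear = 10.0 ** (dbz / 10.0)          # mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)       # mm h^-1

print(round(rain_rate_from_dbz(40.0), 1), "mm/h at 40 dBZ")
```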
Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.
Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart
2010-01-01
Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, Interior-point algorithm (l(1)_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm are compared and investigated in three different imaging scenarios, brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.
Objective measures for quality assessment of automatic skin enhancement algorithms
NASA Astrophysics Data System (ADS)
Ciuc, Mihai; Capata, Adrian; Florea, Corneliu
2010-01-01
Automatic portrait enhancement by attenuating skin flaws (pimples, blemishes, wrinkles, etc.) has received considerable attention from digital camera manufacturers thanks to its impact on the public. Subsequently, a number of algorithms have been developed to meet this need. One central aspect to developing such an algorithm is quality assessment: having a few numbers that precisely indicate the amount of beautification brought by an algorithm (as perceived by human observers) is of great help, as it circumvents time-costly human evaluation. In this paper, we propose a method to numerically evaluate the quality of a skin beautification algorithm. The most important aspects we take into account and quantify are the quality of the skin detector, the amount of smoothing performed by the method, the preservation of intrinsic skin texture, and the preservation of facial features. We combine these measures into two numbers that assess the quality of skin detection and beautification. The derived measures are highly correlated with human perception; therefore, they constitute a helpful tool for tuning and comparing algorithms.
Salonen, K; Leisola, M; Eerikäinen, T
2009-01-01
Determination of metabolites from an anaerobic digester with an acid-base titration is considered a superior method for many reasons. This paper describes a practical at-line compatible multipoint titration method. The titration procedure was improved in terms of speed and data quality. A simple and novel control algorithm for estimating a variable titrant dose was derived for this purpose. This non-linear, PI-controller-like algorithm does not require any preliminary information from the sample. The performance of this controller is superior compared to traditional linear PI controllers. In addition, a simplification for presenting polyprotic acids as a sum of multiple monoprotic acids is introduced along with a mathematical error examination. A method for inclusion of the ionic strength effect with stepwise iteration is shown. The titration model is presented with matrix notation enabling simple computation of all concentration estimates. All methods and algorithms are illustrated in the experimental part. A linear correlation better than 0.999 was obtained for both acetate and phosphate used as model compounds, with slopes of 0.98 and 1.00 and average standard deviations of 0.6% and 0.8%, respectively. Furthermore, the insensitivity of the presented method to overlapping buffer capacity curves was shown.
NASA Astrophysics Data System (ADS)
Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi
Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. In order to guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and leaky bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky bucket constrained sessions. However, the delay bounds for leaky bucket constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. In order to solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling like GPS, and controls the service rate in order to lower the delay bounds for leaky bucket constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS, for leaky-bucket constrained sessions with deterministic delay requirements. This CAC algorithm for ORC-GPS determines the optimal values of parameters of ORC-GPS from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with one for GPS in terms of schedulable region and computational complexity.
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on a intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing realtime demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
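A hedged sketch of the DPCM core described above: previous-pixel prediction within a scan line and a coarse non-uniform quantizer; the hardware's actual non-adaptive predictor, quantizer levels, and multilevel Huffman coder are not reproduced.

```python
# Hedged sketch of a DPCM core: previous-pixel prediction within a line and a
# coarse non-uniform quantizer; encoder and decoder track the same prediction.
import numpy as np

QUANT_LEVELS = np.array([-48, -24, -10, -3, 0, 3, 10, 24, 48], dtype=float)

def dpcm_encode_line(line):
    """Return quantized prediction errors for one scan line (uint8 samples)."""
    prediction, codes = 128.0, []
    for sample in line.astype(float):
        error = sample - prediction
        q = QUANT_LEVELS[np.argmin(np.abs(QUANT_LEVELS - error))]
        codes.append(q)
        prediction = np.clip(prediction + q, 0, 255)   # decoder-tracking loop
    return np.array(codes)

def dpcm_decode_line(codes):
    prediction, out = 128.0, []
    for q in codes:
        prediction = np.clip(prediction + q, 0, 255)
        out.append(prediction)
    return np.array(out, dtype=np.uint8)
```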
Video control system for drilling in furniture workpieces
NASA Astrophysics Data System (ADS)
Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.
2018-05-01
Over the last five years, Russian industry has been undergoing robotization, presenting scientific groups with new tasks. One of these tasks is the development of machine vision systems for automatic quality control. Commercial systems of this type cost several thousand dollars each, a price out of reach for regional small businesses. In this article, we describe the principle and algorithm of an inexpensive video control system that uses web cameras and a notebook or desktop computer as the computing unit.
BatMass: a Java Software Platform for LC-MS Data Visualization in Proteomics and Metabolomics.
Avtonomov, Dmitry M; Raskind, Alexander; Nesvizhskii, Alexey I
2016-08-05
Mass spectrometry (MS) coupled to liquid chromatography (LC) is a commonly used technique in metabolomic and proteomic research. As the size and complexity of LC-MS-based experiments grow, it becomes increasingly more difficult to perform quality control of both raw data and processing results. In a practical setting, quality control steps for raw LC-MS data are often overlooked, and assessment of an experiment's success is based on some derived metrics such as "the number of identified compounds". The human brain interprets visual data much better than plain text, hence the saying "a picture is worth a thousand words". Here, we present the BatMass software package, which allows for performing quick quality control of raw LC-MS data through its fast visualization capabilities. It also serves as a testbed for developers of LC-MS data processing algorithms by providing a data access library for open mass spectrometry file formats and a means of visually mapping processing results back to the original data. We illustrate the utility of BatMass with several use cases of quality control and data exploration.
Practical applications of nondestructive materials characterization
NASA Astrophysics Data System (ADS)
Green, Robert E., Jr.
1992-10-01
Nondestructive evaluation (NDE) techniques are reviewed for applications to the industrial production of materials including microstructural, physical, and chemical analyses. NDE techniques addressed include: (1) double-pulse holographic interferometry for sealed-package leak testing; (2) process controls for noncontact metals fabrication; (3) ultrasonic detections of oxygen contamination in titanium welds; and (4) scanning acoustic microscopy for the evaluation of solder bonds. The use of embedded sensors and emerging NDE concepts provides the means for controlling the manufacturing and quality of quartz crystal resonators, nickel single-crystal turbine blades, and integrated circuits. Advances in sensor technology and artificial intelligence algorithms and the use of embedded sensors combine to make NDE technology highly effective in controlling industrial materials manufacturing and the quality of the products.
Recent National Transonic Facility Test Process Improvements (Invited)
NASA Technical Reports Server (NTRS)
Kilgore, W. A.; Balakrishna, S.; Bobbitt, C. W., Jr.; Adcock, J. B.
2001-01-01
This paper describes the results of two recent process improvements at the National Transonic Facility: drag feed-forward Mach number control and simultaneous force/moment and pressure testing. These improvements have reduced the duration and cost of testing. The drag feed-forward Mach number control reduces the Mach number settling time by using measured model drag in the Mach number control algorithm. Simultaneous force/moment and pressure testing allows simultaneous collection of force/moment and pressure data without sacrificing data quality, thereby reducing the overall testing time. Both improvements can be implemented at any wind tunnel. Additionally, the NTF is developing continuous pitch as a further testing option to reduce costs while maintaining data quality.
The problem of the driverless vehicle specified path stability control
NASA Astrophysics Data System (ADS)
Buznikov, S. E.; Endachev, D. V.; Elkin, D. S.; Strukov, V. O.
2018-02-01
Currently the efforts of many leading foreign companies are focused on creating driverless transport for cargo and passengers. Among the many practical problems arising in the creation of driverless vehicles, the problem of specified-path stability control occupies a central place. The purpose of this paper is to formalize the problem in terms of a quadratic functional of control quality, to compare the possible solutions, and to justify the choice of the optimum technical solution. The square of the integral of the deviation from the specified path is proposed as the quadratic functional of control quality. To generate the set of software and hardware solution variants, the Zwicky "morphological box" method is applied to the hardware and software environments. The heading control algorithms use the wheel steering angle data and the deviation from the lane centerline (the specified path), calculated from the navigation data and the data from the video system. Where the video system does not detect the road marking, control is carried out based on the wheel navigation system data; where recognizable road marking exists, control is based on the video system data. The analysis of the test results supports the conclusion that the combined navigation system algorithms provide a quasi-optimum solution of the problem while meeting the strict functional limits on the technical and economic indicators of the driverless vehicle control system under development.
McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David
2018-07-01
A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper focuses on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may also have utility for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. We also argue that such algorithm development may encourage parametric research, help in integrating new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model control of basic behavioral processes that is important to the design of effective programmed instruction for human learners with and without functional disabilities.
A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data are gross error elimination and quality control. Owing to the sheer volume of point cloud data, existing gross error elimination methods consume large amounts of memory and time. This paper employs a new method that uses a Kd-tree for data structuring and a k-nearest neighbor algorithm for searching, with an appropriately chosen threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
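A minimal sketch of the Kd-tree plus k-nearest-neighbour screening idea is given below; scipy's cKDTree and the mean-distance threshold rule stand in for the authors' own tree construction and threshold selection.

```python
# Hedged sketch of k-nearest-neighbour gross-error elimination on a KD-tree
# (scipy's cKDTree and a mean-distance threshold are stand-ins for the
# authors' construction and threshold rule).
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=8, n_sigma=3.0):
    """Flag points whose mean distance to their k neighbours is anomalously large."""
    tree = cKDTree(points)
    # Query k+1 neighbours because the nearest neighbour of each point is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + n_sigma * mean_knn.std()
    keep = mean_knn <= threshold
    return points[keep], ~keep  # cleaned cloud and the outlier mask

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3))
cloud = np.vstack([cloud, [[50, 50, 50]]])       # one obvious gross error
cleaned, outliers = remove_gross_errors(cloud)
print(outliers.sum(), "points flagged as gross errors")
```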
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on these problems in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
Scalable large format 3D displays
NASA Astrophysics Data System (ADS)
Chang, Nelson L.; Damera-Venkata, Niranjan
2010-02-01
We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.
Fast instantaneous center of rotation estimation algorithm for a skid-steered robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2015-05-01
Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm quality by comparing the trajectories estimated by the algorithm with data from the motion capture system.
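A hedged sketch of the back-projection step follows: once optical-flow vectors have been mapped to the ground plane, a planar rigid-motion model is fit by least squares and the instantaneous center of rotation (ICR) follows from the fitted translation and angular rate. The synthetic data and the specific model form are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: estimate angular rate, translation and the instantaneous centre
# of rotation (ICR) from flow vectors back-projected onto the ground plane
# (planar rigid-motion least-squares fit; not the authors' exact formulation).
import numpy as np

def fit_planar_motion(xy, flow):
    """Least-squares fit of u = vx - w*y, v = vy + w*x to ground-plane flow."""
    x, y = xy[:, 0], xy[:, 1]
    u, v = flow[:, 0], flow[:, 1]
    # Stack both flow components into one linear system A @ [vx, vy, w] = b.
    A = np.vstack([
        np.column_stack([np.ones_like(x), np.zeros_like(x), -y]),
        np.column_stack([np.zeros_like(x), np.ones_like(x),  x]),
    ])
    b = np.concatenate([u, v])
    vx, vy, w = np.linalg.lstsq(A, b, rcond=None)[0]
    icr = np.array([-vy / w, vx / w]) if abs(w) > 1e-9 else None
    return vx, vy, w, icr

# Synthetic check: pure rotation about (2, 0) at 0.5 rad/s.
pts = np.random.default_rng(1).uniform(-1, 1, size=(200, 2))
w_true, icr_true = 0.5, np.array([2.0, 0.0])
flow = np.column_stack([-w_true * (pts[:, 1] - icr_true[1]),
                         w_true * (pts[:, 0] - icr_true[0])])
print(fit_planar_motion(pts, flow))   # recovered ICR should be close to (2, 0)
```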
Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho
2018-01-01
To evaluate observer preference for the image quality of chest radiography using a point spread function (PSF) deconvolution algorithm (TRUVIEW ART algorithm, DRTECH Corp.) compared with original chest radiography for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by the scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality on a 5-point preference scale. The significance of the differences in reader preference was tested with a Wilcoxon signed rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.001) and for overall image quality (mean, 3.8; range, 3.3-4.0; p < 0.001). The most preferred anatomical regions were the azygoesophageal recess, thoracic spine, and unobscured lung. The visibility of chest anatomical structures with the PSF deconvolution algorithm was superior to that of the original chest radiography.
Immunity-Based Aircraft Fault Detection System
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
In the study reported in this paper, we have developed and applied an Artificial Immune System (AIS) algorithm for aircraft fault detection, as an extension to a previous work on intelligent flight control (IFC). Though the prior studies had established the benefits of IFC, one area of weakness that needed to be strengthened was the control dead band induced by commanding a failed surface. Since the IFC approach uses fault accommodation with no detection, the dead band, although it reduces over time due to learning, is present and causes degradation in handling qualities. If the failure can be identified, this dead band can be further reduced to ensure rapid fault accommodation and better handling qualities. The paper describes the application of an immunity-based approach that can detect a broad spectrum of known and unforeseen failures. The approach incorporates the knowledge of the normal operational behavior of the aircraft from sensory data, and probabilistically generates a set of pattern detectors that can detect any abnormalities (including faults) in the behavior pattern indicating unsafe in-flight operation. We developed a tool called MILD (Multi-level Immune Learning Detection) based on a real-valued negative selection algorithm that can generate a small number of specialized detectors (as signatures of known failure conditions) and a larger set of generalized detectors for unknown (or possible) fault conditions. Once the fault is detected and identified, an adaptive control system would use this detection information to stabilize the aircraft by utilizing available resources (control surfaces). We experimented with data sets collected under normal and various simulated failure conditions using a piloted motion-base simulation facility. The reported results are from a collection of test cases that reflect the performance of the proposed immunity-based fault detection algorithm.
Integrated Model Reduction and Control of Aircraft with Flexible Wings
NASA Technical Reports Server (NTRS)
Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.
2013-01-01
This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite-dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, which is designed based on the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated on the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.
Spettell, Claire M; Wall, Terry C; Allison, Jeroan; Calhoun, Jaimee; Kobylinski, Richard; Fargason, Rachel; Kiefe, Catarina I
2003-01-01
Background Multiple factors limit identification of patients with depression from administrative data. However, administrative data drives many quality measurement systems, including the Health Plan Employer Data and Information Set (HEDIS®). Methods We investigated two algorithms for identification of physician-recognized depression. The study sample was drawn from primary care physician member panels of a large managed care organization. All members were continuously enrolled between January 1 and December 31, 1997. Algorithm 1 required at least two criteria in any combination: (1) an outpatient diagnosis of depression or (2) a pharmacy claim for an antidepressant. Algorithm 2 included the same criteria as algorithm 1, but required a diagnosis of depression for all patients. With algorithm 1, we identified the medical records of a stratified, random subset of patients with and without depression (n=465). We also identified patients of primary care physicians with a minimum of 10 depressed members by algorithm 1 (n=32,819) and algorithm 2 (n=6,837). Results The sensitivity, specificity, and positive predictive values were: Algorithm 1: 95 percent, 65 percent, 49 percent; Algorithm 2: 52 percent, 88 percent, 60 percent. Compared to algorithm 1, profiles from algorithm 2 revealed higher rates of follow-up visits (43 percent, 55 percent) and appropriate antidepressant dosage acutely (82 percent, 90 percent) and chronically (83 percent, 91 percent) (p<0.05 for all). Conclusions Both algorithms had high false positive rates. Denominator construction (algorithm 1 versus 2) contributed significantly to variability in measured quality. Our findings raise concern about interpreting depression quality reports based upon administrative data. PMID:12968818
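The two denominator rules can be written out directly. The record layout below (per-member counts of depression diagnoses and antidepressant fills) is a hypothetical simplification of the claims data, not the HEDIS specification or the study's actual extraction logic.

```python
# Illustrative sketch of the two identification rules described above, applied
# to hypothetical per-member claim summaries (field names are assumptions).
def meets_algorithm1(member):
    """At least two qualifying events, in any combination of diagnoses and fills."""
    events = member["depression_diagnoses"] + member["antidepressant_fills"]
    return events >= 2

def meets_algorithm2(member):
    """Same two-event rule, but at least one event must be a depression diagnosis."""
    return meets_algorithm1(member) and member["depression_diagnoses"] >= 1

members = [
    {"id": 1, "depression_diagnoses": 0, "antidepressant_fills": 2},  # algorithm 1 only
    {"id": 2, "depression_diagnoses": 1, "antidepressant_fills": 1},  # both algorithms
    {"id": 3, "depression_diagnoses": 1, "antidepressant_fills": 0},  # neither
]
for m in members:
    print(m["id"], meets_algorithm1(m), meets_algorithm2(m))
```

The narrower denominator of the second rule is what drives the higher measured rates of follow-up and appropriate dosing reported above.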
de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A
2017-08-20
Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.
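As a rough illustration of restricting the correction to the detected pupil, the sketch below thresholds a per-lenslet spot-quality metric into a pupil mask and keeps only the wavefront slopes inside it; the metric, threshold, and lenslet geometry are placeholders rather than the paper's implementation.

```python
# Hedged sketch: derive a per-lenslet pupil mask from a spot-quality metric and
# restrict the AO control to lenslets inside the detected pupil (the metric and
# threshold here are simple stand-ins for the paper's image-quality measure).
import numpy as np

def pupil_mask_from_spots(spot_quality, rel_threshold=0.3):
    """Boolean mask of lenslets whose spot quality exceeds a fraction of the max."""
    return spot_quality > rel_threshold * spot_quality.max()

def masked_slopes(slopes_x, slopes_y, mask):
    """Keep only wavefront slopes measured inside the subject's pupil."""
    return slopes_x[mask], slopes_y[mask]

# Toy 10x10 lenslet array: a roughly elliptical pupil of bright spots.
yy, xx = np.mgrid[0:10, 0:10]
quality = np.exp(-(((xx - 4.5) / 3.5) ** 2 + ((yy - 4.5) / 2.5) ** 2))
mask = pupil_mask_from_spots(quality)
print(mask.sum(), "of", mask.size, "lenslets inside the detected pupil")
```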
Use of ATR FT-IR spectroscopy in non-destructive and rapid assessment of developmental cotton fibers
USDA-ARS?s Scientific Manuscript database
The knowledge of chemical and compositional components in cotton fibers is of value to cotton breeders and growers for cotton enhancement and to textile processors for quality control. In this work, we applied the previously proposed simple algorithms to analyze the attenuated total reflection Fouri...
Spreco, A; Timpka, T
2016-01-01
Objectives Reliable monitoring of influenza seasons and pandemic outbreaks is essential for response planning, but compilations of reports on detection and prediction algorithm performance in influenza control practice are largely missing. The aim of this study is to perform a metanarrative review of prospective evaluations of influenza outbreak detection and prediction algorithms, restricted to settings where authentic surveillance data have been used. Design The study was performed as a metanarrative review. An electronic literature search was performed, papers were selected, and qualitative and semiquantitative content analyses were conducted. For data extraction and interpretation, researcher triangulation was used for quality assurance. Results Eight prospective evaluations were found that used authentic surveillance data: three studies evaluating detection and five studies evaluating prediction. The methodological perspectives and experiences from the evaluations were found to have been reported in narrative formats representing biodefence informatics and health policy research, respectively. The biodefence informatics narrative, with its emphasis on verification of technically and mathematically sound algorithms, constituted a large part of the reporting. Four evaluations were reported as health policy research narratives and were thus formulated in a manner that allows the results to qualify as policy evidence. Conclusions Awareness of the narrative format in which results are reported is essential when interpreting algorithm evaluations from an infectious disease control practice perspective. PMID:27154479
Compression of next-generation sequencing quality scores using memetic algorithm
2014-01-01
Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files and designs a compression codebook using MA-based multimodal optimization. The input data are then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains a higher compression ratio than other state-of-the-art methods. In particular, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
Hwang, I-Shyan
2017-01-01
The K-coverage configuration, which guarantees coverage of each location by at least K sensors, is highly popular and is extensively used to monitor diversified applications in wireless sensor networks. Long network lifetime and high detection quality are essential for such K-covered sleep-scheduling algorithms. However, existing sleep-scheduling algorithms either incur high cost or cannot preserve detection quality effectively. In this paper, the Pre-Scheduling-based K-coverage Group Scheduling (PSKGS) and Self-Organized K-coverage Scheduling (SKS) algorithms are proposed to address the problems of existing sleep-scheduling algorithms. Simulation results show that our pre-scheduled KGS approach enhances detection quality and network lifetime, whereas the self-organized SKS algorithm minimizes the computation and communication cost of the nodes and is thereby energy efficient. Moreover, SKS outperforms PSKGS in terms of network lifetime and detection quality because it is self-organized. PMID:29257078
Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Peissig, Peggy L; Denny, Joshua C; Kho, Abel N; Miller, Aaron; Pathak, Jyotishman
2012-01-01
The development of Electronic Health Record (EHR)-based phenotype selection algorithms is a non-trivial and highly iterative process involving domain experts and informaticians. To make it easier to port algorithms across institutions, it is desirable to represent them using an unambiguous formal specification language. For this purpose we evaluated the recently developed National Quality Forum (NQF) information model designed for EHR-based quality measures: the Quality Data Model (QDM). We selected 9 phenotyping algorithms that had been previously developed as part of the eMERGE consortium and translated them into QDM format. Our study concluded that the QDM contains several core elements that make it a promising format for EHR-driven phenotyping algorithms for clinical research. However, we also found areas in which the QDM could be usefully extended, such as representing information extracted from clinical text, and the ability to handle algorithms that do not consist of Boolean combinations of criteria.
Vanetti, Eugenio; Nicolini, Giorgia; Nord, Janne; Peltola, Jarkko; Clivio, Alessandro; Fogliata, Antonella; Cozzi, Luca
2011-11-01
The RapidArc volumetric modulated arc therapy (VMAT) planning process is based on a core engine, the so-called progressive resolution optimizer (PRO). This is the optimization algorithm used to determine the combination of field shapes, segment weights (with dose rate and gantry speed variations), which best approximate the desired dose distribution in the inverse planning problem. A study was performed to assess the behavior of two versions of PRO. These two versions mostly differ in the way continuous variables describing the modulated arc are sampled into discrete control points, in the planning efficiency and in the presence of some new features. The analysis aimed to assess (i) plan quality, (ii) technical delivery aspects, (iii) agreement between delivery and calculations, and (iv) planning efficiency of the two versions. RapidArc plans were generated for four groups of patients (five patients each): anal canal, advanced lung, head and neck, and multiple brain metastases and were designed to test different levels of planning complexity and anatomical features. Plans from optimization with PRO2 (first generation of RapidArc optimizer) were compared against PRO3 (second generation of the algorithm). Additional plans were optimized with PRO3 using new features: the jaw tracking, the intermediate dose and the air cavity correction options. Results showed that (i) plan quality was generally improved with PRO3 and, although not for all parameters, some of the scored indices showed a macroscopic improvement with PRO3. (ii) PRO3 optimization leads to simpler patterns of the dynamic parameters particularly for dose rate. (iii) No differences were observed between the two algorithms in terms of pretreatment quality assurance measurements and (iv) PRO3 optimization was generally faster, with a time reduction of a factor approximately 3.5 with respect to PRO2. These results indicate that PRO3 is either clinically beneficial or neutral in terms of dosimetric quality while it showed significant advantages in speed and technical aspects.
HOLA: Human-like Orthogonal Network Layout.
Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael
2016-01-01
Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research currently no algorithm produces layout of comparable quality to that of a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode, then an algorithm is developed that is informed by these criteria and finally, a follow-up study evaluates the algorithm output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better (by user study) layout than the best available orthogonal layout algorithm and which produces layouts of comparable quality to those produced by hand.
Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Jian, Yifan; Verhaegen, Michel; Sarunic, Marinko V.
2017-01-01
In this report, which is an international collaboration of OCT, adaptive optics, and control research, we demonstrate the Data-based Online Nonlinear Extremum-seeker (DONE) algorithm to guide the image based optimization for wavefront sensorless adaptive optics (WFSL-AO) OCT for in vivo human retinal imaging. The ocular aberrations were corrected using a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators. The DONE algorithm succeeded in drastically improving image quality and the OCT signal intensity, up to a factor seven, while achieving a computational time of 1 ms per iteration, making it applicable for many high speed applications. We demonstrate the correction of five aberrations using 70 iterations of the DONE algorithm performed over 2.8 s of continuous volumetric OCT acquisition. Data acquired from an imaging phantom and in vivo from human research volunteers are presented. PMID:28736670
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, usually known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel by its Laplacian value weighted by the local noise estimator; the only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and total-variation-based filters for Gaussian noise, and also with the state-of-the-art BM3D method on selected images. The algorithm is simple, fast, and comparable with many classic denoising algorithms for Gaussian noise.
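A small NumPy sketch of the iteration is given below; the local-variance-based noise weight is a stand-in for the paper's estimator function, and the update is written in the smoothing (diffusion-like) direction with the iteration count as the only tuning parameter.

```python
# Hedged sketch of the iteration described above: each pixel is nudged toward its
# neighbours by its 3x3 Laplacian, weighted by a local noise estimate (a simple
# local-variance ratio stands in for the paper's estimator function).
import numpy as np
from scipy.ndimage import convolve, uniform_filter

LAPLACIAN_3X3 = np.array([[0, 1, 0],
                          [1, -4, 1],
                          [0, 1, 0]], dtype=float)

def local_noise_weight(img, win=3, eps=1e-6):
    """Weight in (0, 1]: closer to 1 where local variance looks noise-dominated."""
    mean = uniform_filter(img, win)
    var = uniform_filter(img * img, win) - mean * mean
    noise_var = np.median(var)                    # crude global noise estimate
    return noise_var / (var + noise_var + eps)

def laplacian_denoise(img, iterations=10, step=0.2):
    out = img.astype(float).copy()
    for _ in range(iterations):
        lap = convolve(out, LAPLACIAN_3X3, mode="nearest")
        out += step * local_noise_weight(out) * lap   # noise-weighted diffusion step
    return out

noisy = np.random.default_rng(4).normal(0, 0.1, size=(64, 64)) + 1.0
print(laplacian_denoise(noisy).std() < noisy.std())   # smoother after filtering
```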
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
Increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on a Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification, and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, a Bayesian network of the factors affecting quality is built from the prior probability distribution and modified with the posterior probability distribution. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and making intelligent decisions for precision problem solving. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.
Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.
Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo
2015-05-01
It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve a low dose in computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under the scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization to retain image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters ensures not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method.
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GNC software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault-detection and measurement down selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
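As a toy illustration of the disqualification and down-selection step, the sketch below drops measurements flagged unhealthy and applies a mid-value selection to the survivors; the flight software's actual fault-detection checks and voting logic are not reproduced here.

```python
# Hedged sketch of redundant-rate-sensor management: disqualify measurements
# flagged unhealthy, then down-select a single rate per axis (mid-value select
# shown here as an assumed, simplified stand-in for the flight logic).
def down_select_rate(measurements, health_flags):
    """Return one angular-rate value from the healthy subset of redundant sensors."""
    healthy = [m for m, ok in zip(measurements, health_flags) if ok]
    if not healthy:
        raise RuntimeError("no healthy rate measurements available")
    healthy.sort()
    return healthy[len(healthy) // 2]   # mid-value (median-like) selection

# Three redundant roll-rate measurements; the second sensor is faulted high.
rates = [0.51, 9.75, 0.49]           # deg/s
flags = [True, False, True]          # False = disqualified by the SDQ checks
print(down_select_rate(rates, flags))  # -> 0.51
```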
Combustion distribution control using the extremum seeking algorithm
NASA Astrophysics Data System (ADS)
Marjanovic, A.; Krstic, M.; Djurovic, Z.; Kvascev, G.; Papic, V.
2014-12-01
Quality regulation of the combustion process inside the furnace is fundamental to meeting the high demands for robustness, safety, and efficiency of thermal power plants. The paper considers the possibility of controlling the spatial temperature distribution inside the boiler by correcting the distribution of coal over the mills. Such a control system keeps the flame focus away from the walls of the boiler, and thus preserves the equipment and reduces the possibility of ash slagging. At the same time, uniform heat dissipation over the mills enhances the energy efficiency of the boiler while reducing the pollution of the system. A constrained multivariable extremum seeking algorithm is proposed as a tool for combustion process optimization with the main objective of centralizing the flame in the furnace. Simulations are conducted on a model corresponding to the 350 MW boiler of the Nikola Tesla Power Plant in Obrenovac, Serbia.
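A minimal perturbation-based extremum-seeking loop in the spirit of the method described above is sketched below; the scalar objective, dither parameters, washout filter, and input limits are illustrative assumptions rather than the constrained multivariable scheme or the boiler model used in the paper.

```python
# Hedged single-input extremum-seeking sketch: dither the input, demodulate the
# high-passed performance index, and integrate the gradient estimate.
import math

def objective(u):
    """Stand-in performance index with a maximum at u = 3 (e.g. a flame-centring score)."""
    return -(u - 3.0) ** 2

def extremum_seek(u0=0.0, a=0.2, omega=5.0, gain=0.8, tau=1.0, dt=0.01, steps=20000):
    u_hat, j_avg = u0, objective(u0)
    for k in range(steps):
        t = k * dt
        dither = a * math.sin(omega * t)
        j = objective(u_hat + dither)                 # measured performance index
        j_avg += (dt / tau) * (j - j_avg)             # washout (high-pass) filter
        u_hat += gain * (j - j_avg) * math.sin(omega * t) * dt   # demodulated step
        u_hat = min(max(u_hat, -10.0), 10.0)          # crude input constraint
    return u_hat

print(round(extremum_seek(), 2))   # should settle near the optimum at u = 3
```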
Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura
2016-01-01
Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been shown to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel-weighting factor-based re-optimization algorithm, which was enhanced into a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for the planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. Moreover, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans, and the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be satisfied when the TPS-QC tool generated re-optimized plans without sacrificing other dosimetric endpoints. In addition to its feasibility and accuracy, the proposed TPS-QC tool is also user-friendly and easy to operate, both of which are necessary characteristics for clinical use. PMID:26930204
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality of two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The quality of compressed medical images from two image compression programs, Apollo and JJ2000, was evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1. The quality of the reconstructed images was then evaluated using five objective metrics. Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System
NASA Astrophysics Data System (ADS)
Agarwal, Ruchi; Singh, Sanjeev
2017-12-01
The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization because of their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed, and its simulation is carried out in the MATLAB-Simulink environment. The obtained results are presented to demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load, voltage sag, and a tapped load fault under a one-phase-open condition at both points of common coupling.
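The golden section search component can be sketched as follows; the placeholder cost curve stands in for the paper's composite objective (current harmonic distortion, power factor, and DC voltage ripple), and the successive linear search stage is omitted.

```python
# Hedged sketch of golden-section search over a single controller gain; the
# cost function is a placeholder, not the converter's actual power-quality index.
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimise a unimodal function f on the interval [a, b]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0       # 1/phi, about 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Example: tune one gain against a placeholder cost curve with a minimum at 0.35.
cost = lambda kp: (kp - 0.35) ** 2 + 0.02
print(golden_section_search(cost, 0.0, 2.0))     # approximately 0.35
```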
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
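A minimal SPGD loop of the kind described above is sketched next; the quadratic image-metric surrogate and the gain and perturbation values are assumptions, not the AOSLO's actual intensity measurement or tuning.

```python
# Hedged sketch of stochastic parallel gradient descent (SPGD) driving actuator
# commands to maximise a scalar image-sharpness metric; the quadratic "system"
# below is a stand-in for the real mean-image-intensity measurement.
import numpy as np

rng = np.random.default_rng(2)
u_opt = rng.uniform(-1, 1, size=12)              # unknown best actuator settings

def image_metric(u):
    """Placeholder for mean retinal-image intensity as a function of commands u."""
    return 1.0 - np.sum((u - u_opt) ** 2)

def spgd(u, gain=0.3, perturb=0.05, iterations=2000):
    for _ in range(iterations):
        delta = perturb * rng.choice([-1.0, 1.0], size=u.size)   # Bernoulli dither
        dj = image_metric(u + delta) - image_metric(u - delta)
        u = u + gain * dj * delta                # two-sided SPGD update
    return u

u = spgd(np.zeros(12))
print(np.round(np.abs(u - u_opt).max(), 3))      # residual error after SPGD
```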
Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc
2016-07-01
Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both in case (i) nonlinear distortion and (ii) a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality.
NASA Astrophysics Data System (ADS)
Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.
2018-04-01
A high-quality structural image reconstruction algorithm for endoscopic optical coherence tomography of biological tissue is described. The key features of the presented algorithm are (1) raster scanning with averaging of adjacent A-scans and pixels and (2) speckle level minimization. The described algorithm can be used in gastroenterology, urology, gynecology, and otorhinolaryngology for mucous membrane and skin diagnostics in vivo and in situ.
NASA Technical Reports Server (NTRS)
Velden, Christopher
1995-01-01
The research objectives in this proposal were part of a continuing program at UW-CIMSS to develop and refine an automated geostationary satellite winds processing system which can be utilized in both research and operational environments. The majority of the originally proposed tasks were successfully accomplished, and in some cases the progress exceeded the original goals. Much of the research and development supported by this grant resulted in upgrades and modifications to the existing automated satellite winds tracking algorithm. These modifications were put to the test through case study demonstrations and numerical model impact studies. After being successfully demonstrated, the modifications and upgrades were implemented into the NESDIS algorithms in Washington DC, and have become part of the operational support. A major focus of the research supported under this grant attended to the continued development of water vapor tracked winds from geostationary observations. The fully automated UW-CIMSS tracking algorithm has been tuned to provide complete upper-tropospheric coverage from this data source, with data set quality close to that of operational cloud motion winds. Multispectral water vapor observations were collected and processed from several different geostationary satellites. The tracking and quality control algorithms were tuned and refined based on ground-truth comparisons and case studies involving impact on numerical model analyses and forecasts. The results have shown the water vapor motion winds are of good quality, complement the cloud motion wind data, and can have a positive impact in NWP on many meteorological scales.
Ye, Bixiong; E, Xueli; Zhang, Lan
2015-01-01
To optimize the non-regular drinking water quality indices (except Giardia and Cryptosporidium) of urban drinking water. Several methods were applied, including the rate of exceeding the standard, the risk of exceeding the standard, the frequency of detecting concentrations below the detection limit, a comprehensive water quality index evaluation method, and the attribute reduction algorithm of rough set theory; redundant water quality indicators were eliminated, and the control factors that play a leading role in drinking water safety were identified. The optimization results showed that, of 62 unconventional water quality monitoring indicators of urban drinking water, 42 indicators could be removed by comprehensive evaluation combined with rough set attribute reduction. Optimizing the water quality monitoring indicators and reducing the number of indicators and the monitoring frequency can ensure the safety of drinking water quality while lowering monitoring costs and reducing the monitoring burden on sanitation supervision departments.
NASA Astrophysics Data System (ADS)
Li, Yuanbo; Cui, Xiaoqian; Wang, Hongbei; Zhao, Mengge; Ding, Hongbin
2017-10-01
Digital speckle pattern interferometry (DSPI) can diagnose topography evolution in a real-time, continuous, and non-destructive manner, and has been considered one of the most promising techniques for plasma-facing component (PFC) topography diagnostics in the complicated environment of a tokamak. It is important for digital speckle pattern interferometry to enhance speckle patterns and obtain the real topography of the ablated crater. In this paper, two kinds of numerical models based on the flood-fill algorithm have been developed to obtain the real profile by unwrapping the wrapped phase in the speckle interference pattern, which can be calculated from four intensity images by means of the 4-step phase-shifting technique. During phase unwrapping by means of the flood-fill algorithm, noise pollution and other inevitable factors lead to poor quality of the reconstruction results, which affects the authenticity of the restored topography. The calculation of quality parameters was therefore introduced to obtain a quality map from the wrapped phase map, and this work presents two different methods to calculate the quality parameters. The quality parameters are then used to guide the path of the flood-fill algorithm, and pixels with good quality parameters are given priority in the calculation, so that the quality of the speckle interference pattern reconstruction results is improved. A comparison between the flood-fill algorithm suited to speckle pattern interferometry and the quality-guided flood-fill algorithm (with the two different calculation approaches) shows that the errors caused by noise pollution and fringe discontinuities were successfully reduced.
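The quality-guided variant can be sketched as a priority-queue flood fill in which high-quality pixels are unwrapped first; the uniform quality map in the toy example is a placeholder, and the two quality-parameter definitions developed in the paper are not reproduced.

```python
# Hedged sketch of quality-guided flood-fill phase unwrapping: starting from a
# seed pixel, neighbours are unwrapped in decreasing order of a quality map
# (the quality metric itself, e.g. phase-derivative variance, is assumed here).
import heapq
import numpy as np

def unwrap_quality_guided(wrapped, quality, seed):
    h, w = wrapped.shape
    unwrapped = np.zeros_like(wrapped)
    visited = np.zeros((h, w), dtype=bool)
    unwrapped[seed] = wrapped[seed]
    visited[seed] = True
    # Max-heap on quality (negated), so high-quality pixels are processed first.
    heap = [(-quality[seed], seed)]
    while heap:
        _, (i, j) = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not visited[ni, nj]:
                diff = wrapped[ni, nj] - wrapped[i, j]
                # Remove the 2*pi jump relative to the already-unwrapped neighbour.
                diff -= 2 * np.pi * np.round(diff / (2 * np.pi))
                unwrapped[ni, nj] = unwrapped[i, j] + diff
                visited[ni, nj] = True
                heapq.heappush(heap, (-quality[ni, nj], (ni, nj)))
    return unwrapped

# Toy example: a wrapped linear ramp unwraps back to the original ramp.
ramp = np.linspace(0, 6 * np.pi, 64).reshape(8, 8)
wrapped = np.angle(np.exp(1j * ramp))
quality = np.ones_like(ramp)
print(np.allclose(unwrap_quality_guided(wrapped, quality, (0, 0)), ramp))
```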
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu
Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. Results: There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. Conclusions: The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.
Operational Implementation of Sea Ice Concentration Estimates from the AMSR2 Sensor
NASA Technical Reports Server (NTRS)
Meier, Walter N.; Stewart, J. Scott; Liu, Yinghui; Key, Jeffrey; Miller, Jeffrey A.
2017-01-01
An operational implementation of a passive microwave sea ice concentration algorithm to support NOAA's operational mission is presented. The NASA Team 2 algorithm, previously developed for the NASA Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) product suite, is adapted for operational use with the JAXA AMSR2 sensor through several enhancements. First, the algorithm is modified to process individual swaths and provide concentration from the most recent swaths instead of a 24-hour average. A latency (time since observation) field and a 24-hour concentration range (maximum minus minimum) are included to provide indications of data timeliness and variability. Concentration from the Bootstrap algorithm is a secondary field to provide complementary sea ice information. A quality flag is implemented to provide information on interpolation, filtering, and other quality control steps. The AMSR2 concentration fields are compared with a different AMSR2 passive microwave product, and then validated via comparison with sea ice concentration from the Suomi NPP Visible Infrared Imaging Radiometer Suite. This validation indicates the AMSR2 concentrations have a bias of 3.9% and an RMSE of 11.0% in the Arctic, and a bias of 4.45% and an RMSE of 8.8% in the Antarctic. In most cases, the NOAA operational requirements for accuracy are met. However, in low-concentration regimes, such as during melt and near the ice edge, errors are higher because of the limitations of passive microwave sensors and the algorithm retrieval.
An improved robust blind motion de-blurring algorithm for remote sensing images
NASA Astrophysics Data System (ADS)
He, Yulong; Liu, Jin; Liang, Yonghui
2016-10-01
Shift-invariant motion blur can be modeled as a convolution of the true latent image with the blur kernel plus additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images, and we make the following contributions to improve image quality. For robust kernel estimation, first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization, in which we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve fine details and suppress noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
A demand assignment control in international business satellite communications network
NASA Astrophysics Data System (ADS)
Nohara, Mitsuo; Takeuchi, Yoshio; Takahata, Fumio; Hirata, Yasuo
An experimental system is being developed for use in an international business satellite (IBS) communications network based on demand-assignment (DA) and TDMA techniques. This paper discusses its system design, in particular from the viewpoints of a network configuration, a DA control, and a satellite channel-assignment algorithm. A satellite channel configuration is also presented along with a tradeoff study on transmission rate, HPA output power, satellite resource efficiency, service quality, and so on.
Guo, Hao; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful for helping managers make the right decisions in an e-supply chain environment. PMID:24489489
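A generic skeleton of a hybrid genetic/simulated-annealing search is sketched below to illustrate how the two metaheuristics can be interleaved; the permutation encoding, the simplified mutation, and the cooling schedule are placeholders and do not reproduce the authors' HGSAA formulation of the location-inventory-routing model.

```python
# Hedged skeleton of a hybrid GA + simulated-annealing search (HGSAA-style).
# `cost` is any user-supplied objective over a permutation of n_genes items.
import math, random

def hybrid_ga_sa(cost, n_genes, pop_size=30, generations=200, t0=1.0, alpha=0.95):
    pop = [random.sample(range(n_genes), n_genes) for _ in range(pop_size)]
    best = min(pop, key=cost)
    temp = t0
    for _ in range(generations):
        # GA step: tournament selection plus a swap mutation (crossover omitted for brevity)
        children = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            child = list(min(a, b, key=cost))
            i, j = sorted(random.sample(range(n_genes), 2))
            child[i], child[j] = child[j], child[i]
            children.append(child)
        # SA step: accept worse children with Boltzmann probability
        new_pop = []
        for parent, child in zip(pop, children):
            d = cost(child) - cost(parent)
            accept = d < 0 or random.random() < math.exp(-d / temp)
            new_pop.append(child if accept else parent)
        pop = new_pop
        temp *= alpha                               # geometric cooling schedule
        best = min(pop + [best], key=cost)
    return best
```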
Maximum likelihood positioning algorithm for high-resolution PET scanners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick
2016-06-15
Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.
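The crystal identification step can be illustrated with the minimal sketch below: the measured light distribution is scored against per-crystal expected distributions (PDFs) and the crystal with the highest log-likelihood is selected, with dead channels masked out. The multinomial likelihood model and array shapes are assumptions made for illustration.

```python
# Hedged sketch of ML crystal positioning: compare the measured light distribution
# against per-crystal expected distributions and pick the best log-likelihood,
# skipping channels without valid data.
import numpy as np

def ml_position(measured, pdfs, alive):
    """
    measured: (n_channels,) detected light per readout channel
    pdfs:     (n_crystals, n_channels) expected light fractions per crystal
    alive:    (n_channels,) boolean mask of channels with valid data
    """
    p = np.clip(pdfs[:, alive], 1e-12, None)
    p /= p.sum(axis=1, keepdims=True)       # renormalise over the live channels only
    loglik = measured[alive] @ np.log(p).T  # multinomial log-likelihood (up to a constant)
    return int(np.argmax(loglik)), loglik
```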
NASA Astrophysics Data System (ADS)
Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.
Exhaustive quality control is becoming very important in the globalized world market. One example where quality control becomes critical is the mass production of percussion caps. These elements must be fabricated within a minimum tolerance deviation. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections on the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Because of these problems, it is impossible to solve the task with traditional image processing methods, and hence machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.
Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System
NASA Technical Reports Server (NTRS)
Williams, Peggy S.
2004-01-01
The NASA F-15 Intelligent Flight Control System project team has developed a series of flight control concepts designed to demonstrate the benefits of a neural network-based adaptive controller. The objective of the team is to develop and flight-test control systems that use neural network technology to optimize the performance of the aircraft under nominal conditions as well as stabilize the aircraft under failure conditions. Failure conditions include locked or failed control surfaces as well as unforeseen damage that might occur to the aircraft in flight. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to the baseline aerodynamic derivatives in flight. This set of open-loop flight tests was performed in preparation for a future phase of flights in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. An examination of flight data shows that addition of the flight-identified aerodynamic derivative increments into the simulation improved the pitch handling qualities of the aircraft.
Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.
Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.
Assessing Subjectivity in Sensor Data Post Processing via a Controlled Experiment
NASA Astrophysics Data System (ADS)
Jones, A. S.; Horsburgh, J. S.; Eiriksson, D.
2017-12-01
Environmental data collected by in situ sensors must be reviewed to verify validity, and conducting quality control often requires making edits in post processing to generate approved datasets. This process involves decisions by technicians, data managers, or data users on how to handle problematic data. Options include: removing data from a series, retaining data with annotations, and altering data based on algorithms related to adjacent data points or the patterns of data at other locations or of other variables. Ideally, given the same dataset and the same quality control guidelines, multiple data quality control technicians would make the same decisions in data post processing. However, despite the development and implementation of guidelines aimed to ensure consistent quality control procedures, we have faced ambiguity when performing post processing, and we have noticed inconsistencies in the practices of individuals performing quality control post processing. Technicians with the same level of training and using the same input datasets may produce different results, affecting the overall quality and comparability of finished data products. Different results may also be produced by technicians that do not have the same level of training. In order to assess the effect of subjective decision making by the individual technician on the end data product, we designed an experiment where multiple users performed quality control post processing on the same datasets using a consistent set of guidelines, field notes, and tools. We also assessed the effect of technician experience and training by conducting the same procedures with a group of novices unfamiliar with the data and the quality control process and compared their results to those generated by a group of more experienced technicians. In this presentation, we report our observations of the degree of subjectivity in sensor data post processing, assessing and quantifying the impacts of the individual technician as well as technician experience on quality controlled data products.
Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2017-01-01
The paper focuses on the adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a linearized low-level state-space simulation. The engine health is identified through the influence coefficient matrix. The influence coefficients are determined by the GTE high-level mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squared deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve the accuracy of the identification solutions and to ensure their robustness, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ sequence was developed and tested. Analysis of the results suggests that the developed algorithms achieve higher identification accuracy and reliability than similar models used in practice.
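The identification step, in which the sum of squared deviations between model and measured parameters is minimized, can be illustrated by a plain least-squares solve against the influence coefficient matrix; the sketch below uses invented names and omits the Monte Carlo robustness analysis.

```python
# Hedged sketch of the identification step: estimate health parameters by
# minimising the sum of squared deviations between measured gas-path parameters
# and the on-board linear model.
import numpy as np

def identify_health(H, delta_measured):
    """
    H:              (n_params, n_health) influence-coefficient matrix
    delta_measured: (n_params,) deviation of measured parameters from the nominal model
    Returns the least-squares health-parameter estimate and the residual.
    """
    theta, residual, *_ = np.linalg.lstsq(H, delta_measured, rcond=None)
    return theta, residual
```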
Li, Limin; Xu, Yubin; Soong, Boon-Hee; Ma, Lin
2013-01-01
Vehicular communication platforms that provide real-time access to wireless networks have drawn more and more attention in recent years. IEEE 802.11p is the main radio access technology that supports communication for high-mobility terminals; however, due to its limited coverage, IEEE 802.11p is usually deployed coupled with cellular networks to achieve seamless mobility. In a heterogeneous cellular/802.11p network, vehicular communication is characterized by its short time span in association with a wireless local area network (WLAN). Moreover, for the media access control (MAC) scheme used in WLANs, the network throughput decreases dramatically as the number of users increases. In response to these compelling problems, we propose a reinforcement sensor (RFS) embedded vertical handoff control strategy to support mobility management. The RFS has online learning capability and can provide optimal handoff decisions in an adaptive fashion without prior knowledge. The algorithm integrates considerations including vehicular mobility, traffic load, handoff latency, and network status. Simulation results verify that the proposed algorithm can adaptively adjust the handoff strategy, allowing users to stay connected to the best network. Furthermore, the algorithm can ensure that RSUs are adequate, thereby guaranteeing a high-quality user experience. PMID:24193101
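As a rough illustration of a learning-based handoff decision in the spirit of the RFS strategy, the sketch below uses a tabular Q-learning agent choosing between the cellular network and 802.11p; the state space, reward, and environment step function are invented placeholders, not the authors' formulation.

```python
# Hedged sketch: tabular Q-learning for a vertical handoff decision between a
# cellular network and 802.11p. `env_step(state, action)` is a user-supplied
# simulator returning (next_state, reward); all states here are illustrative.
import random
from collections import defaultdict

ACTIONS = ("cellular", "80211p")

def q_learning_handoff(env_step, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    Q = defaultdict(float)                          # Q[(state, action)]
    for _ in range(episodes):
        state = ("low_load", "high_speed")          # hypothetical initial state
        for _ in range(50):
            if random.random() < eps:
                action = random.choice(ACTIONS)     # explore
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward = env_step(state, action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```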
SIMULATION OF AEROSOL DYNAMICS: A COMPARATIVE REVIEW OF ALGORITHMS USED IN AIR QUALITY MODELS
A comparative review of algorithms currently used in air quality models to simulate aerosol dynamics is presented. This review addresses coagulation, condensational growth, nucleation, and gas/particle mass transfer. Two major approaches are used in air quality models to repres...
Delivery of video-on-demand services using local storages within passive optical networks.
Abeywickrama, Sandu; Wong, Elaine
2013-01-28
At present, distributed storage systems have been widely studied to alleviate Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits the use of standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet level simulations to perform a comparative analysis of the Quality-of-Service attributes between two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables the services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance versus the increased power requirement from implementing a local storage within the network.
Reference-free automatic quality assessment of tracheoesophageal speech.
Huang, Andy; Falk, Tiago H; Chan, Wai-Yip; Parsa, Vijay; Doyle, Philip
2009-01-01
Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.
Eller, Achim; Wuest, Wolfgang; Scharf, Michael; Brand, Michael; Achenbach, Stephan; Uder, Michael; Lell, Michael M
2013-12-01
To evaluate an automated attenuation-based kV-selection in computed tomography of the chest with respect to radiation dose and image quality, compared to a standard 120 kV protocol. 104 patients were examined using a 128-slice scanner. Fifty examinations (58 ± 15 years, study group) were performed using the automated adaptation of tube potential (100-140 kV), based on the attenuation profile of the scout scan, and 54 examinations (62 ± 14 years, control group) with fixed 120 kV. The estimated CT dose index (CTDI) of the software-proposed setting was compared with the 120 kV protocol. After the scan, CTDI volume (CTDIvol) and dose length product (DLP) were recorded. Image quality was assessed by region of interest (ROI) measurements, and subjective image quality by two observers on a 4-point scale (3--excellent, 0--not diagnostic). The algorithm selected 100 kV in 78% and 120 kV in 22% of cases. Overall CTDIvol reduction was 26.6% (34% in the 100 kV group); overall DLP reduction was 22.8% (32.1% in the 100 kV group) (all p<0.001). Subjective image quality was excellent in both groups. The attenuation-based kV-selection algorithm enables relevant dose reduction (~27%) in chest CT while keeping image quality parameters at high levels. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rzonca, A.
2013-12-01
The paper presents the state of the art of quality control of photogrammetric and laser scanning data captured by airborne sensors. The subject is very important for photogrammetric and LiDAR project execution, because the quality of the captured data largely determines the quality of the final product. On the other hand, a precise and effective quality control process allows missions to be executed without a wide margin of safety, especially for projects in mountain areas. As an introduction, the author presents the theoretical background of quality control, based on his own experience, instructions, and technical documentation, and describes several organizational variants. Basically, there are two main approaches: quality control of the captured data, and control of discrepancies between the flight plan and the results of its execution. Both can make use of automated control tests and data analysis. The test is an automatic algorithm that checks the data and generates a control report. Analysis is a less complicated process based on a manual check of documentation, data, and metadata. An example of a quality control system for a large-area project is presented. The project is carried out periodically over the entire territory of Spain and is named the National Plan of Aerial Orthophotography (Plan Nacional de Ortofotografía Aérea, PNOA). The internal control system delivers its results soon after the flight and informs the company's flight team. This allows all errors to be corrected shortly after the flight, and it may stop the transfer of data to another team or company for further data processing. The described data quality control system comprises geometric and radiometric control of photogrammetric data and geometric control of LiDAR data. It checks all the specified parameters and generates reports, which are very helpful in case of errors or low-quality data. The paper draws on the author's experience in data quality control and presents conclusions and suggestions on organizational and technical aspects, with a short definition of the necessary control software.
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality control agents.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra-prediction process of the H.264 video coding standard is used to code the first frame of a video (the intra frame) and achieves better coding efficiency than earlier standards in the series. Intra-frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method, which increases computational complexity, increases bit rate, and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra-frame coding. Previous fast mode decision intra-prediction algorithms, based on various techniques, increased bit rate and degraded picture quality (PSNR) at different quantization parameters; many of them only reduced computational complexity or saved encoding time at the cost of increased bit rate and loss of picture quality. To avoid these losses, a better approach is developed in this paper: a Gaussian pulse is applied to intra-frame coding with the diagonal down-left intra-prediction mode to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 integer-transformed coefficient block by the Gaussian pulse scales the coefficient information in a reversible manner; the frequency samples are modified in a known and controllable way without intermixing coefficients, which prevents the picture from degrading badly at higher quantization parameter values. The proposed work was implemented using MATLAB and the JM 18.6 reference software, and the performance parameters PSNR, bit rate, and compression were measured for intra frames of YUV video sequences at QCIF resolution under different quantization parameters with the Gaussian value for the diagonal down-left intra-prediction mode. The simulation results of the proposed algorithm are tabulated and compared with a previous algorithm (the Tian et al. method). The proposed algorithm achieves an average bit-rate reduction of 30.98% and maintains consistent picture quality for QCIF sequences compared with the Tian et al. method.
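The coefficient scaling at the heart of the method can be illustrated as follows: each 4x4 block of transform coefficients is multiplied element-wise by a Gaussian-shaped weight before quantization and divided by the same weight after dequantization, which keeps the operation reversible. The Gaussian parameters and quantization step below are illustrative values, not those of the paper.

```python
# Hedged sketch of Gaussian weighting of 4x4 transform coefficients around a
# simple uniform quantiser; sigma and qstep are illustrative, not the paper's values.
import numpy as np

def gaussian_weights(sigma=2.0):
    u, v = np.meshgrid(np.arange(4), np.arange(4))
    return np.exp(-(u**2 + v**2) / (2.0 * sigma**2))   # heavier weight on low frequencies

def encode_block(coeffs_4x4, qstep=8.0, sigma=2.0):
    w = gaussian_weights(sigma)
    return np.round(coeffs_4x4 * w / qstep)            # scale, then quantise

def decode_block(levels_4x4, qstep=8.0, sigma=2.0):
    w = gaussian_weights(sigma)
    return levels_4x4 * qstep / w                      # dequantise, then undo the scaling
```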
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
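As a generic stand-in for the penalized least-squares reconstruction described above, the sketch below minimizes ||Hf - g||^2 + beta*||Df||^2 by gradient descent for a small linear imaging operator H and a finite-difference penalty D; the operators, step size, and iteration count are placeholders, not the authors' transducer-response model.

```python
# Hedged sketch of a PLS iteration with a quadratic smoothness penalty:
# minimise ||H f - g||^2 + beta ||D f||^2 by plain gradient descent.
# In practice a line search or conjugate gradients would be used.
import numpy as np

def pls_reconstruct(H, g, beta=0.1, step=1e-3, iters=200):
    n = H.shape[1]
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)   # simple circular difference penalty
    f = np.zeros(n)
    for _ in range(iters):
        grad = H.T @ (H @ f - g) + beta * (D.T @ (D @ f))
        f -= step * grad
    return f
```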
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2010-01-01
AIRS was launched on EOS Aqua on May 4, 2002 together with AMSU-A and HSB to form a next generation polar orbiting infrared and microwave atmosphere sounding system (Pagano et al 2003). The theoretical approach used to analyze AIRS/AMSU/HSB data in the presence of clouds in the AIRS Science Team Version 3 at-launch algorithm, and that used in the Version 4 post-launch algorithm, have been published previously. Significant theoretical and practical improvements have been made in the analysis of AIRS/AMSU data since the Version 4 algorithm. Most of these have already been incorporated in the AIRS Science Team Version 5 algorithm (Susskind et al 2010), now being used operationally at the Goddard DISC. The AIRS Version 5 retrieval algorithm contains three significant improvements over Version 4. Improved physics in Version 5 allowed for use of AIRS clear column radiances (R_i) in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations were used primarily in the generation of clear column radiances (R_i) for all channels. This new approach allowed for the generation of accurate Quality Controlled values of R_i and T(p) under more stressing cloud conditions. Secondly, Version 5 contained a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 contained for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Susskind et al 2010 shows that Version 5 AIRS Only soundings are only slightly degraded from the AIRS/AMSU soundings, even at large fractional cloud cover.
Incorporation of quality updates for JPSS CGS Products
NASA Astrophysics Data System (ADS)
Cochran, S.; Grant, K. D.; Ibrahim, W.; Brueske, K. F.; Smit, P.
2016-12-01
NOAA's next-generation environmental satellite, the Joint Polar Satellite System (JPSS) replaces the current Polar-orbiting Operational Environmental Satellites (POES). JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite was launched in 2011 and is currently NOAA's primary operational polar satellite. The JPSS ground system is the Common Ground System (CGS), and provides command, control, and communications (C3) and data processing (DP). A multi-mission system, CGS provides combinations of C3/DP for numerous NASA, NOAA, DoD, and international missions. In preparation for the next JPSS satellite, CGS improved its multi-mission capabilities to enhance mission operations for larger constellations of earth observing satellites with the added benefit of streamlining mission operations for other NOAA missions. This paper will discuss both the theoretical basis and the actual practices used to date to identify, test and incorporate algorithm updates into the CGS processing baseline. To provide a basis for this support, Raytheon developed a theoretical analysis framework, and the application of derived engineering processes, for the maintenance of consistency and integrity of remote sensing operational algorithm outputs. The framework is an abstraction of the operationalization of the science-grade algorithm (Sci2Ops) process used throughout the JPSS program. By combining software and systems engineering controls, manufacturing disciplines to detect and reduce defects, and a standard process to control analysis, an environment to maintain operational algorithm maturity is achieved. Results of the use of this approach to implement algorithm changes into operations will also be detailed.
Methods and Tools for Product Quality Maintenance in JPSS CGS
NASA Astrophysics Data System (ADS)
Cochran, S.; Smit, P.; Grant, K. D.; Jamilkowski, M. L.
2015-12-01
NOAA's next-generation environmental satellite, the Joint Polar Satellite System (JPSS) replaces the current Polar-orbiting Operational Environmental Satellites (POES). JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite was launched in 2011 and is currently NOAA's primary operational polar satellite. The JPSS ground system is the Common Ground System (CGS), and provides command, control, and communications (C3) and data processing (DP). A multi-mission system, CGS provides combinations of C3/DP for numerous NASA, NOAA, DoD, and international missions. In preparation for the next JPSS satellite, CGS improved its multi-mission capabilities to enhance mission operations for larger constellations of earth observing satellites with the added benefit of streamlining mission operations for other NOAA missions. This paper will discuss both the theoretical basis and the actual practices used to date to identify, test and incorporate algorithm updates into the CGS processing baseline. To provide a basis for this support, Raytheon developed a theoretical analysis framework, and the application of derived engineering processes, for the maintenance of consistency and integrity of remote sensing operational algorithm outputs. The framework is an abstraction of the operationalization of the science-grade algorithm (Sci2Ops) process used throughout the JPSS program. By combining software and systems engineering controls, manufacturing disciplines to detect and reduce defects, and a standard process to control analysis, an environment to maintain operational algorithm maturity is achieved. Results of the use of this approach to implement algorithm changes into operations will also be detailed.
Advanced methods in NDE using machine learning approaches
NASA Astrophysics Data System (ADS)
Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank
2018-04-01
Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new or leverage existing algorithms that learn from training data and give accurate predictions or find patterns, particularly in new and unseen but similar data, fits Non-Destructive Evaluation (NDE) perfectly. The advantages of ML in NDE are obvious in such tasks as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic, or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis, and the approach has been applied to a variety of quality assessment tasks. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Afterwards, the sensor signals from unknown samples can be recognized and classified automatically by the previously trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). The algorithms are still trained on an ordinary PC; however, the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of the components are required. The automated testing can then be done by the machine. By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
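The two-stage approach (unsupervised dimension reduction followed by supervised classification of acoustic feature vectors) can be sketched with a standard scikit-learn pipeline, assuming that library is available; the features and labels below are synthetic placeholders.

```python
# Hedged sketch: PCA for unsupervised simplification of feature vectors, followed
# by a supervised SVM classifier, trained and evaluated on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 64))          # hypothetical acoustic feature vectors
y = rng.integers(0, 2, size=200)        # hypothetical good/defect labels

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X[:150], y[:150])
print("hold-out accuracy:", model.score(X[150:], y[150:]))
```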
De Feo, Vito; Boi, Fabio; Safaai, Houman; Onken, Arno; Panzeri, Stefano; Vato, Alessandro
2017-01-01
Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to an increase in the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.
Scotland, G S; McNamee, P; Fleming, A D; Goatman, K A; Philip, S; Prescott, G J; Sharp, P F; Williams, G J; Wykes, W; Leese, G P; Olson, J A
2010-06-01
To assess the cost-effectiveness of an improved automated grading algorithm for diabetic retinopathy against a previously described algorithm, and in comparison with manual grading. Efficacy of the alternative algorithms was assessed using a reference graded set of images from three screening centres in Scotland (1253 cases with observable/referable retinopathy and 6333 individuals with mild or no retinopathy). Screening outcomes and grading and diagnosis costs were modelled for a cohort of 180 000 people, with prevalence of referable retinopathy at 4%. Algorithm (b), which combines image quality assessment with detection algorithms for microaneurysms (MA), blot haemorrhages and exudates, was compared with a simpler algorithm (a) (using image quality assessment and MA/dot haemorrhage (DH) detection), and the current practice of manual grading. Compared with algorithm (a), algorithm (b) would identify an additional 113 cases of referable retinopathy for an incremental cost of £68 per additional case. Compared with manual grading, automated grading would be expected to identify between 54 and 123 fewer referable cases, for a grading cost saving between £3834 and £1727 per case missed. Extrapolation modelling over a 20-year time horizon suggests manual grading would cost between £25,676 and £267,115 per additional quality adjusted life year gained. Algorithm (b) is more cost-effective than the algorithm based on quality assessment and MA/DH detection. With respect to the value of introducing automated detection systems into screening programmes, automated grading operates within the recommended national standards in Scotland and is likely to be considered a cost-effective alternative to manual disease/no disease grading.
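The comparison rests on simple incremental cost-effectiveness arithmetic: the extra cost divided by the extra effect (additional cases detected, or QALYs gained). A minimal sketch with invented numbers:

```python
# Hedged sketch of the incremental cost-effectiveness ratio (ICER) arithmetic.
# The figures below are invented for illustration, not study values.
def icer(cost_new, cost_old, effect_new, effect_old):
    """Extra cost per extra unit of effect (e.g. per additional case detected)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# e.g. extra grading cost per additional referable case detected
print(icer(cost_new=150_000, cost_old=140_000, effect_new=500, effect_old=400))  # -> 100.0
```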
Stassi, D; Dutta, S; Ma, H; Soderman, A; Pazzani, D; Gros, E; Okerlund, D; Schmidt, T G
2016-01-01
Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.
Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.
2015-07-01
With the rapid progress in the development of optoelectronic components and computational power, 3-D optical metrology has become more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This article proposes a new approach to measure tiny internal 3-D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with the corresponding X-ray 3-D data as ground truth, and the comparison was quantified with the Iterative Closest Point algorithm.
ECG compression using non-recursive wavelet transform with quality control
NASA Astrophysics Data System (ADS)
Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching
2016-09-01
While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, a wavelet SQ scheme must select a set of multilevel quantisers for each quantisation process. Because of the properties of multiple-to-one mapping, such a scheme is not conducive to reconstruction error control. In order to address this problem, this paper presents a single-variable-control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: the first stage uses a genetic algorithm (GA) to obtain a high compression ratio (CR), the second applies quadratic curve fitting for linear distortion control, and the third uses fuzzy decision-making to minimise the data-dependency effect and select the optimal SQ. Two databases, the Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia databases, are used to evaluate quality control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of the training data and can facilitate rapid error control.
NASA Astrophysics Data System (ADS)
Meier, W.; Stroeve, J.; Duerr, R. E.; Fetterer, F. M.
2009-12-01
The declining Arctic sea ice is one of the most dramatic indicators of climate change and is being recognized as a key factor in future climate impacts on biology, human activities, and global climate change. As such, the audience for sea ice data is expanding well beyond the sea ice community. The most comprehensive sea ice data are from a series of satellite-borne passive microwave sensors. They provide a near-complete daily timeseries of sea ice concentration and extent since late-1978. However, there are many complicating issues in using such data, particularly for novice users. First, there is not one single, definitive algorithm, but several. And even for a given algorithm, different processing and quality-control methods may be used, depending on the source. Second, for all algorithms, there are uncertainties in any retrieved value. In general, these limitations are well-known: low spatial-resolution results in an imprecise ice edge determination and lack of small-scale detail (e.g., lead detection) within the ice pack; surface melt depresses concentration values during summer; thin ice is underestimated in some algorithms; some algorithms are sensitive to physical surface temperature; other surface features (e.g., snow) can influence retrieved data. While general error estimates are available for concentration values, currently the products do not carry grid-cell level or even granule level data quality information. Finally, metadata and data provenance information are limited, both of which are essential for future reprocessing. Here we describe the progress to date toward development of sea ice concentration products and outline the future steps needed to complete a sea ice climate data record.
Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan
2015-01-01
The conventional quality-guided (QG) phase unwrapping algorithm is difficult to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold automatic selection hybrid phase unwrapping algorithm that combines the existing QG algorithm and the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by selecting a threshold automatically, and the FF and QG unwrapping algorithms are then used to unwrap the phase in the respective levels. The feasibility of the proposed method is proved by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.
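The threshold-selection and partitioning step can be sketched as below; a crude gradient-magnitude proxy stands in for the usual phase-quality map, Otsu's method is assumed for the automatic threshold, and the QG and FF unwrappers themselves are not reproduced.

```python
# Hedged sketch of the split-by-quality idea: build a quality map, pick a
# threshold automatically, and partition the wrapped phase map into the regions
# that would be handled by the QG and FF unwrappers, respectively.
import numpy as np
from skimage.filters import threshold_otsu

def quality_map(wrapped):
    """Crude proxy: negative gradient magnitude of the wrapped phase (higher = better)."""
    gy, gx = np.gradient(wrapped)
    return -(np.abs(gx) + np.abs(gy))

def split_by_quality(wrapped):
    q = quality_map(wrapped)
    t = threshold_otsu(q)        # automatic threshold selection (an assumption)
    high = q >= t                # region for the quality-guided unwrapper
    low = ~high                  # region for the faster flood-fill unwrapper
    return high, low
```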
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
NASA Technical Reports Server (NTRS)
Powell, Bradley W.; Burroughs, Ivan A.
1994-01-01
Through the two phases of this contract, sensors for welding applications and parameter extraction algorithms have been developed. These sensors form the foundation of a weld control system which can provide active weld control through the monitoring of the weld pool and keyhole in a VPPA welding process. Systems of this type offer the potential of quality enhancement and cost reduction (minimization of rework on faulty welds) for high-integrity welding applications. Sensors for preweld and postweld inspection, weld pool monitoring, keyhole/weld wire entry monitoring, and seam tracking were developed. Algorithms for signal extraction were also developed and analyzed to determine their application to an adaptive weld control system. The following sections discuss findings for each of the three sensors developed under this contract: (1) the weld profiling sensor; (2) the weld pool sensor; and (3) the stereo seam tracker/keyhole imaging sensor. Hardened versions of these sensors were designed and built under this contract. A control system, described later, was developed on a multiprocessing/multitasking operating system for maximum power and flexibility. Documentation of the sensor mechanical and electrical design is also included as appendices in this report.
NASA Astrophysics Data System (ADS)
Egron, Sylvain; Soummer, Rémi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Levecq, Olivier; Mazoyer, Johan; N'Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand
2017-09-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope, such as JWST. With the JWST Science and Operations Center co-located at STScI, JOST was developed to provide both a platform for staff training and to test alternate wavefront sensing and control strategies for independent validation or future improvements beyond the baseline operations. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides image quality similar to that of JWST (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip, tilt and (2) the second lens, which stands in for the secondary mirror, in tip, tilt and x, y, z positions. We present the most recent experimental results for the segmented mirror alignment. Our implementation of the Wavefront Sensing (WFS) algorithms using phase diversity is tested in simulation and experimentally. The wavefront control (WFC) algorithms, which rely on a linear model for optical aberrations induced by misalignment of the secondary lens and the segmented mirror, are tested and validated both in simulations and experimentally. In this proceeding, we present the performance of the full active optics control loop in the presence of perturbations on the segmented mirror, and we detail the quality of the alignment correction.
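The linear wavefront-control step can be sketched as a pseudo-inverse reconstructor: measured wavefront modes are mapped to actuator commands through the inverse of an interaction (Jacobian) matrix relating segment and secondary-lens motions to the sensed modes. The matrices, gain, and names below are illustrative assumptions, not the testbed's actual control software.

```python
# Hedged sketch of a linear wavefront-control update: commands that partially
# cancel the measured modes, computed with a least-squares reconstructor.
import numpy as np

def wfc_commands(jacobian, measured_modes, gain=0.5):
    """
    jacobian:       (n_modes, n_actuators) response of sensed modes to unit actuator motions
    measured_modes: (n_modes,) sensed wavefront modes (e.g. from phase diversity)
    Returns actuator commands (segment piston/tip/tilt plus secondary-lens motions).
    """
    control_matrix = np.linalg.pinv(jacobian)    # pseudo-inverse reconstructor
    return -gain * control_matrix @ measured_modes
```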
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
Poor quality negatively affects project makespan and total cost, but it can be recovered through repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair works. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem based on an adaptive serial schedule generation scheme and an adjusted activity list. In the program of the algorithm, the frog-leaping process combines the crossover operator of a genetic algorithm and a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance and to assist in decision making when searching for an appropriate makespan and quality threshold with minimal cost. PMID:27911939
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-05-08
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple RNU seriously affects imaging quality, especially for small-target detection, and is difficult to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
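A temporal high-pass correction of this kind subtracts a per-pixel recursive low-pass estimate from each frame, and a scene-dependent test gates the update so that real detail is not absorbed into the offset estimate. The sketch below uses a simple frame-difference gate in place of the paper's fuzzy scene classifier; the time constant and threshold values are assumptions.

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05, gate_thresh=8.0):
    """Sketch of temporal high-pass nonuniformity correction.

    frames      : (T, H, W) array of raw infrared frames
    alpha       : recursive low-pass update rate (assumed value)
    gate_thresh : per-pixel frame-difference threshold standing in for the
                  paper's fuzzy scene classification (assumption)
    """
    offset = frames[0].astype(np.float64).copy()   # per-pixel low-pass state
    prev = frames[0].astype(np.float64)
    corrected = np.empty_like(frames, dtype=np.float64)
    for t, frame in enumerate(frames.astype(np.float64)):
        # Gate: only update the offset where the scene is locally static,
        # so moving detail is not absorbed into the nonuniformity estimate.
        static = np.abs(frame - prev) < gate_thresh
        offset[static] += alpha * (frame[static] - offset[static])
        corrected[t] = frame - offset + offset.mean()  # keep the mean level
        prev = frame
    return corrected

# Toy usage: static scene with a fixed column ripple standing in for RNU.
T, H, W = 200, 64, 64
ripple = 5.0 * np.sin(np.arange(W) / 3.0)              # column ripple pattern
frames = np.random.normal(100, 1, (T, H, W)) + ripple
out = temporal_highpass_nuc(frames)
print("residual column ripple:", out[-1].mean(axis=0).std())
```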
Detailed design of a Ride Quality Augmentation System for commuter aircraft
NASA Technical Reports Server (NTRS)
Suikat, Reiner; Donaldson, Kent E.; Downing, David R.
1989-01-01
The design of a Ride Quality Augmentation System (RQAS) for commuter aircraft is documented. The RQAS is designed for a Cessna 402B, an 8-passenger twin-engine propeller aircraft representative of this class. The purpose of the RQAS is the reduction of vertical and lateral accelerations of the aircraft due to atmospheric turbulence by the application of active control. The detailed design of the hardware (the aircraft modifications, the Ride Quality Instrumentation System (RQIS), and the required computer software) is examined. The aircraft modifications, consisting of the dedicated control surfaces and the hydraulic actuation system, were designed at Cessna Aircraft by the Kansas University Flight Research Laboratory. The instrumentation system, which consists of the sensor package, the flight computer, a Data Acquisition System, and the pilot and test engineer control panels, was designed by NASA-Langley. The overall system design and the design of the software, both for the flight control algorithms and for ground system checkout, are detailed. The system performance is predicted from linear simulation results and from power spectral densities of the aircraft response to a Dryden gust. The results indicate that reductions in both vertical and lateral accelerations are achievable.
Hierarchical colorant-based direct binary search halftoning.
He, Zhen
2010-07-01
Colorant-based direct binary search (CB-DBS) halftoning, proposed in earlier work, provides an image quality benchmark for dispersed-dot halftoning algorithms. The objective of this paper is to push the image quality limit further. An algorithm called hierarchical colorant-based direct binary search (HCB-DBS) is developed in this paper. By appropriately integrating the yellow colorant into dot-overlapping and dot-positioning controls, it is demonstrated that HCB-DBS can achieve better halftone texture for both individual and joint dot-color planes, without compromising the dot distribution of the more visible halftones of the cyan and magenta colorants. The input color specification is first converted from colorant space to dot-color space using the minimum brightness variation principle for full dot-overlapping control. The dot-colors are then split into groups based upon dot visibility. Hierarchical monochrome DBS halftoning is applied to make the dot-positioning decision for each group, constrained on the already generated halftone of the groups with higher priority. Dot-coloring is then decided recursively with joint monochrome DBS halftoning constrained on the related total dot distribution. Experiments show that HCB-DBS improves halftone texture for both individual and joint dot-color planes, reduces halftone graininess, and is free of color mottle artifacts compared with CB-DBS.
An automated workflow for patient-specific quality control of contour propagation
NASA Astrophysics Data System (ADS)
Beasley, William J.; McWilliam, Alan; Slevin, Nicholas J.; Mackay, Ranald I.; van Herk, Marcel
2016-12-01
Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, for whom the parotids were outlined by two observers. Two types of error were simulated: mislabelling of contours and introducing noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operating characteristic analysis. The area under the curve was 0.90 and 0.85 for the two observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
Optical multi-aperture imaging is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD approach avoids explicitly measuring the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can address this trade-off appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
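SPGD iterates by applying small random bipolar perturbations to the control vector and using the resulting change in an image-quality metric as a stochastic gradient estimate, so the error itself never has to be measured. A minimal sketch on a quadratic toy metric follows; the gain and perturbation amplitude values are illustrative, not the paper's settings.

```python
import numpy as np

def spgd(metric, u0, gain=0.3, amp=0.05, iters=300, seed=0):
    """Stochastic parallel gradient descent (maximizes `metric`).

    metric : callable mapping a control vector to a scalar image-quality metric
    u0     : initial control vector (e.g. piston/tip/tilt commands)
    gain, amp : gain coefficient and disturbance amplitude, the key
                parameters discussed above (values here are assumptions)
    """
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(iters):
        delta = amp * rng.choice([-1.0, 1.0], size=u.shape)  # bipolar perturbation
        dJ = metric(u + delta) - metric(u - delta)           # two-sided metric change
        u += gain * dJ * delta                               # stochastic gradient step
    return u

# Toy metric: peaks when the co-phase error vector reaches the target.
target = np.array([0.3, -0.2, 0.1, 0.05])
metric = lambda u: -np.sum((u - target) ** 2)
u_final = spgd(metric, np.zeros(4))
print("converged control vector:", np.round(u_final, 3))
```

Raising `gain` or `amp` in this sketch speeds up convergence but eventually makes the trajectory noisy, which mirrors the stability trade-off reported in the abstract.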
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction, and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, was used to select the optimal principal components, eigenvectors, and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 records from the MIT-BIH Arrhythmia database and 60 normal and 30 diagnostic ECG records from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN), and maximum absolute error (MAE) of 50.74, 16.22, and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN, and MAE of 9.48, 4.13, and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
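The core of such a method is PCA over a matrix of aligned beats, with the retained components and their quantization chosen to hit a bit-rate or error target. The sketch below keeps only the rank-k reconstruction step on synthetic beats; beat extraction, quantization, and the delta/Huffman coder are omitted, and k is an assumed parameter rather than one selected by the BRC/EC criteria.

```python
import numpy as np

def pca_compress_beats(beats, k):
    """Rank-k PCA approximation of a beat matrix (rows = aligned beats)."""
    mean = beats.mean(axis=0)
    X = beats - mean
    # SVD gives principal directions (rows of Vt) and per-beat scores (U * S).
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :k] * S[:k]          # k principal-component coefficients per beat
    basis = Vt[:k]                     # k eigenvectors to transmit/store
    recon = scores @ basis + mean
    return recon, scores, basis

# Synthetic "beats": a common template plus small per-beat variation.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 360)
template = np.exp(-((t - 0.5) ** 2) / 0.001)       # crude QRS-like bump
beats = template + 0.02 * rng.normal(size=(32, t.size))

recon, scores, basis = pca_compress_beats(beats, k=3)
prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
print(f"PRDN with 3 components: {prdn:.2f} %")
```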
Mindcontrol: A web application for brain segmentation quality control.
Keshavan, Anisha; Datta, Esha; M McDonough, Ian; Madan, Christopher R; Jordan, Kesshi; Henry, Roland G
2018-04-15
Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital to correct various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at a large scale - a problem of increasing importance as the number of data sets is on the rise. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled for any size and complexity of study. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained with the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained with the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.
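MART-family algorithms update each voxel intensity multiplicatively from the ratio of recorded to re-projected pixel intensity. A minimal sketch of the standard MART update on a toy weighting matrix is given below; it only illustrates the update rule compared in the study, not the BIMART/SMART variants or a realistic camera model.

```python
import numpy as np

def mart(W, I, n_vox, mu=0.5, iters=20):
    """Multiplicative algebraic reconstruction technique (MART) sketch.

    W  : (n_pix, n_vox) weighting matrix (line-of-sight weights per pixel)
    I  : (n_pix,) recorded pixel intensities
    mu : relaxation parameter (assumed value)
    """
    E = np.ones(n_vox)                          # start from uniform intensity
    for _ in range(iters):
        for i in range(len(I)):
            proj = W[i] @ E                     # re-projected intensity for pixel i
            if proj > 0 and I[i] > 0:
                # Multiplicative correction, exponentiated by mu * weight.
                E *= (I[i] / proj) ** (mu * W[i])
            elif I[i] == 0:
                E[W[i] > 0] = 0.0               # zero pixels force zero voxels
    return E

# Toy example: 4 voxels seen by 3 "pixels" (underdetermined, as in practice).
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
E_true = np.array([0.0, 2.0, 1.0, 0.5])
I = W @ E_true
print("approximate reconstruction:", np.round(mart(W, I, n_vox=4), 3))
```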
NASA Astrophysics Data System (ADS)
Mickevicius, Nikolai J.; Paulson, Eric S.
2017-04-01
The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms using computing hardware more advanced than those typically seen on product MRI scanners, can result in acquisition and reconstruction of high quality respiratory correlated 4D-MRI images in less than five minutes.
QRS detection based ECG quality assessment.
Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter
2012-09-01
Although immediate feedback concerning ECG signal quality during recording is useful, little literature describing quality measures has been available to date. We have implemented and evaluated four ECG quality measures. An empty lead criterion (A), a spike detection criterion (B), and a lead crossing point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time were evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training set and 91.6% in the test set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for the other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be sufficiently accurate for real-time feedback during ECG self-recordings, QRS detection based measures can further increase performance if sufficient computing power is available.
Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo
2015-12-01
Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators had developed several compress-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternatively updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and ‘well’ solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) standard FDK algorithm, (2) conventional total variation (CTV) based algorithm, (3) prior image constrained compressed sensing (PICCS) algorithm, and (4) motion-map constrained image reconstruction (MCIR) algorithm, respectively. To improve the efficiency of the algorithm, the code was implemented with a graphic processing unit for parallel processing purposes. Root mean square error (RMSE) between the ground truth and reconstructed volumes of the numerical phantom were in the descending order of FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and the standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR for all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31% ± 0.09%, 0.86% ± 0.11% and 0.52 % ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared to other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.
This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...
Optimal wavefront estimation of incoherent sources
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler
2014-08-01
Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.
Machine Learning: A Crucial Tool for Sensor Design
Zhao, Weixiang; Bhushan, Abhinav; Santamaria, Anthony D.; Simon, Melinda G.; Davis, Cristina E.
2009-01-01
Sensors have been widely used for disease diagnosis, environmental quality monitoring, food quality control, industrial process analysis and control, and other related fields. As a key tool for sensor data analysis, machine learning is becoming a core part of novel sensor design. Dividing a complete machine learning process into three steps: data pre-treatment, feature extraction and dimension reduction, and system modeling, this paper provides a review of the methods that are widely used for each step. For each method, the principles and the key issues that affect modeling results are discussed. After reviewing the potential problems in machine learning processes, this paper gives a summary of current algorithms in this field and provides some feasible directions for future studies. PMID:20191110
Automatic detection of DNA double strand breaks after irradiation using an γH2AX assay.
Hohmann, Tim; Kessler, Jacqueline; Grabiec, Urszula; Bache, Matthias; Vordermark, Dyrk; Dehghani, Faramarz
2018-05-01
Radiation therapy is among the most common approaches for cancer therapy and leads, among other effects, to DNA damage such as double strand breaks (DSB). DSB can be used as a marker for the effect of radiation on cells. For visualizing and assessing the extent of DNA damage, the γH2AX foci assay is frequently used. Analysis of the γH2AX foci assay remains complicated, as the number of γH2AX foci has to be counted. The quantification is mostly done manually, which is time consuming and leads to person-dependent variations. Therefore, we present a method to automatically count the number of foci inside nuclei, facilitating and accelerating the analysis of DSBs in fluorescence images with high reliability. First, nuclei were detected in the fluorescence images. Afterwards, the nuclei were analyzed independently of each other with a local thresholding algorithm. This approach allows accounting for different levels of noise, and the foci inside the respective nucleus were detected using a Hough transformation searching for circles. The presented algorithm was able to correctly classify most foci in cases of "high" and "average" image quality (sensitivity > 0.8) with a low rate of false positive detections (positive predictive value (PPV) > 0.98). In cases of "low" image quality the approach had a decreased sensitivity (0.7-0.9), depending on the human reference counter, while the PPV remained high (PPV > 0.91). Compared to other automatic approaches, the presented algorithm had a higher sensitivity and PPV. The automatic foci detection algorithm was thus capable of detecting foci with high sensitivity and PPV and can be used for automatic analysis of images of varying quality.
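The described pipeline segments nuclei, thresholds each nucleus locally, and then searches for circular foci with a Hough transform. The sketch below strings together scikit-image primitives in that spirit on a synthetic image; the radius range, thresholds, and area filter are assumptions, not the published parameters.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import disk

# Synthetic image: one bright "nucleus" containing two small foci.
img = np.zeros((128, 128))
rr, cc = disk((64, 64), 40)
img[rr, cc] = 0.3                          # nucleus background signal
for center in [(50, 60), (75, 80)]:
    rr, cc = disk(center, 4)
    img[rr, cc] = 1.0                      # foci
img += 0.02 * np.random.default_rng(0).normal(size=img.shape)

# 1) Detect nuclei with a global threshold and connected components.
nuclei = label(img > threshold_otsu(img) * 0.5)

total_foci = 0
for region in regionprops(nuclei):
    if region.area < 200:                  # skip tiny noise blobs (assumption)
        continue
    minr, minc, maxr, maxc = region.bbox
    crop = img[minr:maxr, minc:maxc] * region.image
    # 2) Local threshold inside the nucleus to account for its own noise level.
    vals = crop[region.image]
    local = crop > (vals.mean() + 2 * vals.std())
    # 3) Circle Hough transform on the edges of candidate foci.
    edges = canny(local.astype(float))
    radii = np.arange(2, 8)                # assumed focus radius range (pixels)
    h = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(h, radii, total_num_peaks=5,
                                      min_xdistance=5, min_ydistance=5,
                                      threshold=0.5 * h.max())
    total_foci += len(cx)

print("detected foci:", total_foci)
```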
NASA Astrophysics Data System (ADS)
Kumar, Rishi; Mevada, N. Ramesh; Rathore, Santosh; Agarwal, Nitin; Rajput, Vinod; Sinh Barad, AjayPal
2017-08-01
To improve the welding quality of aluminum (Al) plate, a TIG welding system was prepared in which the welding current, shielding gas flow rate, and current polarity can be controlled during the welding process. In the present work, an attempt has been made to study the effect of welding current, current polarity, and shielding gas flow rate on the tensile strength of the weld joint. Based on the number of parameters and their levels, the response surface methodology technique was selected as the design of experiments. To understand the influence of the input parameters on the ultimate tensile strength of the weldment, an ANOVA analysis was carried out. The TIG welding process is then described and optimized using the firefly algorithm, a metaheuristic nature-inspired algorithm developed by Xin-She Yang at Cambridge University in 2007. A general formulation of the firefly algorithm is presented together with an analytical mathematical model to optimize the TIG welding process through a single equivalent objective function.
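The firefly algorithm moves each candidate solution toward brighter (better) ones with an attractiveness that decays with distance, plus a random walk term. A generic minimization sketch follows; the objective here is a placeholder standing in for the fitted weld-strength model, and all coefficients are textbook defaults rather than the paper's settings.

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=20, iters=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Generic firefly algorithm sketch for minimizing f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
    I = np.array([f(x) for x in X])                  # lower cost = brighter
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if I[j] < I[i]:                      # firefly j is brighter
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)       # attractiveness
                    step = alpha * (rng.uniform(size=len(lo)) - 0.5) * (hi - lo)
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i]) + step, lo, hi)
                    I[i] = f(X[i])
        alpha *= 0.97                                # slowly damp the random walk
    best = np.argmin(I)
    return X[best], I[best]

# Placeholder objective (to be maximized, so we minimize its negative);
# the variables and ranges below are hypothetical, not the paper's model.
def neg_strength(x):
    current, gas_flow, polarity = x
    return -(200 - (current - 120) ** 2 / 50 - (gas_flow - 12) ** 2 - 5 * polarity ** 2)

bounds = [(80, 160), (5, 20), (-1, 1)]
x_best, val = firefly_minimize(neg_strength, bounds)
print("best parameters:", np.round(x_best, 2), "strength:", round(-val, 2))
```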
Congestion Pricing for Aircraft Pushback Slot Allocation.
Liu, Lihua; Zhang, Yaping; Liu, Lan; Xing, Zhiwei
2017-01-01
In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the "external cost of surface congestion" is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use for actual aircraft pushback management during rush hour. Further, it is also observed they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm.
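Differential evolution generates trial solutions from scaled differences of population members and keeps them when they improve the objective; the study above adapts a discrete variant for slot assignment. Below is a minimal continuous DE/rand/1/bin sketch on a placeholder cost function, with standard parameter choices, intended only to show the mutation/crossover/selection loop rather than the published discrete encoding.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, F=0.6, CR=0.9,
                           iters=200, seed=0):
    """Classic DE/rand/1/bin sketch for continuous variables."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    costs = np.array([cost(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = [k for k in range(pop_size) if k != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.uniform(size=dim) < CR             # binomial crossover
            cross[rng.integers(dim)] = True                # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            c_trial = cost(trial)
            if c_trial < costs[i]:                         # greedy selection
                pop[i], costs[i] = trial, c_trial
    best = np.argmin(costs)
    return pop[best], costs[best]

# Placeholder "total surface cost" over two continuous decision variables.
cost = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2 + 10
x_best, c_best = differential_evolution(cost, [(-5, 5), (-5, 5)])
print(np.round(x_best, 3), round(c_best, 3))
```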
Path planning on satellite images for unmanned surface vehicles
NASA Astrophysics Data System (ADS)
Yang, Joe-Ming; Tseng, Chien-Ming; Tseng, P. S.
2015-01-01
In recent years, the development of autonomous surface vehicles has been a field of increasing research interest. There are two major areas in this field: control theory and path planning. This study focuses on path planning, and two objectives are discussed: path planning for Unmanned Surface Vehicles (USVs) and implementation of path planning on a real map. In this paper, satellite thermal images are converted into binary images which are used as the maps for the Finite Angle A* algorithm (FAA*), an advanced A* algorithm that is used to determine safer, suboptimal paths for USVs. To plan a collision-free path, the algorithm proposed in this article considers the dimensions of the surface vehicle. Furthermore, the turning ability of the surface vehicle is also considered, and a constraint condition is introduced to improve the quality of the path planning algorithm, which makes the traveled path smoother. This study also presents a path planning experiment performed on a real satellite thermal image, and the resulting paths can be used by a USV.
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming
2013-05-01
An orthogonal experiment was conducted, and a multivariate nonlinear regression equation was used to capture the influence of an external transverse magnetic field and the Ar flow rate on welding quality in the process of welding condenser pipe by high-speed argon tungsten-arc welding (TIG for short). The magnetic induction and the Ar flow rate were used as the optimization variables, the tensile strength of the weld was set as the objective function on the basis of genetic algorithm theory, and an optimal design was then carried out. The optimization variables were constrained according to the requirements of actual production. The genetic algorithm in MATLAB was used for the computation, and a comparison between the optimized results and the experimental parameters was made. The results showed that optimal process parameters can be chosen by means of a genetic algorithm even with many optimization variables in the high-speed welding process, and that the optimized welding parameters coincided with the experimental results.
Rooijakkers, Michiel; Rabotti, Chiara; Bennebroek, Martijn; van Meerbergen, Jef; Mischi, Massimo
2011-01-01
Non-invasive fetal health monitoring during pregnancy has become increasingly important. Recent advances in signal processing technology have enabled fetal monitoring during pregnancy, using abdominal ECG recordings. Ubiquitous ambulatory monitoring for continuous fetal health measurement is however still unfeasible due to the computational complexity of noise robust solutions. In this paper an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, while increasing the R-peak detection quality compared to existing R-peak detection schemes. Validation of the algorithm is performed on two manually annotated datasets, the MIT/BIH Arrhythmia database and an in-house abdominal database. Both R-peak detection quality and computational complexity are compared to state-of-the-art algorithms as described in the literature. With a detection error rate of 0.22% and 0.12% on the MIT/BIH Arrhythmia and in-house databases, respectively, the quality of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
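A typical low-complexity R-peak detector of this kind band-pass filters the ECG, emphasizes the QRS energy, and applies an adaptive threshold with a refractory period. The sketch below follows that outline with scipy on a synthetic signal; the filter band, threshold factor, and refractory time are assumptions, not the parameters of the proposed algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_r_peaks(ecg, fs, band=(5.0, 15.0), thresh_factor=0.5,
                   refractory=0.25):
    """Sketch of a low-complexity R-peak detector.

    band          : QRS emphasis band in Hz (assumption)
    thresh_factor : fraction of the maximum energy used as threshold
    refractory    : minimum peak spacing in seconds
    """
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filt = filtfilt(b, a, ecg)
    energy = filt ** 2                                   # emphasize QRS energy
    thresh = thresh_factor * energy.max()
    min_gap = int(refractory * fs)
    peaks, last = [], -min_gap
    for n in range(1, len(energy) - 1):
        if (energy[n] > thresh and energy[n] >= energy[n - 1]
                and energy[n] > energy[n + 1] and n - last >= min_gap):
            peaks.append(n)
            last = n
    return np.array(peaks)

# Synthetic ECG: periodic sharp spikes plus baseline wander and noise.
fs, dur = 250, 10
t = np.arange(fs * dur) / fs
ecg = 0.1 * np.sin(2 * np.pi * 0.3 * t)                  # baseline wander
ecg[(np.arange(len(t)) % int(0.8 * fs)) == 0] += 1.0     # "R peaks" every 0.8 s
ecg += 0.02 * np.random.default_rng(0).normal(size=len(t))
print("detected beats:", len(detect_r_peaks(ecg, fs)))
```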
Cos, Oriol; Ramon, Ramon; Montesinos, José Luis; Valero, Francisco
2006-09-05
A predictive control algorithm coupled with a PI feedback controller has been satisfactorily implemented in the heterologous Rhizopus oryzae lipase production by the Pichia pastoris methanol utilization slow (Mut(s)) phenotype. This control algorithm has allowed the study of the effect of methanol concentration, ranging from 0.5 to 1.75 g/L, on heterologous protein production. The maximal lipolytic activity (490 UA/mL), specific yield (11,236 UA/g(biomass)), productivity (4,901 UA/L·h), and specific productivity (112 UA/(g(biomass)·h)) were reached for a methanol concentration of 1 g/L. These parameters are almost double those obtained with manual control at a similar methanol set-point. The study of the specific growth, consumption, and production rates showed different patterns for these rates depending on the methanol concentration set-point. The results obtained show the need to implement a robust control scheme when reproducible quality and productivity are sought. It has been demonstrated that the model-based control proposed here is a very efficient, robust, and easy-to-implement strategy from an industrial application point of view. (c) 2006 Wiley Periodicals, Inc.
Blind prediction of natural video quality.
Saad, Michele A; Bovik, Alan C; Charrier, Christophe
2014-03-01
We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
Retinal image quality assessment based on image clarity and content
NASA Astrophysics Data System (ADS)
Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim
2016-09-01
Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
High-Performance Integrated Control of water quality and quantity in urban water reservoirs
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.; Goedbloed, A.
2015-11-01
This paper contributes a novel High-Performance Integrated Control framework to support the real-time operation of urban water supply storages affected by water quality problems. We use a 3-D, high-fidelity simulation model to predict the main water quality dynamics and inform a real-time controller based on Model Predictive Control. The integration of the simulation model into the control scheme is performed by a model reduction process that identifies a low-order, dynamic emulator running 4 orders of magnitude faster. The model reduction, which relies on a semiautomatic procedural approach integrating time series clustering and variable selection algorithms, generates a compact and physically meaningful emulator that can be coupled with the controller. The framework is used to design the hourly operation of Marina Reservoir, a 3.2 Mm3 storm-water-fed reservoir located in the center of Singapore, operated for drinking water supply and flood control. Because of its recent formation from a former estuary, the reservoir suffers from high salinity levels, whose behavior is modeled with Delft3D-FLOW. Results show that our control framework reduces the minimum salinity levels by nearly 40% and cuts the average annual deficit of drinking water supply by about 2 times the active storage of the reservoir (about 4% of the total annual demand).
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with a conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a particular control algorithm to improve the performance metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After a brief description of the above typical control algorithms, hybrid methods combining model-free and model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and imaging of extended objects.
A permutation-based non-parametric analysis of CRISPR screen data.
Jia, Gaoxiang; Wang, Xinlei; Xiao, Guanghua
2017-07-19
Clustered regularly-interspaced short palindromic repeats (CRISPR) screens are usually implemented in cultured cells to identify genes with critical functions. Although several methods have been developed or adapted to analyze CRISPR screening data, no single specific algorithm has gained popularity. Thus, rigorous procedures are needed to overcome the shortcomings of existing algorithms. We developed a Permutation-Based Non-Parametric Analysis (PBNPA) algorithm, which computes p-values at the gene level by permuting sgRNA labels, and thus it avoids restrictive distributional assumptions. Although PBNPA is designed to analyze CRISPR data, it can also be applied to analyze genetic screens implemented with siRNAs or shRNAs and drug screens. We compared the performance of PBNPA with competing methods on simulated data as well as on real data. PBNPA outperformed recent methods designed for CRISPR screen analysis, as well as methods used for analyzing other functional genomics screens, in terms of Receiver Operating Characteristics (ROC) curves and False Discovery Rate (FDR) control for simulated data under various settings. Remarkably, the PBNPA algorithm showed better consistency and FDR control on published real data as well. PBNPA yields more consistent and reliable results than its competitors, especially when the data quality is low. R package of PBNPA is available at: https://cran.r-project.org/web/packages/PBNPA/ .
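The gene-level p-value in a permutation scheme like this comes from comparing the observed gene statistic with its null distribution obtained by repeatedly permuting sgRNA labels, so no distributional assumptions are needed. A minimal numpy sketch follows; the gene statistic (mean sgRNA log-fold change) and permutation count are simplifications of the published procedure.

```python
import numpy as np

def permutation_gene_pvalues(lfc, gene_labels, n_perm=2000, seed=0):
    """Gene-level permutation p-values from sgRNA-level log-fold changes.

    lfc         : (n_sgRNA,) observed log-fold changes
    gene_labels : (n_sgRNA,) gene name for each sgRNA
    """
    rng = np.random.default_rng(seed)
    genes = np.unique(gene_labels)
    observed = {g: lfc[gene_labels == g].mean() for g in genes}
    counts = {g: 0 for g in genes}
    for _ in range(n_perm):
        perm = rng.permutation(lfc)                 # equivalent to permuting labels
        for g in genes:
            null_stat = perm[gene_labels == g].mean()
            if null_stat <= observed[g]:            # one-sided test for depletion
                counts[g] += 1
    return {g: (counts[g] + 1) / (n_perm + 1) for g in genes}

# Toy screen: gene "A" strongly depleted, gene "B" neutral.
rng = np.random.default_rng(1)
lfc = np.concatenate([rng.normal(-2.0, 0.5, 4), rng.normal(0.0, 0.5, 4)])
labels = np.array(["A"] * 4 + ["B"] * 4)
print(permutation_gene_pvalues(lfc, labels))
```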
Kirschstein, Timo; Wolters, Alexander; Lenz, Jan-Hendrik; Fröhlich, Susanne; Hakenberg, Oliver; Kundt, Günther; Darmüntzel, Martin; Hecker, Michael; Altiner, Attila; Müller-Hilke, Brigitte
2016-01-01
The amendment of the Medical Licensing Act (ÄAppO) in Germany in 2002 led to the introduction of graded assessments in the clinical part of medical studies. This, in turn, lent new weight to the importance of written tests, even though the minimum requirements for exam quality are sometimes difficult to reach. Introducing exam quality as a criterion for the award of performance-based allocation of funds is expected to steer the attention of faculty members towards more quality and perpetuate higher standards. However, at present there is a lack of suitable algorithms for calculating exam quality. In the spring of 2014, the students' dean commissioned the "core group" for curricular improvement at the University Medical Center in Rostock to revise the criteria for the allocation of performance-based funds for teaching. In a first approach, we developed an algorithm that was based on the results of the most common type of exam in medical education, multiple choice tests. It included item difficulty and discrimination, reliability as well as the distribution of grades achieved. This algorithm quantitatively describes exam quality of multiple choice exams. However, it can also be applied to exams involving short assay questions and the OSCE. It thus allows for the quantitation of exam quality in the various subjects and - in analogy to impact factors and third party grants - a ranking among faculty. Our algorithm can be applied to all test formats in which item difficulty, the discriminatory power of the individual items, reliability of the exam and the distribution of grades are measured. Even though the content validity of an exam is not considered here, we believe that our algorithm is suitable as a general basis for performance-based allocation of funds.
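An algorithm of this kind can be assembled from standard psychometric quantities: item difficulty, item discrimination (corrected item-total correlation), and KR-20 reliability, combined with the grade distribution. The sketch below computes the first three from a binary response matrix; how they would be weighted into a funding score is not specified in the abstract and is therefore left out.

```python
import numpy as np

def mc_exam_quality(responses):
    """Item difficulty, discrimination, and KR-20 for a 0/1 response matrix.

    responses : (n_students, n_items) array, 1 = item answered correctly
    """
    R = np.asarray(responses, dtype=float)
    n_students, n_items = R.shape
    difficulty = R.mean(axis=0)                         # fraction correct per item
    total = R.sum(axis=1)
    # Corrected item-total correlation: item vs. total score without that item.
    discrimination = np.array([
        np.corrcoef(R[:, j], total - R[:, j])[0, 1] for j in range(n_items)
    ])
    # Kuder-Richardson formula 20 reliability.
    pq = difficulty * (1 - difficulty)
    kr20 = (n_items / (n_items - 1)) * (1 - pq.sum() / total.var(ddof=1))
    return difficulty, discrimination, kr20

# Simulated exam: success probability rises with student ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=60)
item_hardness = rng.normal(size=25)
p = 1 / (1 + np.exp(-(ability[:, None] - item_hardness[None, :])))
responses = rng.uniform(size=p.shape) < p
diff, disc, kr20 = mc_exam_quality(responses)
print(f"mean difficulty {diff.mean():.2f}, mean discrimination {disc.mean():.2f}, "
      f"KR-20 {kr20:.2f}")
```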
Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker
2012-08-01
Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
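All three schemes share a Wiener-type gain rule, in which each time-frequency bin is scaled by a priori SNR / (1 + a priori SNR). The sketch below applies that rule in a single-channel STFT framework with a decision-directed a priori SNR estimate; the noise estimate here is taken from an assumed noise-only segment rather than minimum statistics or a model-based tracker, and all parameter values are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_denoise(x, fs, noise_seconds=0.5, alpha=0.98):
    """Wiener-gain noise reduction sketch with decision-directed a priori SNR."""
    f, t, X = stft(x, fs=fs, nperseg=512)
    # Assume roughly the first `noise_seconds` of the recording is noise only.
    noise_frames = int(noise_seconds * fs / (512 // 2))
    noise_psd = np.mean(np.abs(X[:, :noise_frames]) ** 2, axis=1)
    S_prev = np.zeros(X.shape[0])                     # previous clean-power estimate
    Y = np.zeros_like(X)
    for m in range(X.shape[1]):
        snr_post = np.abs(X[:, m]) ** 2 / (noise_psd + 1e-12)
        # Decision-directed a priori SNR estimate.
        snr_prio = alpha * S_prev / (noise_psd + 1e-12) \
                   + (1 - alpha) * np.maximum(snr_post - 1, 0)
        gain = snr_prio / (1 + snr_prio)              # Wiener gain rule
        Y[:, m] = gain * X[:, m]
        S_prev = np.abs(Y[:, m]) ** 2
    _, y = istft(Y, fs=fs, nperseg=512)
    return y

fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
noisy = np.concatenate([np.zeros(fs // 2), clean]) \
        + 0.3 * np.random.default_rng(0).normal(size=fs + fs // 2)
den = wiener_denoise(noisy, fs)
print("noisy RMS %.3f -> denoised RMS %.3f" % (noisy.std(), den.std()))
```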
Guidelines to implement quality management systems in microbiology laboratories for tissue banking.
Vicentino, W; Rodríguez, G; Saldías, M; Alvarez, I
2009-10-01
Human tissues for implants are a biomedical product that is being used more frequently by many medical disciplines, and there have been infections in patients related to the implanted tissues. The early detection of infections transmitted by blood and the microbiological study of tissues before their clinical use are strategies used in tissue banks to prevent these situations. This work sought to contribute to establishing the bases for the operation of a laboratory devoted to the microbiological quality control of tissues. Based on classical microbiological principles, we have defined the operation of microbiological control and tissue sterilization since 2003. We determined lists of acceptable microorganisms for every tissue, criteria for the interpretation of results, and a diagnostic algorithm for microbiological quality. We observed that the circumstances of donor death can be a determinant of quality. The environment and the operator should be investigated as probable sources of contamination in outbreaks. Working criteria based on a solid methodology should help to avoid the transmission of infections between donor and recipient, which is a critical point in the quality management of a tissue bank.
Panta, Sandeep R; Wang, Runtang; Fries, Jill; Kalyanam, Ravi; Speer, Nicole; Banich, Marie; Kiehl, Kent; King, Margaret; Milham, Michael; Wager, Tor D; Turner, Jessica A; Plis, Sergey M; Calhoun, Vince D
2016-01-01
In this paper we propose a web-based approach for quick visualization of big data from brain magnetic resonance imaging (MRI) scans using a combination of an automated image capture and processing system, nonlinear embedding, and interactive data visualization tools. We draw upon thousands of MRI scans captured via the COllaborative Imaging and Neuroinformatics Suite (COINS). We then interface the output of several analysis pipelines based on structural and functional data to a t-distributed stochastic neighbor embedding (t-SNE) algorithm, which reduces the number of dimensions for each scan in the input data set to two dimensions while preserving the local structure of the data sets. Finally, we interactively display the output of this approach via a web page based on the data-driven documents (D3) JavaScript library. Two distinct approaches were used to visualize the data. In the first approach, we computed multiple quality control (QC) values from pre-processed data, which were used as inputs to the t-SNE algorithm. This approach helps in assessing the quality of each data set relative to others. In the second case, computed variables of interest (e.g., brain volume or voxel values from segmented gray matter images) were used as inputs to the t-SNE algorithm. This approach helps in identifying interesting patterns in the data sets. We demonstrate these approaches using multiple examples from over 10,000 data sets, including (1) quality control measures calculated from phantom data over time, (2) quality control data from human functional MRI data across various studies, scanners, and sites, and (3) volumetric and density measures from human structural MRI data across various studies, scanners, and sites. Results from (1) and (2) show the potential of our approach to combine t-SNE data reduction with interactive color coding of variables of interest to quickly identify visually distinct clusters of data (i.e., data sets with poor QC, clustering of data by site). Results from (3) reveal interesting patterns of gray matter and volume and evaluate how they map onto variables including scanner, age, and gender. In sum, the proposed approach allows researchers to rapidly identify and extract meaningful information from big data sets. Such tools are becoming increasingly important as datasets grow larger.
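The first visualization approach feeds per-scan QC metrics into t-SNE and renders the 2D embedding with interactive color coding. A compact scikit-learn sketch of that reduction step is shown below on synthetic QC vectors; the metric names, the D3 front end, and the COINS pipeline are not reproduced here.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# Synthetic per-scan QC metrics (hypothetical columns: SNR, mean motion, ghosting).
rng = np.random.default_rng(0)
good = rng.normal([30, 0.1, 0.02], [3, 0.03, 0.005], size=(300, 3))
bad = rng.normal([15, 0.6, 0.10], [3, 0.10, 0.020], size=(40, 3))
qc = np.vstack([good, bad])
labels = np.array(["good"] * len(good) + ["poor"] * len(bad))

# Standardize, then embed to 2D while preserving local structure.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(StandardScaler().fit_transform(qc))

# In the web application these coordinates would be color-coded by QC label
# or by site/scanner; here we just report the cluster centroids.
for lab in ("good", "poor"):
    pts = embedding[labels == lab]
    print(lab, "centroid:", np.round(pts.mean(axis=0), 2))
```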
Mueller, David S.
2016-06-21
The software program QRev applies common and consistent computational algorithms combined with automated filtering and quality assessment of the data to improve the quality and efficiency of streamflow measurements, and it helps ensure that U.S. Geological Survey streamflow measurements are consistent, accurate, and independent of the manufacturer of the instrument used to make the measurement. Software from different manufacturers uses different algorithms for various aspects of the data processing and discharge computation. The algorithms used by QRev to filter data, interpolate data, and compute discharge are documented and compared to the algorithms used in the manufacturers' software. QRev applies consistent algorithms and creates a data structure that is independent of the data source. QRev saves an extensible markup language (XML) file that can be imported into databases or electronic field notes software. This report is the technical manual for version 2.8 of QRev.
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
Evaluating methods for controlling depth perception in stereoscopic cinematography
NASA Astrophysics Data System (ADS)
Sun, Geng; Holliman, Nick
2009-02-01
Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real time computer games.
BACT Simulation User Guide (Version 7.0)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1997-01-01
This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.
Video quality assessment using a statistical model of human visual speed perception.
Wang, Zhou; Li, Qiang
2007-12-01
Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the video quality experts group Phase I test data set.
Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph
2017-11-01
The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p < 0.01 for all). All iMAR reconstructed images showed significantly lower artefacts (p < 0.01) compared with WFBP, while there was no significant difference between the iMAR algorithms. iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p < 0.01). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants in both subjective and objective analyses. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts, whereas iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to the patient's metallic implants can help to improve image quality in CT.
Control Systems with Pulse Width Modulation in Matrix Converters
NASA Astrophysics Data System (ADS)
Bondarev, A. V.; Fedorov, S. V.; Muravyova, E. A.
2018-03-01
In this article, a matrix frequency converter for a frequency-controlled electric drive system is considered. Algorithms for forming the output signal on the basis of pulse width modulation (PWM) were developed, and mathematical models were used for a quantitative analysis of the output signal quality. On the basis of simulation models of the output signal, the quality of this signal was assessed. The harmonic composition of the output voltage obtained with pulse width modulation was analysed in order to determine the control system's ability to improve the harmonic composition. The analysis showed that forming the switching functions of the control system on the basis of PWM does not reduce the distortion of the control-signal harmonics, but instead shifts the harmonics toward frequencies that are multiples of the carrier frequency.
Efficient iterative image reconstruction algorithm for dedicated breast CT
NASA Astrophysics Data System (ADS)
Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan
2016-03-01
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
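A minimal numpy sketch of the final combination step described above; the two component reconstructions and the two tunable parameters are placeholders (the underlying IIR solvers are not shown):

```python
import numpy as np

def combine_reconstructions(img_gray, img_edges, alpha, beta):
    """Final bCT image as a linear combination of a gray-level reconstruction
    and a high-frequency-enhanced reconstruction, each from a few IIR iterations."""
    return alpha * img_gray + beta * img_edges

# toy usage: alpha controls gray-level fidelity, beta the edge/detail boost
img_gray = np.ones((128, 128))    # placeholder for the smooth reconstruction
img_edges = np.zeros((128, 128))  # placeholder for the high-frequency reconstruction
final = combine_reconstructions(img_gray, img_edges, alpha=1.0, beta=0.3)
```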
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades the image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by a linear combination of the sinograms of the prior images. An additional correction in the metal trace region is then performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
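A schematic sketch of the sinogram completion step: metal-trace samples are replaced by a least-squares combination of forward-projected prior images. The function and array names are illustrative, and the residual error compensation described in the abstract is omitted here:

```python
import numpy as np

def complete_sinogram(sino, metal_mask, prior_sinos):
    """Fill the metal-trace samples of a sinogram with a least-squares
    combination of forward-projected prior images (residual compensation omitted).

    sino        : measured sinogram, shape (V, D)
    metal_mask  : boolean mask of the metal trace, shape (V, D)
    prior_sinos : list of forward-projected prior images, each shape (V, D)
    """
    A = np.stack([p[~metal_mask] for p in prior_sinos], axis=1)  # fit weights outside the trace
    w, *_ = np.linalg.lstsq(A, sino[~metal_mask], rcond=None)
    combo = sum(wi * p for wi, p in zip(w, prior_sinos))
    out = sino.copy()
    out[metal_mask] = combo[metal_mask]                          # replace only the metal trace
    return out
```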
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
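A compact genetic-algorithm sketch of the selection loop described above; the merit function here is only a stand-in for the fault-detection and fault-isolation quality measure produced by the merit algorithm:

```python
import random

def genetic_sensor_selection(n_candidates, suite_size, merit, generations=200, pop_size=40):
    """Evolve a fixed-size sensor suite (a set of candidate indices) that maximizes merit()."""
    def random_suite():
        return frozenset(random.sample(range(n_candidates), suite_size))

    def mutate(suite):
        out = set(suite)
        out.remove(random.choice(list(out)))                       # drop one sensor
        out.add(random.choice([i for i in range(n_candidates) if i not in out]))  # add another
        return frozenset(out)

    pop = [random_suite() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=merit, reverse=True)
        parents = pop[: pop_size // 2]                              # elitist truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return max(pop, key=merit)

# toy merit: reward suites that overlap a particular (hidden) target subset of sensors
target = {1, 4, 7, 9}
best = genetic_sensor_selection(20, 4, merit=lambda s: len(s & target))
```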
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using image pairs can help to provide a better restored image than deblurring from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using an improved RL algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
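A minimal sketch of the final composition step: the recovered residual image is modulated by a saliency-derived gain map before being added back to the preliminary deblurring result. The gain mapping used here (a simple normalization of the saliency map) is an assumption standing in for the paper's exact weighting:

```python
import numpy as np

def compose_deblurred(preliminary, residual, saliency):
    """Add the gain-controlled residual image to the preliminary (edge-mask RL) result.

    The gain map modulates the residual per pixel; here it is simply the saliency
    map normalized to [0, 1], a stand-in for the paper's saliency weighting."""
    gain = saliency / (saliency.max() + 1e-6)
    return preliminary + gain * residual
```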
Statistical efficiency of adaptive algorithms.
Widrow, Bernard; Kamenetsky, Max
2003-01-01
The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
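For reference, a textbook LMS update loop (not taken from the paper) illustrating the simplicity that the authors contrast with LMS/Newton; the filter length, step size and toy channel are illustrative:

```python
import numpy as np

def lms(x, d, n_taps=8, mu=0.01):
    """Adapt an FIR filter so that w . x[k] tracks the desired signal d[k]."""
    w = np.zeros(n_taps)
    y = np.zeros(len(d))
    for k in range(n_taps - 1, len(d)):
        u = x[k - n_taps + 1: k + 1][::-1]   # most recent samples first
        y[k] = w @ u
        e = d[k] - y[k]                      # instantaneous error
        w += 2 * mu * e * u                  # stochastic gradient step
    return w, y

# toy usage: identify an unknown 4-tap channel from noisy observations
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([1.0, 0.5, -0.3, 0.1])
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = lms(x, d)
```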
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
In order to effectively improve the subjective and objective quality of degraded images at low sampling rates, save storage space and reduce computational complexity at the same time, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, which is used in image restoration, to compressed sensing theory. A small amount of sparse high-frequency information is then obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality of degraded images while accurately restoring them.
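A generic two-step iterative shrinkage/thresholding (TwIST-style) iteration, shown here with a soft-thresholding regularizer. The operators A and At, the parameter values and the initialization are placeholders; this sketches the general scheme, not the paper's exact compressed-sensing pipeline:

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def twist(b, A, At, x0, tau=0.1, alpha=1.9, beta=1.0, n_iter=100):
    """Two-step IST for min ||A x - b||^2 + tau ||x||_1.

    A and At are callables implementing the measurement operator and its adjoint;
    alpha and beta are the two-step relaxation parameters (values are illustrative)."""
    x_prev = x0.copy()
    x = soft(x0 + At(b - A(x0)), tau)                  # one plain IST step to initialize
    for _ in range(n_iter):
        ist_step = soft(x + At(b - A(x)), tau)         # gradient step + shrinkage
        x_new = (1 - alpha) * x_prev + (alpha - beta) * x + beta * ist_step
        x_prev, x = x, x_new
    return x
```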
Woodard, LeChauncy D.; Landrum, Cassie R.; Urech, Tracy H.; Profit, Jochen; Virani, Salim S.; Petersen, Laura A.
2012-01-01
Background/Objectives: To validly assess quality-of-care differences among providers, performance measurement programs must reliably identify and exclude patients for whom the quality indicator may not be desirable, including those with limited life expectancy. We developed an algorithm to identify patients with limited life expectancy and examined the impact of limited life expectancy on glycemic control and treatment intensification among diabetic patients. Design: We identified diabetic patients with coexisting congestive heart failure, chronic obstructive pulmonary disease, dementia, end-stage liver disease, and/or primary/metastatic cancers with limited life expectancy. To validate our algorithm, we assessed 5-year mortality among patients identified as having limited life expectancy. We compared rates of meeting performance measures for glycemic control between patients with and without limited life expectancy. Among uncontrolled patients, we examined the impact of limited life expectancy on treatment intensification within 90 days. Setting: 110 Veterans Administration facilities; October 2006 – September 2007. Participants: 888,628 diabetic patients. Measurements: Hemoglobin A1c (HbA1c) <9%; treatment intensification within 90 days. Results: 29,016 (3%) patients had limited life expectancy. Adjusting for age, 5-year mortality was 5 times higher among patients with limited life expectancy than those without. Patients with limited life expectancy had poorer glycemic control (77.1% vs. 78.1%) and less frequent treatment intensification (20.9% vs. 28.6%) than patients without, even after controlling for patient-level characteristics (odds ratio [OR]=0.84; 95% confidence interval [CI]=0.81-0.86 and OR=0.71; 95% CI=0.67-0.76, respectively). Conclusion: Patients with limited life expectancy were slightly, but significantly less likely than those without to have HbA1c levels controlled and to receive treatment intensification, suggesting that providers treat these patients less aggressively. Quality measurement and performance-based reimbursement systems should acknowledge the different needs of this population. PMID:22260627
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2018-05-01
Methods for increasing the robustness of gas-turbine aircraft engines (GTE) to interference by extending the capabilities of their automatic control systems (ACS) are analyzed. The flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes adapted to the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide the detection functions for compressor pre-stall modes in the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequency. The method used is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme that provides the best filtering quality. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
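A small sketch of the construction described above: a high-pass FIR kernel is obtained from a windowed-sinc low-pass kernel by spectral inversion, and cascading a low-pass and a high-pass kernel yields the band-pass filter. The cut-off values and the toy pulsation signal are illustrative, not the engine data:

```python
import numpy as np

def lowpass_kernel(fc, n_taps=101):
    """Windowed-sinc low-pass FIR kernel; fc in cycles/sample (0 < fc < 0.5)."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(n_taps)
    return h / h.sum()                          # unit DC gain

def spectral_inversion(h_lp):
    """Turn a low-pass kernel into the complementary high-pass kernel."""
    h_hp = -h_lp
    h_hp[len(h_hp) // 2] += 1.0                 # add a unit impulse at the center tap
    return h_hp

# band-pass built as low-pass (0.20) cascaded with high-pass (0.05); numbers are made up
h_bp = np.convolve(lowpass_kernel(0.20), spectral_inversion(lowpass_kernel(0.05)))

# toy pressure pulsation: an in-band component plus a slow drift that gets rejected
t = np.arange(4000)
pressure = np.sin(2 * np.pi * 0.12 * t) + 0.5 * np.sin(2 * np.pi * 0.01 * t)
filtered = np.convolve(pressure, h_bp, mode="same")
```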
Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A
2004-01-01
We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between increased reliability with more general population-based quality measures versus increased validity from individually case-adjusted but more restricted measures obtained at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.
Image quality classification for DR screening using deep learning.
FengLi Yu; Jing Sun; Annan Li; Jun Cheng; Cheng Wan; Jiang Liu
2017-07-01
The quality of input images significantly affects the outcome of automated diabetic retinopathy (DR) screening systems. Unlike previous methods that only consider simple low-level features such as hand-crafted geometric and structural features, in this paper we propose a novel method for retinal image quality classification (IQC) that applies computational algorithms imitating the workings of the human visual system. The proposed algorithm combines unsupervised features from a saliency map and supervised features from convolutional neural networks (CNN), which are fed to an SVM to automatically classify retinal fundus images as high quality or poor quality. We demonstrate the superior performance of our proposed algorithm on a large retinal fundus image dataset, where it achieves higher accuracy than other methods. Although retinal images are used in this study, the methodology is applicable to the image quality assessment and enhancement of other types of medical images.
[Motion control of moving mirror based on fixed-mirror adjustment in FTIR spectrometer].
Li, Zhong-bing; Xu, Xian-ze; Le, Yi; Xu, Feng-qiu; Li, Jun-wei
2012-08-01
The uniform motion of the moving mirror, which is the only constantly moving part in an FTIR spectrometer, and the alignment of the fixed mirror play a key role in the instrument: they affect the interference and the quality of the spectrogram, and may directly restrict the precision and resolution of the instrument. The present article focuses on the uniform motion of the moving mirror and the alignment of the fixed mirror. In order to improve the FTIR spectrometer, a maglev support system was designed for the moving mirror, and phase detection technology was adopted to adjust the tilt angle between the moving mirror and the fixed mirror. This paper also introduces an improved fuzzy PID control algorithm to achieve accurate speed control of the moving mirror, and the control strategy is realized in both the hardware design and the algorithm. The results show that the developed moving mirror motion control system achieves sufficient accuracy and real-time performance, which ensures the uniform motion of the moving mirror and the alignment of the fixed mirror.
Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1991-01-01
The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
Rehan, Waqas; Fischer, Stefan; Rehan, Maaz
2016-09-12
Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. In the end, simulations are made using MATLAB, and the results show that the Extended version of NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.
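A sketch of the two ingredients described above: the composite channel-rank measurement from RSSI/LQI statistics and a normal-equation fit of a linear predictor. The feature normalizations and weights are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def channel_rank(rssi, lqi, w_rssi=0.5, w_lqi=0.5):
    """Composite channel rank: penalize RSSI variability, reward average LQI.

    rssi, lqi : arrays of per-packet measurements for one channel.
    The weights and normalizations here are illustrative."""
    stability = 1.0 / (1.0 + np.std(rssi))   # lower RSSI standard deviation -> more stable
    quality = np.mean(lqi) / 255.0           # LQI is typically reported on a 0..255 scale
    return w_rssi * stability + w_lqi * quality

def fit_normal_equation(X, y):
    """Least-squares weights theta = (X^T X)^(-1) X^T y for channel-rank prediction."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # bias column
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
```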
Flash LIDAR Emulator for HIL Simulation
NASA Technical Reports Server (NTRS)
Brewster, Paul F.
2010-01-01
NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is building a system for detecting hazards and automatically landing controlled vehicles safely anywhere on the Moon. The Flash Light Detection And Ranging (LIDAR) sensor is used to create on-the-fly a 3D map of the unknown terrain for hazard detection. As part of the ALHAT project, a hardware-in-the-loop (HIL) simulation testbed was developed to test the data processing, guidance, and navigation algorithms in real-time to prove their feasibility for flight. Replacing the Flash LIDAR camera with an emulator in the testbed provided a cheaper, safer, more feasible way to test the algorithms in a controlled environment. This emulator must have the same hardware interfaces as the LIDAR camera, have the same performance characteristics, and produce images similar in quality to the camera. This presentation describes the issues involved and the techniques used to create a real-time flash LIDAR emulator to support HIL simulation.
DISCRETE VOLUME-ELEMENT METHOD FOR NETWORK WATER-QUALITY MODELS
An explicit dynamic water-quality modeling algorithm is developed for tracking dissolved substances in water-distribution networks. The algorithm is based on a mass-balance relation within pipes that considers both advective transport and reaction kinetics. Complete mixing of m...
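A toy sketch of the mass-balance idea in the abstract: a pipe is split into volume elements, concentration is advected downstream by one element per time step (with the time step chosen so the flow moves exactly one element), and first-order reaction kinetics are applied. All parameters are illustrative:

```python
import numpy as np

def step_pipe(conc, inflow_conc, k_decay, dt):
    """Advance one pipe by one time step: advect by one element, then react.

    conc        : concentrations in the pipe's volume elements (upstream first)
    inflow_conc : concentration entering the upstream element this step
    k_decay     : first-order reaction rate (1/s); dt : time step (s)
    """
    advected = np.empty_like(conc)
    advected[0] = inflow_conc
    advected[1:] = conc[:-1]                 # plug flow: shift one element downstream
    return advected * np.exp(-k_decay * dt)  # first-order reaction kinetics

# toy usage: chlorine-like decay along a 10-element pipe
c = np.zeros(10)
for _ in range(50):
    c = step_pipe(c, inflow_conc=1.0, k_decay=1e-4, dt=60.0)
```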
NASA Astrophysics Data System (ADS)
Zhang, Guozhi; Petrov, Dimitar; Marshall, Nicholas; Bosmans, Hilde
2017-03-01
Digital breast tomosynthesis (DBT) is a relatively new diagnostic imaging modality for women. Currently, various models of DBT systems are available on the market and the number of installations is rapidly increasing. EUREF, the European Reference Organization for Quality Assured Breast Screening and Diagnostic Services, has proposed a preliminary Guideline - protocol for the quality control of the physical and technical aspects of digital breast tomosynthesis systems, with an ultimate aim of providing limiting values guaranteeing proper performance for different applications of DBT. In this work, we introduce an adaptive toolkit developed in accordance with this guideline to facilitate the process of image quality evaluation in DBT performance test. This toolkit implements robust algorithms to quantify various technical parameters of DBT images and provides a convenient user interface in practice. Each test is built into a separate module with configurations set corresponding to the European guideline, which can be easily adapted to different settings and extended with additional tests. This toolkit largely improves the efficiency for image quality evaluation of DBT. It is also going to evolve with the development of protocols in quality control of DBT systems.
Segmentation of financial seals and its implementation on a DSP-based system
NASA Astrophysics Data System (ADS)
He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao
2009-11-01
Automatic seal imprint identification is an important part of modern financial security, and accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor) based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement the image processing and to control and coordinate the work of each system module. The proposed algorithm consisted of three stages: extraction of the grayscale seal image, denoising and binarization. A grayscale seal image was extracted by a color transform from a financial instrument image. Adaptive morphological operations were used to highlight details of the extracted grayscale seal image and smooth the background. After median filtering for noise elimination, the filtered seal image was binarized by Otsu's method. The algorithm was developed in the DSP development environment CCS with the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, the white balance calibration and the coarse positioning of the seal imprint were implemented by the TMS320DM642 controlling the image acquisition, and the IMGLIB of the TMS320DM642 was used to improve efficiency. The experimental results showed that financial seal imprints, even those with intricate and dense strokes, can be correctly segmented by the proposed algorithm. Adhesion and incompleteness distortions in the segmentation results were reduced, even when the original seal imprint was of poor quality.
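A compact OpenCV sketch of the three-stage pipeline (colour extraction, morphological smoothing plus median filtering, Otsu binarization). The red-dominance colour transform used here is a simple stand-in for the paper's extraction step, not the authors' exact transform:

```python
import cv2

def segment_seal(bgr):
    """Extract a binary seal image from a financial-instrument photo (sketch)."""
    b, g, r = cv2.split(bgr)
    # stand-in colour transform: red dominance highlights a red seal imprint
    gray = cv2.subtract(r, cv2.max(b, g))
    # morphological closing to strengthen strokes, then a median filter for noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    gray = cv2.medianBlur(gray, 3)
    # Otsu's method picks the binarization threshold automatically
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```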
NASA Astrophysics Data System (ADS)
Franz, Astrid; Carlsen, Ingwer C.; Renisch, Steffen; Wischmann, Hans-Aloys
2006-03-01
Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially, but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest allows to restrict the analysis of the registration accuracy to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used for identifying the best strategy for the initial placement of the control points.
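A small sketch of the quality measure used above: the displacement error between an estimated and a ground-truth deformation field, averaged over the whole image or over a region-of-interest mask (array shapes are assumptions for illustration):

```python
import numpy as np

def mean_displacement_error(u_est, u_true, roi=None):
    """Average Euclidean displacement error of a deformation field.

    u_est, u_true : arrays of shape (H, W, 2) holding per-pixel displacements
    roi           : optional boolean mask restricting the average to a region of interest
    """
    err = np.linalg.norm(u_est - u_true, axis=-1)
    return err[roi].mean() if roi is not None else err.mean()
```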
Takahashi, Yoshiaki; Seki, Hirokazu
2009-01-01
This paper proposes a novel regenerative braking control system for electric wheelchairs for senior citizens. The electric powered wheelchair, which generates its driving force with electric motors according to the human operation, is expected to be widely used as a mobility support system for elderly people. This study focuses on braking control to realize safe and smooth stopping motion using a regenerative braking control technique based on a fuzzy algorithm. The proposed control system, which combines stopping-distance estimation with variable frequency control of a step-up/down chopper type capacitor regenerative circuit, is expected to improve ride quality and recycle energy. Driving experiments confirm the effectiveness of the proposed control system.
Tactical Approaches for Making a Successful Satellite Passive Microwave ESDR
NASA Astrophysics Data System (ADS)
Hardman, M.; Brodzik, M. J.; Gotberg, J.; Long, D. G.; Paget, A. C.
2014-12-01
Our NASA MEaSUREs project is producing a new, enhanced resolution gridded Earth System Data Record for the entire satellite passive microwave (SMMR, SSM/I-SSMIS and AMSR-E) time series. Our project goals are twofold: to produce a well-documented, consistently processed, high-quality historical record at higher spatial resolutions than have previously been available, and to transition the production software to the NSIDC DAAC for ongoing processing after our project completion. In support of these goals, our distributed team at BYU and NSIDC faces project coordination challenges to produce a high-quality data set that our user community will accept as a replacement for the currently available historical versions of these data. We work closely with our DAAC liaison on format specifications, data and metadata plans, and project progress. In order for the user community to understand and support our project, we have solicited a team of Early Adopters who are reviewing and evaluating a prototype version of the data. Early Adopter feedback will be critical input to our final data content and format decisions. For algorithm transparency and accountability, we have released an Algorithm Theoretical Basis Document (ATBD) and detailed supporting technical documentation, with rationale for all algorithm implementation decisions. For distributed team management, we are using collaborative tools for software revision control and issue tracking. For reliably transitioning a research-quality image reconstruction software system to production-quality software suitable for use at the DAAC, we have adopted continuous integration methods for running automated regression testing. Our presentation will summarize both advantages and challenges of each of these tactics in ensuring production of a successful ESDR and an enduring production software system.
A trust region-based approach to optimize triple response systems
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen
2014-05-01
This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.
Prediction of pork quality parameters by applying fractals and data mining on MRI.
Caballero, Daniel; Pérez-Palacios, Trinidad; Caro, Andrés; Amigo, José Manuel; Dahl, Anders B; ErsbØll, Bjarne K; Antequera, Teresa
2017-09-01
This work investigates for the first time the use of MRI, fractal algorithms and data mining techniques to determine pork quality parameters non-destructively. The main objective was to evaluate the capability of fractal algorithms (Classical Fractal Algorithm, CFA; Fractal Texture Algorithm, FTA; and One Point Fractal Texture Algorithm, OPFTA) to analyse MRI in order to predict quality parameters of loin. In addition, the effects of the MRI acquisition sequence (Gradient Echo, GE; Spin Echo, SE; and Turbo 3D, T3D) and the predictive data mining technique (Isotonic Regression, IR and Multiple Linear Regression, MLR) were analysed. Both fractal algorithms FTA and OPFTA are appropriate for analysing MRI of loins. The acquisition sequence, the fractal algorithm and the data mining technique seem to influence the prediction results. For most physico-chemical parameters, prediction equations with moderate to excellent correlation coefficients were achieved by using the following combinations of MRI acquisition sequence, fractal algorithm and data mining technique: SE-FTA-MLR, SE-OPFTA-IR, GE-OPFTA-MLR and SE-OPFTA-MLR, with the last one offering the best prediction results. Thus, SE-OPFTA-MLR could be proposed as an alternative technique to determine physico-chemical traits of fresh and dry-cured loins in a non-destructive way with high accuracy. Copyright © 2017. Published by Elsevier Ltd.
Colonoscopy video quality assessment using hidden Markov random fields
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby
2011-03-01
With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40% - 50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
Stinner, B; Bauhofer, A; Lorenz, W; Rothmund, M; Plaul, U; Torossian, A; Celik, I; Sitter, H; Koller, M; Black, A; Duda, D; Encke, A; Greger, B; van Goor, H; Hanisch, E; Hesterberg, R; Klose, K J; Lacaine, F; Lorijn, R H; Margolis, C; Neugebauer, E; Nyström, P O; Reemst, P H; Schein, M; Solovera, J
2001-05-01
Presentation of a new type of study protocol for evaluating the effectiveness of an immune modifier (rhG-CSF, filgrastim): prevention of postoperative infectious complications and of sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). A randomised, placebo-controlled, double-blinded, single-centre study is performed at a University Hospital (n = 40 patients for each group). This part presents the course of the individual patient and a complication algorithm for the management of anastomotic leakage and quality management. Part three of the protocol has three major sections: (1) the course of the individual patient, using a comprehensive graphic display covering the perioperative period, hospital stay and post-discharge outcome; (2) a centre-based clinical practice guideline for the management of the most important postoperative complication, anastomotic leakage, including evidence-based support for each step of the algorithm; and (3) data management, ethics and organisational structure. Future studies with immune modifiers will also fail if they are not better structured (reduction of variance) to achieve uniform patient management in a complex clinical scenario. This new type of single-centre trial aims to reduce the gap between animal experiments and clinical trials or, if it fails, at least to demonstrate new ways of explaining the failures.
North, Frederick; Varkey, Prathiba; Caraballo, Pedro; Vsetecka, Darlene; Bartel, Greg
2007-10-11
Complex decision support software can require significant effort in maintenance and enhancement. A quality improvement tool, the prioritization matrix, was successfully used to guide software enhancement of algorithms in a symptom assessment call center.
Synthesis of the unmanned aerial vehicle remote control augmentation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomczyk, Andrzej, E-mail: A.Tomczyk@prz.edu.pl
Medium size Unmanned Aerial Vehicles (UAV) usually fly as autonomous aircraft, including the automatic take-off and landing phases. However, in the case of an on-board control system failure, remote steering is used as an emergency procedure, so remote manual control of the unmanned aerial vehicle is used more often during the take-off and landing phases. Depending on the UAV take-off mass and speed (total energy), a potential crash can be very dangerous for the airplane and the environment, so the handling qualities of the UAV are important from the pilot-operator's point of view. In many cases the dynamic properties of the remotely controlled UAV are not suitable for obtaining the desired handling qualities; in this case a control augmentation system (CAS) should be applied. Because of the potential failure of the on-board control system, a better solution is to place the CAS algorithms on the ground station computers. The method of shaping UAV handling qualities in the case of basic control system failure is presented in this paper. The main idea of this method is that the UAV reaction to the operator's steering signals should be similar - almost the same - as the reaction of an 'ideal' remotely controlled aircraft. The model following method was used for calculating the controller parameters. The numerical example concerns the medium size MP-02A UAV applied as an aerial observer system.
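A minimal discrete-time model-following sketch of the CAS idea: the ground-station controller drives the UAV response toward that of an 'ideal' reference model of the remotely controlled aircraft. The first-order reference model and the error gain are illustrative assumptions, not the paper's design:

```python
def model_following_step(y_uav, y_ref, stick, k_err=0.8, a_ref=0.9, b_ref=0.1):
    """One control-augmentation step executed on the ground station.

    y_uav : measured UAV response (e.g. pitch rate)
    y_ref : current output of the reference ('ideal') model
    stick : operator stick input
    Returns the augmented command and the updated reference-model output."""
    y_ref_next = a_ref * y_ref + b_ref * stick   # ideal aircraft reaction to the stick
    u = stick + k_err * (y_ref_next - y_uav)     # steer the UAV toward the reference behaviour
    return u, y_ref_next
```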
An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.
K, Manasa; Channappayya, Sumohana S
2016-06-01
We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
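A sketch of the local flow statistics listed above, computed per patch of a dense flow field. The patch size and the use of the 2x2 flow covariance for the eigenvalue are assumptions for illustration:

```python
import numpy as np

def flow_patch_stats(flow, patch=8):
    """Per-patch mean, standard deviation, coefficient of variation and minimum
    eigenvalue of the 2x2 covariance of optical-flow vectors (flow shape: (H, W, 2))."""
    h, w, _ = flow.shape
    stats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = flow[i:i + patch, j:j + patch].reshape(-1, 2)
            mag = np.linalg.norm(p, axis=1)
            mean, std = mag.mean(), mag.std()
            cv = std / (mean + 1e-6)                      # coefficient of variation
            lam_min = np.linalg.eigvalsh(np.cov(p.T)).min()
            stats.append((mean, std, cv, lam_min))
    return np.array(stats)
```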
Protective Controller against Cascade Outages with Selective Harmonic Compensation Function
NASA Astrophysics Data System (ADS)
Abramovich, B. N.; Kuznetsov, P. A.; Sychev, Yu A.
2018-05-01
The paper presents data on power quality and the development of protective devices for power networks with distributed generation (DG). The research has shown that power quality requirements for DG networks differ from conventional ones, which is why the main approaches, protective equipment and filters should be modified. An algorithm was developed for the detection and prevention of cascade outages that can lead to a blackout in DG networks, and a structural scheme was proposed for a new active power filter for selective harmonic compensation. The analysis of these approaches and equipment led to the development of a protective device that can monitor the power balance and cut off non-essential consumers. The last part of the article describes a microcontroller prototype developed for connection to the existing power station control center.
Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2008-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. Version 5 contains accurate case-by-case error estimates for most derived products, which are also used for quality control. We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM. Assimilation of quality controlled temperature profiles resulted in significantly improved forecast skill in both the Northern Hemisphere and Southern Hemisphere Extra-Tropics, compared to that obtained from analyses obtained when all data used operationally by NCEP except for AIRS data is assimilated. Experiments using different Quality Control thresholds for assimilation of AIRS temperature retrievals showed that a medium quality control threshold performed better than a tighter threshold, which provided better overall sounding accuracy, or a looser threshold, which provided better spatial coverage of accepted soundings. We are conducting more experiments to further optimize this balance of spatial coverage and sounding accuracy from the data assimilation perspective. In all cases, temperature soundings were assimilated well below cloud level in partially cloudy cases. The positive impact of assimilating AIRS derived atmospheric temperatures all but vanished when only AIRS stratospheric temperatures were assimilated. Forecast skill resulting from assimilation of AIRS radiances uncontaminated by clouds, instead of AIRS temperature soundings, was only slightly better than that resulting from assimilation of only stratospheric AIRS temperatures. This reduction in forecast skill is most likely the result of significant loss of tropospheric information when only AIRS radiances unaffected by clouds are used in the data assimilation process.
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms which allow to include prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state of the art iterative reconstruction methods.
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
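The two objectives named above can be computed directly with standard tooling; the following hedged sketch uses networkx and scikit-learn (these libraries are assumptions for illustration, not what the authors used) to evaluate snapshot quality and temporal similarity for a candidate partition at one time step:

```python
from networkx.algorithms.community import modularity
from sklearn.metrics import normalized_mutual_info_score

def evaluate_partition(graph, communities, prev_labels, labels):
    """Return (snapshot quality, temporal similarity) for one time step.

    communities : list of node sets describing the current partition of graph
    prev_labels, labels : per-node community labels at t-1 and t (same node order)
    """
    snapshot_quality = modularity(graph, communities)                        # objective 1
    temporal_similarity = normalized_mutual_info_score(prev_labels, labels)  # objective 2
    return snapshot_quality, temporal_similarity
```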
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale
Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
NASA Astrophysics Data System (ADS)
Azarnova, T. V.; Titova, I. A.; Barkalov, S. A.
2018-03-01
The article presents an algorithm for obtaining an integral assessment of the quality of an organization from the perspective of customers, based on a method of aggregating linguistic information over a multilevel hierarchical system of quality assessment. The algorithm is constructive: it provides not only the possibility of obtaining an integral evaluation, but also the development of a quality improvement strategy based on the method of linguistic decomposition, which forms the minimum set of areas of work with clients whose quality improvement will allow the required level of the integrated quality assessment to be reached.
Fast mapping algorithm of lighting spectrum and GPS coordinates for a large area
NASA Astrophysics Data System (ADS)
Lin, Chih-Wei; Hsu, Ke-Fang; Hwang, Jung-Min
2016-09-01
In this study, we propose a fast mapping technique for evaluating light quality over a large area. Outdoor light quality, measured by illuminance uniformity and the color rendering index, is difficult to confirm after improvement works. We develop an algorithm for a system that maps lighting quality to GPS coordinates using a micro spectrometer and GPS tracker integrated with a quadcopter or unmanned aerial vehicle. After the vehicle cruises at a constant altitude, the lighting quality data are transmitted and immediately mapped to evaluate the light quality over the large area.
Walsh, Timothy S; Kydonaki, Kalliopi; Lee, Robert J; Everingham, Kirsty; Antonelli, Jean; Harkness, Ronald T; Cole, Stephen; Quasim, Tara; Ruddy, James; McDougall, Marcia; Davidson, Alan; Rutherford, John; Richards, Jonathan; Weir, Christopher J
2016-03-01
To develop sedation, pain, and agitation quality measures using process control methodology and evaluate their properties in clinical practice. A Sedation Quality Assessment Tool was developed and validated to capture data for 12-hour periods of nursing care. Domains included pain/discomfort and sedation-agitation behaviors; sedative, analgesic, and neuromuscular blocking drug administration; ventilation status; and conditions potentially justifying deep sedation. Predefined sedation-related adverse events were recorded daily. Using an iterative process, algorithms were developed to describe the proportion of care periods with poor limb relaxation, poor ventilator synchronization, unnecessary deep sedation, agitation, and an overall optimum sedation metric. Proportion charts described processes over time (2 monthly intervals) for each ICU. The numbers of patients treated between sedation-related adverse events were described with G charts. Automated algorithms generated charts for 12 months of sequential data. Mean values for each process were calculated, and variation within and between ICUs explored qualitatively. Eight Scottish ICUs over a 12-month period. Mechanically ventilated patients. None. The Sedation Quality Assessment Tool agitation-sedation domains correlated with the Richmond Sedation Agitation Scale score (Spearman ρ = 0.75) and were reliable in clinician-clinician (weighted kappa; κ = 0.66) and clinician-researcher (κ = 0.82) comparisons. The limb movement domain had fair correlation with Behavioral Pain Scale (ρ = 0.24) and was reliable in clinician-clinician (κ = 0.58) and clinician-researcher (κ = 0.45) comparisons. Ventilator synchronization correlated with Behavioral Pain Scale (ρ = 0.54), and reliability in clinician-clinician (κ = 0.29) and clinician-researcher (κ = 0.42) comparisons was fair-moderate. Eight hundred twenty-five patients were enrolled (range, 59-235 across ICUs), providing 12,385 care periods for evaluation (range 655-3,481 across ICUs). The mean proportion of care periods with each quality metric varied between ICUs: excessive sedation 12-38%; agitation 4-17%; poor relaxation 13-21%; poor ventilator synchronization 8-17%; and overall optimum sedation 45-70%. Mean adverse event intervals ranged from 1.5 to 10.3 patients treated. The quality measures appeared relatively stable during the observation period. Process control methodology can be used to simultaneously monitor multiple aspects of pain-sedation-agitation management within ICUs. Variation within and between ICUs could be used as triggers to explore practice variation, improve quality, and monitor this over time.
Performance seeking control excitation mode
NASA Technical Reports Server (NTRS)
Schkolnik, Gerard
1995-01-01
Flight testing of the performance seeking control (PSC) excitation mode was successfully completed at NASA Dryden on the F-15 highly integrated digital electronic control (HIDEC) aircraft. Although the excitation mode was not one of the original objectives of the PSC program, it was rapidly prototyped and implemented into the architecture of the PSC algorithm, allowing valuable and timely research data to be gathered. The primary flight test objective was to investigate the feasibility of a future measurement-based performance optimization algorithm. This future algorithm, called AdAPT, which stands for adaptive aircraft performance technology, generates and applies excitation inputs to selected control effectors. Fourier transformations are used to convert measured response and control effector data into frequency domain models which are mapped into state space models using multiterm frequency matching. Formal optimization principles are applied to produce an integrated, performance optimal effector suite. The key technical challenge of the measurement-based approach is the identification of the gradient of the performance index with respect to the selected control effector. This concern was addressed by the excitation mode flight test. The AdAPT feasibility study utilized the PSC excitation mode to apply separate sinusoidal excitation trims to two controls: one on the aircraft, the inlet first ramp (cowl), and one on the engine, the throat area. Aircraft control and response data were recorded using on-board instrumentation and analyzed post-flight. Sensor noise characteristics, axial acceleration performance gradients, and repeatability were determined. Results were compared to pilot comments to assess the ride quality. Flight test results indicate that performance gradients were identified at all flight conditions, sensor noise levels were acceptable at the frequencies of interest, and excitations were generally not sensed by the pilot.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tung, Chuan-Jong; Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan; Yu, Pei-Chieh
2010-01-01
During radiotherapy treatments, quality assurance/control is essential, particularly dose delivery to patients. This study was designed to verify midline doses with diode in vivo dosimetry. Dosimetry was studied for 6-MV bilateral fields in head and neck cancer treatments and 10-MV bilateral and anteroposterior/posteroanterior (AP/PA) fields in pelvic cancer treatments. Calibrations with corrections of diodes were performed using plastic water phantoms; 190 and 100 portals were studied for head and neck and pelvis treatments, respectively. Calculations of midline doses were made using the midline transmission, arithmetic mean, and geometric mean algorithms. These midline doses were compared with the treatment planning system target doses for lateral or AP (PA) portals and paired opposed portals. For head and neck treatments, all 3 algorithms were satisfactory, although the geometric mean algorithm was less accurate and more uncertain. For pelvis treatments, the arithmetic mean algorithm seemed unacceptable, whereas the other algorithms were satisfactory. The random error was reduced by using averaged midline doses of paired opposed portals because the asymmetric effect was averaged out. Considering the simplicity of in vivo dosimetry, the arithmetic mean and geometric mean algorithm should be adopted for head/neck and pelvis treatments, respectively.
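A minimal sketch of the three midline-dose estimates being compared, written with textbook-style definitions (arithmetic mean and geometric mean of entrance and exit doses, and a transmission-scaled entrance dose); the exact formulas and parameterization used in the study are not reproduced here, so the function signatures and the transmission factor are assumptions for illustration only.

```python
import math

def midline_arithmetic(d_entrance, d_exit):
    # Arithmetic-mean estimate of the midline dose (textbook form, assumed).
    return 0.5 * (d_entrance + d_exit)

def midline_geometric(d_entrance, d_exit):
    # Geometric-mean estimate of the midline dose (textbook form, assumed).
    return math.sqrt(d_entrance * d_exit)

def midline_transmission(d_entrance, transmission_factor):
    # Transmission-based estimate: entrance dose scaled by a measured midline
    # transmission factor (hypothetical parameterization).
    return d_entrance * transmission_factor

def paired_opposed_average(midline_a, midline_b):
    # Averaging the midline estimates of paired opposed portals reduces the
    # random error, since the asymmetric contribution of each beam tends to cancel.
    return 0.5 * (midline_a + midline_b)
```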
Experimental Optimization of Exposure Index and Quality of Service in Wlan Networks.
Plets, David; Vermeeren, Günter; Poorter, Eli De; Moerman, Ingrid; Goudos, Sotirios K; Luc, Martens; Wout, Joseph
2017-07-01
This paper presents the first real-life optimization of the Exposure Index (EI). A genetic optimization algorithm is developed and applied to three real-life Wireless Local Area Network scenarios in an experimental testbed. The optimization accounts for downlink, uplink and uplink of other users, for realistic duty cycles, and ensures a sufficient Quality of Service to all users. EI reductions up to 97.5% compared to a reference configuration can be achieved in a downlink-only scenario, in combination with an improved Quality of Service. Due to the dominance of uplink exposure and the lack of WiFi power control, no optimizations are possible in scenarios that also consider uplink traffic. However, future deployments that do implement WiFi power control can be successfully optimized, with EI reductions up to 86% compared to a reference configuration and an EI that is 278 times lower than optimized configurations under the absence of power control.
Self-tuning multivariable pole placement control of a multizone crystal growth furnace
NASA Technical Reports Server (NTRS)
Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.
1992-01-01
This paper presents the design and implementation of a multivariable self-tuning temperature controller for the control of lead bromide crystal growth. The crystal grows inside a multizone transparent furnace. There are eight interacting heating zones shaping the axial temperature distribution inside the furnace. A multi-input, multi-output furnace model is identified on-line by a recursive least squares estimation algorithm. A multivariable pole placement controller based on this model is derived and implemented. Comparison between single-input, single-output and multi-input, multi-output self-tuning controllers demonstrates that the zone-to-zone interactions can be minimized better by a multi-input, multi-output controller design. This directly affects the quality of crystal grown.
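A minimal sketch of the on-line recursive least squares (RLS) identification step that this kind of self-tuning controller relies on. The model order, forgetting factor, simulated zone dynamics, and variable names below are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least squares step with forgetting factor `lam`.
    theta : current parameter estimate (n,)
    P     : current covariance matrix (n, n)
    phi   : regressor vector built from past inputs/outputs (n,)
    y     : newly measured output (scalar)
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)                  # gain vector
    err = y - (phi.T @ theta.reshape(-1, 1)).item()        # prediction error
    theta = theta + K.flatten() * err                      # parameter update
    P = (P - K @ phi.T @ P) / lam                          # covariance update
    return theta, P

# Usage sketch: identify a first-order single-zone model y(t) = a*y(t-1) + b*u(t-1).
theta = np.zeros(2)
P = 1e3 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
rng = np.random.default_rng(0)
for t in range(200):
    u = 1.0                                                 # heater command (placeholder)
    y = 0.9 * y_prev + 0.2 * u_prev + 0.01 * rng.standard_normal()  # "measured" temperature
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print("estimated [a, b]:", theta)
```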
MERIS Retrieval of Water Quality Components in the Turbid Albemarle-Pamlico Sound Estuary, USA
Two remote-sensing optical algorithms for the retrieval of the water quality components (WQCs) in the Albemarle-Pamlico Estuarine System (APES) have been developed and validated for chlorophyll a (Chl) concentration. Both algorithms are semiempirical because they incorporate some...
Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin
2017-01-01
Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects imaging quality, especially for small target detection. It is difficult to eliminate using calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320
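A minimal sketch of a temporal high-pass nonuniformity correction step of the general kind described above: a per-pixel running mean tracks the slowly varying offset and is subtracted from each frame. A simple threshold gate stands in for the paper's fuzzy scene classification, and the time constant and threshold values are illustrative assumptions.

```python
import numpy as np

class TemporalHighPassNUC:
    """Simplified temporal high-pass NUC: each pixel's slowly varying offset is
    tracked by an exponential running mean and subtracted from the frame."""

    def __init__(self, shape, time_constant=100.0, detail_threshold=5.0):
        self.mean = np.zeros(shape, dtype=np.float32)   # per-pixel offset estimate
        self.alpha = 1.0 / time_constant
        self.detail_threshold = detail_threshold        # stands in for the fuzzy scene classifier

    def correct(self, frame):
        frame = frame.astype(np.float32)
        residual = frame - self.mean
        # Gate the update: pixels with a large residual are treated as scene
        # detail (or a small target) and excluded, so details are not washed out.
        update_mask = np.abs(residual) < self.detail_threshold
        self.mean[update_mask] += self.alpha * residual[update_mask]
        return residual
```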
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Comiso, Josefino C.; Fraser, Robert S.; Firestone, James K.; Schieber, Brian D.; Yeh, Eueng-Nan; Arrigo, Kevin R.; Sullivan, Cornelius W.
1994-01-01
Although the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Calibration and Validation Program relies on the scientific community for the collection of bio-optical and atmospheric correction data as well as for algorithm development, it does have the responsibility for evaluating and comparing the algorithms and for ensuring that the algorithms are properly implemented within the SeaWiFS Data Processing System. This report consists of a series of sensitivity and algorithm (bio-optical, atmospheric correction, and quality control) studies based on Coastal Zone Color Scanner (CZCS) and historical ancillary data undertaken to assist in the development of SeaWiFS specific applications needed for the proper execution of that responsibility. The topics presented are as follows: (1) CZCS bio-optical algorithm comparison, (2) SeaWiFS ozone data analysis study, (3) SeaWiFS pressure and oxygen absorption study, (4) pixel-by-pixel pressure and ozone correction study for ocean color imagery, (5) CZCS overlapping scenes study, (6) a comparison of CZCS and in situ pigment concentrations in the Southern Ocean, (7) the generation of ancillary data climatologies, (8) CZCS sensor ringing mask comparison, and (9) sun glint flag sensitivity study.
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equally sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Studies in the literature have shown the problem to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. Simulation results show that the proposed algorithm outperforms the standard sequential genetic algorithm in terms of both solution quality and convergence speed. As a direction for future study, this research can be extended to incorporate local search operators that include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another avenue for future research.
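A minimal skeleton (not the paper's implementation) of an island-model GA loop with periodic ring migration of the best individuals. The fitness function (e.g., cut size with a balance penalty) is supplied by the caller, and a fixed mutation rate stands in for the fuzzy controller that the paper uses to balance exploration and exploitation; all parameter values are illustrative assumptions.

```python
import random

def evolve_islands(fitness, n_islands=4, pop_size=30, genome_len=64,
                   generations=200, migrate_every=20, mutation_rate=0.02):
    """Island-model GA skeleton; lower fitness is better (e.g., cut size)."""
    islands = [[[random.randint(0, 1) for _ in range(genome_len)]
                for _ in range(pop_size)] for _ in range(n_islands)]

    def step(pop):
        pop.sort(key=fitness)
        next_pop = pop[:2]                                 # elitism
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)   # truncation selection
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                      # one-point crossover
            child = [g ^ (random.random() < mutation_rate) for g in child]  # bit-flip mutation
            next_pop.append(child)
        return next_pop

    for gen in range(generations):
        islands = [step(pop) for pop in islands]
        if gen % migrate_every == 0:                       # ring migration of best individuals
            best = [min(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop[-1] = list(best[(i - 1) % n_islands])
    return min((ind for pop in islands for ind in pop), key=fitness)
```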
Longitudinal flight control of a civil aircraft with handling qualities satisfaction
NASA Astrophysics Data System (ADS)
Saussie, David Alexandre
2010-03-01
Fulfilling handling qualities remains a challenging problem during flight control design. These criteria of differing natures are derived from wide experience based upon flight tests and data analysis, and they have to be considered if one expects good behaviour of the aircraft. The goal of this thesis is to develop synthesis methods able to satisfy these criteria with fixed classical architectures imposed by the manufacturer or with a new flight control architecture. This is applied to the longitudinal flight model of a Bombardier Inc. business jet aircraft, namely the Challenger 604. The first step of our work consists of compiling the most commonly used handling quality criteria in order to compare them. Special attention is devoted to the dropback criterion, for which theoretical analysis leads us to establish a practical formulation for synthesis purposes. Moreover, the comparison of the criteria through a reference model highlighted dominant criteria that, once satisfied, ensure that the others are satisfied too. Consequently, we are able to consider the fulfillment of these criteria in the fixed control architecture framework. Guardian maps (Saydy et al., 1990) are then considered to handle the problem. Originally intended for robustness studies, they are integrated into various algorithms for controller synthesis. Incidentally, this fixed-architecture problem is similar to the static output feedback stabilization problem and reduced-order controller synthesis. Algorithms performing stabilization and pole assignment in a specific region of the complex plane are then proposed. Afterwards, they are extended to handle the gain-scheduling problem. The controller is then scheduled through the entire flight envelope with respect to scheduling parameters. Thereafter, the fixed architecture is put aside, conserving only the same output signals. The main idea is to use H-infinity synthesis to obtain an initial controller that satisfies the handling qualities through reference model pairing and is robust to mass and center of gravity variations. Using robust modal control (Magni, 2002), we are able to substantially reduce the controller order and to structure it so as to come close to a classical architecture. An auto-scheduling method finally allows us to schedule the controller with respect to scheduling parameters. Two different paths are used to solve the same problem; each one exhibits its own advantages and disadvantages.
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.
Huang, Shuqiang; Tao, Ming
2017-01-22
Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from being trapped in a local optimum. With the addition of an adaptive opposition-based search and dynamic parameter adjustment, the algorithm can maintain the diversity of the entire swarm when solving geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment compared to the PSO or K-medoids algorithms.
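A minimal sketch of the core competitive swarm optimizer update: particles are paired at random, the loser of each pairwise fitness comparison learns from the winner and (weakly) from the swarm mean, and the winner passes to the next generation unchanged. The social factor, swarm size, and the toy objective standing in for the gateway-deployment cost are illustrative assumptions; the adaptive opposition-based search from the paper is not reproduced.

```python
import numpy as np

def cso_step(X, V, fitness, phi=0.1, rng=np.random):
    """One competitive swarm optimizer generation.
    X : particle positions (n, d); V : velocities (n, d); n must be even."""
    n, d = X.shape
    order = rng.permutation(n)
    f = fitness(X)
    mean_pos = X.mean(axis=0)
    for i in range(0, n, 2):
        a, b = order[i], order[i + 1]
        winner, loser = (a, b) if f[a] <= f[b] else (b, a)
        r1, r2, r3 = rng.rand(3, d)
        # The loser learns from the winner and from the swarm mean; the winner survives unchanged.
        V[loser] = r1 * V[loser] + r2 * (X[winner] - X[loser]) + phi * r3 * (mean_pos - X[loser])
        X[loser] = X[loser] + V[loser]
    return X, V

# Usage sketch on a toy objective (sum of squares), standing in for the
# geometric K-center gateway deployment cost.
rng = np.random.RandomState(0)
X = rng.uniform(-5, 5, size=(40, 10))
V = np.zeros_like(X)
for _ in range(200):
    X, V = cso_step(X, V, lambda P: np.sum(P ** 2, axis=1), rng=rng)
print("best cost:", np.sum(X ** 2, axis=1).min())
```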
Reducing false asystole alarms in intensive care.
Dekimpe, Remi; Heldt, Thomas
2017-07-01
High rates of false monitoring alarms in intensive care can desensitize staff and therefore pose a significant risk to patient safety. Like other critical arrhythmia alarms, asystole alarms require immediate attention by the care providers as a true asystole event can be acutely life threatening. Here, it is illustrated that most false asystole alarms can be attributed to poor signal quality, and we propose and evaluate an algorithm to identify data windows of poor signal quality and thereby help suppress false asystole alarms. The algorithm combines intuitive signal-quality features (degree of signal saturation and baseline wander) and information from other physiological signals that might be available. Algorithm training and testing was performed on the MIMIC II and 2015 PhysioNet/Computing in Cardiology Challenge databases, respectively. The algorithm achieved an alarm specificity of 81.0% and sensitivity of 95.4%, missing only one out of 22 true asystole alarms. On a separate neonatal data set, the algorithm was able to reject 89.7% (890 out of 992) of false asystole alarms while keeping all 22 true events. The results show that the false asystole alarm rate can be significantly reduced through basic signal quality evaluation.
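A minimal sketch of the two intuitive signal-quality features named above, computed on a short ECG window: the fraction of samples pinned at the ADC rails (saturation) and the relative power of low-frequency baseline wander. The cutoff frequency, thresholds, and the simple OR-combination are illustrative assumptions, not the trained classifier from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def saturation_fraction(ecg, adc_min, adc_max, tol=1e-6):
    """Fraction of samples pinned at the ADC rails (signal saturation)."""
    ecg = np.asarray(ecg, dtype=float)
    return np.mean((ecg >= adc_max - tol) | (ecg <= adc_min + tol))

def baseline_wander_power(ecg, fs, cutoff_hz=0.7):
    """Relative power of the low-frequency baseline component."""
    ecg = np.asarray(ecg, dtype=float)
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    baseline = filtfilt(b, a, ecg)
    return np.sum(baseline ** 2) / (np.sum(ecg ** 2) + 1e-12)

def poor_quality(ecg, fs, adc_min, adc_max, sat_thresh=0.2, wander_thresh=0.8):
    """Flag a window as poor quality; such windows can be used to suppress an
    asystole alarm (thresholds here are placeholders, not the published ones)."""
    return (saturation_fraction(ecg, adc_min, adc_max) > sat_thresh or
            baseline_wander_power(ecg, fs) > wander_thresh)
```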
GraDit: graph-based data repair algorithm for multiple data edits rule violations
NASA Astrophysics Data System (ADS)
Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.
2018-03-01
Constraint-based data cleaning captures data violations against a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally they are similar, in that each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations that uses an undirected hypergraph to represent rule violations. Nevertheless, that algorithm cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of all defined rules; this representation is used to capture the interaction between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm using an undirected graph as the violation representation model gave better data quality than the algorithm using an undirected hypergraph as the representation model.
Community detection in complex networks by using membrane algorithm
NASA Astrophysics Data System (ADS)
Liu, Chuang; Fan, Linan; Liu, Zhou; Dai, Xiang; Xu, Jiamei; Chang, Baoren
Community detection in complex networks is a key problem of network analysis. In this paper, a new membrane algorithm is proposed to solve the community detection problem in complex networks. The proposed algorithm is based on membrane systems, which consist of objects, reaction rules, and a membrane structure. Each object represents a candidate partition of a complex network, and the quality of objects is evaluated according to network modularity. The reaction rules include evolutionary rules and communication rules. Evolutionary rules are responsible for improving the quality of objects and employ the differential evolution algorithm to evolve objects. Communication rules implement the information exchange among membranes. Finally, the proposed algorithm is evaluated on synthetic networks, real-world networks with known partitions, and large-scale networks with unknown partitions. The experimental results indicate the superior performance of the proposed algorithm in comparison with other experimental algorithms.
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
A high-performance genetic algorithm: using traveling salesman problem as a case.
Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
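A minimal sketch of the gene-caching idea described above, specialized to the TSP case: directed edges shared by every tour in the population are detected so they can be frozen and their contribution to the tour length cached, avoiding redundant computation in later generations. The representation (tours as city lists) and the freezing policy are simplified assumptions, not the paper's exact implementation.

```python
def common_edges(population):
    """Return the set of directed edges shared by every tour in the population.
    Each tour is a list of city indices; edges common to all individuals are
    likely to survive the evolution, so they can be cached and skipped later."""
    def edges(tour):
        return {(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour))}
    shared = edges(population[0])
    for tour in population[1:]:
        shared &= edges(tour)
        if not shared:
            break
    return shared

# Usage sketch: once `shared` stabilizes, crossover/mutation operators can be
# restricted so they never break these edges, and the partial tour length over
# the frozen edges can be computed once and reused in every fitness evaluation.
```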
Fat-constrained 18F-FDG PET reconstruction using Dixon MR imaging and the origin ensemble algorithm
NASA Astrophysics Data System (ADS)
Wülker, Christian; Heinzer, Susanne; Börnert, Peter; Renisch, Steffen; Prevrhal, Sven
2015-03-01
Combined PET/MR imaging allows the high-resolution anatomical information delivered by MRI to be incorporated into the PET reconstruction algorithm, improving PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Thus, our aim was to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. On the other hand, the Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, makes it possible to reconstruct PET data without the use of forward- and back-projection operations. With adequate modifications to the Markov chain transition kernel, it is possible to include anatomical a priori knowledge in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA Body Phantom simulating body water/fat composition. Reconstruction was performed 1) natively, 2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and 3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed the visibly improved contrast and reduced partial volume effect at water/fat interfaces. We observed a 17±2% increase in SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice, and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.
NASA Astrophysics Data System (ADS)
Singh, B.; Goel, S.
2015-03-01
This paper presents a grid-interfaced solar photovoltaic (SPV) energy system with a novel adaptive harmonic detection control for power quality improvement at ac mains under balanced as well as unbalanced and distorted supply conditions. The SPV energy system is capable of compensation of linear and nonlinear loads with the objectives of load balancing, harmonics elimination, power factor correction and terminal voltage regulation. The proposed control increases the utilization of PV infrastructure and brings down its effective cost due to its other benefits. The adaptive harmonic detection control algorithm is used to detect the fundamental active power component of the load currents, which is subsequently used for reference source current estimation. An instantaneous symmetrical component theory is used to obtain instantaneous positive sequence point of common coupling (PCC) voltages, which are used to derive in-phase and quadrature-phase voltage templates. The proposed grid-interfaced PV energy system is modelled and simulated in MATLAB Simulink and its performance is verified under various operating conditions.
A fingerprint classification algorithm based on combination of local and global information
NASA Astrophysics Data System (ADS)
Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu
2011-12-01
Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply reduce the number of comparisons required for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular point detection methods commonly consider only local information, these classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of the fingerprint. First, we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the singular point group. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.
The "Best Worst" Field Optimization and Focusing
NASA Technical Reports Server (NTRS)
Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark
2008-01-01
A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than relatively weighting multiple field points, only the image quality from the worst field point is considered. When optimizing a lens design, iterations are made to improve this worst field point until a different field point becomes the worst. The same technique is used to determine focus position. The algorithm works with all the various image quality metrics. It works with both symmetrical and asymmetrical systems. It works with theoretical models and real hardware.
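A minimal sketch of the "best worst" idea: the merit function is the image-quality error of the single worst field point, and the same merit is reused to pick the focus position. The placeholder image-quality function, the choice of Nelder-Mead (used here because the max() makes the merit non-smooth), and the toy parameters are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def best_worst_merit(params, field_points, image_quality):
    """Merit = image-quality error of the single worst field point.
    `image_quality(params, field)` is a placeholder for e.g. RMS spot size."""
    return max(image_quality(params, f) for f in field_points)

def optimize_lens(initial_params, field_points, image_quality):
    """Minimize the worst-field error with a generic derivative-free optimizer."""
    result = minimize(best_worst_merit, initial_params,
                      args=(field_points, image_quality), method="Nelder-Mead")
    return result.x

def best_focus(focus_positions, field_points, image_quality_at_focus):
    """Pick the focus whose worst field point is best (same minimax criterion)."""
    return min(focus_positions,
               key=lambda z: max(image_quality_at_focus(z, f) for f in field_points))

if __name__ == "__main__":
    # Toy demonstration: two "lens parameters", quadratic quality model.
    fields = [0.0, 0.5, 1.0]
    iq = lambda p, f: (p[0] - f) ** 2 + 0.1 * p[1] ** 2
    print(optimize_lens(np.array([0.0, 1.0]), fields, iq))
```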
User's guide for ERB-7 SEFDT. Volume 3: Quality control report for year-2
NASA Technical Reports Server (NTRS)
Vasanth, K. L.
1984-01-01
Problems in the solar data generated by the Nimbus 7 satellite are discussed specifically for scientific users. Major and minor data flaws in the Solar and Earth Flux Data Tape (SEFDT) were identified, defined and categorized. Solar channel assembly misalignment, data gaps, and algorithm errors were among the problems described in detail. Solar flux density data derived from SEFDT are presented in graphical form.
Quality Assessment and Control of Finite Element Solutions.
1986-05-01
solutions. However, some special-purpose and pilot finite element systems have implemented adaptive algorithms for practical performance studies ... simulator (SAFES code) developed at the University of Wyoming (Ref. 148); and the PROBE system developed by NOETIC Technologies Corporation in St. Louis (Ref. ...) ... displacements. Recent studies have demonstrated that the accuracy and rate of convergence of stresses (and strains) depend on how (and where) they ...
Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin
2016-01-01
Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression of milling processes by piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability and anti-interference ability of the constructed control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. Besides, a protection program is added in the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way for on-line vibration suppression, and the machining quality can be obviously improved. PMID:26751448
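A minimal sketch of a generic frequency-domain LMS adaptive update, included only to make the underlying mechanism concrete. The paper's specific contributions (a single-FFT, no-IFFT formulation and the hybrid error criterion) are not reproduced here; the block length, step size, and toy signals are assumptions.

```python
import numpy as np

def fd_lms_step(W, x_block, d_block, mu=0.01):
    """One generic frequency-domain LMS update.
    W       : complex filter weights, one per frequency bin
    x_block : reference block (e.g., signal driving the piezoelectric actuator)
    d_block : error/vibration block measured at the acceleration sensor
    """
    X = np.fft.rfft(x_block)
    D = np.fft.rfft(d_block)
    E = D - W * X                      # per-bin error
    W = W + mu * np.conj(X) * E        # steepest-descent weight update
    return W, E

# Usage sketch with synthetic data.
n = 256
W = np.zeros(n // 2 + 1, dtype=complex)
rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.standard_normal(n)
    d = np.convolve(x, [0.5, -0.3, 0.1], mode="same") + 0.01 * rng.standard_normal(n)
    W, E = fd_lms_step(W, x, d)
```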
Nagahama, Yuki; Shimobaba, Tomoyoshi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-05-01
A holographic projector utilizes holography techniques. However, there are several barriers to realizing holographic projections. One is deterioration of hologram image quality caused by speckle noise and ringing artifacts. The combination of the random phase-free method and the Gerchberg-Saxton (GS) algorithm has improved the image quality of holograms. However, the GS algorithm requires significant computation time. We propose faster methods for image quality improvement of random phase-free holograms using the characteristics of ringing artifacts.
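A minimal sketch of the standard Gerchberg-Saxton (GS) iteration that the proposed methods accelerate: the hologram-plane field is constrained to be phase-only while the image-plane amplitude is forced to the target. Plain FFT propagation is assumed here, and the paper's speed-ups based on the characteristics of ringing artifacts are not reproduced.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=30, seed=0):
    """Standard GS loop returning a phase-only hologram whose far-field
    reconstruction approximates `target_amplitude` (FFT propagation assumed)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    field_image = target_amplitude * np.exp(1j * phase)
    field_holo = None
    for _ in range(iterations):
        field_holo = np.fft.ifft2(field_image)
        field_holo = np.exp(1j * np.angle(field_holo))                        # phase-only constraint
        field_image = np.fft.fft2(field_holo)
        field_image = target_amplitude * np.exp(1j * np.angle(field_image))   # amplitude constraint
    return np.angle(field_holo)
```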
Clustering and Flow Conservation Monitoring Tool for Software Defined Networks.
Puente Fernández, Jesús Antonio; García Villalba, Luis Javier; Kim, Tai-Hoon
2018-04-03
Prediction systems face challenges on two fronts: the relationship between video quality and observed session features, and the dynamic changes in video quality. Software Defined Networks (SDN) is a new network architecture concept that separates the control plane (controller) from the data plane (switches) in network devices. Thanks to the southbound interface, it is possible to deploy monitoring tools to obtain the network status and retrieve a collection of statistics. Therefore, achieving the most accurate statistics depends on the strategy for monitoring and requesting information from network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure the traffic flow in SDN networks. The algorithm groups network switches into clusters according to their number of ports and applies different monitoring techniques to each cluster. Grouping avoids monitoring queries to network switches with common characteristics and thus omits redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We tested our optimization in a video streaming simulation using different types of videos. The experiments and comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar measured values while decreasing the number of queries to the switches.
Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael
2017-01-01
Background: Treatment algorithms are considered as key to improve outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Methods: Inpatients, aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES), the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Results: Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031) and ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rates (66.2%). Conclusions: A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission with depressed inpatients than treatment as usual or computerized medication choice guidance. PMID:28645191
Adli, Mazda; Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael
2017-09-01
Treatment algorithms are considered as key to improve outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Inpatients, aged 18 to 70 years with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES), the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031) and ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rates (66.2%). A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission with depressed inpatients than treatment as usual or computerized medication choice guidance.
NASA Astrophysics Data System (ADS)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu
2014-03-01
Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled irradiation doses to be reduced while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. Non-positive signals must be converted to positive values so that they can be log-transformed. Since conventional conversion methods do not consider the local variance of the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert non-positive signals to positive values mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise; the algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive values by a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique dramatically reduces shading artifacts and can also cooperate with the post-log data filter to reduce streak artifacts.
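A minimal sketch of the second step described above: non-positive raw measurements are replaced with a local mean so that the sinogram can be log-transformed. The neighborhood size and the clamping floor are illustrative assumptions, and the first (penalized weighted least squares) step is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def make_positive_by_local_mean(raw_sinogram, window=5, floor=1e-6):
    """Replace non-positive detector readings with the local mean of their
    neighborhood so that the subsequent log transform is defined."""
    raw = np.asarray(raw_sinogram, dtype=float)
    local_mean = uniform_filter(raw, size=window)
    out = raw.copy()
    bad = raw <= 0
    out[bad] = np.maximum(local_mean[bad], floor)   # clamp in case the local mean is also <= 0
    return out

# After conversion, the usual post-log step applies, e.g.
# projections = -np.log(make_positive_by_local_mean(raw) / air_scan)
```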
Formation of the predicted training parameters in the form of a discrete information stream
NASA Astrophysics Data System (ADS)
Smolentseva, T. E.; Sumin, V. I.; Zolnikov, V. K.; Lavlinsky, V. V.
2018-03-01
In this work, the training process is considered in the form of a discrete information stream. At each stage of the process, the portions of training information and the quality of their assimilation are analyzed, and the trainee's individual characteristics and reactions to every portion of information in the corresponding sections are defined. A training control algorithm with a predicted number of control checks of the trainee is considered, which makes it possible to define what control action needs to be created for the trainee. On the basis of this algorithm, a vector of probabilities that the trainee does not know elements of the training information is obtained. As a result of the research conducted, an algorithm for forming the predicted training parameters is developed. The training duration obtained experimentally is compared with the predicted duration, and on this basis a conclusion is drawn on the efficiency of forming the predicted training parameters. A program complex is developed, based on the individual parameter values obtained from experiments on each trainee, that allows individual characteristics to be calculated, ratings to be formed, and changes in the training parameters to be monitored.
Dynamic soft variable structure control of singular systems
NASA Astrophysics Data System (ADS)
Liu, Yunlong; Zhang, Caihong; Gao, Cunchen
2012-08-01
The dynamic soft variable structure control (VSC) of singular systems is discussed in this paper. The definition of soft VSC and the design of its controller modes are given. The stability of singular systems under dynamic soft VSC is analyzed. The dynamic soft variable structure controller is designed, and a concrete algorithm for the dynamic soft VSC is given. The dynamic soft VSC of singular systems, which was developed to intentionally preclude chattering, achieve high regulation rates, and shorten settling times, enhances the dynamic quality of the systems. The feasibility and validity of the proposed strategy are illustrated by a simulation example, and an outlook on its promising further development is presented.
Prudhon, Claudine; de Radiguès, Xavier; Dale, Nancy; Checchi, Francesco
2011-11-09
Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many surveys. We describe an algorithm for analyzing nutritional and mortality survey reports to identify a comprehensive range of errors that may result in sampling, response, or measurement biases and score quality. We apply the algorithm to surveys conducted in Darfur, Sudan. We developed an algorithm based on internationally agreed upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm reproducibility, it was independently applied by three raters on 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. The Intra Class Correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality score and range of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Our study suggests that it is possible to systematically assess quality of surveys and reveals considerable problems with the quality of nutritional and particularly mortality surveys conducted in the Darfur crisis.
2011-01-01
Background: Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many surveys. We describe an algorithm for analyzing nutritional and mortality survey reports to identify a comprehensive range of errors that may result in sampling, response, or measurement biases and score quality. We apply the algorithm to surveys conducted in Darfur, Sudan. Methods: We developed an algorithm based on internationally agreed upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm reproducibility, it was independently applied by three raters on 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. Results: The Intra Class Correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality score and range of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Conclusion: Our study suggests that it is possible to systematically assess quality of surveys and reveals considerable problems with the quality of nutritional and particularly mortality surveys conducted in the Darfur crisis. PMID:22071133
Gotlib Conn, Lesley; Nathens, Avery B; Perrier, Laure; Haas, Barbara; Watamaniuk, Aaron; Daniel Pereira, Diego; Zwaiman, Ashley; da Luz, Luis Teodoro
2018-05-09
Quality improvement (QI) is mandatory in trauma centres but there is no prescription for doing successful QI. Considerable variation in implementation strategies and inconsistent use of evidence-based protocols therefore exist across centres. The quality of reporting on these strategies may limit the transferability of successful initiatives across centres. This systematic review will assess the quality of reporting on guideline, protocol or algorithm implementation within a trauma centre in terms of the Revised Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0). We will search for English language articles published after 2010 in EMBASE, MEDLINE, CINAHL electronic databases and the Cochrane Central Register of Controlled Trials. The database search will be supplemented by searching trial registries and grey literature online. Included studies will evaluate the effectiveness of guideline implementation in terms of change in clinical practice or improvement in patient outcomes. The primary outcome will be a global score reporting the proportion of studies respecting at least 80% of the SQUIRE 2.0 criteria and will be obtained based on the 18 items identified in the SQUIRE 2.0 guidelines. The secondary outcome will be the risk of bias, assessed with the Risk Of Bias In Non-randomised Studies of Interventions tool for observational cohort studies and with the Cochrane Collaboration tool for randomised controlled trials. Meta-analyses will be conducted in randomised controlled trials to estimate the effectiveness of guideline implementation if studies are not heterogeneous. If meta-analyses are conducted, we will combine studies according to the risk of bias (low, moderate or high/unclear) in subgroup analyses. All study titles, abstracts and full-text screening will be completed independently and in duplicate by the review team members. Data extraction and risk of bias assessment will also be done independently and in duplicate. Results will be disseminated through scientific publication and conferences. CRD42018084273.
NASA Astrophysics Data System (ADS)
Smarda, M.; Alexopoulou, E.; Mazioti, A.; Kordolaimi, S.; Ploussi, A.; Priftis, K.; Efstathopoulos, E.
2015-09-01
The purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT on our department's 64-detector row CT scanner using the iDose IR algorithm, with similar imaging settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1=excellent image, 5=non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected the diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions.
MapEdit: solution to continuous raster map creation
NASA Astrophysics Data System (ADS)
Rančić, Dejan; Djordjevi-Kajan, Slobodanka
2003-03-01
The paper describes MapEdit, MS Windows TM software for georeferencing and rectification of scanned paper maps. The software produces continuous raster maps which can be used as a background in geographical information systems. The process of continuous raster map creation using the MapEdit "mosaicking" function is also described, as well as the georeferencing and rectification algorithms used in MapEdit. Our approach to georeferencing and rectification, using four control points and two linear transformations for each scanned map part together with a nearest neighbor resampling method, represents a low-cost, high-speed solution that produces continuous raster maps of satisfactory quality for many purposes (±1 pixel). A quality assessment of several continuous raster maps at different scales created using our software and methodology has been undertaken, and the results are presented in the paper. For the quality control of the produced raster maps we referred to three widely adopted standards: the US Standard for Digital Cartographic Data, the National Standard for Spatial Data Accuracy, and the US National Map Accuracy Standard. The results obtained during the quality assessment process are given in the paper and show that our maps meet all three standards.
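A minimal sketch of the generic georeferencing step: a linear (affine) transform is fitted to ground control points by least squares and applied with nearest-neighbor resampling. MapEdit's specific scheme (four control points and two linear transformations per scanned map part) is not reproduced exactly; the function names and the generic least-squares fit are assumptions for illustration.

```python
import numpy as np

def fit_affine(pixel_xy, map_xy):
    """Least-squares affine transform mapping pixel coordinates to map
    coordinates; `pixel_xy` and `map_xy` are (n, 2) arrays of control points."""
    n = len(pixel_xy)
    A = np.hstack([np.asarray(pixel_xy, float), np.ones((n, 1))])   # [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(map_xy, float), rcond=None)
    return coeffs                                                    # (3, 2): columns give X(x,y), Y(x,y)

def resample_nearest(src, inverse_transform, out_shape):
    """Nearest-neighbor resampling: each output cell looks up the nearest
    source pixel via an inverse (map grid -> source pixel) transform function."""
    out = np.zeros(out_shape, dtype=src.dtype)
    rows, cols = np.indices(out_shape)
    px, py = inverse_transform(cols, rows)
    px = np.clip(np.rint(px).astype(int), 0, src.shape[1] - 1)
    py = np.clip(np.rint(py).astype(int), 0, src.shape[0] - 1)
    out[rows, cols] = src[py, px]
    return out
```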
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To obtain better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be further improved to obtain a much lower bit rate for the LASIS interference pattern, whose special optical characteristics stem from the pushing and sweeping of the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhoods of the encoding blocks of the interference pattern image that use the mean value rule are checked to determine whether they have the same mean value as the block currently being processed. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.
Zombie algorithms: a timesaving remote sensing systems engineering tool
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.; Powell, Dylan C.; Marley, Stephen
2008-08-01
In modern horror fiction, zombies are generally undead corpses brought back from the dead by supernatural or scientific means, and are rarely under anyone's direct control. They typically have very limited intelligence, and hunger for the flesh of the living [1]. Typical spectroradiometric or hyperspectral instruments provide calibrated radiances for a number of remote sensing algorithms. The algorithms typically must meet specified latency and availability requirements while yielding products at the required quality. These systems, whether research, operational, or a hybrid, are typically cost constrained. Complexity of the algorithms can be high, and may evolve and mature over time as sensor characterization changes, product validation occurs, and areas of scientific basis improvement are identified and completed. This suggests the need for a systems engineering process for algorithm maintenance that is agile, cost efficient, repeatable, and predictable. Experience on remote sensing science data systems suggests the benefits of "plug-n-play" concepts of operation. The concept, while intuitively simple, can be challenging to implement in practice. The use of zombie algorithms (empty shells that outwardly resemble the form, fit, and function of a "complete" algorithm without the implemented theoretical basis) gives ground systems advantages equivalent to those obtained by integrating sensor engineering models onto the spacecraft bus. Combined with a mature, repeatable process for incorporating the theoretical basis, or scientific core, into the "head" of the zombie algorithm, along with associated scripting and registration, this approach provides an easy "on ramp" for the rapid and low-risk integration of scientific applications into operational systems.
Albrich, Werner C; Dusemund, Frank; Bucher, Birgit; Meyer, Stefan; Thomann, Robert; Kühn, Felix; Bassetti, Stefano; Sprenger, Martin; Bachli, Esther; Sigrist, Thomas; Schwietert, Martin; Amin, Devendra; Hausfater, Pierre; Carre, Eric; Gaillat, Jacques; Schuetz, Philipp; Regez, Katharina; Bossart, Rita; Schild, Ursula; Mueller, Beat
2012-05-14
In controlled studies, procalcitonin (PCT) has safely and effectively reduced antibiotic drug use for lower respiratory tract infections (LRTIs). However, controlled trial data may not reflect real life. We performed an observational quality surveillance in 14 centers in Switzerland, France, and the United States. Consecutive adults with LRTI presenting to emergency departments or outpatient offices were enrolled and registered on a website, which provided a previously published PCT algorithm for antibiotic guidance. The primary end point was duration of antibiotic therapy within 30 days. Of 1759 patients, 86.4% had a final diagnosis of LRTI (community-acquired pneumonia, 53.7%; acute exacerbation of chronic obstructive pulmonary disease, 17.1%; and bronchitis, 14.4%). Algorithm compliance overall was 68.2%, with differences between diagnoses (bronchitis, 81.0%; AECOPD, 70.1%; and community-acquired pneumonia, 63.7%; P < .001), outpatients (86.1%) and inpatients (65.9%) (P < .001), algorithm-experienced (82.5%) and algorithm-naive (60.1%) centers (P < .001), and countries (Switzerland, 75.8%; France, 73.5%; and the United States, 33.5%; P < .001). After multivariate adjustment, antibiotic therapy duration was significantly shorter if the PCT algorithm was followed compared with when it was overruled (5.9 vs 7.4 days; difference, -1.51 days; 95% CI, -2.04 to -0.98; P < .001). No increase was noted in the risk of the combined adverse outcome end point within 30 days of follow-up when the PCT algorithm was followed regarding withholding antibiotics on hospital admission (adjusted odds ratio, 0.83; 95% CI, 0.44 to 1.55; P = .56) and regarding early cessation of antibiotics (adjusted odds ratio, 0.61; 95% CI, 0.36 to 1.04; P = .07). This study validates previous results from controlled trials in real-life conditions and demonstrates that following a PCT algorithm effectively reduces antibiotic use without increasing the risk of complications. Preexisting differences in antibiotic prescribing affect compliance with antibiotic stewardship efforts. isrctn.org Identifier: ISRCTN40854211.
Development of an Inverse Algorithm for Resonance Inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Wei; Sun, Xin
2012-10-01
Resonance inspection (RI), which employs the natural frequency spectra shift between the good and the anomalous part populations to detect defects, is a non-destructive evaluation (NDE) technique with many advantages such as low inspection cost, high testing speed, and broad applicability to structures with complex geometry compared to other contemporary NDE methods. It has already been widely used in the automobile industry for quality inspections of safety critical parts. Unlike some conventionally used NDE methods, the current RI technology is unable to provide details, i.e. location, dimension, or type, of the flaws in discrepant parts. This limitation severely hinders its widespread application and further development. In this study, an inverse RI algorithm based on a maximum correlation function is proposed to quantify the location and size of flaws in a discrepant part. Dog-bone shaped stainless steel samples with and without controlled flaws are used for algorithm development and validation. The results show that multiple flaws can be accurately pinpointed using the algorithm developed, and that the prediction accuracy decreases with increasing flaw numbers and decreasing distance between flaws.
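The inverse step described above lends itself to a simple illustration. The sketch below assumes a precomputed library of flaw signatures (simulated resonance-frequency shifts for candidate flaw locations and sizes) and picks the candidate whose signature correlates best with the measured shift vector; the paper's actual correlation function and model library are not specified in the abstract, so the names and structure here are illustrative only.

```python
import numpy as np

def locate_flaw(measured_shift, signature_library):
    """Pick the candidate flaw whose precomputed resonance-shift signature
    correlates best with the measured spectral shift.

    measured_shift    : 1-D array of frequency shifts (measured - nominal part)
    signature_library : dict mapping (location, size) -> 1-D signature array
    """
    best_key, best_corr = None, -np.inf
    for key, signature in signature_library.items():
        # normalized cross-correlation between measured and simulated shifts
        corr = np.corrcoef(measured_shift, signature)[0, 1]
        if corr > best_corr:
            best_key, best_corr = key, corr
    return best_key, best_corr
```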
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Algorithmic formulation of control problems in manipulation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1975-01-01
The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in realtime. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front or operator input end of the control algorithms.
Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young
2014-01-01
To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige®), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software.
A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems
Molzahn, Daniel K.; Dorfler, Florian K.; Sandberg, Henrik; ...
2017-07-25
Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. Here, this paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
Infrared image enhancement using H(infinity) bounds for surveillance applications.
Qidwai, Uvais
2008-08-01
In this paper, two algorithms have been presented to enhance the infrared (IR) images. Using the autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not quite suitable for real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement in the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation: minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of results from the M-GEO evolutionary algorithm and the conventional deterministic Levenberg-Marquardt algorithm is also presented.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.
Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is still lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
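For readers who want to reproduce this kind of comparison on their own graphs, the following sketch computes the three stand-alone metrics (modularity, conductance, coverage) and two of the information recovery metrics (adjusted Rand score, normalized mutual information) for a single detected clustering, using networkx and scikit-learn. The helper name and input layout are illustrative and not taken from the paper.

```python
import networkx as nx
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def evaluate_clustering(G, communities, true_label, found_label):
    """communities : list of node sets (the detected clustering)
       true_label, found_label : dicts mapping node -> community id"""
    nodes = sorted(G.nodes())
    truth = [true_label[n] for n in nodes]
    found = [found_label[n] for n in nodes]

    # stand-alone metrics
    modularity = nx.algorithms.community.modularity(G, communities)
    mean_conductance = sum(nx.conductance(G, c) for c in communities) / len(communities)
    coverage = sum(found_label[u] == found_label[v] for u, v in G.edges()) / G.number_of_edges()

    # information-recovery metrics (require ground truth)
    return {"modularity": modularity,
            "mean_conductance": mean_conductance,
            "coverage": coverage,
            "ARI": adjusted_rand_score(truth, found),
            "NMI": normalized_mutual_info_score(truth, found)}
```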
Evaluating Land-Atmosphere Interactions with the North American Soil Moisture Database
NASA Astrophysics Data System (ADS)
Giles, S. M.; Quiring, S. M.; Ford, T.; Chavez, N.; Galvan, J.
2015-12-01
The North American Soil Moisture Database (NASMD) is a high-quality observational soil moisture database that was developed to study land-atmosphere interactions. It includes over 1,800 monitoring stations in the United States, Canada and Mexico. Soil moisture data are collected from multiple sources, quality controlled and integrated into an online database (soilmoisture.tamu.edu). The period of record varies substantially and only a few of these stations have an observation record extending back into the 1990s. Daily soil moisture observations have been quality controlled using the North American Soil Moisture Database QAQC algorithm. The database is designed to facilitate observationally-driven investigations of land-atmosphere interactions, validation of the accuracy of soil moisture simulations in global land surface models, satellite calibration/validation for SMOS and SMAP, and an improved understanding of how soil moisture influences climate on seasonal to interannual timescales. This paper provides some examples of how the NASMD has been utilized to enhance understanding of land-atmosphere interactions in the U.S. Great Plains.
An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.
Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao
2016-09-01
The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
Cai, Yong; Li, Xiwen; Li, Mei; Chen, Xiaojia; Hu, Hao; Ni, Jingyun; Wang, Yitao
2015-01-01
Chemical fingerprinting is currently a widely used tool that enables rapid and accurate quality evaluation of Traditional Chinese Medicine (TCM). However, chemical fingerprints are not amenable to information storage, recognition, and retrieval, which limits their use in Chinese medicine traceability. In this study, samples of three kinds of Chinese medicines were randomly selected and chemical fingerprints were then constructed by using high performance liquid chromatography. Based on the chemical data, the process of converting the TCM chemical fingerprint into a two-dimensional (2D) code is presented; preprocessing and filtering algorithms are also proposed to standardize the large amount of original raw data. To determine which type of 2D code is suitable for storing chemical fingerprint data, currently popular types of 2D codes are analyzed and compared. Results show that QR Code is suitable for recording the TCM chemical fingerprint. The fingerprint information of TCM can thus be converted into a data format that can be stored as a 2D code for traceability and quality control.
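A minimal sketch of the final encoding step is given below, assuming the preprocessed fingerprint has been reduced to a list of (retention time, relative peak area) pairs; it serializes the data and stores it as a QR Code with the third-party Python `qrcode` package. The serialization format and function name are illustrative and not the paper's specification.

```python
import json
import qrcode  # third-party package: pip install qrcode[pil]

def fingerprint_to_qr(sample_id, peaks, filename):
    """Serialize a preprocessed HPLC fingerprint and store it as a QR Code.

    peaks : list of (retention_time_min, relative_peak_area) tuples,
            already filtered/standardized by the preprocessing step.
    """
    payload = json.dumps({"id": sample_id,
                          "peaks": [[round(t, 2), round(a, 4)] for t, a in peaks]})
    img = qrcode.make(payload)      # returns a PIL image of the QR Code
    img.save(filename)
    return payload
```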
Landsat Thematic Mapper monitoring of turbid inland water quality
NASA Technical Reports Server (NTRS)
Lathrop, Richard G., Jr.
1992-01-01
This study reports on an investigation of water quality calibration algorithms under turbid inland water conditions using Landsat Thematic Mapper (TM) multispectral digital data. TM data and water quality observations (total suspended solids and Secchi disk depth) were obtained near-simultaneously and related using linear regression techniques. The relationships between reflectance and water quality for Green Bay and Lake Michigan were compared with results for Yellowstone and Jackson Lakes, Wyoming. Results show similarities in the water quality-reflectance relationships; however, the algorithms derived for Green Bay - Lake Michigan cannot be extrapolated to Yellowstone and Jackson Lake conditions.
A Robustly Stabilizing Model Predictive Control Algorithm
NASA Technical Reports Server (NTRS)
Ackmece, A. Behcet; Carson, John M., III
2007-01-01
A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.
Rapid evaluation and quality control of next generation sequencing data with FaQCs.
Lo, Chien-Chi; Chain, Patrick S G
2014-11-19
Next generation sequencing (NGS) technologies that parallelize the sequencing process and produce thousands to millions, or even hundreds of millions of sequences in a single sequencing run, have revolutionized genomic and genetic research. Because of the vagaries of any platform's sequencing chemistry, the experimental processing, machine failure, and so on, the quality of sequencing reads is never perfect, and often declines as the read is extended. These errors invariably affect downstream analysis/application and should therefore be identified early on to mitigate any unforeseen effects. Here we present a novel FastQ Quality Control Software (FaQCs) that can rapidly process large volumes of data, and which improves upon previous solutions to monitor the quality and remove poor quality data from sequencing runs. Both the speed of processing and the memory footprint of storing all required information have been optimized via algorithmic and parallel processing solutions. The trimmed output compared side-by-side with the original data is part of the automated PDF output. We show how this tool can help data analysis by providing a few examples, including an increased percentage of reads recruited to references, improved single nucleotide polymorphism identification as well as de novo sequence assembly metrics. FaQCs combines several features of currently available applications into a single, user-friendly process, and includes additional unique capabilities such as filtering the PhiX control sequences, conversion of FASTQ formats, and multi-threading. The original data and trimmed summaries are reported within a variety of graphics and reports, providing a simple way to do data quality control and assurance.
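As a rough illustration of the kind of per-read decision such a pipeline makes, the sketch below trims low-quality bases from the 3' end of a read and then filters on length and average quality. This is a deliberately simplified stand-in, not the FaQCs algorithm itself; thresholds and the function name are illustrative.

```python
def trim_read(seq, quals, q_cut=20, min_len=50, min_avg_q=25):
    """Trim low-quality bases from the 3' end and filter the read.

    seq   : nucleotide string
    quals : list of Phred quality scores, one per base
    Returns the trimmed (seq, quals) or None if the read should be discarded.
    Illustrative simplification only, not the FaQCs algorithm itself.
    """
    end = len(seq)
    while end > 0 and quals[end - 1] < q_cut:   # chop trailing low-quality bases
        end -= 1
    seq, quals = seq[:end], quals[:end]

    if len(seq) < min_len:                      # too short after trimming
        return None
    if sum(quals) / len(quals) < min_avg_q:     # overall quality too low
        return None
    return seq, quals
```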
An image-guided tool to prevent hospital acquired infections
NASA Astrophysics Data System (ADS)
Nagy, Melinda; Szilágyi, László; Lehotsky, Ákos; Haidegger, Tamás; Benyó, Balázs
2011-03-01
Hospital Acquired Infections (HAI) represent the fourth leading cause of death in the United States, and claim hundreds of thousands of lives annually in the rest of the world. This paper presents a novel low-cost mobile device, called Stery-Hand, that helps to avoid HAI by improving hand hygiene control through providing an objective evaluation of the quality of hand washing. The use of the system is intuitive: having performed hand washing with a soap mixed with UV reflective powder, the skin appears brighter under UV illumination on the disinfected surfaces. Washed hands are inserted into the Stery-Hand box, where a digital image is taken under UV lighting. Automated image processing algorithms are employed in three steps to evaluate the quality of hand washing. First, the contour of the hand is extracted in order to distinguish the hand from the background. Next, a semi-supervised clustering algorithm classifies the pixels of the hand into three groups, corresponding to clean, partially clean and dirty areas. The clustering algorithm is derived from the histogram-based quick fuzzy c-means approach, using a priori information extracted from reference images evaluated by experts. Finally, the identified areas are adjusted to suppress shading effects, and quantified in order to give a verdict on hand disinfection quality. The proposed methodology was validated through tests using hundreds of images recorded in our laboratory. The proposed system was found robust and accurate, producing correct estimations for over 98% of the test cases. Stery-Hand may be employed in general practice, and it may also serve educational purposes.
The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2003 update.
Miller, Alexander L; Hall, Catherine S; Buchanan, Robert W; Buckley, Peter F; Chiles, John A; Conley, Robert R; Crismon, M Lynn; Ereshefsky, Larry; Essock, Susan M; Finnerty, Molly; Marder, Stephen R; Miller, Del D; McEvoy, Joseph P; Rush, A John; Saeed, Sy A; Schooler, Nina R; Shon, Steven P; Stroup, Scott; Tarin-Godoy, Bernardo
2004-04-01
The Texas Medication Algorithm Project (TMAP) has been a public-academic collaboration in which guidelines for medication treatment of schizophrenia, bipolar disorder, and major depressive disorder were used in selected public outpatient clinics in Texas. Subsequently, these algorithms were implemented throughout Texas and are being used in other states. Guidelines require updating when significant new evidence emerges; the antipsychotic algorithm for schizophrenia was last updated in 1999. This article reports the recommendations developed in 2002 and 2003 by a group of experts, clinicians, and administrators. A conference in January 2002 began the update process. Before the conference, experts in the pharmacologic treatment of schizophrenia, clinicians, and administrators reviewed literature topics and prepared presentations. Topics included ziprasidone's inclusion in the algorithm, the number of antipsychotics tried before clozapine, and the role of first generation antipsychotics. Data were rated according to Agency for Healthcare Research and Quality criteria. After discussing the presentations, conference attendees arrived at consensus recommendations. Consideration of aripiprazole's inclusion was subsequently handled by electronic communications. The antipsychotic algorithm for schizophrenia was updated to include ziprasidone and aripiprazole among the first-line agents. Relative to the prior algorithm, the number of stages before clozapine was reduced. First generation antipsychotics were included but not as first-line choices. For patients refusing or not responding to clozapine and clozapine augmentation, preference was given to trying monotherapy with another antipsychotic before resorting to antipsychotic combinations. Consensus on algorithm revisions was achieved, but only further well-controlled research will answer many key questions about sequence and type of medication treatments of schizophrenia.
Emergency ultrasound-based algorithms for diagnosing blunt abdominal trauma.
Stengel, Dirk; Bauwens, Kai; Rademacher, Grit; Ekkernkamp, Axel; Güthoff, Claas
2013-07-31
Ultrasonography is regarded as the tool of choice for early diagnostic investigations in patients with suspected blunt abdominal trauma. Although its sensitivity is too low for definite exclusion of abdominal organ injury, proponents of ultrasound argue that ultrasound-based clinical pathways enhance the speed of primary trauma assessment, reduce the number of computed tomography scans and cut costs. To assess the effects of trauma algorithms that include ultrasound examinations in patients with suspected blunt abdominal trauma. We searched the Cochrane Injuries Group's Specialised Register, CENTRAL (The Cochrane Library), MEDLINE (OvidSP), EMBASE (OvidSP), CINAHL (EBSCO), publishers' databases, controlled trials registers and the Internet. Bibliographies of identified articles and conference abstracts were searched for further eligible studies. Trial authors were contacted for further information and individual patient data. The searches were updated in February 2013. We included randomised controlled trials (RCTs) and quasi-randomised trials (qRCTs) of patients with blunt torso, abdominal or multiple trauma undergoing diagnostic investigations for abdominal organ injury, comparing diagnostic algorithms comprising emergency ultrasonography (US) with diagnostic algorithms without ultrasound examinations (for example, primary computed tomography [CT] or diagnostic peritoneal lavage [DPL]). Outcomes of interest were mortality, use of CT and DPL, cost-effectiveness, laparotomy and negative laparotomy rates, delayed diagnoses, and quality of life. Two authors independently selected trials for inclusion, assessed methodological quality and extracted data. Where possible, data were pooled and relative risks (RRs), risk differences (RDs) and weighted mean differences, each with 95% confidence intervals (CIs), were calculated by fixed- or random-effects modelling, as appropriate. We identified four studies meeting our inclusion criteria. Overall, trials were of moderate methodological quality. Few trial authors responded to our written inquiries seeking to resolve controversial issues and to obtain individual patient data. We pooled mortality data from three trials involving 1254 patients; relative risk in favour of the US arm was 1.00 (95% CI 0.50 to 2.00). US-based pathways significantly reduced the number of CT scans (random-effects RD -0.52, 95% CI -0.83 to -0.21), but the meaning of this result is unclear. Given the low sensitivity of ultrasound, the reduction in CT scans may either translate to a number needed to treat or number needed to harm of two. There is currently insufficient evidence from RCTs to justify promotion of ultrasound-based clinical pathways in diagnosing patients with suspected blunt abdominal trauma.
A deblocking algorithm based on color psychology for display quality enhancement
NASA Astrophysics Data System (ADS)
Yeh, Chia-Hung; Tseng, Wen-Yu; Huang, Kai-Lin
2012-12-01
This article proposes a post-processing deblocking filter to reduce blocking effects. The proposed algorithm detects blocking effects by fusing the results of Sobel edge detector and wavelet-based edge detector. The filtering stage provides four filter modes to eliminate blocking effects at different color regions according to human color vision and color psychology analysis. Experimental results show that the proposed algorithm has better subjective and objective qualities for H.264/AVC reconstructed videos when compared to several existing methods.
Reznitsky, P A; Yartsev, P A; Shavrina, N V
To assess the effectiveness of minimally invasive and laparoscopic technologies in the treatment of inflammatory complications of colonic diverticular disease. The study included 150 patients who were divided into control and main groups. The workup included ultrasound, X-ray examination and abdominal computerized tomography. In the main group, a standardized treatment algorithm including minimally invasive and laparoscopic technologies was used. In the main group, 79 patients underwent conservative treatment, minimally invasive procedures (ultrasound-assisted percutaneous drainage of abscesses) or laparoscopic surgery, which was successful in 78 (98.7%) patients. The standardized algorithm reduced treatment time, the incidence of postoperative complications, mortality and the risk of recurrent inflammatory complications of colonic diverticular disease. Postoperative quality of life was also improved.
Analysis of methods of processing of expert information by optimization of administrative decisions
NASA Astrophysics Data System (ADS)
Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.
2018-03-01
This paper proposes a methodology for defining measures in the expert estimation of the quality and reliability of application-oriented software products. Methods for aggregating expert estimates are described using the example of a collective choice among candidate instrumental control projects in the development of special-purpose software for institutional needs. Results from the operation of a dialogue-based decision support system are presented, together with an algorithm for solving the choice problem based on the analytic hierarchy process. The developed algorithm can be applied in the development of expert systems for a wide class of problems involving multicriteria choice.
Low-complexity camera digital signal imaging for video document projection system
NASA Astrophysics Data System (ADS)
Hsia, Shih-Chang; Tsai, Po-Shien
2011-04-01
We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
Dynamic Online Bandwidth Adjustment Scheme Based on Kalai-Smorodinsky Bargaining Solution
NASA Astrophysics Data System (ADS)
Kim, Sungwook
Virtual Private Network (VPN) is a cost effective method to provide integrated multimedia services. Usually heterogeneous multimedia data can be categorized into different types according to the required Quality of Service (QoS). Therefore, VPN should support the prioritization among different services. In order to support multiple types of services with different QoS requirements, efficient bandwidth management algorithms are important issues. In this paper, I employ the Kalai-Smorodinsky Bargaining Solution (KSBS) for the development of an adaptive bandwidth adjustment algorithm. In addition, to effectively manage the bandwidth in VPNs, the proposed control paradigm is realized in a dynamic online approach, which is practical for real network operations. The simulations show that the proposed scheme can significantly improve the system performances.
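Since the paper's utility model is not given in the abstract, the following sketch assumes two service classes with linear utilities sharing a fixed VPN capacity; it computes the Kalai-Smorodinsky allocation, i.e., the efficient split at which both classes realize the same fraction of their maximum achievable bandwidth gain. All parameter names and values are illustrative.

```python
def ksbs_bandwidth(capacity, min_bw, demand):
    """Kalai-Smorodinsky bandwidth split for two service classes.

    capacity : total VPN bandwidth
    min_bw   : (m1, m2) guaranteed minimum per class (the disagreement point)
    demand   : (D1, D2) bandwidth each class could usefully consume

    Each class i would ideally receive B_i = min(D_i, capacity - m_j).
    The KSBS picks the efficient allocation at which both classes obtain
    the same fraction t of their maximum possible gain (B_i - m_i).
    """
    (m1, m2), (d1, d2) = min_bw, demand
    B1 = min(d1, capacity - m2)          # ideal allocation for class 1
    B2 = min(d2, capacity - m1)          # ideal allocation for class 2
    slack = (B1 - m1) + (B2 - m2)
    if slack <= 0:                        # nothing to bargain over
        return m1, m2
    t = min(1.0, (capacity - m1 - m2) / slack)   # equal fraction of max gain
    return m1 + t * (B1 - m1), m2 + t * (B2 - m2)

# example: 100 Mb/s link, minimums (10, 20), demands (80, 70) -> about (50.8, 49.2)
print(ksbs_bandwidth(100, (10, 20), (80, 70)))
```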
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
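The abstract does not state the bound explicitly; under the standard Lagrangian interpretation of the PH weights $w_s^k$ (which satisfy $\sum_s p_s w_s^k = 0$), a per-iteration lower bound of the kind described can be sketched as below for a two-stage problem, where each scenario subproblem is solved with the linear dual term but without the quadratic proximal term. Symbols are illustrative, not the paper's notation.

```latex
% Sketch of the per-iteration lower bound under the Lagrangian interpretation
% of the PH weights w_s^k (with \sum_s p_s w_s^k = 0):
\begin{equation*}
  D(w^k) \;=\; \sum_{s \in S} p_s
      \min_{(x_s,\, y_s) \in Q_s}
      \left[ c^\top x_s + f_s^\top y_s + (w_s^k)^\top x_s \right]
  \;\le\; z^{*},
\end{equation*}
% where Q_s is the feasible set of scenario s, x_s the scenario copy of the
% first-stage variables, y_s the second-stage variables, p_s the scenario
% probability, and z^{*} the optimal value of the stochastic program.
```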
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for the non-ambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
A solution quality assessment method for swarm intelligence optimization algorithms.
Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua
2014-01-01
Nowadays, swarm intelligence optimization has become an important optimization tool that is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm on practical problems; this greatly limits application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Finally, using relevant statistical knowledge, the evaluation result can be obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
Identify High-Quality Protein Structural Models by Enhanced K-Means.
Wu, Hongjie; Li, Haiou; Jiang, Min; Chen, Cheng; Lv, Qiang; Wu, Chuang
2017-01-01
Background. One critical issue in protein three-dimensional structure prediction using either ab initio or comparative modeling involves identification of high-quality protein structural models from generated decoys. Currently, clustering algorithms are widely used to identify near-native models; however, their performance is dependent upon different conformational decoys, and, for some algorithms, the accuracy declines when the decoy population increases. Results. Here, we proposed two enhanced K-means clustering algorithms capable of robustly identifying high-quality protein structural models. The first one employs the clustering algorithm SPICKER to determine the initial centroids for basic K-means clustering (SK-means), whereas the other employs squared distance to optimize the initial centroids (K-means++). Our results showed that SK-means and K-means++ were more robust as compared with SPICKER alone, detecting 33 (59%) and 42 (75%) of 56 targets, respectively, with template modeling scores better than or equal to those of SPICKER. Conclusions. We observed that the classic K-means algorithm showed a similar performance to that of SPICKER, which is a widely used algorithm for protein-structure identification. Both SK-means and K-means++ demonstrated substantial improvements relative to results from SPICKER and classical K-means.
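The squared-distance seeding that distinguishes K-means++ can be sketched compactly; the version below operates on generic feature vectors (the paper applies the idea to structural decoys), and the function name and random-number handling are illustrative.

```python
import numpy as np

def kmeans_pp_init(X, k, rng=np.random.default_rng(0)):
    """K-means++ seeding: pick initial centroids with probability proportional
    to the squared distance from the nearest centroid chosen so far.

    X : (n_samples, n_features) array of points (e.g., decoy feature vectors)
    k : number of clusters
    """
    n = X.shape[0]
    centroids = [X[rng.integers(n)]]            # first centroid: uniform at random
    for _ in range(1, k):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        probs = d2 / d2.sum()                   # squared-distance weighting
        centroids.append(X[rng.choice(n, p=probs)])
    return np.array(centroids)
```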
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling which adds one control point at a time, and equi-length sequences which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence. The final value of the cost function is reduced up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments—slightly more homogeneous target doses and better sparing of the organs at risk.
NASA Astrophysics Data System (ADS)
Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon
2015-05-01
There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field of view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim at demonstrating that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms; two of which are gradient-based, concurrent methods using first- and second-order derivatives, respectively; and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach when computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control to the system. SRAM stores the image data, and DSP controls the image acquisition system through the SCCB (Omni Vision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use the edge-oriented adaptive interpolation algorithm for the edge pixels and bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method can get high processing speed, decrease the computational complexity, and effectively preserve the image edges.
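The interpolation rule described above (edge-oriented at edge pixels, bilinear elsewhere) can be sketched as follows for the green channel at a red or blue site of an RGGB Bayer pattern; the gradient threshold and layout assumptions are illustrative and not taken from the paper's hardware design.

```python
import numpy as np

def interp_green(raw, y, x, edge_thresh=30):
    """Estimate the green value at a red/blue site of an RGGB Bayer image.

    Edge pixels: interpolate along the direction of the smaller gradient
    (edge-oriented).  Non-edge pixels: plain bilinear average of the four
    green neighbours.  (y, x) must be an interior pixel.
    """
    up, down = raw[y - 1, x], raw[y + 1, x]
    left, right = raw[y, x - 1], raw[y, x + 1]
    grad_v = abs(int(up) - int(down))        # vertical green gradient
    grad_h = abs(int(left) - int(right))     # horizontal green gradient

    if max(grad_v, grad_h) > edge_thresh:    # treat as an edge pixel
        if grad_h < grad_v:                  # edge runs horizontally
            return (int(left) + int(right)) // 2
        return (int(up) + int(down)) // 2
    return (int(up) + int(down) + int(left) + int(right)) // 4  # bilinear
```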
Verret, Lucie; Couturier, Justine; Rozon, Andréanne; Saudrais-Janecek, Sarah; St-Onge, Amélie; Nguyen, Angela; Basmadjian, Arsène; Tremblay, Simon; Brouillette, Denis; de Denus, Simon
2012-10-01
To evaluate the impact of a pharmacist-led warfarin patient self-management program on quality of life and anticoagulation control compared with management in a physician-led specialized anticoagulation clinic. Prospective, randomized, controlled, open-label trial. Tertiary care academic medical center. A total of 114 patients aged 18-75 years who were followed at a specialized anticoagulation clinic, had received warfarin for at least 6 months, and were expected to continue warfarin for a minimum of 4 months. All patients attended an educational session on anticoagulation provided by a pharmacist. Patients randomized to the self-management group (58 patients) also received practical training to use the CoaguChek XS device and a self-management dosing algorithm. Patients in the control group (56 patients) continued to undergo standard management at the anticoagulation clinic. Patients completed a validated quality-of-life questionnaire and the validated Oral Anticoagulation Knowledge test at the beginning and end of the study. The quality of anticoagulation control was evaluated by using the time spent in therapeutic range. After 4 months of follow-up, a significant improvement in the self-management group was observed compared with the control group in four of the five quality-of-life topics (p<0.05). Improvements in knowledge were observed in both groups after the training session and persisted after 4 months (p<0.05 for all). The time spent in the therapeutic range (80.0% in the self-management group vs 75% in the control group, p=0.79) and in the extended therapeutic range ([target international normalized ratio ± 0.3] 93.2% in the self-management group vs 91.1% in the control group, p=0.30) were similar between groups. A self-management warfarin program led by pharmacists resulted in significant improvement in the quality of life of patients receiving warfarin therapy as well as a reduction in the time required for anticoagulation monitoring, while maintaining a level of anticoagulation control similar to a high-quality specialized anticoagulation clinic. © 2012 Pharmacotherapy Publications, Inc.
Development of a solar-powered electric bicycle in bike sharing transportation system
NASA Astrophysics Data System (ADS)
Adhisuwignjo, S.; Siradjuddin, I.; Rifa'i, M.; Putri, R. I.
2017-06-01
The increasing mobility has directly led to deteriorating traffic conditions, extra fuel consumption, increasing automobile exhaust emissions, air pollution and a lower quality of life. Apart from being a clean, cheap and equitable mode of transport for short-distance journeys, cycling can potentially offer solutions to the problem of urban mobility. Many cities have tried promoting cycling, particularly through the implementation of bike sharing. The fourth generation of bike-sharing systems promotes the use of electric bicycles, which are considered a clean-technology implementation. Utilization of solar power is probably one of the key developments in the fourth-generation bike-sharing system and will become the standard in bike-sharing systems in the future. Electric bikes use batteries as a source of energy, so they require a battery charger system powered by solar cells. This research aims to design and implement an electric bicycle battery charging system powered by solar energy and controlled by a fuzzy logic algorithm. The study was conducted by means of an experimental method which includes the design, manufacture and testing of the controller system. The designed fuzzy algorithm has been stored in the EEPROM of an ATmega8535 microcontroller. The charging current was set at 1.2 A and the full-charge battery voltage was observed to be 40 V. The results showed that the fuzzy logic controller was able to maintain the charging current of 1.2 A with an error rate of less than 5% around the set point. Charging the electric bike's lead-acid batteries from empty to fully charged took 5 hours. In conclusion, the developed solar-powered electric bicycle controlled by a fuzzy logic controller keeps the battery charging current stable. This shows that the fuzzy algorithm can be used as a controller in the charging process for a solar electric bicycle.
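The abstract does not give the rule base, so the sketch below is only a minimal Mamdani-style stand-in: triangular memberships on the current error produce a small duty-cycle correction that pushes the charging current toward the 1.2 A set point. Membership breakpoints, output gains and function names are illustrative.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_duty_step(current, setpoint=1.2):
    """Return a small duty-cycle correction that drives the charging
    current toward the set point (illustrative rule base, not the paper's)."""
    e = setpoint - current                       # positive: current too low
    neg = tri(e, -0.6, -0.3, 0.0)                # error negative -> decrease duty
    zero = tri(e, -0.1, 0.0, 0.1)                # error near zero -> hold
    pos = tri(e, 0.0, 0.3, 0.6)                  # error positive -> increase duty
    # weighted-average (centroid-like) defuzzification of the three rules
    num = neg * (-0.02) + zero * 0.0 + pos * (+0.02)
    den = neg + zero + pos
    return num / den if den > 0 else 0.0

# example: measured current 1.05 A -> controller asks for a slightly higher duty
print(fuzzy_duty_step(1.05))
```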
Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs
NASA Astrophysics Data System (ADS)
Buder, Maximilian
2012-06-01
This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications. Typical tasks of the proposed system are to assist in obstacle avoidance, SLAM and path planning. Mobile robots pose strong requirements on the size, energy consumption, reliability and output quality of the image matching subsystem. Currently available systems either rely on active sensors or on local stereo image matching algorithms. The former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are only achieved by an iterative approach using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts have been made to reach real-time performance with global methods by simplifying the routines, but the resulting depth maps are in the end only comparable to those of local methods. The Semi-Global-Matching algorithm was proposed earlier and shows both very good image matching results and relatively simple operations. A memory-efficient variant of the Semi-Global-Matching algorithm is reviewed and adapted for an implementation based on reconfigurable hardware. The implementation is suitable for real-time execution in the field of robotics. It is shown that the modified version of the efficient Semi-Global-Matching method delivers results equivalent to the original algorithm on the Middlebury dataset. The system has proven to be capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to a higher image resolution, 1024×1024-sized stereo frames may be processed with the same hardware at 10 fps with the disparity resolution settings unchanged. A mobile system that covers preprocessing, matching and interfacing operations is also presented.
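For reference, the core Semi-Global-Matching recurrence that such a system aggregates along several image paths can be sketched for a single left-to-right path as below; the memory-efficient FPGA variant discussed in the paper changes the scheduling and storage of these values, not the recurrence itself, and the penalty values here are illustrative.

```python
import numpy as np

def aggregate_left_to_right(cost, P1=10, P2=120):
    """Aggregate per-pixel matching costs along one SGM path (left to right).

    cost : (H, W, D) array of pixel-wise matching costs over D disparities
    Implements  L(p,d) = C(p,d) + min( L(p-1,d),
                                       L(p-1,d-1)+P1, L(p-1,d+1)+P1,
                                       min_k L(p-1,k)+P2 ) - min_k L(p-1,k)
    """
    H, W, D = cost.shape
    L = np.empty_like(cost, dtype=np.float64)
    L[:, 0, :] = cost[:, 0, :]                          # first column: raw cost
    for x in range(1, W):
        prev = L[:, x - 1, :]
        prev_min = prev.min(axis=1, keepdims=True)      # min_k L(p-1,k)
        shift_m = np.roll(prev, 1, axis=1);  shift_m[:, 0] = np.inf   # d-1 term
        shift_p = np.roll(prev, -1, axis=1); shift_p[:, -1] = np.inf  # d+1 term
        best = np.minimum.reduce([prev, shift_m + P1, shift_p + P1,
                                  np.broadcast_to(prev_min + P2, prev.shape)])
        L[:, x, :] = cost[:, x, :] + best - prev_min
    return L
```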
A generalised significance test for individual communities in networks.
Kojaku, Sadamori; Masuda, Naoki
2018-05-09
Many empirical networks have community structure, in which nodes are densely interconnected within each community (i.e., a group of nodes) and sparsely across different communities. Like other local and meso-scale structure of networks, communities are generally heterogeneous in various aspects such as the size, density of edges, connectivity to other communities and significance. In the present study, we propose a method to statistically test the significance of individual communities in a given network. Compared to the previous methods, the present algorithm is unique in that it accepts different community-detection algorithms and the corresponding quality function for single communities. The present method requires that a quality of each community can be quantified and that community detection is performed as optimisation of such a quality function summed over the communities. Various community detection algorithms including modularity maximisation and graph partitioning meet this criterion. Our method estimates a distribution of the quality function for randomised networks to calculate a likelihood of each community in the given network. We illustrate our algorithm by synthetic and empirical networks.
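The general recipe, as described, can be sketched as follows: estimate the null distribution of a single-community quality score on degree-preserving randomizations of the network and convert it into an empirical p-value. The specific quality function and null model used in the paper may differ; the ones below (a conductance-based score and double edge swaps) are illustrative.

```python
import networkx as nx

def community_pvalue(G, community, quality, n_rand=200, seed=0):
    """Empirical significance of one community under a degree-preserving null.

    quality : function (graph, node_set) -> score, larger = better community
    """
    observed = quality(G, community)
    count = 0
    for i in range(n_rand):
        R = G.copy()
        # degree-preserving randomization via repeated double edge swaps
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges(), seed=seed + i)
        if quality(R, community) >= observed:
            count += 1
    return (count + 1) / (n_rand + 1)        # empirical one-sided p-value

# example quality score: how much denser the community is than its boundary
density_gap = lambda g, c: 1.0 - nx.conductance(g, c)
```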
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems
Huang, Shuqiang; Tao, Ming
2017-01-01
Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the addition of an adaptive opposition-based search and the ability to adjust parameters dynamically, the algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms. PMID:28117735
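The pairwise competition at the heart of CSO can be sketched compactly: in each generation particles are paired at random, and only the loser of each pair is updated, learning from the winner and from the swarm mean. The adaptive opposition-based extension described in the paper is not shown; parameter values and names are illustrative.

```python
import numpy as np

def cso_step(X, V, fitness, phi=0.1, rng=np.random.default_rng(1)):
    """One generation of the competitive swarm optimizer (minimization).

    X, V    : (n, d) position and velocity arrays, n even
    fitness : callable mapping a (d,) position to a scalar cost
    Only the loser of each random pairing is updated; winners pass through.
    """
    n, d = X.shape
    xbar = X.mean(axis=0)                       # swarm mean position
    order = rng.permutation(n)
    for a, b in zip(order[::2], order[1::2]):
        w, l = (a, b) if fitness(X[a]) <= fitness(X[b]) else (b, a)
        r1, r2, r3 = rng.random((3, d))
        V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (xbar - X[l])
        X[l] = X[l] + V[l]
    return X, V
```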
Harmonisation Initiatives of Copernicus Data Quality Control
NASA Astrophysics Data System (ADS)
Vescovi, F. D.; Lankester, T.; Coleman, E.; Ottavianelli, G.
2015-04-01
The Copernicus Space Component Data Access system (CSCDA) incorporates data contributions from a wide range of satellite missions. Through EO data handling and distribution, CSCDA serves a set of Copernicus Services related to Land, Marine and Atmosphere Monitoring, Emergency Management and Security and Climate Change. The quality of the delivered EO products is the responsibility of each contributing mission, and the Copernicus data Quality Control (CQC) service supports and complements such data quality control activities. The mission of the CQC is to provide a service of quality assessment on the provided imagery, to support the investigation related to product quality anomalies, and to guarantee harmonisation and traceability of the quality information. In terms of product quality control, the CQC carries out analysis of representative sample products for each contributing mission as well as coordinating data quality investigation related to issues found or raised by Copernicus users. Results from the product analysis are systematically collected and the derived quality reports stored in a searchable database. The CQC service can be seen as a privileged focal point with unique comparison capacities over the data providers. The comparison among products from different missions suggests the need for a strong, common effort of harmonisation. Technical terms, definitions, metadata, file formats, processing levels, algorithms, cal/val procedures etc. are far from being homogeneous, and this may generate inconsistencies and confusion among users of EO data. The CSCDA CQC team plays a significant role in promoting harmonisation initiatives across the numerous contributing missions, so that a common effort can achieve optimal complementarity and compatibility among the EO data from multiple data providers. This effort is done in coordination with important initiatives already working towards these goals (e.g. INSPIRE directive, CEOS initiatives, OGC standards, QA4EO etc.). This paper describes the main actions being undertaken by CQC to encourage harmonisation among space-based EO systems currently in service.
NASA Astrophysics Data System (ADS)
Cipriani, L.; Fantini, F.; Bertacchi, S.
2014-06-01
Image-based modelling tools based on SfM algorithms have gained great popularity since several software houses provided applications able to produce 3D textured models easily and automatically. The aim of this paper is to point out the importance of controlling the model parameterization process, considering that the automatic solutions included in these modelling tools can produce poor results in terms of texture utilization. In order to achieve better-quality textured models from image-based modelling applications, this research presents a series of practical strategies aimed at providing a better balance between the geometric resolution of models from passive sensors and their corresponding (u,v) map reference systems. This aspect is essential for achieving a high-quality 3D representation, since "apparent colour" is a fundamental aspect in the field of Cultural Heritage documentation. Complex meshes without native parameterization have to be "flattened" or "unwrapped" in the (u,v) parameter space, with the main objective of being mapped with a single image. This result can be obtained by using two different strategies: the former automatic and fast, the latter manual and time-consuming. Reverse modelling applications provide automatic solutions based on splitting the models by means of different algorithms, producing a sort of "atlas" of the original model in the parameter space that is in many instances not adequate and negatively affects the overall quality of the representation. Using different solutions in synergy, ranging from semantic-aware modelling techniques to quad-dominant meshes achieved using retopology tools, it is possible to obtain complete control of the parameterization process.
Subjective comparison and evaluation of speech enhancement algorithms
Hu, Yi; Loizou, Philipos C.
2007-01-01
Making meaningful comparisons between the performance of the various speech enhancement algorithms proposed over the years has been elusive due to the lack of a common speech database, differences in the types of noise used and differences in the testing methodology. To facilitate such comparisons, we report on the development of a noisy speech corpus suitable for the evaluation of speech enhancement algorithms. This corpus is subsequently used for the subjective evaluation of 13 speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model based and Wiener-type algorithms. The subjective evaluation was performed by Dynastat, Inc. using the ITU-T P.835 methodology designed to evaluate speech quality along three dimensions: signal distortion, noise distortion and overall quality. This paper reports the results of the subjective tests. PMID:18046463
NASA Astrophysics Data System (ADS)
Kotelnikov, E. V.; Milov, V. R.
2018-05-01
Rule-based learning algorithms have greater transparency and are easier to interpret than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively for descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. The article compares the quality of solutions to binary and multiclass classification problems based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees) and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5; however, the latter two generate more compact rule sets.
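As an illustrative sketch only (not the authors' experimental setup), the Python snippet below runs a CART decision tree, used here as a rough stand-in for C4.5, against a majority-class baseline on one UCI dataset via cross-validation; Ripper and In-Close are not available in scikit-learn and would require external rule-learning libraries. The dataset choice and hyperparameters are assumptions.

```python
# Minimal sketch: cross-validated comparison of a decision tree (C4.5-like)
# against a trivial baseline on a UCI dataset shipped with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.dummy import DummyClassifier

X, y = load_breast_cancer(return_X_y=True)     # binary classification task

tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5,
                              random_state=0)
baseline = DummyClassifier(strategy="most_frequent")

for name, clf in [("decision tree (C4.5-like)", tree),
                  ("majority-class baseline", baseline)]:
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")

# Rule-set compactness could be compared by counting leaves / extracted rules,
# e.g. tree.fit(X, y); print(tree.get_n_leaves())
```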
NASA Astrophysics Data System (ADS)
Lubey, D.; Scheeres, D.
Tracking objects in Earth orbit is fraught with complications. This is due to the large population of orbiting spacecraft and debris that continues to grow, passive (i.e. no direct communication) and data-sparse observations, and the presence of maneuvers and dynamics mismodeling. Accurate orbit determination in this environment requires an algorithm to capture both a system's state and its state dynamics in order to account for mismodelings. Previous studies by the authors yielded an algorithm called the Optimal Control Based Estimator (OCBE) - an algorithm that simultaneously estimates a system's state and optimal control policies that represent dynamic mismodeling in the system for an arbitrary orbit-observer setup. The stochastic properties of these estimated controls are then used to determine the presence of mismodelings (maneuver detection), as well as characterize and reconstruct the mismodelings. The purpose of this paper is to develop the OCBE into an accurate real-time orbit tracking and maneuver detection algorithm by automating the algorithm and removing its linear assumptions. This results in a nonlinear adaptive estimator. In its original form the OCBE had a parameter called the assumed dynamic uncertainty, which is selected by the user with each new measurement to reflect the level of dynamic mismodeling in the system. This human-in-the-loop approach precludes real-time application to orbit tracking problems due to their complexity. This paper focuses on the Adaptive OCBE, a version of the estimator where the assumed dynamic uncertainty is chosen automatically with each new measurement using maneuver detection results to ensure that state uncertainties are properly adjusted to account for all dynamic mismodelings. The paper also focuses on a nonlinear implementation of the estimator. Originally, the OCBE was derived from a nonlinear cost function then linearized about a nominal trajectory, which is assumed to be ballistic (i.e. the nominal optimal control policy is zero for all times). In this paper, we relax this assumption on the nominal trajectory in order to allow for controlled nominal trajectories. This allows the estimator to be iterated to obtain a more accurate nonlinear solution for both the state and control estimates. Beyond these developments to the estimator, this paper also introduces a modified distance metric for maneuver detection. The original metric used in the OCBE only accounted for the estimated control and its uncertainty. This new metric accounts for measurement deviation and a priori state deviations, such that it accounts for all three major forms of uncertainty in orbit determination. This allows the user to understand the contributions of each source of uncertainty toward the total system mismodeling so that the user can properly account for them. Together these developments create an accurate orbit determination algorithm that is automated, robust to mismodeling, and capable of detecting and reconstructing the presence of mismodeling. These qualities make this algorithm a good foundation from which to approach the problem of real-time maneuver detection and reconstruction for Space Situational Awareness applications. This is further strengthened by the algorithm's general formulation that allows it to be applied to problems with an arbitrary target and observer.
Intelligent fuzzy approach for fast fractal image compression
NASA Astrophysics Data System (ADS)
Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila
2014-12-01
Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, range and domain blocks are arranged based on their edge properties. In the second, the imperialist competitive algorithm (ICA) is applied according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results thus achieved exhibit better performance than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations and ran 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.
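For context, the quantity whose evaluation count the paper reduces is the range-domain matching error. The numpy sketch below shows the standard per-pairing cost used in fractal coding: the MSE between a range block and a domain block after a least-squares contrast/brightness (s, o) fit. The block size and the affine fitting are conventional FIC choices, not details taken from this paper's ICA-based search.

```python
# Minimal sketch of the per-pairing MSE that full-search fractal coding
# evaluates exhaustively for every range/domain block combination.
import numpy as np

def block_mse(range_blk: np.ndarray, domain_blk: np.ndarray) -> float:
    """MSE after the optimal affine luminance map s*D + o (least squares)."""
    d = domain_blk.ravel().astype(float)
    r = range_blk.ravel().astype(float)
    var_d = d.var()
    s = 0.0 if var_d == 0 else np.cov(d, r, bias=True)[0, 1] / var_d
    o = r.mean() - s * d.mean()
    return float(np.mean((s * d + o - r) ** 2))

rng = np.random.default_rng(0)
R = rng.integers(0, 256, (8, 8))   # one 8x8 range block
D = rng.integers(0, 256, (8, 8))   # one candidate (already down-sampled) domain block
print(block_mse(R, D))
```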
Neural manufacturing: a novel concept for processing modeling, monitoring, and control
NASA Astrophysics Data System (ADS)
Fu, Chi Y.; Petrich, Loren; Law, Benjamin
1995-09-01
Semiconductor fabrication lines have become extremely costly, and achieving a good return from such a high capital investment requires efficient utilization of these expensive facilities. It is highly desirable to shorten process development time, increase fabrication yield, enhance flexibility, improve quality, and minimize downtime. We propose that these ends can be achieved by applying recent advances in the areas of artificial neural networks, fuzzy logic, machine learning, and genetic algorithms. We use the term neural manufacturing to describe such applications. This paper describes our use of artificial neural networks to improve the monitoring and control of semiconductor processes.
1990-03-01
TNO report FEL-89-A312: "Kwaliteit van Expertsystemen: Algoritmen voor Integriteits…" (Quality of Expert Systems: Algorithms for Integrity …). Only fragments of the scanned Dutch text are legible, including a table of contents for a chapter on a knowledge base (introduction; an extension to NIAM, E(xtended)NIAM; specification in E(xtended)NIAM; representation in Prolog) and a sentence stating that an instantiation of a 'graph' corresponds to a proposition (a statement about reality).
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for the reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time and improve in vivo imaging protocols.
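As a hedged illustration of the shrinkage ingredient named above, the sketch below applies soft-thresholding to the detail coefficients of a single-level Haar transform of a noisy 1-D signal; the actual algorithm uses a wavelet packet transform inside a gradient-based Douglas-Rachford splitting, which is not reproduced here. The threshold value is an assumption.

```python
# Minimal wavelet soft-thresholding demo (single-level Haar, 1-D signal).
import numpy as np

def haar_fwd(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)

a, d = haar_fwd(noisy)
denoised = haar_inv(a, soft_threshold(d, t=0.3))   # shrink only the detail band
print("error std before/after:",
      np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
```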
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
Flumignan, Danilo Luiz; Boralle, Nivaldo; Oliveira, José Eduardo de
2010-06-30
In this work, the combination of carbon nuclear magnetic resonance ((13)C NMR) fingerprinting with pattern-recognition analyses provides an original and alternative approach to screening commercial gasoline quality. Soft Independent Modelling of Class Analogy (SIMCA) was performed on spectroscopic fingerprints to classify representative commercial gasoline samples, selected by Hierarchical Cluster Analysis (HCA) over several months at retail gas stations, into previously quality-defined classes. With the optimized (13)C NMR-SIMCA algorithm, sensitivity values of 99.0% were obtained in the training set, with leave-one-out cross-validation, and 92.0% in the external prediction set. Governmental laboratories could employ this method as a rapid screening analysis to discourage adulteration practices.
Thermal weapon sights with integrated fire control computers: algorithms and experiences
NASA Astrophysics Data System (ADS)
Rothe, Hendrik; Graswald, Markus; Breiter, Rainer
2008-04-01
The HuntIR long range thermal weapon sight of AIM has been deployed in various out-of-area missions since 2004 as part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as an upgrade with an integrated laser range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team up for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40 mm GMG requires highly accurate FCU solutions, specifically for air burst ammunition (ABM), and is most sensitive to subtle effects such as levelling or firing uphill/downhill. This weapon was therefore selected to validate the quality of the FCU hard- and software under relevant military conditions. For exterior ballistics the modified point mass model according to STANAG 4355 is used. The differential equations of motion are solved numerically, and the two-point boundary value problem is solved iteratively. Computing time varies according to the precision needed and is typically in the range of 0.1-0.5 seconds. RangIR provided outstanding hit accuracy, including ABM fuze timing, in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sight (TWS) in a variety of light supporting weapon systems.
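A hedged sketch of the two computational ingredients named above, not the STANAG 4355 modified point mass model itself: numerical integration of a simple point-mass trajectory with quadratic drag, and an iterative (bisection) solution of the two-point boundary value problem of finding the elevation that hits a given ground range. The muzzle velocity and drag constant are illustrative values only.

```python
# Point-mass trajectory with quadratic drag plus bisection on elevation angle.
import math

G = 9.81          # gravity, m/s^2
K = 1.0e-4        # lumped drag constant, 1/m (assumed)
V0 = 240.0        # muzzle velocity, m/s (assumed, 40 mm GMG order of magnitude)

def ground_range(elev_rad, dt=0.005):
    """Integrate until the projectile returns to launch height; return range."""
    x, y = 0.0, 0.0
    vx, vy = V0 * math.cos(elev_rad), V0 * math.sin(elev_rad)
    while True:
        v = math.hypot(vx, vy)
        ax, ay = -K * v * vx, -G - K * v * vy
        vx += ax * dt; vy += ay * dt
        x += vx * dt;  y += vy * dt
        if y < 0.0 and vy < 0.0:
            return x

def elevation_for_range(target_x, lo=math.radians(1), hi=math.radians(30)):
    """Bisection on the low-angle branch, where range grows with elevation."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if ground_range(mid) < target_x:
            lo = mid
        else:
            hi = mid
    return mid

elev = elevation_for_range(1500.0)
print(f"elevation ~ {math.degrees(elev):.2f} deg,"
      f" achieved range ~ {ground_range(elev):.0f} m")
```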
Improving best-phase image quality in cardiac CT by motion correction with MAM optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl
2013-03-15
Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, the literature reports only slight quality improvements of the motion-corrected images when compared to the quietest phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs: entropy and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient; no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.
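A small numpy sketch of the two motion artifact metrics (MAM) named in the abstract, evaluated on a 2-D image standing in for the reconstructed volume: grey-level entropy of the histogram and a positivity penalty on values below a floor. The authors' exact definitions may differ; this only illustrates scoring an image with an artifact metric inside a motion-estimation loop.

```python
# Illustrative motion-artifact metrics: histogram entropy and a positivity penalty.
import numpy as np

def entropy_metric(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def positivity_metric(img, floor=0.0):
    neg = np.minimum(img - floor, 0.0)     # only values below the floor count
    return float(np.sum(neg ** 2))

rng = np.random.default_rng(0)
clean = np.clip(rng.normal(100.0, 20.0, (128, 128)), 0, None)
streaked = clean + 40.0 * np.sin(np.linspace(0, 20 * np.pi, 128))[None, :] \
                 - 30.0                     # crude stand-in for motion streaks

for name, im in [("clean", clean), ("artifacted", streaked)]:
    print(name, "entropy:", round(entropy_metric(im), 3),
          "positivity penalty:", round(positivity_metric(im), 1))
```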
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well-suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
Li, Kejia; Warren, Steve; Natarajan, Balasubramaniam
2012-02-01
Onboard assessment of photoplethysmogram (PPG) quality could reduce unnecessary data transmission on battery-powered wireless pulse oximeters and improve the viability of the electronic patient records in which these data are stored. Such algorithms show promise to increase the intelligence level of formerly "dumb" medical devices: devices that acquire and forward data but leave data interpretation to the clinician or host system. To this end, the authors have developed a unique onboard feature detection algorithm to assess the quality of PPGs acquired with a custom reflectance-mode, wireless pulse oximeter. The algorithm uses a Bayesian hypothesis testing method to analyze four features extracted from raw and decimated PPG data in order to determine whether the original data comprise valid PPG waveforms or whether they are corrupted by motion or other environmental influences. Based on these results, the algorithm further calculates heart rate and blood oxygen saturation from a "compact representation" structure. PPG data were collected from 47 subjects to train the feature detection algorithm and to gauge its performance. A MATLAB interface was also developed to visualize the extracted features, the algorithm flow, and the decision results, where all algorithm-related parameters and decisions were ascertained on the wireless unit prior to transmission. For the data sets acquired here, the algorithm was 99% effective in identifying clean, usable PPGs versus nonsaturated data that did not demonstrate meaningful pulsatile waveshapes, PPGs corrupted by motion artifact, and data affected by signal saturation.
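A hedged sketch of the decision step only: Bayesian hypothesis testing on a small feature vector, assuming Gaussian class-conditional densities learned from labelled training segments. The four PPG features and the on-device "compact representation" are not reproduced; all feature values and the prior below are synthetic assumptions.

```python
# Posterior-odds decision between "clean PPG" and "corrupted" hypotheses.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Synthetic training features: rows = segments, cols = 4 features (assumed).
good = rng.normal([1.0, 0.5, 2.0, 0.1], 0.2, size=(200, 4))   # clean PPG
bad  = rng.normal([0.3, 1.5, 0.7, 0.9], 0.4, size=(200, 4))   # corrupted

def fit(X):
    return multivariate_normal(mean=X.mean(axis=0), cov=np.cov(X.T))

pdf_good, pdf_bad = fit(good), fit(bad)
prior_good = 0.7                                   # assumed prior

def is_clean(features):
    """Accept the clean hypothesis if the posterior odds exceed 1."""
    odds = (pdf_good.pdf(features) * prior_good) / \
           (pdf_bad.pdf(features) * (1.0 - prior_good))
    return odds > 1.0

test = rng.normal([1.0, 0.5, 2.0, 0.1], 0.2, size=4)
print("segment accepted as clean PPG:", bool(is_clean(test)))
```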
Shannon, Ronald J; Brown, Lynne; Chakravarthy, Debashish
2012-10-01
This article assesses the comparative prevention effectiveness and economic implications of a Pressure Ulcer Prevention Program (PUPP) against the standard practice of prevention using Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality [AHRQ]) guidelines and a mixture of commercial products. The study is a randomized, controlled, prospective cohort study with an accompanying economic evaluation, performed from the perspective of the nursing and rehabilitation centers. The setting comprised two nursing and rehabilitation centers under the same quality and safety support organization; both institutions were experiencing high nursing staff turnover and incidence of pressure ulcers (PrUs). Participants were 133 residents at risk of developing PrUs (EQUIP-for-Quality Risk Score Moderate to Very High [MVH]), all Medicare-eligible residents with Minimum Data Set (MDS) 2.0 evaluations. The PUPP includes a strategic product bundle and decision algorithms driven by MDS 2.0 Resident Assessment Scores to assist in reducing or preventing PrUs and incontinence-associated skin conditions. The control group utilized a different brand and assortment of commercial skin care products, briefs, pads, and mattresses, but without use of the decision algorithms driven by MDS 2.0 Resident Assessment Scores. Pressure ulcer prevention education was provided to all nurses by a nurse certified in the PUPP program at the beginning of the study and ad libitum by trained senior nursing staff at the end of the study. The outcome measures were the comparative reduction in the incidence of nosocomial PrUs and the average 6-month net cost savings per MVH-risk resident. Residents were assessed for PrU risk using the EQUIP-for-Quality risk assessment algorithm based on data from their Minimum Data Set (MDS 2.0), then assigned to either the PUPP program or the control group (standard practice following AHRQ guidelines). Residents were followed until discharge, death, development of a PrU, or a maximum time period of 6 months. Direct medical costs of prevention and PrU treatment were recorded using a modified activity-based costing method. A decision model was used to estimate the net cost savings attributed to the PUPP program over a 6-month period. A 67% reduction in the incidence of nosocomial pressure ulcers is attributable to the PUPP strategy over the 6-month period for MVH residents. The average 6-month cost per MVH Medicare resident is $1928 for the control group and $1130 for the PUPP group. The mean difference (net cost savings per resident at risk of pressure ulceration) is $798 per resident in favor of PUPP. PUPP assisted in reducing the incidence of PrUs by 67% over a 6-month period in nursing home facilities. The estimated annual net cost savings attributed to PUPP for 300 MVH residents is approximately $240,000.
The impact of database quality on keystroke dynamics authentication
NASA Astrophysics Data System (ADS)
Panasiuk, Piotr; Rybnik, Mariusz; Saeed, Khalid; Rogowski, Marcin
2016-06-01
This paper concerns keystroke dynamics, also partially in the context of touchscreen devices. The authors concentrate on the impact of database quality and propose their algorithm to test database quality issues. The algorithm is used on their own
Deutsch, Eliza S; Alameddine, Ibrahim; El-Fadel, Mutasem
2018-02-15
The launch of Landsat 8 in February 2013 extended the life of the Landsat program to over 40 years, increasing the value of using Landsat to monitor long-term changes in the water quality of small lakes and reservoirs, particularly in poorly monitored freshwater systems. Landsat-based water quality hindcasting often incorporates several Landsat sensors in an effort to increase the temporal range of observations; yet the transferability of water quality algorithms across sensors remains poorly examined. In this study, several empirical algorithms were developed to quantify chlorophyll-a, total suspended matter (TSM), and Secchi disk depth (SDD) from surface reflectance measured by the Landsat 7 ETM+ and Landsat 8 OLI sensors. Sensor-specific multiple linear regression models were developed by correlating in situ water quality measurements collected from a semi-arid eutrophic reservoir with band ratios from the Landsat ETM+ and OLI sensors, along with ancillary data (water temperature and seasonality) representing ecological patterns in algae growth. Overall, the ETM+-based models (adjusted R²: chlorophyll-a = 0.70, TSM = 0.81, SDD = 0.81) outperformed their OLI counterparts (adjusted R²: chlorophyll-a = 0.50, TSM = 0.58, SDD = 0.63). Inter-sensor differences were most apparent for algorithms utilizing the Blue spectral band. The inclusion of water temperature and seasonality improved the power of the TSM and SDD models.
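The kind of sensor-specific model described can be sketched as a multiple linear regression of log-chlorophyll-a on band ratios plus ancillary water temperature, as below; the band ratios, coefficients and data are illustrative placeholders rather than the paper's fitted models.

```python
# Multiple linear regression of a water-quality variable on band ratios + temperature.
import numpy as np

rng = np.random.default_rng(2)
n = 60
blue, green, red = (rng.uniform(0.02, 0.20, n) for _ in range(3))
temp = rng.uniform(10.0, 28.0, n)                      # water temperature, deg C
# Synthetic "in situ" log-chlorophyll-a with an assumed dependence on two ratios.
log_chl = 1.5 * (green / blue) - 0.8 * (blue / red) + 0.05 * temp \
          + rng.normal(0, 0.1, n)

# Design matrix: intercept + two band ratios + temperature
X = np.column_stack([np.ones(n), green / blue, blue / red, temp])
coef, *_ = np.linalg.lstsq(X, log_chl, rcond=None)
pred = X @ coef
ss_res = np.sum((log_chl - pred) ** 2)
ss_tot = np.sum((log_chl - log_chl.mean()) ** 2)
print("coefficients:", np.round(coef, 3), " R^2 =", round(1 - ss_res / ss_tot, 3))
```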
Multi-layer service function chaining scheduling based on auxiliary graph in IP over optical network
NASA Astrophysics Data System (ADS)
Li, Yixuan; Li, Hui; Liu, Yuze; Ji, Yuefeng
2017-10-01
Software Defined Optical Network (SDON) can be considered an extension of Software Defined Network (SDN) to optical networks. SDON offers a unified control plane and makes the optical network an intelligent transport network with dynamic flexibility and service adaptability. For this reason, a comprehensive optical transmission service, able to achieve service differentiation all the way down to the optical transport layer, can be provided to service function chaining (SFC). IP over optical network, as a promising networking architecture to interconnect data centers, is the most widely used scenario for SFC. In this paper, we offer a flexible and dynamic resource allocation method for diverse SFC service requests in the IP over optical network. To do so, we first propose the concept of the optical service function (OSF) and a multi-layer SFC model. OSF represents the comprehensive optical transmission service (e.g., multicast, low latency, quality of service, etc.), which can be achieved in the multi-layer SFC model; an OSF can also be considered a special SF. Second, we design a resource allocation algorithm, which we call the OSF-oriented optical service scheduling algorithm. It is able to address multi-layer SFC optical service scheduling and provide comprehensive optical transmission service, while meeting multiple optical transmission requirements (e.g., bandwidth, latency, availability). Moreover, the algorithm exploits the concept of an auxiliary graph. Finally, we compare our algorithm with a baseline algorithm in simulation, and the results show that our algorithm achieves superior performance under low traffic load conditions.
Multifeature-based high-resolution palmprint recognition.
Dai, Jifeng; Zhou, Jie
2011-05-01
Palmprint is a promising biometric feature for use in access control and forensic applications. Previous research on palmprint recognition mainly concentrates on low-resolution (about 100 ppi) palmprints. But for high-security applications (e.g., forensic usage), high-resolution palmprints (500 ppi or higher) are required, from which more useful information can be extracted. In this paper, we propose a novel recognition algorithm for high-resolution palmprints. The main contributions of the proposed algorithm include the following: 1) use of multiple features, namely, minutiae, density, orientation, and principal lines, for palmprint recognition to significantly improve the matching performance of the conventional algorithm; 2) design of a quality-based and adaptive orientation field estimation algorithm which performs better than the existing algorithm in regions with a large number of creases; and 3) use of a novel fusion scheme for identification applications which performs better than conventional fusion methods, e.g., the weighted sum rule, SVMs, or the Neyman-Pearson rule. Besides, we analyze the discriminative power of different feature combinations and find that density is very useful for palmprint recognition. Experimental results on a database containing 14,576 full palmprints show that the proposed algorithm achieves good performance. In the verification case, the recognition system's False Rejection Rate (FRR) is 16 percent, which is 17 percent lower than that of the best existing algorithm, at a False Acceptance Rate (FAR) of 10^-5, while in the identification experiment, the rank-1 live-scan partial palmprint recognition rate is improved from 82.0 to 91.7 percent.
Evaluation of peak picking quality in LC-MS metabolomics data.
Brodsky, Leonid; Moussaieff, Arieh; Shahaf, Nir; Aharoni, Asaph; Rogachev, Ilana
2010-11-15
The output of LC-MS metabolomics experiments consists of mass-peak intensities identified through a peak-picking/alignment procedure. Besides imperfections in biological samples and instrumentation, data accuracy is highly dependent on the applied algorithms and their parameters. Consequently, quality control (QC) is essential for further data analysis. Here, we present a QC approach that is based on discrepancies between replicate samples. First, quantile normalization of per-sample log-signal distributions is applied to each group of biologically homogeneous samples. Next, the overall quality of each replicate group is characterized by the Z-transformed correlation coefficients between samples. This general QC allows a tuning of the procedure's parameters which minimizes the inter-replicate discrepancies in the generated output. Subsequently, an in-depth QC measure detects local neighborhoods on a template of aligned chromatograms that are enriched in divergences between the intensity profiles of replicate samples. These neighborhoods are determined through a segmentation algorithm. The retention time (RT)-m/z positions of the neighborhoods with local divergences are indicative of either incorrect alignment of chromatographic features, technical problems in the chromatograms, or a true biological discrepancy between replicates for particular metabolites. We expect this method to aid in the accurate analysis of metabolomics data and in the development of new peak-picking/alignment procedures.
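A compact sketch of the two general-QC steps described: quantile normalization of per-sample log-intensity distributions within a replicate group, followed by the mean Fisher Z-transformed between-sample correlation as an overall group quality score. The peak intensities below are synthetic.

```python
# Quantile normalization of replicate columns + Fisher-Z replicate quality score.
import numpy as np

def quantile_normalize(X):
    """Rows = features (peaks), columns = replicate samples."""
    order = np.argsort(X, axis=0)
    ranks = np.argsort(order, axis=0)
    mean_sorted = np.sort(X, axis=0).mean(axis=1)      # reference distribution
    return mean_sorted[ranks]

def replicate_quality(X):
    """Mean Fisher-Z of pairwise correlations between replicate columns."""
    R = np.corrcoef(X.T)
    iu = np.triu_indices_from(R, k=1)
    return float(np.arctanh(np.clip(R[iu], -0.999999, 0.999999)).mean())

rng = np.random.default_rng(3)
true_log_signal = rng.normal(10, 2, size=500)                 # 500 peaks
replicates = true_log_signal[:, None] + rng.normal(0, 0.3, (500, 4))
replicates[:, 3] += rng.normal(0, 1.5, 500)                   # one noisy replicate

norm = quantile_normalize(replicates)
print("group quality (mean Fisher-Z):", round(replicate_quality(norm), 2))
```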
NASA Astrophysics Data System (ADS)
Shrivastava, Prashant Kumar; Pandey, Arun Kumar
2018-06-01
Inconel-718 is in high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting these alloys due to their low thermal potential, lower elasticity and high chemical compatibility at elevated temperature. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials are also faced by traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing by appropriate control of the different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on a hybrid approach of multiple regression analysis and a genetic algorithm. A comparison of the optimization results to the experimental results shows improvements of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of the different process parameters on the quality characteristics are also discussed.
NASA Astrophysics Data System (ADS)
Zhang, Xianxia; Wang, Jian; Qin, Tinggao
2003-09-01
Intelligent control algorithms are introduced into the temperature and humidity control system. A multi-mode PI-single-neuron control algorithm is proposed for single-loop control of temperature and humidity. In order to remove the coupling between temperature and humidity, a new decoupling method, called fuzzy decoupling, is presented. The decoupling is achieved by using a fuzzy controller that dynamically modifies the static decoupling coefficient. Taking the PI-single-neuron control algorithm as the single-loop controller for temperature and humidity, the paper provides simulated output response curves with no decoupling control, static decoupling control and fuzzy decoupling control. These control algorithms are easily implemented in single-chip microcomputer-based hardware systems.
Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.
Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong
2016-04-01
Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system--multi-channel sensation, just noticeable blur, and the contrast sensitivity function to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human vision system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
Performance of biometric quality measures.
Grother, Patrick; Tabassi, Elham
2007-04-01
We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error-versus-reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We precede this with a definition of sample quality and a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
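A hedged sketch of the error-versus-reject characteristic described: genuine comparison scores are paired with sample quality values, the lowest-quality fraction is progressively rejected, and the false non-match rate (FNMR) at a fixed threshold is recomputed on the surviving comparisons. A useful quality measure makes FNMR fall as the rejection fraction grows; the scores below are synthetic.

```python
# Error-versus-reject curve for a scalar quality measure (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
quality = rng.uniform(0, 100, n)                      # scalar sample quality
# Genuine match scores: low-quality samples tend to score lower (assumed link).
genuine = 0.6 + 0.003 * quality + rng.normal(0, 0.12, n)
threshold = 0.75                                      # fixed decision threshold

order = np.argsort(quality)                           # worst quality first
for reject_frac in (0.0, 0.05, 0.10, 0.20):
    kept = order[int(reject_frac * n):]
    fnmr = float(np.mean(genuine[kept] < threshold))
    print(f"reject {reject_frac:4.0%} lowest-quality -> FNMR = {fnmr:.3f}")
```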
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human perception. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Image Metric (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analyzing and comparing them. MSE and PSNR are simple, but they do not introduce human visual system (HVS) characteristics into image quality evaluation, so their evaluation results are not ideal. SSIM shows good correlation and is simple to calculate because it incorporates human visual effects into image quality evaluation; however, the SSIM method is based on a hypothesis, so its evaluation results are limited. The FSIM method can be used to test both grayscale and color images, and its results are better. Experimental results show that the new image quality evaluation algorithm based on FSIM is more accurate.
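A minimal numpy illustration of the two simplest full-reference metrics discussed, MSE and PSNR, on an 8-bit image pair; SSIM and FSIM involve windowed statistics and phase-congruency features and are not reproduced here.

```python
# MSE and PSNR for an 8-bit reference/test image pair.
import numpy as np

def mse(ref, test):
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(5)
reference = rng.integers(0, 256, (64, 64)).astype(np.uint8)
noisy = np.clip(reference + rng.normal(0, 10, (64, 64)), 0, 255).astype(np.uint8)
print(f"MSE = {mse(reference, noisy):.1f}, PSNR = {psnr(reference, noisy):.2f} dB")
```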
QSRA: a quality-value guided de novo short read assembler.
Bryant, Douglas W; Wong, Weng-Keen; Mockler, Todd C
2009-02-24
New rapid high-throughput sequencing technologies have sparked the creation of a new class of assembler. Since all high-throughput sequencing platforms incorporate errors in their output, short-read assemblers must be designed to account for this error while utilizing all available data. We have designed and implemented an assembler, Quality-value guided Short Read Assembler, created to take advantage of quality-value scores as a further method of dealing with error. Compared to previous published algorithms, our assembler shows significant improvements not only in speed but also in output quality. QSRA generally produced the highest genomic coverage, while being faster than VCAKE. QSRA is extremely competitive in its longest contig and N50/N80 contig lengths, producing results of similar quality to those of EDENA and VELVET. QSRA provides a step closer to the goal of de novo assembly of complex genomes, improving upon the original VCAKE algorithm by not only drastically reducing runtimes but also increasing the viability of the assembly algorithm through further error handling capabilities.
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate, although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, the qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to those of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated on three-dimensional volumetric images of experimental, dielectrically heterogeneous phantoms. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design
Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco
2016-01-01
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
Document localization algorithms based on feature points and straight lines
NASA Astrophysics Data System (ADS)
Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor
2018-04-01
An important part of a system for planar rectangular object analysis is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes such subsystems as the selection and recognition of text fields, the usage of contexts, etc. In this paper three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows configuring the localization subsystem independently of the quality of the other subsystems.
QoE collaborative evaluation method based on fuzzy clustering heuristic algorithm.
Bao, Ying; Lei, Weimin; Zhang, Wei; Zhan, Yuzhuo
2016-01-01
At present, realizing or improving quality of experience (QoE) is a major goal for network media transmission services, and QoE evaluation is the basis for adjusting the transmission control mechanism. Therefore, a QoE collaborative evaluation method based on a fuzzy clustering heuristic algorithm is proposed in this paper, concentrating on service score calculation at the server side. The server side collects network transmission quality of service (QoS) parameters, node location data, and user expectation values from client feedback information. It then manages the historical data in a database through a "big data" processing mode and predicts user scores according to heuristic rules. On this basis, it completes a fuzzy clustering analysis and generates the service QoE score and a management message, which are finally fed back to the clients. In addition, this paper discusses service evaluation generation rules, heuristic evaluation rules and fuzzy clustering analysis methods, and presents the service-based QoE evaluation process. Simulation experiments have verified the effectiveness of the QoE collaborative evaluation method based on fuzzy clustering heuristic rules.
Suitability of the echo-time-shift method as laboratory standard for thermal ultrasound dosimetry
NASA Astrophysics Data System (ADS)
Fuhrmann, Tina; Georg, Olga; Haller, Julian; Jenderka, Klaus-Vitold
2017-03-01
Ultrasound therapy is a promising, non-invasive application with the potential to significantly improve cancer therapies such as surgery, viro- or immunotherapy. The therapy needs faster, cheaper and easier-to-handle quality assurance tools for therapy devices, as well as means to verify treatment plans and perform dosimetry; the current lack of such tools limits the comparability and safety of treatments. Accurate spatial and temporal temperature maps could be used to overcome these shortcomings. In this contribution, first results of suitability and accuracy investigations of the echo-time-shift method for two-dimensional temperature mapping during and after sonication are presented. The analysis methods used to calculate time shifts were a discrete frame-to-frame and a discrete frame-to-base-frame algorithm, together with a sigmoid fit for temperature calculation. In the future, accuracy could be significantly enhanced by using continuous methods for time-shift calculation. Further improvements can be achieved by improving the filtering algorithms and the interpolation of the sampled diagnostic ultrasound data. The echo-time-shift method might thus be a comparatively accurate, fast and affordable method for laboratory and clinical quality control.
Ultrasound speckle reduction based on fractional order differentiation.
Shao, Dangguo; Zhou, Ting; Liu, Fan; Yi, Sanli; Xiang, Yan; Ma, Lei; Xiong, Xin; He, Jianfeng
2017-07-01
Ultrasound images show a granular pattern of noise known as speckle that diminishes their quality and results in difficulties in diagnosis. To preserve edges and features, this paper proposes a fractional differentiation-based image operator to reduce speckle in ultrasound. An image de-noising model based on fractional partial differential equations, with a balance relation between k (the gradient modulus threshold that controls the conduction) and v (the order of fractional differentiation), was constructed by effectively combining fractional calculus theory with a partial differential equation, and its numerical algorithm was realized using a fractional differential mask operator. The proposed algorithm achieves better speckle reduction and structure preservation than three existing methods [the P-M model, the speckle reducing anisotropic diffusion (SRAD) technique, and the detail preserving anisotropic diffusion (DPAD) technique], and it is significantly faster than bilateral filtering (BF) while producing virtually the same experimental results. Ultrasound phantom testing and in vivo imaging show that the proposed method can improve the quality of an ultrasound image in terms of tissue SNR, CNR, and FOM values.
Control of equipment isolation system using wavelet-based hybrid sliding mode control
NASA Astrophysics Data System (ADS)
Huang, Shieh-Kung; Loh, Chin-Hsiung
2017-04-01
Critical non-structural equipment, including life-saving equipment in hospitals, circuit breakers, computers, high-technology instrumentation, etc., is vulnerable to strong earthquakes, and the failure of such vibration-sensitive equipment causes severe economic loss. In order to protect vibration-sensitive equipment or machinery against strong earthquakes, various innovative control algorithms have been developed to compensate for the internal forces that are to be applied. These new or improved control strategies, such as control algorithms based on optimal control theory and sliding mode control (SMC), have also been developed for structural engineering as a key element in smart structure technology. Optimal control theory, one of the most common methodologies in feedback control, finds control forces by minimizing a cost function that expresses a chosen optimality criterion. For example, the linear-quadratic regulator (LQR) has been the most popular control algorithm over the past three decades, and a number of modifications have been proposed to increase the efficiency of the classical LQR algorithm. However, apart from its advantages of simplicity and ease of implementation, LQR is susceptible to parameter uncertainty and modeling error due to the complex nature of civil structures. Different from LQR control, SMC, a robust and easily implemented control algorithm, has also been studied. SMC is a nonlinear control methodology that forces the structural system to slide along surfaces or boundaries; hence this control algorithm is naturally robust with respect to parametric uncertainties of a structure. Early attempts at protecting vibration-sensitive equipment were based on the use of the existing control algorithms described above. In recent years, however, researchers have tried to renew existing control algorithms or develop new ones adapted to the complex nature of civil structures, which includes the control of both structures and non-structural components. The aim of this paper is to develop a hybrid control algorithm for the simultaneous control of structures and equipment that overcomes the limitations of classical feedback control by combining the advantages of classical LQR and SMC. To suppress vibrations whose frequency content during strong earthquakes differs from the natural frequencies of civil structures, the hybrid control algorithm is integrated with a wavelet-based vibration control algorithm. The performance of the classical, hybrid, and wavelet-based hybrid control algorithms, as well as the responses of the structure and non-structural components, are evaluated and discussed through numerical simulation in this study.
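As a hedged sketch of the classical LQR ingredient referred to above (the paper's wavelet-based hybrid and SMC components are not reproduced), the snippet below computes the LQR gain for a single-degree-of-freedom structure model; the mass, damping, stiffness and weighting matrices are illustrative assumptions.

```python
# Continuous-time LQR gain for a single-degree-of-freedom structure model.
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 1.0e3, 1.0e3, 4.0e5          # kg, N*s/m, N/m (assumed SDOF values)
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])        # state = [displacement, velocity]
B = np.array([[0.0],
              [1.0 / m]])               # control force input
Q = np.diag([1.0e6, 1.0e2])             # penalize displacement more than velocity
R = np.array([[1.0e-3]])                # control-effort weight

P = solve_continuous_are(A, B, Q, R)    # Riccati solution
K = np.linalg.solve(R, B.T @ P)         # LQR gain: u = -K x
print("LQR gain K =", np.round(K, 2))
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ K), 3))
```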
Medical-Grade Channel Access and Admission Control in 802.11e EDCA for Healthcare Applications
Son, Sunghwa; Park, Kyung-Joon; Park, Eun-Chan
2016-01-01
In this paper, we deal with the problem of assuring medical-grade quality of service (QoS) for real-time medical applications in wireless healthcare systems based on IEEE 802.11e. First, we show that the differentiated channel access of IEEE 802.11e cannot effectively assure medical-grade QoS because of priority inversion. To resolve this problem, we propose an efficient channel access algorithm. The proposed algorithm adjusts the arbitration inter-frame space (AIFS) in the IEEE 802.11e protocol depending on the QoS measurement of medical traffic, to provide differentiated, near-absolute priority for medical traffic. In addition, based on a rigorous capacity analysis, we propose an admission control scheme that can avoid performance degradation due to network overload. Via extensive simulations, we show that the proposed mechanism strictly assures medical-grade QoS and improves the throughput of low-priority traffic by several times compared to conventional IEEE 802.11e. PMID:27490666
NASA Astrophysics Data System (ADS)
Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.
2015-12-01
The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and reprocessing for public release. Because the quality of the V8.6 data is already quite high, the enhancements in V9 mainly concern the information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and also by limitations of the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is that it also apply to these discrete-band spectrometers, which date back to 1978. To achieve continuity across all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving highly accurate results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite the intentionally limited set of wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content, for example SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.
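A hedged numpy sketch of a single linear Rodgers-type optimal estimation step of the kind referred to above: a coarse profile is retrieved from a few discrete-band radiances given a forward-model Jacobian, an a priori profile with covariance, and a measurement noise covariance. The matrices are random placeholders, not the TOMS/OMPS forward model.

```python
# Linear optimal estimation retrieval: x_hat = x_a + S_hat K^T S_e^-1 (y - K x_a).
import numpy as np

rng = np.random.default_rng(6)
n_layers, n_bands = 6, 4                     # coarse profile, few wavelengths
K = rng.normal(0, 1, (n_bands, n_layers))    # Jacobian dy/dx (placeholder)
x_true = rng.normal(0, 1, n_layers)
S_a = np.eye(n_layers) * 1.0                 # a priori covariance
S_e = np.eye(n_bands) * 0.05                 # measurement noise covariance
x_a = np.zeros(n_layers)
y = K @ x_true + rng.multivariate_normal(np.zeros(n_bands), S_e)

S_e_inv, S_a_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)      # posterior covariance
x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)     # retrieved profile
print("retrieved :", np.round(x_hat, 2))
print("1-sigma   :", np.round(np.sqrt(np.diag(S_hat)), 2))
```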
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, an heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two networks structure. These new algorithms produce a better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
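For reference, the sketch below shows the classic exponentially weighted recursive-least-squares (RLS) update in its simplest linear form, identifying an FIR "plant" from input/output data; the filtered-x and adjoint neural-network training structures discussed in the paper build on this update but are not reproduced here. The plant coefficients and forgetting factor are illustrative.

```python
# Exponentially weighted RLS identification of a linear FIR system.
import numpy as np

rng = np.random.default_rng(7)
true_w = np.array([0.5, -0.3, 0.2, 0.1])     # unknown FIR plant (assumed)
L, lam, delta = true_w.size, 0.99, 1e2       # filter length, forgetting factor
w = np.zeros(L)
P = delta * np.eye(L)                        # inverse correlation matrix

x_hist = np.zeros(L)
for n in range(2000):
    x = rng.standard_normal()
    x_hist = np.roll(x_hist, 1); x_hist[0] = x
    d = true_w @ x_hist + 0.01 * rng.standard_normal()   # desired signal
    # --- RLS update ---
    Px = P @ x_hist
    k = Px / (lam + x_hist @ Px)             # gain vector
    e = d - w @ x_hist                       # a priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam

print("estimated coefficients:", np.round(w, 3))
```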
Development of model reference adaptive control theory for electric power plant control applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabius, L.E.
1982-09-15
The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., a model-following control law), a Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady-state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.
Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A
2014-07-01
The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in theMATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on bladder volume difference for all but one algorithm, and with the best result for the structure-based algorithm. Spatial accuracy as assessed from marker errors was disappointing for all algorithms, especially for large volume differences, implying that the deformations described by the registration did not represent anatomically correct deformations. The structure-based algorithm performed the best in terms of marker error for the large volume difference (100-400 ml). In general, for the small volume difference (100-150 ml) the algorithms performed relatively similarly. The structure-based algorithm exhibited the best balance in performance between small and large volume differences, and among the intensity-based algorithms, the algorithm implemented in VelocityAI exhibited the best balance. Validation of multiple DIR algorithms on a novel physiological bladder phantom revealed that the structure accuracy was good for most algorithms, but that the spatial accuracy as assessed from markers was low for all algorithms, especially for large deformations. Hence, many of the available algorithms exhibit sufficient accuracy for contour propagation purposes, but possibly not for accurate dose accumulation.
Development and validation of an electronic phenotyping algorithm for chronic kidney disease
Nadkarni, Girish N; Gottesman, Omri; Linneman, James G; Chase, Herbert; Berg, Richard L; Farouk, Samira; Nadukuru, Rajiv; Lotay, Vaneet; Ellis, Steve; Hripcsak, George; Peissig, Peggy; Weng, Chunhua; Bottinger, Erwin P
2014-01-01
Twenty-six million Americans are estimated to have chronic kidney disease (CKD) with increased risk for cardiovascular disease and end stage renal disease. CKD is frequently undiagnosed and patients are unaware, hampering intervention. A tool for accurate and timely identification of CKD from electronic medical records (EMR) could improve healthcare quality and identify patients for research. As members of the eMERGE (electronic medical records and genomics) Network, we developed an automated phenotyping algorithm that can be deployed to rapidly identify diabetic and/or hypertensive CKD cases and controls in health systems with EMRs. It uses diagnostic codes, laboratory results, medication and blood pressure records, and textual information culled from notes. Validation statistics demonstrated positive predictive values of 96% and negative predictive values of 93.3%. Similar results were obtained on implementation by two independent eMERGE member institutions. The algorithm dramatically outperformed identification by ICD-9-CM codes, which achieved 63% positive and 54% negative predictive values, respectively. PMID:25954398
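The full eMERGE phenotype combines codes, laboratory results, medications, and note text; none of that logic is reproduced here. The fragment below is only a simplified laboratory-based rule of the kind such phenotypes encode, with the eGFR threshold and 90-day persistence window stated as assumptions for illustration.

```python
from datetime import timedelta
import pandas as pd

def flag_ckd_by_egfr(labs: pd.DataFrame, threshold=60.0, min_days=90) -> pd.Series:
    """Flag patients with persistently reduced kidney function.

    labs: columns ['patient_id', 'date', 'egfr'], with 'date' as datetime64 and
    eGFR in mL/min/1.73 m^2. A patient is flagged when eGFR values below
    `threshold` span at least `min_days` -- a simplified stand-in for one rule
    of a full EMR phenotype that would also use codes, medications, and notes.
    """
    flags = {}
    for pid, grp in labs.sort_values("date").groupby("patient_id"):
        low = grp.loc[grp["egfr"] < threshold, "date"]
        flags[pid] = (not low.empty and
                      (low.iloc[-1] - low.iloc[0]) >= timedelta(days=min_days))
    return pd.Series(flags, name="ckd_flag")
```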
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
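The blur-SURE criterion itself is not reproduced here; the sketch below only shows the ingredients over which such a criterion would be minimized, namely a parametric Gaussian PSF family indexed by a scale parameter and a frequency-domain Wiener filtering step. The noise-to-signal constant is an assumed regularizer, not an estimate from the data.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered 2-D Gaussian PSF (same shape as the image); sigma is the scale parameter."""
    ny, nx = shape
    y = np.arange(ny) - ny // 2
    x = np.arange(nx) - nx // 2
    psf = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener filter H* / (|H|^2 + NSR).

    `psf` must have the same shape as `blurred`; `nsr` is an assumed constant."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centered at the origin before the FFT
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# A blur-SURE-like search would evaluate a data-driven criterion for each candidate
# scale and keep the minimizer; here we only enumerate the candidate family.
candidate_sigmas = np.linspace(0.5, 5.0, 10)
```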
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
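The specific algebraic simplifications of the paper are not reproduced here. As a generic illustration of the computationally heavy step, the sketch below computes temperature structure functions over many lags with vectorized array operations; the choice of moment orders (2, 3, 5, as in Van Atta-style calibrations) and the lag range are assumptions.

```python
import numpy as np

def structure_functions(t_series, lags, orders=(2, 3, 5)):
    """Vectorized structure functions S_n(r) = mean((T(i) - T(i-r))^n) over many lags.

    t_series : 1-D array of high-frequency (e.g. 10 Hz) temperature samples
    lags     : iterable of lag sample counts r
    Returns {order: array over lags}; these moments feed surface-renewal
    ramp-amplitude fits in typical SR calibrations.
    """
    t = np.asarray(t_series, dtype=float)
    out = {n: np.empty(len(lags)) for n in orders}
    for j, r in enumerate(lags):
        d = t[r:] - t[:-r]                 # all lagged differences at once
        for n in orders:
            out[n][j] = np.mean(d ** n)
    return out

# Example: 10 Hz data, lags from 0.1 s to 2 s
# sf = structure_functions(temperature, lags=range(1, 21))
```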
Space-time light field rendering.
Wang, Huamin; Sun, Mingxuan; Yang, Ruigang
2007-01-01
In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.
Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu
2012-02-01
In this paper, to precisely obtain the spatial distribution characteristics of regional soil quality, the main factors affecting soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were considered: mutual information theory was adopted to select the main environmental factors, and the See5.0 decision tree algorithm was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model built with the variables selected by mutual information was clearly higher than that of the model using all variables, and for the former model the prediction accuracy, whether of the decision tree or of the decision rules, exceeded 80%. Applied to both continuous and categorical data, mutual information theory integrated with a decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
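See5.0 is commercial software, so the sketch below uses scikit-learn's mutual information ranking and a CART decision tree as stand-ins for the same two-step idea (feature selection by mutual information, then tree-based prediction); the number of retained factors and the tree depth are assumed parameters.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def mi_selected_tree(X, y, feature_names, top_k=8):
    """Rank environmental factors by mutual information with the soil-quality grade,
    keep the top_k, and fit a decision tree on the reduced feature set.

    X : (n_samples, n_features) numpy array of factor values
    y : (n_samples,) soil-quality grade labels
    """
    mi = mutual_info_classif(X, y, random_state=0)
    keep = np.argsort(mi)[::-1][:top_k]
    print("selected factors:", [feature_names[i] for i in keep])
    tree = DecisionTreeClassifier(max_depth=6, random_state=0)
    acc = cross_val_score(tree, X[:, keep], y, cv=5).mean()
    print("cross-validated accuracy: %.2f" % acc)
    return tree.fit(X[:, keep], y), keep
```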
Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko
2017-07-01
Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
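The paper's exact AUPS definition is not reproduced here; the sketch below computes the area under the precision-sensitivity (precision-recall) curve restricted to a high-sensitivity region, which is one plausible reading of "AUPS with a high sensitivity". The 0.9 cut-off is an illustrative assumption.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def aups_high_sensitivity(y_true, y_score, min_sensitivity=0.9):
    """Area under the precision-sensitivity curve restricted to sensitivity >= min_sensitivity."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    order = np.argsort(recall)          # sort by increasing recall for integration
    r, p = recall[order], precision[order]
    mask = r >= min_sensitivity
    if mask.sum() < 2:
        return 0.0
    return np.trapz(p[mask], r[mask])   # trapezoidal area over the high-sensitivity region
```

A mirrored version restricted to a high-precision (PPV) region would correspond to the second metric, used when tuning the phenotype for research-subject identification rather than screening.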
Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M
2017-01-01
Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572
Algorithm for automatic forced spirometry quality assessment: technological developments.
Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere
2014-01-01
We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
The research presents image quality analysis and enhancement proposals in the biophotonics area. The sources of image quality problems are reviewed and analyzed. The problems with the greatest impact are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results indicate that the main problem for skin cancer analysis is uneven skin illumination. Since illumination problems often cannot be prevented, the paper proposes an image post-processing algorithm based on low-frequency filtering. Practical results show an improvement in diagnostic results after applying the proposed filter. Moreover, the filter does not reduce the quality of diagnostic results for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
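The paper's filter parameters were tuned empirically and are not given here; the sketch below is a generic low-frequency illumination correction of the kind described, with the Gaussian scale as an assumed parameter that would need the same empirical tuning.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(image, sigma=50.0, eps=1e-6):
    """Divide out a low-frequency illumination estimate from a grayscale skin image.

    image : 2-D float array in [0, 1]
    sigma : scale (pixels) of the Gaussian low-pass used as the illumination
            estimate; an assumed value requiring empirical tuning.
    """
    img = image.astype(float)
    illumination = gaussian_filter(img, sigma=sigma)          # low-frequency component
    corrected = img * illumination.mean() / (illumination + eps)
    return np.clip(corrected, 0.0, 1.0)
```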
NASA Astrophysics Data System (ADS)
Zhang, Min; Yang, Feng; Zhang, Dongqing; Tang, Pengcheng
2018-02-01
When a large number of electric vehicles are connected to a family microgrid, the operational safety of the power grid and the power quality are affected. Considering family microgrid electricity prices and the role of electric vehicles as distributed energy storage devices, a two-stage optimization model is established, and an improved discrete binary particle swarm optimization algorithm is used to optimize the parameters of the model. The proposed control strategy for electric vehicle charging and discharging is of practical significance for the rational control of electric vehicles as distributed energy storage devices and for their participation in peak load regulation.
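The paper's two-stage model and its specific improvements to the algorithm are not detailed in the abstract. The sketch below is a generic discrete binary PSO over a bit-string decision (e.g. charge/discharge per time slot), with all constants and the toy cost function stated as assumptions.

```python
import numpy as np

def binary_pso(cost, n_bits, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic discrete binary PSO: positions are bit strings (e.g. charge/discharge
    decisions per time slot); velocities pass through a sigmoid to give bit
    probabilities. `cost` maps a bit vector to a scalar to minimize."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = rng.uniform(-1, 1, size=(n_particles, n_bits))
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        cst = np.array([cost(p) for p in x])
        better = cst < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cst[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

# Toy example: spread 8 charging slots over a 24-slot horizon
# best_bits, best_cost = binary_pso(lambda bits: abs(int(bits.sum()) - 8), n_bits=24)
```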
NASA Astrophysics Data System (ADS)
Flach, Milan; Mahecha, Miguel; Gans, Fabian; Rodner, Erik; Bodesheim, Paul; Guanche-Garcia, Yanira; Brenning, Alexander; Denzler, Joachim; Reichstein, Markus
2016-04-01
The number of available Earth observations (EOs) is currently substantially increasing. Detecting anomalous patterns in these multivariate time series is an important step in identifying changes in the underlying dynamical system. Likewise, data quality issues might result in anomalous multivariate data constellations and have to be identified before corrupting subsequent analyses. In industrial application a common strategy is to monitor production chains with several sensors coupled to some statistical process control (SPC) algorithm. The basic idea is to raise an alarm when these sensor data depict some anomalous pattern according to the SPC, i.e. the production chain is considered 'out of control'. In fact, the industrial applications are conceptually similar to the on-line monitoring of EOs. However, algorithms used in the context of SPC or process monitoring are rarely considered for supervising multivariate spatio-temporal Earth observations. The objective of this study is to exploit the potential and transferability of SPC concepts to Earth system applications. We compare a range of different algorithms typically applied by SPC systems and evaluate their capability to detect e.g. known extreme events in land surface processes. Specifically two main issues are addressed: (1) identifying the most suitable combination of data pre-processing and detection algorithm for a specific type of event and (2) analyzing the limits of the individual approaches with respect to the magnitude, spatio-temporal size of the event as well as the data's signal to noise ratio. Extensive artificial data sets that represent the typical properties of Earth observations are used in this study. Our results show that the majority of the algorithms used can be considered for the detection of multivariate spatiotemporal events and directly transferred to real Earth observation data as currently assembled in different projects at the European scale, e.g. http://baci-h2020.eu/index.php/ and http://earthsystemdatacube.net/. Known anomalies such as the Russian heatwave are detected as well as anomalies which are not detectable with univariate methods.
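One classical building block of the SPC family evaluated in such comparisons is the multivariate Hotelling's T-squared control chart. The sketch below is a minimal, generic version, not one of the study's configurations; the chi-square control limit and the alpha level are common textbook approximations.

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2(X_ref, X_new, alpha=0.01):
    """Hotelling's T^2 of new multivariate observations against an in-control
    reference period; points above the chi-square limit are flagged 'out of control'.

    X_ref : (n_ref, p) in-control reference observations (e.g. EO variables)
    X_new : (n_new, p) observations to monitor
    """
    mu = X_ref.mean(axis=0)
    cov = np.cov(X_ref, rowvar=False)
    cov_inv = np.linalg.pinv(cov)
    d = X_new - mu
    t2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)       # row-wise d^T S^-1 d
    limit = chi2.ppf(1.0 - alpha, df=X_ref.shape[1])   # approximate control limit
    return t2, t2 > limit
```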
Ranking online quality and reputation via the user activity
NASA Astrophysics Data System (ADS)
Liu, Xiao-Lu; Guo, Qiang; Hou, Lei; Cheng, Can; Liu, Jian-Guo
2015-10-01
How to design an accurate algorithm for ranking the object quality and user reputation is of importance for online rating systems. In this paper we present an improved iterative algorithm for online ranking object quality and user reputation in terms of the user degree (IRUA), where the user's reputation is measured by his/her rating vector, the corresponding objects' quality vector and the user degree. The experimental results for the empirical networks show that the AUC values of the IRUA algorithm can reach 0.9065 and 0.8705 in Movielens and Netflix data sets, respectively, which is better than the results generated by the traditional iterative ranking methods. Meanwhile, the results for the synthetic networks indicate that user degree should be considered in real rating systems due to users' rating behaviors. Moreover, we find that enhancing or reducing the influences of the large-degree users could produce more accurate reputation ranking lists.
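The exact IRUA update rules are not reproduced here. The sketch below is a generic iterative quality-reputation scheme in which object quality is a reputation-weighted mean of ratings and reputation is the inverse of a user's rating error; the degree factor appears only as a normalization, whereas the IRUA paper folds the user degree into the reputation update more directly.

```python
import numpy as np

def iterative_ranking(R, W, n_iter=50, eps=1e-8):
    """Generic iterative object-quality / user-reputation ranking.

    R : (n_users, n_objects) rating matrix
    W : (n_users, n_objects) 0/1 mask of observed ratings
    """
    n_users, _ = R.shape
    rep = np.ones(n_users)
    for _ in range(n_iter):
        # object quality: reputation-weighted mean of its ratings
        q = (rep[:, None] * R * W).sum(0) / ((rep[:, None] * W).sum(0) + eps)
        # user reputation: inverse degree-normalized squared rating error
        err = ((R - q[None, :]) ** 2 * W).sum(1) / (W.sum(1) + eps)
        rep = 1.0 / (err + eps)
    return q, rep
```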
Alfaro, Sadek Crisóstomo Absi; Cayo, Eber Huanca
2012-01-01
The present study shows the relationship between welding quality and optical-acoustic emissions from electric arcs, during welding runs, in the GMAW-S process. Bead-on-plate welding tests were carried out with pre-set parameters chosen from manufacturing standards. During the welding runs, interferences were induced on the welding path using paint, grease or gas faults. In each welding run, arc voltage, welding current, infrared and acoustic emission values were acquired, and parameters such as arc power, acoustic peak rate and infrared radiation rate were computed. Data fusion algorithms were developed by assessing known welding quality parameters from arc emissions. These algorithms have shown better responses when they are based on more than one sensor. Finally, it was concluded that there is a close relation between arc emissions and welding quality, and that it can be measured by arc emission sensing and data fusion algorithms.
The role of optical flow in automated quality assessment of full-motion video
NASA Astrophysics Data System (ADS)
Harguess, Josh; Shafer, Scott; Marez, Diego
2017-09-01
In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions to the raw data are inevitable. This can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with their own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art methods) on motion-based automated video quality assessment algorithms.
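The study compares many flow algorithms; as one readily available baseline, the sketch below runs OpenCV's Farneback dense flow on a frame pair and summarizes it into simple statistics that a motion-based quality model could consume. The parameter values and the choice of summary features are illustrative assumptions, not the paper's metric.

```python
import cv2
import numpy as np

def flow_quality_features(frame_prev, frame_next):
    """Dense Farneback optical flow between two grayscale frames, reduced to
    simple statistics usable as inputs to a motion-based video-quality model."""
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return {
        "mean_magnitude": float(mag.mean()),
        "p95_magnitude": float(np.percentile(mag, 95)),
        # crude spatial-smoothness proxy: mean absolute gradient of the flow magnitude
        "flow_smoothness": float(np.mean(np.abs(np.gradient(mag)))),
    }
```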
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.
2017-06-01
Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief calculation algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in the imaging quality of these methods have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation for the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerical and experimental reconstruction results are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
Autopilot for frequency-modulation atomic force microscopy.
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri
2015-10-01
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
Operational Processing of Ground Validation Data for the Tropical Rainfall Measuring Mission
NASA Technical Reports Server (NTRS)
Kulie, Mark S.; Robinson, Mike; Marks, David A.; Ferrier, Brad S.; Rosenfeld, Danny; Wolff, David B.
1999-01-01
The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November 1997. A primary goal of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented for this mission. A key component of GV is the analysis and quality control of meteorological ground-based radar data from four primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, the Joint Center for Earth Systems Technology (JCET) at the University of Maryland, Baltimore County, has been tasked with developing and implementing an operational system to quality control (QC), archive, and provide data for subsequent rainfall product generation from the four primary GV sites. This paper provides an overview of the JCET operational environment. A description of the QC algorithm and its performance, in addition to the data flow procedure between JCET and the TRMM Science and Data Information System (TSDIS), is presented. The impact of quality-controlled data on higher level rainfall and reflectivity products will also be addressed. Finally, a brief description of JCET's expanded role in producing reference rainfall products will be discussed.
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing is to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in the previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem was revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV) originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments of produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results have shown that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
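The MVDR formulation referred to above can be stated compactly: minimize the radiated energy at a set of control points subject to a distortionless response at the focus point. The sketch below is a minimal narrowband version with free-field monopole steering vectors; the geometry, frequency, and diagonal loading are illustrative assumptions, not the paper's array or filters.

```python
import numpy as np

def mvdr_weights(src_pos, focus_pt, ctrl_pts, freq, c=343.0, reg=1e-3):
    """Narrowband MVDR-style focusing weights for a loudspeaker array.

    src_pos  : (M, 3) loudspeaker positions (m)
    focus_pt : (3,)  focus point (distortionless constraint)
    ctrl_pts : (K, 3) control points sampling the region where energy is suppressed
    Free-field monopole Green's functions are assumed for the steering vectors.
    """
    k = 2.0 * np.pi * freq / c

    def steer(points):
        r = np.linalg.norm(points[:, None, :] - src_pos[None, :, :], axis=-1)  # (K, M)
        return np.exp(-1j * k * r) / (4.0 * np.pi * r)

    G = steer(ctrl_pts)                        # transfer matrix to the control points
    d = steer(focus_pt[None, :])[0]            # transfer vector to the focus point
    R = G.conj().T @ G / len(ctrl_pts) + reg * np.eye(len(src_pos))  # spatial correlation + loading
    w = np.linalg.solve(R, d.conj())           # w proportional to R^-1 d*
    return w / (d @ w)                         # scale for unit (distortionless) response at the focus
```

The control effort discussed in the paper corresponds to the weight norm ||w||, which the diagonal loading term `reg` trades off against focusing sharpness.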
Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.
Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos
2018-03-25
New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortions on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects at the same time all possible distortions, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks in the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.
NASA Astrophysics Data System (ADS)
Nasution, A. B.; Efendi, S.; Suwilo, S.
2018-04-01
The amount of data embedded as audio samples using 8 bits with the LSB algorithm affects the PSNR value, which results in changes in the image quality of the stego-image (fidelity). In this research, audio samples are therefore embedded using 5 bits with the MLSB algorithm to reduce the amount of inserted data, where the audio samples are first compressed with the Arithmetic Coding algorithm to reduce the file size. In this research the audio samples are also encrypted using the Triple DES algorithm to better secure them. The result of this research is a PSNR value of more than 50 dB, so it can be concluded that the image quality is still good, since the PSNR value exceeds 40 dB.
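PSNR between the cover and stego image is the standard fidelity measure referred to above. A minimal sketch, assuming 8-bit images so that the peak value is 255:

```python
import numpy as np

def psnr(cover, stego, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between cover and stego images (8-bit assumed)."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Values above roughly 40 dB are conventionally taken to indicate imperceptible embedding, which is the threshold the abstract cites.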
Multi Objective Optimization of Yarn Quality and Fibre Quality Using Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Ghosh, Anindya; Das, Subhasis; Banerjee, Debamalya
2013-03-01
The quality and cost of the resulting yarn play a significant role in determining its end application. The challenging task of any spinner lies in producing a good quality yarn with an added cost benefit. The present work performs a multi-objective optimization of two objectives, viz. maximization of cotton yarn strength and minimization of raw material quality. The first objective function has been formulated based on the artificial neural network input-output relation between cotton fibre properties and yarn strength. The second objective function is formulated with the well-known regression equation of the spinning consistency index. These two objectives are conflicting in nature, i.e., no single combination of cotton fibre parameters exists which produces maximum yarn strength and minimum cotton fibre quality simultaneously. Therefore, the problem has several optimal solutions from which a trade-off is needed depending upon the requirements of the user. In this work, the optimal solutions are obtained with an elitist multi-objective evolutionary algorithm based on Non-dominated Sorting Genetic Algorithm II (NSGA-II). These optimum solutions may lead to the efficient exploitation of raw materials to produce better quality yarns at low costs.
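The full NSGA-II machinery (crowding distance, elitist selection, genetic operators) is not reproduced here; the sketch below shows only its core step, non-dominated (Pareto) filtering of candidate solutions for two minimization objectives. The objective functions named in the comments are placeholders for the paper's ANN strength model and spinning consistency index.

```python
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated points for a minimization problem.

    F : (n, 2) objective matrix, e.g. column 0 = -yarn_strength (maximization
        recast as minimization) and column 1 = fibre-quality index.
    """
    n = F.shape[0]
    nd = np.ones(n, dtype=bool)
    for i in range(n):
        # point i is dominated if some point is no worse in all objectives and better in one
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            nd[i] = False
    return nd

# Example with random candidate fibre-parameter combinations X scored by two
# placeholder objectives strength(X) and sci(X):
# F = np.column_stack([-strength(X), sci(X)]); trade_off_set = X[pareto_front(F)]
```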
A CNN based neurobiology inspired approach for retinal image quality assessment.
Mahapatra, Dwarikanath; Roy, Pallab K; Sedai, Suman; Garnavi, Rahil
2016-08-01
Retinal image quality assessment (IQA) algorithms use different hand crafted features for training classifiers without considering the working of the human visual system (HVS) which plays an important role in IQA. We propose a convolutional neural network (CNN) based approach that determines image quality using the underlying principles behind the working of the HVS. CNNs provide a principled approach to feature learning and hence higher accuracy in decision making. Experimental results demonstrate the superior performance of our proposed algorithm over competing methods.
USDA-ARS?s Scientific Manuscript database
Bio-optical algorithms have been applied to monitor water quality in surface water systems. Empirical algorithms, such as Ritchie (2008), Gons (2008), and Gilerson (2010), have been applied to estimate the chlorophyll-a (chl-a) concentrations. However, the performance of each algorithm severely degr...
Duan, Li; Guo, Long; Liu, Ke; Liu, E-Hu; Li, Ping
2014-04-25
Citrus herbs have been widely used in traditional medicine and cuisine in China and other countries since the ancient time. However, the authentication and quality control of Citrus herbs has always been a challenging task due to their similar morphological characteristics and the diversity of the multi-components existed in the complicated matrix. In the present investigation, we developed a novel strategy to characterize and classify seven Citrus herbs based on chromatographic analysis and chemometric methods. Firstly, the chemical constituents in seven Citrus herbs were globally characterized by liquid chromatography combined with quadrupole time-of-flight mass spectrometry (LC-QTOF-MS). Based on their retention time, UV spectra and MS fragmentation behavior, a total of 75 compounds were identified or tentatively characterized in these herbal medicines. Secondly, a segmental monitoring method based on LC-variable wavelength detection was developed for simultaneous quantification of ten marker compounds in these Citrus herbs. Thirdly, based on the contents of the ten analytes, genetic algorithm optimized support vector machines (GA-SVM) was employed to differentiate and classify the 64 samples covering these seven herbs. The obtained classifier showed good prediction performance and the overall prediction accuracy reached 96.88%. The proposed strategy is expected to provide new insight for authentication and quality control of traditional herbs. Copyright © 2014 Elsevier B.V. All rights reserved.
Clustering and Flow Conservation Monitoring Tool for Software Defined Networks
Puente Fernández, Jesús Antonio
2018-01-01
Prediction systems face challenges on two fronts: on the one hand, the relation between video quality and the observed session features, and on the other hand, dynamic changes in video quality. Software Defined Networks (SDN) is a new concept of network architecture that provides the separation of the control plane (controller) and the data plane (switches) in network devices. Due to the existence of the southbound interface, it is possible to deploy monitoring tools to obtain the network status and retrieve a collection of statistics. Therefore, achieving the most accurate statistics depends on a strategy of monitoring and information requests to network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure the traffic flow in SDN networks. The algorithm is based on grouping network switches into clusters according to their number of ports in order to apply different monitoring techniques. The grouping avoids monitoring queries to network switches with common characteristics and thus omits redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, maintaining similar measurement values while decreasing the number of queries to the switches. PMID:29614049
Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan
2015-02-05
Electronic Medical Records (EMRs) are increasingly used in the provision of primary care and have been compiled into databases which can be utilized for surveillance, research and informing practice. The primary purpose of these records is for the provision of individual patient care; validation and examination of underlying limitations is crucial for use for research and data quality improvement. This study examines and describes the validity of chronic disease case definition algorithms and factors affecting data quality in a primary care EMR database. A retrospective chart audit of an age stratified random sample was used to validate and examine diagnostic algorithms applied to EMR data from the Manitoba Primary Care Research Network (MaPCReN), part of the Canadian Primary Care Sentinel Surveillance Network (CPCSSN). The presence of diabetes, hypertension, depression, osteoarthritis and chronic obstructive pulmonary disease (COPD) was determined by review of the medical record and compared to algorithm identified cases to identify discrepancies and describe the underlying contributing factors. The algorithm for diabetes had high sensitivity, specificity and positive predictive value (PPV) with all scores being over 90%. Specificities of the algorithms were greater than 90% for all conditions except for hypertension at 79.2%. The largest deficits in algorithm performance included poor PPV for COPD at 36.7% and limited sensitivity for COPD, depression and osteoarthritis at 72.0%, 73.3% and 63.2% respectively. Main sources of discrepancy included missing coding, alternative coding, inappropriate diagnosis detection based on medications used for alternate indications, inappropriate exclusion due to comorbidity and loss of data. Comparison to medical chart review shows that at MaPCReN the CPCSSN case finding algorithms are valid with a few limitations. This study provides the basis for the validated data to be utilized for research and informs users of its limitations. Analysis of underlying discrepancies provides the ability to improve algorithm performance and facilitate improved data quality.
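The validation statistics reported in such chart-audit studies are standard confusion-matrix measures. A minimal sketch, assuming paired per-patient boolean judgments from the algorithm and from chart review (the input format is an assumption, not the MaPCReN data structure):

```python
def validation_metrics(algorithm_positive, chart_positive):
    """Sensitivity, specificity, PPV and NPV of an EMR case-finding algorithm
    against chart review, from paired boolean lists (one entry per patient)."""
    tp = sum(a and c for a, c in zip(algorithm_positive, chart_positive))
    fp = sum(a and not c for a, c in zip(algorithm_positive, chart_positive))
    fn = sum(not a and c for a, c in zip(algorithm_positive, chart_positive))
    tn = sum(not a and not c for a, c in zip(algorithm_positive, chart_positive))
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else None,
        "specificity": tn / (tn + fp) if tn + fp else None,
        "ppv":         tp / (tp + fp) if tp + fp else None,
        "npv":         tn / (tn + fn) if tn + fn else None,
    }
```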
Lehmann, Ronny; Thiessen, Christiane; Frick, Barbara; Bosse, Hans Martin; Nikendei, Christoph; Hoffmann, Georg Friedrich; Tönshoff, Burkhard; Huwendiek, Sören
2015-07-02
E-learning and blended learning approaches gain more and more popularity in emergency medicine curricula. So far, little data is available on the impact of such approaches on procedural learning and skill acquisition and their comparison with traditional approaches. This study investigated the impact of a blended learning approach, including Web-based virtual patients (VPs) and standard pediatric basic life support (PBLS) training, on procedural knowledge, objective performance, and self-assessment. A total of 57 medical students were randomly assigned to an intervention group (n=30) and a control group (n=27). Both groups received paper handouts in preparation of simulation-based PBLS training. The intervention group additionally completed two Web-based VPs with embedded video clips. Measurements were taken at randomization (t0), after the preparation period (t1), and after hands-on training (t2). Clinical decision-making skills and procedural knowledge were assessed at t0 and t1. PBLS performance was scored regarding adherence to the correct algorithm, conformance to temporal demands, and the quality of procedural steps at t1 and t2. Participants' self-assessments were recorded in all three measurements. Procedural knowledge of the intervention group was significantly superior to that of the control group at t1. At t2, the intervention group showed significantly better adherence to the algorithm and temporal demands, and better procedural quality of PBLS in objective measures than did the control group. These aspects differed between the groups even at t1 (after VPs, prior to practical training). Self-assessments differed significantly only at t1 in favor of the intervention group. Training with VPs combined with hands-on training improves PBLS performance as judged by objective measures.
Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan
2014-01-01
Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.