Visual saliency-based fast intracoding algorithm for high efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin
2017-01-01
Intraprediction has been significantly improved in high efficiency video coding (HEVC) over H.264/AVC, with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the computational complexity. Experimental results show that the proposed fast method reduces the encoding time of the current HM reference software by about 57% with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm incurs reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
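To make the perceptual CU decision concrete, here is a minimal sketch of how a saliency map might gate the CU depth search; the thresholds, the depth mapping, and the function name cu_depth_range are illustrative assumptions, not the paper's actual decision rule.

```python
import numpy as np

def cu_depth_range(saliency_block):
    """Map the mean visual saliency of a 64x64 CTU to a restricted range
    of CU depths (0 -> 64x64 ... 3 -> 8x8). Thresholds are illustrative."""
    s = float(np.mean(saliency_block))
    if s < 0.2:        # flat, inconspicuous region: large CUs suffice
        return (0, 1)
    elif s < 0.6:      # moderate saliency: mid-range partitions
        return (1, 2)
    else:              # highly salient detail: allow small CUs
        return (2, 3)

# Example: restrict the recursive CU split search for one CTU.
saliency_map = np.random.rand(64, 64)   # stand-in for a saliency detector
min_d, max_d = cu_depth_range(saliency_map)
print(f"search CU depths {min_d}..{max_d} instead of 0..3")
```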
NASA Astrophysics Data System (ADS)
Lei, Ted Chih-Wei; Tseng, Fan-Shuo
2017-07-01
This paper addresses the problem of high-complexity decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally high-complexity encoder algorithms, namely motion estimation and mode decision. To reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding-block-based WZVC not only decreases decoder complexity to approximately one hundredth of that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.
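The decoder-side motion search in this style of codec can be illustrated with a boundary matching sketch; the block size, border width, and search range below are assumed values, and the real scheme adds partial matching and four block mode types.

```python
import numpy as np

def boundary_match_cost(ref, rec, x, y, dx, dy, B=8, w=2):
    """SAD between the w-pixel top/left border already reconstructed around
    the current block at (x, y) and the same border of the motion-shifted
    candidate in the reference frame (frame edges not handled in this sketch)."""
    cost = np.abs(rec[y - w:y, x:x + B] -
                  ref[y + dy - w:y + dy, x + dx:x + dx + B]).sum()
    cost += np.abs(rec[y:y + B, x - w:x] -
                   ref[y + dy:y + dy + B, x + dx - w:x + dx]).sum()
    return cost

def decoder_side_mv(ref, rec, x, y, search=4):
    """Pick the candidate motion vector with the lowest boundary match cost."""
    candidates = [(dx, dy) for dx in range(-search, search + 1)
                           for dy in range(-search, search + 1)]
    return min(candidates, key=lambda mv: boundary_match_cost(ref, rec, x, y, *mv))
```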
NASA Astrophysics Data System (ADS)
Li, Jiao; Hu, Guijun; Gong, Caili; Li, Li
2018-02-01
In this paper, we propose a hybrid time-frequency domain sign-sign joint decision multimodulus algorithm (Hybrid-SJDMMA) for mode demultiplexing in a 6 × 6 mode division multiplexing (MDM) system with high-order QAM modulation. The equalization performance of Hybrid-SJDMMA was evaluated and compared with the frequency domain multimodulus algorithm (FD-MMA) and the hybrid time-frequency domain sign-sign multimodulus algorithm (Hybrid-SMMA). Simulation results revealed that Hybrid-SJDMMA exhibits significantly lower computational complexity than FD-MMA, with a similar convergence speed. Additionally, the bit-error-rate performance of Hybrid-SJDMMA was clearly better than that of FD-MMA and Hybrid-SMMA for 16 QAM and 64 QAM.
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 × 4 DCT coefficients between the current macroblock and the co-located macroblock. That algorithm achieves significant computational savings, but its performance is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm achieves an average 27% reduction in total encoding time compared to the original zero-block decision algorithm.
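The early zero-block idea, replacing DCT/Q with a cheap test, can be sketched as follows; the SAD-based threshold test is a stand-in for the paper's detection method, and qp_threshold is a hypothetical QP-dependent constant.

```python
import numpy as np

def count_zero_blocks(residual_mb, qp_threshold):
    """Count 4x4 sub-blocks of a 16x16 residual macroblock predicted to
    quantize to all zeros. Instead of running DCT and quantization, a SAD
    test against a QP-dependent threshold is used, in the spirit of early
    zero-block detection (the threshold model here is illustrative)."""
    zeros = 0
    for by in range(0, 16, 4):
        for bx in range(0, 16, 4):
            sad = np.abs(residual_mb[by:by + 4, bx:bx + 4]).sum()
            if sad < qp_threshold:   # block would survive DCT/Q as all zeros
                zeros += 1
    return zeros
```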
NASA Astrophysics Data System (ADS)
Li, Mian-Shiuan; Chen, Mei-Juan; Tai, Kuang-Han; Sue, Kuen-Liang
2013-12-01
This article proposes a fast mode decision algorithm based on the correlation between the just-noticeable-difference (JND) and the rate distortion cost (RD cost) to reduce the computational complexity of H.264/AVC. First, the relationship between the average RD cost and the number of JND pixels is modeled by Gaussian distributions. The RD cost of the Inter 16 × 16 mode is then compared with thresholds predicted from these models for fast mode selection. In addition, we use the image content, the residual data, and the JND visual model for horizontal/vertical detection, and then use the result to predict the partitioning of a macroblock. Experimental results show that the proposed algorithm achieves substantial time savings while effectively maintaining coding performance and visual quality.
NASA Astrophysics Data System (ADS)
Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir
2015-11-01
The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly acquired from the newly allowed prediction schemes, including variable block modes. However, these schemes require high computational complexity to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast intermode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on video content features. Extensive experimental work has been done in the high profile, and results show that the proposed source-coding algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC by incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in total bit rate.
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and the dual-mode constant modulus algorithm, in terms of both convergence performance and SER performance for nonlinear equalization.
An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.
Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P
2009-01-01
Epilepsy is a neurological disorder that affects around 50 million people worldwide, and seizure detection is an important component in its diagnosis. In this study, the Empirical Mode Decomposition (EMD) method was applied to the development of an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum-duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
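A minimal sketch of this detection pipeline, assuming the PyEMD package for the decomposition and placeholder values for the energy threshold and the minimum duration:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def detect_seizure(eeg, fs, energy_thresh, min_dur_s=5.0):
    """Decompose an EEG channel into IMFs, compute 1-s window energies of
    each IMF, and flag a seizure when energy stays above a threshold for a
    minimum duration. Threshold and duration values are placeholders; the
    paper tunes its decision rule on the Freiburg records."""
    imfs = EMD().emd(eeg)                       # intrinsic mode functions
    win = int(fs)                               # 1-s analysis windows
    for imf in imfs:
        n_win = len(imf) // win
        energy = np.array([np.sum(imf[i * win:(i + 1) * win] ** 2)
                           for i in range(n_win)])
        run = 0                                 # consecutive windows above threshold
        for above in energy > energy_thresh:
            run = run + 1 if above else 0
            if run >= min_dur_s:                # windows are 1 s, so run ~ seconds
                return True
    return False
```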
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, the complexity budget is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.
Demultiplexing based on frequency-domain joint decision MMA for MDM system
NASA Astrophysics Data System (ADS)
Caili, Gong; Li, Li; Guijun, Hu
2016-06-01
In this paper, we propose a demultiplexing method based on a frequency-domain joint decision multi-modulus algorithm (FD-JDMMA) for mode division multiplexing (MDM) systems. The performance of FD-JDMMA is compared with the frequency-domain multi-modulus algorithm (FD-MMA) and the frequency-domain least mean square (FD-LMS) algorithm. The simulation results show that FD-JDMMA outperforms FD-MMA in terms of BER and convergence speed for mQAM (m = 4, 16, and 64) formats. It is also demonstrated that FD-JDMMA achieves better BER performance and converges faster than FD-LMS for 16QAM and 64QAM. Furthermore, FD-JDMMA maintains a computational complexity similar to that of both equalization algorithms.
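The core multimodulus update behind these MMA variants can be sketched as a per-rail, decision-directed tap adaptation; this single-carrier, single-channel form with an assumed radii set only illustrates the error term, whereas the papers apply it per frequency bin in a 6 × 6 MIMO equalizer.

```python
import numpy as np

def mma_step(w, x, mu, radii):
    """One stochastic-gradient update of a linear equalizer under a
    decision-directed multimodulus criterion: each rail of the output is
    driven toward the nearest modulus in `radii` (plain MMA would use a
    single fixed modulus per rail)."""
    y = np.vdot(w, x)                                    # equalizer output w^H x
    r_re = radii[np.argmin(np.abs(radii - np.abs(y.real)))]
    r_im = radii[np.argmin(np.abs(radii - np.abs(y.imag)))]
    e = (y.real * (y.real**2 - r_re**2)
         + 1j * y.imag * (y.imag**2 - r_im**2))          # multimodulus error
    return w - mu * np.conj(e) * x                       # LMS-style tap update

# Example: 8-tap equalizer with 16-QAM-style rail moduli (illustrative values).
rng = np.random.default_rng(0)
w = np.zeros(8, dtype=complex); w[0] = 1.0
x = rng.normal(size=8) + 1j * rng.normal(size=8)
w = mma_step(w, x, mu=1e-3, radii=np.array([1.0, 3.0]))
```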
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra prediction process of the H.264 video coding standard is used to code the first (intra) frame of a video and achieves better coding efficiency than previous video coding standards. Intra frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the rate-distortion optimization (RDO) method, which increases computational complexity and bit rate and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on fast mode decision for intra frame coding in H.264, based on various techniques, suffered from increased bit rate and degraded picture quality (PSNR) across quantization parameters; many earlier approaches only reduced computational complexity or saved encoding time, at the cost of increased bit rate and loss of picture quality. To avoid the increase in bit rate and the loss of picture quality, this paper develops a Gaussian-pulse approach to intra frame coding using the diagonal down-left intra prediction mode, aiming at higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 block of integer-transformed coefficients by the Gaussian pulse scales the coefficient information in a reversible manner: the frequency samples are modified in a known and controllable way without intermixing of coefficients, which prevents the picture from being badly degraded at higher quantization parameters. The proposed method was implemented in MATLAB with the JM 18.6 reference software. Its performance was measured in terms of PSNR, bit rate, and compression of intra frames of YUV video sequences at QCIF resolution, for different quantization parameters, with the Gaussian value applied to the diagonal down-left intra prediction mode. The simulation results are tabulated and compared with the previous algorithm of Tian et al. The proposed algorithm reduces the bit rate by 30.98% on average and maintains consistent picture quality for QCIF sequences compared with the method of Tian et al.
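The central coefficient-scaling step can be sketched as follows; the Gaussian shape parameter and the helper names are illustrative assumptions, and the surrounding transform/quantization stages are only indicated in comments.

```python
import numpy as np

def gaussian_pulse_4x4(sigma=1.5):
    """4x4 Gaussian weighting matrix (shape parameter is illustrative)."""
    u, v = np.meshgrid(np.arange(4), np.arange(4))
    return np.exp(-(u**2 + v**2) / (2.0 * sigma**2))

def scale_coefficients(T, G):
    """Element-wise, reversible scaling of a 4x4 block of integer-transform
    coefficients before quantization; the decoder divides by G after inverse
    quantization to undo it (up to quantization error)."""
    return T * G

# Encoder order of operations for one 4x4 block (helpers not shown):
#   T  = integer_transform(residual_4x4)          # H.264 4x4 transform
#   Tq = quantize(scale_coefficients(T, gaussian_pulse_4x4()), qp)
```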
2016-01-01
This paper presents an algorithm, for use with a Portable Powered Ankle-Foot Orthosis (i.e., PPAFO) that can automatically detect changes in gait modes (level ground, ascent and descent of stairs or ramps), thus allowing for appropriate ankle actuation control during swing phase. An artificial neural network (ANN) algorithm used input signals from an inertial measurement unit and foot switches, that is, vertical velocity and segment angle of the foot. Output from the ANN was filtered and adjusted to generate a final data set used to classify different gait modes. Five healthy male subjects walked with the PPAFO on the right leg for two test scenarios (walking over level ground and up and down stairs or a ramp; three trials per scenario). Success rate was quantified by the number of correctly classified steps with respect to the total number of steps. The results indicated that the proposed algorithm's success rate was high (99.3%, 100%, and 98.3% for level, ascent, and descent modes in the stairs scenario, respectively; 98.9%, 97.8%, and 100% in the ramp scenario). The proposed algorithm continuously detected each step's gait mode with faster timing and higher accuracy compared to a previous algorithm that used a decision tree based on maximizing the reliability of the mode recognition. PMID:28070188
NASA Astrophysics Data System (ADS)
Bagherzadeh, Seyed Amin; Asadi, Davood
2017-05-01
In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in icing conditions, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by a Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve the disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, to simplify the operation and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified on several benchmark signals. The enhanced algorithm is then applied to simulated flight data in icing conditions in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.
NASA Astrophysics Data System (ADS)
da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.
2014-05-01
Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency compared to previous standards, but with a substantial increase in computational complexity, since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm provides an efficient fast intra prediction compliant with single-view and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture, and it exploits the relationship between prediction units (PUs) of neighboring depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
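A bare-bones version of such a study, a real-coded GA maximizing a tunable multi-hill landscape, might look like this; the population size, operators, and rates are illustrative choices, not those of the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_hill(x, centers, widths):
    """Model landscape: the max over several Gaussian 'hills'."""
    return max(np.exp(-np.sum((x - c)**2) / w) for c, w in zip(centers, widths))

def ga_maximize(f, n_genes, pop=50, gens=200, mut=0.1):
    """Bare-bones real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, genes constrained to [0, 1]."""
    P = rng.random((pop, n_genes))
    for _ in range(gens):
        fit = np.array([f(ind) for ind in P])
        children = []
        for _ in range(pop):
            i, j, k, l = rng.integers(0, pop, 4)
            a = P[i] if fit[i] > fit[j] else P[j]      # tournament winners
            b = P[k] if fit[k] > fit[l] else P[l]
            mask = rng.random(n_genes) < 0.5           # uniform crossover
            child = np.where(mask, a, b)
            child = child + (rng.random(n_genes) < mut) * rng.normal(0, 0.1, n_genes)
            children.append(np.clip(child, 0.0, 1.0))
        P = np.array(children)
    return P[np.argmax([f(ind) for ind in P])]

centers = [rng.random(5) for _ in range(4)]            # 4 hills, 5 genes
best = ga_maximize(lambda x: multi_hill(x, centers, [0.05] * 4), n_genes=5)
```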
Intelligent Medical Systems for Aerospace Emergency Medical Services
NASA Technical Reports Server (NTRS)
Epler, John; Zimmer, Gary
2004-01-01
The purpose of this project is to develop a portable, hands-free device for emergency medical decision support to be used in remote or confined settings by non-physician providers. Phase I of the project will entail the development of a voice-activated device that will utilize an intelligent algorithm to provide guidance in establishing an airway in an emergency situation. The interactive, hands-free software will process requests for assistance based on verbal prompts and algorithmic decision-making. The device will allow the CMO to attend to the patient while receiving verbal instruction. The software will also feature graphic representations where these are judged helpful in aiding procedures. We will also develop a training program to orient users to the algorithmic approach, the use of the hardware, and specific procedural considerations. We will validate the efficacy of this mode of technology application by testing in the Johns Hopkins Department of Emergency Medicine. Phase I of the project will focus on the validation of the proposed algorithm, testing and validation of the decision-making tool, and modifications of medical equipment. In Phase II, we will produce the first-generation software for hands-free, interactive medical decision making for use in acute care environments.
Algorithm for Determination of Orion Ascent Abort Mode Achievability
NASA Technical Reports Server (NTRS)
Tedesco, Mark B.
2011-01-01
For human spaceflight missions, a launch vehicle failure poses the challenge of returning the crew safely to earth through environments that are often much more stressful than the nominal mission. Manned spaceflight vehicles require continuous abort capability throughout the ascent trajectory to protect the crew in the event of a failure of the launch vehicle. To provide continuous abort coverage during the ascent trajectory, different types of Orion abort modes have been developed. If a launch vehicle failure occurs, the crew must be able to quickly and accurately determine the appropriate abort mode to execute. Early in the ascent, while the Launch Abort System (LAS) is attached, abort mode selection is trivial, and any failures will result in a LAS abort. For failures after LAS jettison, the Service Module (SM) effectors are employed to perform abort maneuvers. Several different SM abort mode options are available depending on the current vehicle location and energy state. During this region of flight the selection of the abort mode that maximizes the survivability of the crew becomes non-trivial. To provide the most accurate and timely information to the crew and the onboard abort decision logic, on-board algorithms have been developed to propagate the abort trajectories based on the current launch vehicle performance and to predict the current abort capability of the Orion vehicle. This paper will provide an overview of the algorithm architecture for determining abort achievability as well as the scalar integration scheme that makes the onboard computation possible. Extension of the algorithm to assessing abort coverage impacts from Orion design modifications and launch vehicle trajectory modifications is also presented.
An overview of the essential differences and similarities of system identification techniques
NASA Technical Reports Server (NTRS)
Mehra, Raman K.
1991-01-01
Information is given in the form of outlines, graphs, tables and charts. Topics include system identification, Bayesian statistical decision theory, Maximum Likelihood Estimation, identification methods, structural mode identification using a stochastic realization algorithm, and identification results regarding membrane simulations and X-29 flutter flight test data.
Decision algorithm for data center vortex beam receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2017-12-01
We present a new scheme for a vortex beam communications system which exploits the radial component p of Laguerre-Gauss modes in addition to the azimuthal component l generally used. We derive a new encoding algorithm which makes use of the spatial distribution of intensity to create an alphabet dictionary for communication. We suggest an application of the scheme as part of an optical wireless link for intra-data-center communication. We investigate the probability of error in decoding for several detector options.
Transportation Modes Classification Using Sensors on Smartphones.
Fang, Shih-Hau; Liao, Hao-Hsiang; Fei, Yu-Xiang; Chen, Kai-Hsiang; Huang, Jen-Wei; Lu, Yu-Ding; Tsao, Yu
2016-08-19
This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper are the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms, namely decision trees, K-nearest neighbor, and support vector machines, to classify the user's transportation and vehicular modes. In the experiments, we discuss and compare performance from different perspectives, including the accuracy for both modes, the execution time, and the model size. Results show that the proposed features enhance the accuracy, with the support vector machine providing the best classification accuracy while consuming the largest prediction time. This paper also investigates the vehicle classification mode and compares the results with those of the transportation modes.
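A compact sketch of this comparison using scikit-learn, with synthetic stand-ins for the windowed sensor features (the real study extracts features such as per-axis statistics from accelerometer, magnetometer, and gyroscope windows):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))       # stand-in for windowed sensor features
y = rng.integers(0, 5, size=600)     # five transportation/vehicular modes

models = {
    "decision tree": DecisionTreeClassifier(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()   # classification accuracy
    print(f"{name}: mean CV accuracy {acc:.3f}")
```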
Direct adaptive performance optimization of subsonic transports: A periodic perturbation technique
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn
1995-01-01
Aircraft performance can be optimized at the flight condition by using available redundancy among actuators. Effective use of this potential allows improved performance beyond limits imposed by design compromises. Optimization based on nominal models does not result in the best performance of the actual aircraft at the actual flight condition. An adaptive algorithm for optimizing performance parameters, such as speed or fuel flow, in flight based exclusively on flight data is proposed. The algorithm is inherently insensitive to model inaccuracies and measurement noise and biases, and it can optimize several decision variables at the same time. An adaptive constraint controller integrated into the algorithm regulates the optimization constraints, such as altitude or speed, without requiring any prior knowledge of the autopilot design. The algorithm has a modular structure which allows easy incorporation (or removal) of optimization constraints or decision variables into the optimization problem. An important part of the contribution is the development of analytical tools enabling convergence analysis of the algorithm and the establishment of simple design rules. The fuel-flow minimization and velocity maximization modes of the algorithm are demonstrated on the NASA Dryden B-720 nonlinear flight simulator for the single- and multi-effector optimization cases.
Stitzel, Joel D; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Schoell, Samantha L; Doud, Andrea N; Martin, R Shayn; Meredith, J Wayne
2016-06-01
Advanced Automatic Crash Notification algorithms use vehicle telemetry measurements to predict risk of serious motor vehicle crash injury. The objective of the study was to develop an Advanced Automatic Crash Notification algorithm to reduce response time, increase triage efficiency, and improve patient outcomes by minimizing undertriage (<5%) and overtriage (<50%), as recommended by the American College of Surgeons. A list of injuries associated with a patient's need for Level I/II trauma center treatment known as the Target Injury List was determined using an approach based on 3 facets of injury: severity, time sensitivity, and predictability. Multivariable logistic regression was used to predict an occupant's risk of sustaining an injury on the Target Injury List based on crash severity and restraint factors for occupants in the National Automotive Sampling System - Crashworthiness Data System 2000-2011. The Advanced Automatic Crash Notification algorithm was optimized and evaluated to minimize triage rates, per American College of Surgeons recommendations. The following rates were achieved: <50% overtriage and <5% undertriage in side impacts and 6% to 16% undertriage in other crash modes. Nationwide implementation of our algorithm is estimated to improve triage decisions for 44% of undertriaged and 38% of overtriaged occupants. Annually, this translates to more appropriate care for >2,700 seriously injured occupants and reduces unnecessary use of trauma center resources for >162,000 minimally injured occupants. The algorithm could be incorporated into vehicles to inform emergency personnel of recommended motor vehicle crash triage decisions. Lower under- and overtriage was achieved, and nationwide implementation of the algorithm would yield improved triage decision making for an estimated 165,000 occupants annually. Copyright © 2016. Published by Elsevier Inc.
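The shape of such an algorithm, risk prediction plus a threshold chosen against the ACS triage targets, can be sketched as below; the synthetic data, features, and threshold scan are illustrative assumptions, not the study's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for NASS-CDS-style predictors (delta-V, belt use,
# multiple impacts, ...); y = 1 marks occupants with Target Injury List injuries.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=5000) > 1.5).astype(int)

risk = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

def triage_rates(thresh):
    send = risk >= thresh                       # recommend Level I/II trauma center
    under = np.mean(~send[y == 1])              # seriously injured but not sent
    over = np.mean(y[send] == 0) if send.any() else 0.0  # sent but not seriously injured
    return under, over

# Scan for a threshold meeting the ACS targets: undertriage <5%, overtriage <50%.
for t in np.linspace(0.01, 0.99, 99):
    u, o = triage_rates(t)
    if u < 0.05 and o < 0.50:
        print(f"threshold {t:.2f}: undertriage {u:.1%}, overtriage {o:.1%}")
        break
else:
    print("no threshold meets both targets")
```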
Li, Kejia; Warren, Steve; Natarajan, Balasubramaniam
2012-02-01
Onboard assessment of photoplethysmogram (PPG) quality could reduce unnecessary data transmission on battery-powered wireless pulse oximeters and improve the viability of the electronic patient records to which these data are stored. Such algorithms promise to increase the intelligence level of formerly "dumb" medical devices: devices that acquire and forward data but leave data interpretation to the clinician or host system. To this end, the authors have developed a unique onboard feature detection algorithm to assess the quality of PPGs acquired with a custom reflectance-mode, wireless pulse oximeter. The algorithm uses a Bayesian hypothesis testing method to analyze four features extracted from raw and decimated PPG data in order to determine whether the original data comprise valid PPG waveforms or whether they are corrupted by motion or other environmental influences. Based on these results, the algorithm further calculates heart rate and blood oxygen saturation from a "compact representation" structure. PPG data were collected from 47 subjects to train the feature detection algorithm and to gauge its performance. A MATLAB interface was also developed to visualize the features extracted, the algorithm flow, and the decision results, where all algorithm-related parameters and decisions were ascertained on the wireless unit prior to transmission. For the data sets acquired here, the algorithm was 99% effective in identifying clean, usable PPGs versus nonsaturated data that did not demonstrate meaningful pulsatile waveshapes, PPGs corrupted by motion artifact, and data affected by signal saturation.
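A minimal sketch of a Bayesian quality decision of this kind, assuming per-class Gaussian models for four extracted features (the feature values and class statistics below are placeholders, not the paper's trained models):

```python
import numpy as np
from scipy.stats import norm

# Per-class Gaussian feature models (means, stds) estimated offline from
# labeled clean vs. corrupted PPG segments; values here are placeholders.
mu_clean, sd_clean = np.array([1.2, 60, 0.9, 0.2]), np.array([0.3, 15, 0.1, 0.1])
mu_bad, sd_bad = np.array([0.4, 110, 0.5, 0.6]), np.array([0.3, 40, 0.2, 0.3])

def is_valid_ppg(features, prior_clean=0.5):
    """Naive-Bayes style hypothesis test on a 4-feature vector extracted
    from raw and decimated PPG data (the feature choice is illustrative)."""
    ll_clean = norm.logpdf(features, mu_clean, sd_clean).sum() + np.log(prior_clean)
    ll_bad = norm.logpdf(features, mu_bad, sd_bad).sum() + np.log(1 - prior_clean)
    return ll_clean > ll_bad

print(is_valid_ppg(np.array([1.1, 65, 0.85, 0.25])))   # -> True for clean-like features
```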
Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution
NASA Technical Reports Server (NTRS)
Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria
2009-01-01
The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.
Algorithms and Results of Eye Tissues Differentiation Based on RF Ultrasound
Jurkonis, R.; Janušauskas, A.; Marozas, V.; Jegelevičius, D.; Daukantas, S.; Patašius, M.; Paunksnis, A.; Lukoševičius, A.
2012-01-01
Algorithms and software were developed for analysis of B-scan ultrasonic signals acquired from a commercial diagnostic ultrasound system. The algorithms process raw ultrasonic signals in the backscattered spectrum domain, which is obtained using two time-frequency methods: the short-time Fourier and Hilbert-Huang transformations. The signals from selected regions of eye tissues are characterized by the following parameters: B-scan envelope amplitude, approximated spectral slope, approximated spectral intercept, mean instantaneous frequency, mean instantaneous bandwidth, and parameters of the Nakagami distribution characterizing the Hilbert-Huang transformation output. The backscattered ultrasound signal parameters characterizing intraocular and orbit tissues were processed by a decision tree data mining algorithm. A pilot trial showed that the applied methods are able to correctly classify signals from corpus vitreum blood, extraocular muscle, and orbit tissues: in 26 cases of classification into these three classes, only one error occurred. In this pilot classification, the spectral intercept and the Nakagami parameter for the instantaneous frequency distribution of the 1st intrinsic mode function were found to be specific for corpus vitreum blood, orbit, and extraocular muscle tissues. We conclude that ultrasound data should be further collected in a clinical database to establish the background for a decision support system for noninvasive ocular tissue differentiation. PMID:22654643
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against complex natural backgrounds. Because infrared polarization images can clearly distinguish targets from backgrounds with different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; the low-frequency part is handled with the usual weighted-average method. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3*3 window is calculated, and the regional signal intensity ratio of the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion result is closely related to the threshold set in this module. Instead of the commonly used experimental approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also across a large number of images.
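A sketch of this fusion rule using PyWavelets; the wavelet, decomposition level, threshold, and fallback mixing rule are illustrative assumptions (in the paper the threshold itself is chosen by quadratic interpolation optimization):

```python
import numpy as np
import pywt                          # PyWavelets
from scipy.ndimage import convolve

def fuse(img_a, img_b, thresh=0.6, level=2, wavelet="db2"):
    """Wavelet fusion sketch: average the low-frequency band; for each
    high-frequency band, compare 3x3 regional signal strength and take the
    stronger source, falling back to a ratio-weighted mix when neither
    side dominates (the decision rule and threshold are illustrative)."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # low-frequency: weighted average
    k = np.ones((3, 3))
    for ha, hb in zip(ca[1:], cb[1:]):               # per-level detail bands
        bands = []
        for a, b in zip(ha, hb):
            ea = convolve(a**2, k)                   # 3x3 regional signal strength
            eb = convolve(b**2, k)
            ratio = ea / (ea + eb + 1e-12)           # matching measure
            mix = ratio * a + (1 - ratio) * b
            bands.append(np.where(ratio > thresh, a,
                         np.where(ratio < 1 - thresh, b, mix)))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```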
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pullum, Laura L; Symons, Christopher T
2011-01-01
Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.
Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing
2015-01-01
A novel blind recognition algorithm for frame synchronization words is proposed to recover the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is derived in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on hard decisions, utilizing soft decisions can improve recognition accuracy. Therefore, combining the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other modulation formats. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words, and that the improved algorithm clearly enhances recognition accuracy.
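The hard-decision variant can be illustrated by a column-consistency search over a bit stream folded at an assumed frame length; real recognizers also scan candidate frame lengths and offsets, and the soft-decision version replaces hard bits with reliability values.

```python
import numpy as np

def find_sync_word(bits, frame_len, sync_len):
    """Hard-decision sketch: fold the bit stream into rows of one assumed
    frame length and look for sync_len consecutive positions whose bits are
    (nearly) constant across frames -- the synchronization word."""
    n_frames = len(bits) // frame_len
    M = np.reshape(bits[:n_frames * frame_len], (n_frames, frame_len))
    consistency = np.abs(M.mean(axis=0) - 0.5) * 2.0   # 1.0 = identical in every frame
    scores = np.convolve(consistency, np.ones(sync_len), "valid") / sync_len
    start = int(np.argmax(scores))
    word = (M[:, start:start + sync_len].mean(axis=0) > 0.5).astype(int)
    return start, word, scores[start]
```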
Energy-saving EPON Bandwidth Allocation Algorithm Supporting ONU's Sleep Mode
NASA Astrophysics Data System (ADS)
Zhang, Yinfa; Ren, Shuai; Liao, Xiaomin; Fang, Yuanyuan
2014-09-01
A new bandwidth allocation algorithm is presented that combines the merits of the IPACT algorithm and the cyclic DBA algorithm, building on the DBA algorithm for the ONU's sleep mode. Simulation results indicate that, compared with an ONU in normal mode, the ONU's sleep mode can save about 74% of energy. The new algorithm has a smaller average packet delay and queue length in the upstream direction. In the downstream direction, the average packet delay of the new algorithm is less than the polling cycle Tcycle, and the average queue length is less than the product of Tcycle and the maximum link rate. The new algorithm achieves a better compromise between saving energy and ensuring quality of service.
Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus
NASA Astrophysics Data System (ADS)
Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.
2014-09-01
There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving, and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g., Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter, and the intrinsically low false alarm rate of detecting signature candidates in 3-D; it is based on an iterative method called "RANdom SAmple Consensus" (RANSAC) and can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers, while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and with a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center are used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter; in this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in stare mode, contained the signatures of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
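A minimal RANSAC fit of a constant-velocity track to thresholded detections, in the spirit of RANSAC-MT; tolerances, iteration count, and the stellar-clutter rejection step are simplified assumptions.

```python
import numpy as np

def ransac_mt(detections, n_iter=500, tol=1.5, min_inliers=6, rng=None):
    """RANSAC fit of a constant-velocity track to candidate detections
    (t, x, y): sample two detections from different frames, derive a
    velocity hypothesis, and count the detections consistent with it."""
    rng = rng or np.random.default_rng(0)
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(detections), 2, replace=False)
        t1, x1, y1 = detections[i]
        t2, x2, y2 = detections[j]
        if t1 == t2:
            continue
        v = np.array([x2 - x1, y2 - y1]) / (t2 - t1)     # velocity hypothesis
        pred = np.array([[x1 + v[0] * (t - t1), y1 + v[1] * (t - t1)]
                         for t, _, _ in detections])
        err = np.hypot(pred[:, 0] - detections[:, 1], pred[:, 1] - detections[:, 2])
        inliers = np.flatnonzero(err < tol)
        if len(inliers) >= min_inliers and (best is None or len(inliers) > len(best)):
            best = inliers
    return best        # indices of a consistent track, or None

# Demo: one linear track buried in random clutter.
dets = np.array([[t, 10 + 0.8 * t, 5 + 0.3 * t] for t in range(10)]
                + [[t, 100 * np.random.rand(), 100 * np.random.rand()]
                   for t in range(10)])
print(ransac_mt(dets))
```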
2018-01-30
algorithms. Due to this, Fusion was built with the goal of extensibility throughout the architecture. The Fusion infrastructure enables software... b. Trigger a Highly Mobile ...modes were developed in IMPACT (i.e., normal full coverage patrol (NFCP) and highly mobile (HM)). In both NFCP and HM, all UxVs patrol their assigned
New mode switching algorithm for the JPL 70-meter antenna servo controller
NASA Technical Reports Server (NTRS)
Nickerson, J. A.
1988-01-01
The design of control mode switching algorithms and logic for JPL's 70 m antenna servo controller are described. The old control mode switching logic was reviewed and perturbation problems were identified. Design approaches for mode switching are presented and the final design is described. Simulations used to compare old and new mode switching algorithms and logic show that the new mode switching techniques will significantly reduce perturbation problems.
Performance evaluations of demons and free form deformation algorithms for the liver region.
Wang, Hui; Gong, Guanzhong; Wang, Hongjun; Li, Dengwang; Yin, Yong; Lu, Jie
2014-04-01
We investigated the influence of breathing motion on radiation therapy using four-dimensional computed tomography (4D-CT) and showed that registration of the 4D-CT images is significant. The demons algorithm in two interpolation modes was compared with the FFD model algorithm for registering the different phase images of 4D-CT in tumor tracking, using iodipin for verification. Linear interpolation was used in both mode 1 and mode 2: mode 1 set outside pixels to the nearest pixel, while mode 2 set outside pixels to zero. We used normalized mutual information (NMI), the sum of squared differences, the modified Hausdorff distance, and registration speed to evaluate the performance of each algorithm. The average NMI after registration with demons mode 1 improved by 1.76% and 4.75% compared with mode 2 and the FFD model algorithm, respectively. Further, the modified Hausdorff distance showed no difference between demons modes 1 and 2, but mode 1 was 15.2% lower than FFD. Finally, the demons algorithm has a clear advantage in registration speed. The demons algorithm in mode 1 was therefore found to be much more suitable for the registration of 4D-CT images. The subtraction of floating and reference images before and after registration by demons further verified that the influence of breathing motion cannot be ignored and that the demons registration method is feasible.
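For reference, a demons registration of two 4D-CT phases can be sketched with SimpleITK as below; the iteration count and smoothing are illustrative, and the out-of-bounds handling corresponds only loosely to the paper's mode 1/mode 2 interpolation settings.

```python
import SimpleITK as sitk

def demons_register(fixed, moving, iterations=100, sigma=1.5):
    """Demons registration of two 4D-CT phase images (parameter values are
    illustrative). Resampling with a zero fill value loosely mirrors the
    paper's mode 2 handling of outside pixels."""
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(sigma)          # smoothing of the deformation field
    field = demons.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                           sitk.Cast(moving, sitk.sitkFloat32))
    tx = sitk.DisplacementFieldTransform(sitk.Cast(field, sitk.sitkVectorFloat64))
    warped = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)  # 0.0 ~ mode 2 fill
    return warped, field
```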
EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal
NASA Astrophysics Data System (ADS)
Chen, Yong; Wu, Chun-ting; Liu, Huan-lin
2017-07-01
Noise may reduce the demodulation accuracy of fiber Bragg grating (FBG) sensing signals and thus degrade the quality of sensing detection, so recovering the signal from observed noisy data is necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove the noise from the signal. Empirical mode decomposition (EMD) is first used to decompose the signal into a set of modes. Pseudo-mode cancellation is introduced to identify and eliminate false modes, and the mutual information (MI) of partial modes is then calculated; MI is used to estimate the critical point between the high- and low-frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than traditional algorithms for FBG spectral signals. Compared with similar algorithms, the signal-to-noise ratio can be improved by more than 10 dB after processing, and the correlation coefficient can be increased by 0.5, demonstrating a better de-noising effect.
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge, and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increasing recall), but at the expense of returning additional irrelevant documents (decreasing precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by applying standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
Identification and Reconfigurable Control of Impaired Multi-Rotor Drones
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Bencomo, Alfredo
2016-01-01
The paper presents an algorithm for control and safe landing of impaired multi-rotor drones when one or more motors fail simultaneously or in any sequence. It includes three main components: an identification block, a reconfigurable control block, and a decision-making block. The identification block monitors each motor's load characteristics and the current drawn, based on which failures are detected. The control block generates the required total thrust and three axis torques for altitude, horizontal position, and/or orientation control of the drone based on time-scale separation and nonlinear dynamic inversion. The horizontal displacement is controlled by modulating the roll and pitch angles. The decision-making algorithm maps the total thrust and three torques into the individual motor thrusts based on the information provided by the identification block. The drone continues mission execution as long as the functioning motors provide controllability. Otherwise, the controller is switched to the safe mode, which gives up yaw control and commands a safe landing spot and descent rate while maintaining the horizontal attitude.
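The mapping from commanded thrust/torques to individual motor thrusts can be sketched as a least-squares control allocation over the healthy motors; the mixer matrix constants are illustrative, and the paper's decision block additionally drops the yaw channel when controllability is lost.

```python
import numpy as np

# Quadrotor mixer: maps motor thrusts to [total thrust, roll, pitch, yaw].
# l = arm length, k = drag-to-thrust ratio (illustrative values).
l, k = 0.2, 0.02
B = np.array([[ 1,  1,  1,  1],
              [-l,  l,  l, -l],
              [ l,  l, -l, -l],
              [ k, -k,  k, -k]])

def allocate(wrench, failed=()):
    """Map a commanded [thrust, tau_roll, tau_pitch, tau_yaw] to motor
    thrusts with failed motors zeroed, via least squares on the healthy
    columns. With motors out, yaw authority may be lost; a supervisory
    layer would then drop the yaw channel and command a landing."""
    healthy = [i for i in range(4) if i not in failed]
    u = np.zeros(4)
    u[healthy], *_ = np.linalg.lstsq(B[:, healthy], wrench, rcond=None)
    return np.clip(u, 0.0, None)          # thrusts cannot be negative

print(allocate(np.array([20.0, 0.0, 0.0, 0.0]), failed=(2,)))
```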
A novel artificial bee colony based clustering algorithm for categorical data.
Ji, Jinchao; Pang, Wei; Zheng, Yanlin; Wang, Zhe; Ma, Zhiqiang
2015-01-01
Data with categorical attributes are ubiquitous in the real world. However, existing partitional clustering algorithms for categorical data are prone to fall into local optima. To address this issue, in this paper we propose a novel clustering algorithm, ABC-K-Modes (Artificial Bee Colony clustering based on K-Modes), based on the traditional k-modes clustering algorithm and the artificial bee colony approach. In our approach, we first introduce a one-step k-modes procedure, and then integrate this procedure with the artificial bee colony approach to deal with categorical data. In the search process performed by scout bees, we adopt the multi-source search inspired by the idea of batch processing to accelerate the convergence of ABC-K-Modes. The performance of ABC-K-Modes is evaluated by a series of experiments in comparison with that of the other popular algorithms for categorical data.
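For context, the classical k-modes procedure that ABC-K-Modes builds on can be sketched as follows (the artificial bee colony layer, which explores multiple candidate mode sets to escape local optima, is omitted):

```python
import numpy as np

def k_modes(X, k, n_iter=20, rng=None):
    """Plain k-modes for categorical data: matching dissimilarity (count of
    mismatched attributes) and per-cluster attribute modes as centers. This
    is the one-step procedure that ABC-K-Modes embeds in the bee colony search."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = (X[:, None, :] != centers[None, :, :]).sum(axis=2)  # mismatch counts
        labels = d.argmin(axis=1)
        for j in range(k):                 # new center = column-wise mode
            members = X[labels == j]
            if len(members):
                centers[j] = [np.bincount(col).argmax() for col in members.T]
    return labels, centers

X = np.random.default_rng(1).integers(0, 4, size=(100, 6))   # categorical codes
labels, centers = k_modes(X, k=3)
```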
NASA Astrophysics Data System (ADS)
Korotkova, T. I.; Popova, V. I.
2017-11-01
A generalized mathematical model of decision-making for planning and mode selection that provides the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability, and safety of such a complex system are evaluated according to several indicators simultaneously, in particular pressure, flow, and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. An agreed solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The optimal operating mode of a complex heat supply system is chosen on the basis of an iterative coordination process that converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments where necessary, guaranteeing optimal safety, reliability, and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the allowed deviation of the internal air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and improves the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.
Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network
Abdullah, Radhwan Mohamed; Zukarnain, Zuriati Ahmad
2017-01-01
Transferring large amounts of data between different network locations over network links depends on the network's traffic capacity and data rate. Traditionally, vertical handover for a mobile device is performed considering only one criterion, the received signal strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load, and inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in a heterogeneous wireless network. The algorithm considers three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX), and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority, and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of the handover number probability and the handover failure probability. In addition, the network priority handover decision algorithm produces better results than the equal priority and the mobile priority handover decision algorithms. Finally, the simulation results are validated by the analytical model. PMID:28708067
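A toy version of a multi-criteria handover decision of this kind is shown below; the candidate attribute values and the weight vector are invented for illustration, and the three algorithm variants would differ mainly in how the weights are set.

```python
# Candidate networks with normalized attributes (invented values); for rss
# and bandwidth higher is better, for cost and load lower is better.
candidates = {
    "LTE":   {"rss": 0.7, "bandwidth": 0.6, "cost": 0.6, "load": 0.5},
    "WiMAX": {"rss": 0.6, "bandwidth": 0.8, "cost": 0.5, "load": 0.3},
    "WLAN":  {"rss": 0.9, "bandwidth": 0.9, "cost": 0.1, "load": 0.8},
}
weights = {"rss": 0.4, "bandwidth": 0.3, "cost": 0.2, "load": 0.1}

def score(attrs):
    """Weighted sum with cost and load inverted so every term rewards a
    better network."""
    return (weights["rss"] * attrs["rss"]
            + weights["bandwidth"] * attrs["bandwidth"]
            + weights["cost"] * (1.0 - attrs["cost"])
            + weights["load"] * (1.0 - attrs["load"]))

target = max(candidates, key=lambda n: score(candidates[n]))
print("handover target:", target)
```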
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
Screening Algorithm to Guide Decisions on Whether to Conduct a Health Impact Assessment
Provides a visual aid in the form of a decision algorithm that helps guide discussions about whether to proceed with an HIA. The algorithm can help structure, standardize, and document the decision process.
Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.
Shen, Shijian; Nie, Xin; Zhang, Xinggan
2018-02-03
Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides the sliding spotlight mode for the first time. Sliding spotlight mode is a novel imaging mode that achieves both high resolution and wide swath. Several key technologies for high-resolution sliding spotlight mode in spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated by simulations and measured data.
An Augmentation of G-Guidance Algorithms
NASA Technical Reports Server (NTRS)
Carson, John M. III; Acikmese, Behcet
2011-01-01
The original G-Guidance algorithm provided an autonomous guidance and control policy for small-body proximity operations that took into account uncertainty and dynamics disturbances. However, it lacked robustness with regard to object proximity while in autonomous mode. The modified G-Guidance algorithm was augmented with a second operational mode that allows switching into a safety hover mode. This causes the spacecraft to hover in place until a mission-planning algorithm can compute a safe new trajectory, with no state or control constraints violated. When a new, feasible state trajectory is calculated, the spacecraft returns to standard mode and maneuvers toward the target. The main goal of this augmentation is to protect the spacecraft in the event that a landing surface or obstacle is closer or farther than anticipated. The algorithm can be used for the mitigation of any unexpected trajectory or state changes that occur during standard mode operations.
FaStore - a space-saving solution for raw sequencing data.
Roguski, Lukasz; Ochoa, Idoia; Hernaez, Mikel; Deorowicz, Sebastian
2018-03-29
The affordability of DNA sequencing has led to the generation of unprecedented volumes of raw sequencing data. These data must be stored, processed, and transmitted, which poses significant challenges. To facilitate this effort, we introduce FaStore, a specialized compressor for FASTQ files. FaStore does not use any reference sequences for compression, and permits the user to choose from several lossy modes to improve the overall compression ratio, depending on the specific needs. FaStore in the lossless mode achieves a significant improvement in compression ratio with respect to previously proposed algorithms. We perform an analysis on the effect that the different lossy modes have on variant calling, the most widely used application for clinical decision making, especially important in the era of precision medicine. We show that lossy compression can offer significant compression gains, while preserving the essential genomic information and without affecting the variant calling performance. FaStore can be downloaded from https://github.com/refresh-bio/FaStore. sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online.
Decision tree and ensemble learning algorithms with their applications in bioinformatics.
Che, Dongsheng; Liu, Qi; Rasheed, Khaled; Tao, Xiuping
2011-01-01
Machine learning approaches have wide applications in bioinformatics, and decision tree is one of the successful approaches applied in this field. In this chapter, we briefly review decision tree and related ensemble algorithms and show the successful applications of such approaches on solving biological problems. We hope that by learning the algorithms of decision trees and ensemble classifiers, biologists can get the basic ideas of how machine learning algorithms work. On the other hand, by being exposed to the applications of decision trees and ensemble algorithms in bioinformatics, computer scientists can get better ideas of which bioinformatics topics they may work on in their future research directions. We aim to provide a platform to bridge the gap between biologists and computer scientists.
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, mobile users must first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user and classify the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm, classifying mobile users into Basic service, E-service, Plus service, and Total service user classes and deriving rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and is simpler. PMID:24688389
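A loose analogue in scikit-learn: the paper couples a genetic algorithm with the tree induction itself, whereas here, purely for illustration, a tiny GA tunes tree hyperparameters on a synthetic stand-in data set; every name and value below is a placeholder.

```python
import random
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Toy stand-in for "mobile user" records with 4 service classes.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

def fitness(genome):
    """Cross-validated accuracy of a tree built from a (depth, min_leaf) genome."""
    depth, min_leaf = genome
    tree = DecisionTreeClassifier(max_depth=depth, min_samples_leaf=min_leaf,
                                  random_state=0)
    return cross_val_score(tree, X, y, cv=3).mean()

def mutate(genome):
    depth, min_leaf = genome
    return (max(1, depth + random.choice([-1, 0, 1])),
            max(1, min_leaf + random.choice([-2, 0, 2])))

population = [(random.randint(2, 12), random.randint(1, 20)) for _ in range(10)]
for _ in range(15):                      # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:5]             # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

best = max(population, key=fitness)
print("best (max_depth, min_samples_leaf):", best)
```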
Backup Attitude Control Algorithms for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
O'Donnell, James R., Jr.; Andrews, Stephen F.; Ericsson-Jackson, Aprille J.; Flatley, Thomas W.; Ward, David K.; Bay, P. Michael
1999-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The MAP spacecraft will perform its mission, studying the early origins of the universe, in a Lissajous orbit around the Earth-Sun L2 Lagrange point. Due to limited mass, power, and financial resources, a traditional reliability concept involving fully redundant components was not feasible. This paper will discuss the redundancy philosophy used on MAP, describe the hardware redundancy selected (and why), and present backup modes and algorithms that were designed in lieu of additional attitude control hardware redundancy to improve the odds of mission success. Three of these modes have been implemented in the spacecraft flight software. The first onboard mode allows the MAP Kalman filter to be used with digital sun sensor (DSS) derived rates, in case of the failure of one of MAP's two two-axis inertial reference units. Similarly, the second onboard mode allows a star tracker only mode, using attitude and derived rate from one or both of MAP's star trackers for onboard attitude determination and control. The last backup mode onboard allows a sun-line angle offset to be commanded that will allow solar radiation pressure to be used for momentum management and orbit stationkeeping. In addition to the backup modes implemented on the spacecraft, two backup algorithms have been developed in the event of less likely contingencies. One of these is an algorithm for implementing an alternative scan pattern to MAP's nominal dual-spin science mode using only one or two reaction wheels and thrusters. Finally, an algorithm has been developed that uses thruster one-shots while in science mode for momentum management. This algorithm has been developed in case system momentum builds up faster than anticipated, to allow adequate momentum management while minimizing interruptions to science. In this paper, each mode and algorithm will be discussed, and simulation results presented.
PRF Ambiguity Determination for Radarsat ScanSAR System
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1998-01-01
PRF ambiguity is a potential problem for a spaceborne SAR operated at high frequencies. For a strip mode SAR, there are several approaches to solve this problem. This paper, however, addresses PRF ambiguity determination algorithms suitable for a burst mode SAR system such as the Radarsat ScanSAR. The candidate algorithms include the wavelength diversity algorithm, the range look cross-correlation algorithm, and the multi-PRF algorithm.
MIMO signal progressing with RLSCMA algorithm for multi-mode multi-core optical transmission system
NASA Astrophysics Data System (ADS)
Bi, Yuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya
2018-01-01
When signals are transmitted over multi-mode multi-core fiber, mode coupling occurs between modes, and mode dispersion also arises because each mode has a different transmission speed in the link. Mode coupling and mode dispersion damage the useful signal in the transmission link, so the receiver needs to process the received signal with digital signal processing and compensate for the damage introduced in the link. We first analyze the influence of mode coupling and mode dispersion in the transmission of signals over multi-mode multi-core fiber, and then present the relationship between the coupling coefficient and the dispersion coefficient. We then carry out adaptive signal processing with MIMO equalizers based on the recursive least squares constant modulus algorithm (RLSCMA). The MIMO equalization algorithm adapts the equalizer taps according to the degree of crosstalk among cores or modes, which eliminates the interference among different modes and cores in the space division multiplexing (SDM) transmission system. The simulation results show that the distorted signals are restored efficiently with fast convergence speed.
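The RLS-accelerated update is not reproduced here; as a minimal sketch, the snippet below shows the plain stochastic-gradient constant modulus update that RLSCMA speeds up, for a single complex tap vector (the function name and step size are illustrative).

```python
import numpy as np

def cma_lms_step(w, x, mu=1e-3, R2=1.0):
    """One stochastic-gradient constant-modulus update (LMS-style CMA).

    w  : complex equalizer taps, shape (L,)
    x  : most recent L received samples, shape (L,)
    R2 : dispersion constant of the constellation.
    Descends the CMA cost E[(|y|^2 - R2)^2] with y = w^H x.
    """
    y = np.vdot(w, x)                  # equalizer output w^H x
    e = (np.abs(y) ** 2 - R2) * y      # CMA error term
    return w - mu * np.conj(e) * x     # gradient step on the taps
```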
Framework for a space shuttle main engine health monitoring system
NASA Technical Reports Server (NTRS)
Hawman, Michael W.; Galinaitis, William S.; Tulpule, Sharayu; Mattedi, Anita K.; Kamenetz, Jeffrey
1990-01-01
A framework developed for a health management system (HMS) which is directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable low cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.
Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree
NASA Astrophysics Data System (ADS)
Kim, Jong Kyu; Kim, Nam Soo
In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music materials, the proposed method is found to achieve much better mode selection accuracy than the open-loop mode selection module in AMR-WB+.
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with 'big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
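A compact NumPy sketch of the idea, assuming snapshot pairs X and Y with Y ≈ AX; oversampling and power iterations control the approximation quality as described, but this is an illustration rather than the authors' reference implementation.

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, n_power=2, seed=0):
    """Randomized DMD of snapshot pairs (X, Y) with Y ~= A X.

    X, Y : (n, m) snapshot matrices (columns are consecutive states).
    Returns DMD eigenvalues and modes of a rank-`rank` approximation.
    """
    rng = np.random.default_rng(seed)
    k = rank + oversample
    # Sketch the range of X with a random test matrix, refined by power iterations.
    Q = np.linalg.qr(X @ rng.standard_normal((X.shape[1], k)))[0]
    for _ in range(n_power):
        Q = np.linalg.qr(X @ (X.conj().T @ Q))[0]
    # Project the data onto the low-dimensional subspace, then do exact DMD there.
    Xs, Ys = Q.conj().T @ X, Q.conj().T @ Y
    U, s, Vh = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ Ys @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(Atilde)
    modes = Q @ (Ys @ Vh.conj().T / s) @ W / eigvals   # exact DMD modes
    return eigvals, modes
```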
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Wang, Junyi; Pan, Zhongqi
2017-01-01
Spatial-division multiplexing (SDM) techniques have been proposed to increase the capacity of optical fiber transmission links by utilizing multicore fibers or few-mode fibers (FMF). The most challenging impairments of SDM-based long-haul optical links are modal dispersion and mode-dependent loss (MDL); MDL arises from inline component imperfections and breaks modal orthogonality, thus degrading the capacity of multiple-input multiple-output (MIMO) receivers. To reduce MDL, optical approaches include mode scramblers and specialty fiber designs, but these methods are costly and cannot completely remove the MDL accumulated in the link. Space-time trellis codes (STTC) have also been proposed to lessen MDL, but suffer from high complexity. In this work, we investigated the performance of a space-time block-coding (STBC) scheme to mitigate MDL in SDM-based optical communication by exploiting space and delay diversity, where the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive-least-squares (RLS) algorithm for convergence and channel estimation. The STBC was evaluated in a six-mode multiplexed system over 30-km FMF via 6×6 MIMO FDE, with modal gain offset 3 dB, core refractive index 1.49, and numerical aperture 0.5. Results show that the optical-signal-to-noise-ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16- and 64-QAM at the respective bit-error rates (BER) and with minimum-mean-square-error (MMSE) detection. Besides, we also evaluate the complexity optimization of the STBC decoding scheme with a zero-forcing decision feedback (ZFDF) equalizer by shortening the coding slot length, which is robust to frequency-selective fading channels and can be scaled up for SDM systems with more dynamic channels.
Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems
NASA Astrophysics Data System (ADS)
He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.
2018-02-01
Space division multiplexing using few-mode fibers has been extensively explored to sustain the continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which makes the spatial modes even harder to demultiplex. Many research articles have demonstrated that a frequency-domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which decreases the overall system efficiency and can even degrade equalizer performance in fast-changing optical channels. The least mean squares (LMS) algorithm is commonly used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed to use a signal power spectral density (PSD) dependent method and a noise PSD directed method to improve the convergence speed of the adaptive frequency-domain LMS algorithm. We also proposed a frequency-domain recursive least squares (RLS) algorithm to further increase the convergence speed of the MIMO equalizer at the cost of greater hardware complexity. In this paper, we compare the hardware complexity and convergence speed of the signal PSD dependent and noise power directed algorithms against the conventional frequency-domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods improved the convergence speed by 48.3% and 36.1%, respectively, at the cost of 17.2% and 10.7% higher hardware complexity. We also compare the frequency-domain RLS algorithm against the conventional frequency-domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm improves the convergence speed by 53.7% over the conventional frequency-domain LMS algorithm.
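To make the baseline concrete, here is a deliberately simplified per-block frequency-domain LMS update for an n×n MIMO equalizer trained on a known sequence; it uses circular per-block convolution and omits the overlap-save machinery and gradient constraints a practical few-mode equalizer would need (all parameter values are placeholders).

```python
import numpy as np

def fd_lms_mimo(rx, train, n_modes=2, block=64, mu=0.05, n_iter=200):
    """Simplified frequency-domain LMS for an n_modes x n_modes MIMO equalizer.

    rx, train : (n_modes, N) received and training signals.
    One weight matrix per frequency bin; circular (per-block) convolution is
    assumed for brevity, unlike a real overlap-save equalizer.
    """
    W = np.zeros((block, n_modes, n_modes), dtype=complex)
    W[:] = np.eye(n_modes)                            # start from identity
    for it in range(n_iter):
        start = (it * block) % (rx.shape[1] - block)
        Xf = np.fft.fft(rx[:, start:start + block], axis=1).T    # (block, n_modes)
        Df = np.fft.fft(train[:, start:start + block], axis=1).T
        Yf = np.einsum('fij,fj->fi', W, Xf)           # equalize each bin
        Ef = Df - Yf                                  # frequency-domain error
        # LMS update per bin: W += mu * E X^H
        W += mu * np.einsum('fi,fj->fij', Ef, Xf.conj())
    return W
```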
NASA Astrophysics Data System (ADS)
Amphawan, Angela; Ghazi, Alaan; Al-dawoodi, Aras
2017-11-01
A free-space optics mode-wavelength division multiplexing (MWDM) system using Laguerre-Gaussian (LG) modes is designed using decision feedback equalization for controlling mode coupling and combating inter symbol interference so as to increase channel diversity. In this paper, a data rate of 24 Gbps is achieved for a FSO MWDM channel of 2.6 km in length using feedback equalization. Simulation results show significant improvement in eye diagrams and bit-error rates before and after decision feedback equalization.
Optimal Control of a Surge-Mode WEC in Random Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertok, Allan; Ceberio, Olivier; Staby, Bill
2016-08-30
The objective of this project was to develop one or more real-time feedback and feed-forward (MPC) control algorithms for an Oscillating Surge Wave Converter (OSWC) developed by RME called SurgeWEC™ that leverages recent innovations in wave energy converter (WEC) control theory to maximize power production in random wave environments. The control algorithms synthesized innovations in dynamic programming and nonlinear wave dynamics using anticipatory wave sensors and localized sensor measurements, e.g. position and velocity of the WEC Power Take Off (PTO), with predictive wave forecasting data. The result was an advanced control system that uses feedback or feed-forward data from an array of sensor channels comprised of both localized and deployed sensors fused into a single decision process that optimally compensates for uncertainties in the system dynamics, wave forecasts, and sensor measurement errors.
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi
2017-07-01
In this paper, we explored the performance of space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, whereas the weight matrices of frequency-domain equalization (FDE) were updated heuristically using decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve 43.6% enhancement on convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely 16.2% increase in hardware complexity. The overall optical signal to noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM with respective bit-error-rates (BER) and minimum-mean-square-error (MMSE).
Algorithms for optimizing the treatment of depression: making the right decision at the right time.
Adli, M; Rush, A J; Möller, H-J; Bauer, M
2003-11-01
Medication algorithms for the treatment of depression are designed to optimize both treatment implementation and the appropriateness of treatment strategies. Thus, they are essential tools for treating and avoiding refractory depression. Treatment algorithms are explicit treatment protocols that provide specific therapeutic pathways and decision-making tools at critical decision points throughout the treatment process. The present article provides an overview of major projects of algorithm research in the field of antidepressant therapy. The Berlin Algorithm Project and the Texas Medication Algorithm Project (TMAP) compare algorithm-guided treatments with treatment as usual. The Sequenced Treatment Alternatives to Relieve Depression Project (STAR*D) compares different treatment strategies in treatment-resistant patients.
Health management system for rocket engines
NASA Technical Reports Server (NTRS)
Nemeth, Edward
1990-01-01
The functional framework of a failure detection algorithm for the Space Shuttle Main Engine (SSME) is developed. The basic algorithm is based only on existing SSME measurements. Supplemental measurements, expected to enhance failure detection effectiveness, are identified. To support the algorithm development, a figure of merit is defined to estimate the likelihood of SSME criticality 1 failure modes and the failure modes are ranked in order of likelihood of occurrence. Nine classes of failure detection strategies are evaluated and promising features are extracted as the basis for the failure detection algorithm. The failure detection algorithm provides early warning capabilities for a wide variety of SSME failure modes. Preliminary algorithm evaluation, using data from three SSME failures representing three different failure types, demonstrated indications of imminent catastrophic failure well in advance of redline cutoff in all three cases.
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for the determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure run to convergence for every fixed EOF in order to determine the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on that optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some of the reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm significantly enhances the accuracy of reconstruction and shortens the computational time.
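A bare-bones version of the shared core of these methods, assuming a fixed number of EOF modes; choosing that number by cross-validation, and varying the EOFs across iterations as VE-DINEOF does, is precisely what the published algorithms refine.

```python
import numpy as np

def dineof_fill(data, mask, n_modes=3, n_iter=50, tol=1e-6):
    """Fill missing values (mask == True) by iterative truncated SVD.

    data : 2-D space x time matrix; values at missing points are arbitrary.
    Missing entries are repeatedly replaced by their rank-n_modes
    reconstruction until the update stabilizes.
    """
    filled = data.copy()
    filled[mask] = 0.0                     # initial guess for the gaps
    prev = np.inf
    for _ in range(n_iter):
        U, s, Vh = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vh[:n_modes]
        change = np.sqrt(np.mean((recon[mask] - filled[mask]) ** 2))
        filled[mask] = recon[mask]         # update only the missing entries
        if abs(prev - change) < tol:
            break
        prev = change
    return filled
```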
Real Time Coincidence Processing Algorithm for Geiger Mode LADAR using FPGAs
2017-01-09
Real Time Coincidence Processing Algorithm for Geiger-Mode Ladar using FPGAs. Rufo A. Antonio, Alexandru N. ... the first ever Geiger-mode ladar processing algorithm that is suitable for implementation on an FPGA, enabling real-time processing and data ... developed embedded FPGA real-time processing algorithms that take noisy raw data, streaming at upwards of 1 GB/sec, and filter the data to obtain a nearly
Pitting intuitive and analytical thinking against each other: the case of transitivity.
Rusou, Zohar; Zakay, Dan; Usher, Marius
2013-06-01
Identifying which thinking mode, intuitive or analytical, yields better decisions has been a major subject of inquiry by decision-making researchers. Yet studies show contradictory results. One possibility is that the ambiguity is due to the variability in experimental conditions across studies. Our hypothesis is that decision quality depends critically on the level of compatibility between the thinking mode employed in the decision and the nature of the decision-making task. In two experiments, we pitted intuition and analytical thinking against each other on tasks that were either mainly intuitive or mainly analytical. Thinking modes, as well as task characteristics, were manipulated in a factorial design, with choice transitivity as the dependent measure. Results showed higher choice consistency (transitivity) when thinking mode and the characteristics of the decision task were compatible.
A hierarchical framework for air traffic control
NASA Astrophysics Data System (ADS)
Roy, Kaushik
Air travel in recent years has been plagued by record delays, with over $8 billion in direct operating costs being attributed to 100 million flight delay minutes in 2007. Major contributing factors to delay include weather, congestion, and aging infrastructure; the Next Generation Air Transportation System (NextGen) aims to alleviate these delays through an upgrade of the air traffic control system. Changes to large-scale networked systems such as air traffic control are complicated by the need for coordinated solutions over disparate temporal and spatial scales. Individual air traffic controllers must ensure aircraft maintain safe separation locally with a time horizon of seconds to minutes, whereas regional plans are formulated to efficiently route flows of aircraft around weather and congestion on the order of every hour. More efficient control algorithms that provide a coordinated solution are required to safely handle a larger number of aircraft in a fixed amount of airspace. Improved estimation algorithms are also needed to provide accurate aircraft state information and situational awareness for human controllers. A hierarchical framework is developed to simultaneously solve the sometimes conflicting goals of regional efficiency and local safety. Careful attention is given in defining the interactions between the layers of this hierarchy. In this way, solutions to individual air traffic problems can be targeted and implemented as needed. First, the regional traffic flow management problem is posed as an optimization problem and shown to be NP-Hard. Approximation methods based on aggregate flow models are developed to enable real-time implementation of algorithms that reduce the impact of congestion and adverse weather. Second, the local trajectory design problem is solved using a novel slot-based sector model. This model is used to analyze sector capacity under varying traffic patterns, providing a more comprehensive understanding of how increased automation in NextGen will affect the overall performance of air traffic control. The dissertation also provides solutions to several key estimation problems that support corresponding control tasks. Throughout the development of these estimation algorithms, aircraft motion is modeled using hybrid systems, which encapsulate both the discrete flight mode of an aircraft and the evolution of continuous states such as position and velocity. The target-tracking problem is posed as one of hybrid state estimation, and two new algorithms are developed to exploit structure specific to aircraft motion, especially near airports. First, discrete mode evolution is modeled using state-dependent transitions, in which the likelihood of changing flight modes is dependent on aircraft state. Second, an estimator is designed for systems with limited mode changes, including arrival aircraft. Improved target tracking facilitates increased safety in collision avoidance and trajectory design problems. A multiple-target tracking and identity management algorithm is developed to improve situational awareness for controllers about multiple maneuvering targets in a congested region. Finally, tracking algorithms are extended to predict aircraft landing times; estimated time of arrival prediction is one example of important decision support information for air traffic control.
Decision Aids for Naval Air ASW
1980-03-15
AZOI (Algorithm for Zone Optimization Investigation), NADC: developing sonobuoy patterns for air ASW search. DAISY (Decision Aiding Information System), Wharton: ...sion making behavior.
- Artificial intelligence sequential pattern recognition algorithm for reconstructing the decision maker's utility functions.
- ...display presenting the uncertainty area of the target.
3.1.5 Algorithm for Zone Optimization Investigation (AZOI) -- Naval Air Development Center
- A
Out-of-Home Placement Decision-Making and Outcomes in Child Welfare: A Longitudinal Study
McClelland, Gary M.; Weiner, Dana A.; Jordan, Neil; Lyons, John S.
2015-01-01
After children enter the child welfare system, subsequent out-of-home placement decisions and their impact on children’s well-being are complex and under-researched. This study examined two placement decision-making models: a multidisciplinary team approach, and a decision support algorithm using a standardized assessment. Based on 3,911 placement records in the Illinois child welfare system over 4 years, concordant (agreement) and discordant (disagreement) decisions between the two models were compared. Concordant decisions consistently predicted improvement in children’s well-being regardless of placement type. Discordant decisions showed greater variability. In general, placing children in settings less restrictive than the algorithm suggested (“under-placing”) was associated with less severe baseline functioning but also less improvement over time than placing children according to the algorithm. “Over-placing” children in settings more restrictive than the algorithm recommended was associated with more severe baseline functioning but fewer significant results in rate of improvement than predicted by concordant decisions. The importance of placement decision-making on policy, restrictiveness of placement, and delivery of treatments and services in child welfare are discussed. PMID:24677172
IND - THE IND DECISION TREE PACKAGE
NASA Technical Reports Server (NTRS)
Buntine, W.
1994-01-01
A common approach to supervised classification and prediction in artificial intelligence and statistical pattern recognition is the use of decision trees. A tree is "grown" from data using a recursive partitioning algorithm to create a tree which has good prediction of classes on new data. Standard algorithms are CART (by Breiman, Friedman, Olshen and Stone) and ID3 and its successor C4 (by Quinlan). As well as reimplementing parts of these algorithms and offering experimental control suites, IND also introduces Bayesian and MML methods and more sophisticated search in growing trees. These produce more accurate class probability estimates that are important in applications like diagnosis. IND is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or it may be omitted. One of the attributes is designated the "target" and IND grows trees to predict the target. Prediction can then be done on new data or the decision tree printed out for inspection. IND provides a range of features and styles with convenience for the casual user as well as fine-tuning for the advanced user or those interested in research. IND can be operated in a CART-like mode (but without regression trees, surrogate splits or multivariate splits), and in a mode like the early version of C4. Advanced features allow more extensive search, interactive control and display of tree growing, and Bayesian and MML algorithms for tree pruning and smoothing. These often produce more accurate class probability estimates at the leaves. IND also comes with a comprehensive experimental control suite. IND consists of four basic kinds of routines: data manipulation routines, tree generation routines, tree testing routines, and tree display routines. The data manipulation routines are used to partition a single large data set into smaller training and test sets. The generation routines are used to build classifiers. The test routines are used to evaluate classifiers and to classify data using a classifier. And the display routines are used to display classifiers in various formats. IND is written in C-language for Sun4 series computers. It consists of several programs with controlling shell scripts. Extensive UNIX man entries are included. IND is designed to be used on any UNIX system, although it has only been thoroughly tested on SUN platforms. The standard distribution medium for IND is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in PostScript format is included on the distribution medium. IND was developed in 1992.
Modeling and Evaluation of Miles-in-Trail Restrictions in the National Air Space
NASA Technical Reports Server (NTRS)
Grabbe, Shon; Sridhar, Banavar
2003-01-01
Miles-in-trail restrictions impact flights in the national air space on a daily basis and these restrictions routinely propagate between adjacent Air Route Traffic Control Centers. Since overly restrictive or ineffective miles-in-trail restrictions can reduce the overall efficiency of the national air space, decision support capabilities that model miles-in-trail restrictions should prove to be very beneficial. This paper presents both an analytical formulation and a linear programming approach for modeling the effects of miles-in-trail restrictions. A methodology for monitoring the conformance of an existing miles-in-trail restriction is also presented. These capabilities have been implemented in the Future ATM Concepts Evaluation Tool for testing purposes. To allow alternative restrictions to be evaluated in post-operations, a new mode of operation, which is referred to as the hybrid-playback mode, has been implemented in the simulation environment. To demonstrate the capabilities of these new algorithms, the miles-in-trail restrictions, which were in effect on June 27, 2002 in the New York Terminal Radar Approach Control, are examined. Results from the miles-in-trail conformance monitoring functionality are presented for the ELIOT, PARKE and WHITE departure fixes. In addition, the miles-in-trail algorithms are used to assess the impact of alternative restrictions at the PARKE departure fix.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage of this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This is therefore a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution.
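A toy version of the two modes under a standard Poisson-Gamma model (the proposal's nonparametric, multi-channel machinery is not reproduced; the prior, counts, and alarm threshold below are illustrative).

```python
import numpy as np
from scipy import stats

# "Learning mode": characterize background from past counts per interval.
background_counts = np.array([4, 6, 5, 3, 7, 5, 4, 6, 5, 4])
# Gamma posterior for the Poisson rate, with a weak Gamma(1, 1) prior.
alpha = 1.0 + background_counts.sum()
beta = 1.0 + len(background_counts)

# "Detection mode": how surprising is a new measurement under the background?
new_count = 14
# Posterior predictive of a Poisson-Gamma model is negative binomial.
predictive = stats.nbinom(alpha, beta / (beta + 1.0))
p_value = predictive.sf(new_count - 1)     # P(count >= new_count | background)
print(f"P(>= {new_count} counts | background) = {p_value:.4f}")
if p_value < 1e-3:                         # hypothetical decision threshold
    print("alarm: counts inconsistent with learned background")
```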
Adhikary, Nabanita; Mahanta, Chitralekha
2013-11-01
In this paper an integral backstepping sliding mode controller is proposed for controlling underactuated systems. A feedback control law is designed based on the backstepping algorithm and a sliding surface is introduced in the final stage of the algorithm. The backstepping algorithm makes the controller immune to matched and mismatched uncertainties and the sliding mode control provides robustness. The proposed controller ensures asymptotic stability. The effectiveness of the proposed controller is compared against a coupled sliding mode controller for swing-up and stabilization of the Cart-Pendulum System. Simulation results show that the proposed integral backstepping sliding mode controller is able to reject both matched and mismatched uncertainties with a chattering-free control law, while utilizing less control effort than the sliding mode controller.
Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds
NASA Astrophysics Data System (ADS)
Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert
2014-06-01
Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described here is devised particularly for solving single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The architecture performs feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data. The network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network to obtain the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, converting the input cluster into a set of singular-value features, or a feature vector. The feature vector is then input into the feature normalization module to normalize and balance it before being fed to the neural network classifier for classification. The neural network can be trained on actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. In case new novel data are added after the neural network has been trained, training is resumed until the neural network has incrementally learned the new novel data. The associative memory capability of the neural network enables this incremental learning. The back-propagation algorithm or a support vector machine can be utilized for the classification and recognition.
Parallelization of Nullspace Algorithm for the computation of metabolic pathways
Jevremović, Dimitrije; Trinh, Cong T.; Srienc, Friedrich; Sosa, Carlos P.; Boley, Daniel
2011-01-01
Elementary mode analysis is a useful metabolic pathway analysis tool for understanding and analyzing cellular metabolism, since elementary modes can represent metabolic pathways with unique and minimal sets of enzyme-catalyzed reactions of a metabolic network under steady-state conditions. However, computation of the elementary modes of a genome-scale metabolic network with 100-1000 reactions is very expensive and sometimes not feasible with the commonly used serial Nullspace Algorithm. In this work, we develop a distributed-memory parallelization of the Nullspace Algorithm to handle efficiently the computation of the elementary modes of a large metabolic network. We give an implementation in the C++ language with the support of MPI library functions for the parallel communication. Our proposed algorithm is accompanied by an analysis of the complexity and identification of major bottlenecks during computation of all possible pathways of a large metabolic network. The algorithm includes methods to achieve load balancing among the compute nodes and specific communication patterns to reduce the communication overhead and improve efficiency. PMID:22058581
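For context, the serial starting point is a nullspace basis of the stoichiometric matrix; a minimal NumPy sketch follows (the paper's contribution, the MPI parallelization of the subsequent elementary-mode enumeration, is not shown, and the toy network is hypothetical).

```python
import numpy as np

def nullspace_basis(S, tol=1e-10):
    """Orthonormal basis of the right nullspace of a stoichiometric matrix S.

    Steady state requires S v = 0; the returned columns span all such flux
    vectors v. Elementary-mode enumeration then works within this space,
    which is the expensive, parallelized part of the paper.
    """
    _, s, Vh = np.linalg.svd(S)
    rank = np.sum(s > tol)
    return Vh[rank:].conj().T        # (n_reactions, n_reactions - rank)

# Toy network: 2 metabolites, 4 reactions.
S = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, -1.0]])
N = nullspace_basis(S)
print(np.allclose(S @ N, 0))         # True: every column is a steady-state flux
```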
Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data.
Barros, Rodrigo C; Winck, Ana T; Machado, Karina S; Basgalupp, Márcio P; de Carvalho, André C P L F; Ruiz, Duncan D; de Souza, Osmar Norberto
2012-11-21
This paper addresses the prediction of the free energy of binding of a drug candidate with enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that could be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, specially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general-use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for the prediction of the free energy from the binding of a drug candidate with a flexible-receptor.
2014-01-01
This paper analyses how different coordination modes and different multiobjective decision-making approaches interfere with each other in hierarchical organizations. The investigation is based on an agent-based simulation. We apply a modified NK-model in which we map multiobjective decision making as an adaptive walk on multiple performance landscapes, whereby each landscape represents one objective. We find that the impact of the coordination mode on the performance and the speed of performance improvement is critically affected by the selected multiobjective decision-making approach. In certain setups, the performances achieved with the more complex multiobjective decision-making approaches turn out to be less sensitive to the coordination mode than the performances achieved with the less complex approaches. Furthermore, we present results on the impact of the nature of interactions among decisions on the achieved performance in multiobjective setups. Our results give guidance on how to control the performance contribution of individual objectives to overall performance, and answer the question of how effectively certain multiobjective decision-making approaches perform under certain circumstances (coordination mode and interdependencies among decisions). PMID:25152926
Doubravsky, Karel; Dohnal, Mirko
2015-01-01
Complex decision-making tasks of different natures, e.g. in economics, safety engineering, ecology and biology, are based on vague, sparse, partially inconsistent and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest too much time in the study of complex formal theories. They require decisions which can be (re)checked by human-like common-sense reasoning. One important problem related to realistic decision-making tasks is the incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm by which some missing input information items (III) can be generated, mainly using decision tree topologies, and integrated into incomplete data sets. The algorithm is based on an easy-to-understand heuristic, e.g. that a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e. when the decision tree topology is the only information available. In practice, however, isolated information items, e.g. some vaguely known probabilities (fuzzy probabilities), are usually available, which means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles the topology-related heuristic with the additional fuzzy sets using fuzzy linear programming. A case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662
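One concrete (and deliberately simplified) reading of the path-length heuristic, with a known probability held fixed; the paper performs this reconciliation properly with fuzzy linear programming, so the snippet below is only a toy with hypothetical lotteries and values.

```python
# Toy version of the "longer sub-path is less probable" heuristic: known
# probabilities (e.g. the midpoint of a fuzzy probability) are held fixed,
# and the remaining mass is shared inversely to decision-tree path length.
path_lengths = {"A": 2, "B": 3, "C": 5}   # sub-path lengths (hypothetical)
known = {"B": 0.30}                       # e.g. midpoint of a fuzzy probability

free = {k: 1.0 / L for k, L in path_lengths.items() if k not in known}
remaining = 1.0 - sum(known.values())
total = sum(free.values())
probs = dict(known)
probs.update({k: remaining * w / total for k, w in free.items()})
print(probs)   # longer sub-paths receive smaller probabilities; sums to 1
```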
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, S; Suh, T; Chung, J
Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of the Acuros XB (AXB) and Anisotropic Analytic Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy plans with both conventional flattened (FF) and flattening-filter-free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using a 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and were then re-calculated using AXB with the same MUs and MLC files. The four types of plans for the different algorithms and beam modes were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beams, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans had comparable dosimetric results across dose calculation algorithms and delivery beam modes. Even though the NTCP values for both calculation algorithms and beam modes were similar, AXB plans produced slightly lower TCP than the AAA plans.
Image processing improvement for optical observations of space debris with the TAROT telescopes
NASA Astrophysics Data System (ADS)
Thiebaut, C.; Theron, S.; Richard, P.; Blanchet, G.; Klotz, A.; Boër, M.
2016-07-01
CNES is involved in the Inter-Agency Space Debris Coordination Committee (IADC) and observes space debris with two robotic, fully automated ground-based telescopes called TAROT, operated by the CNRS. An image processing algorithm devoted to debris detection in geostationary orbit is implemented in the standard pipeline. Nevertheless, this algorithm is unable to deal with debris-tracking-mode images, although this mode is the preferred one for debris detectability. We present an algorithm improvement for this mode and give results in terms of false detection rate.
NASA Astrophysics Data System (ADS)
Arafat, Md Nayeem
Distributed generation systems (DGs) have been penetrating our energy networks with the advancement of renewable energy sources and energy storage elements. These systems can operate in synchronism with the utility grid, referred to as the grid-connected (GC) mode of operation, or work independently, referred to as the standalone (SA) mode of operation. There is a need to ensure continuous power flow during the transition between GC and SA modes, referred to as the transition mode, when operating DGs. In this dissertation, efficient and effective transition control algorithms are developed for DGs operating either independently or collectively with other units. Three techniques are proposed in this dissertation to manage proper transition operations. In the first technique, a new control algorithm is proposed for an independent DG which can operate in SA and GC modes. The proposed transition control algorithm ensures low total harmonic distortion (THD) and less voltage fluctuation during mode transitions compared to other techniques. In the second technique, a transition control is suggested for a collective of DGs operating in a microgrid system architecture to improve the reliability of the system, reduce the cost, and provide better performance. In this technique, one of the DGs in a microgrid system, referred to as a dispatch unit, takes the additional responsibility of mode transitioning to ensure smooth transition and supply/demand balance in the microgrid. In the third technique, an alternative transition technique is proposed by hybridizing the current and droop controllers. The proposed hybrid transition control technique has higher reliability compared to the dispatch unit concept. During the GC mode, the proposed hybrid controller uses current control. During the SA mode, the hybrid controller uses droop control. During the transition mode, both controllers participate in formulating the inverter output voltage, but with different weights or coefficients. Voltage source inverters interfacing the DGs, as well as the proposed transition control algorithms, have been modeled to analyze the stability of the algorithms in different configurations. The performances of the proposed algorithms are verified through simulation and experimental studies. It has been found that the proposed control techniques can provide smooth power flow to the local loads during the GC, SA and transition modes.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Vicroy, D. D.; Simmon, D. A.
1985-01-01
A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
A Modular Low-Complexity ECG Delineation Algorithm for Real-Time Embedded Systems.
Bote, Jose Manuel; Recas, Joaquin; Rincon, Francisco; Atienza, David; Hermida, Roman
2018-03-01
This work presents a new modular and low-complexity algorithm for the delineation of the different ECG waves (QRS, P, and T peaks, onsets, and ends). Involving a reduced number of operations per second and having a small memory footprint, this algorithm is intended to perform real-time delineation on resource-constrained embedded systems. The modular design allows the algorithm to automatically adjust the delineation quality at runtime across a wide range of modes and sampling rates, from an ultralow-power mode used when no arrhythmia is detected, in which the ECG is sampled at low frequency, to a complete high-accuracy delineation mode used in the case of arrhythmia, in which the ECG is sampled at high frequency and all the ECG fiducial points are detected. The delineation algorithm has been adjusted using the QT database, providing very high sensitivity and positive predictivity, and validated with the MIT database. The errors in the delineation of all the fiducial points are below the tolerances given by the Common Standards for Electrocardiography Committee in the high-accuracy mode, except for the P wave onset, for which the algorithm exceeds the agreed tolerance by only a fraction of the sample duration. The computational load on the ultralow-power 8-MHz TI MSP430 series microcontroller ranges from 0.2% to 8.5% according to the mode used.
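A minimal sketch of the runtime mode switch described above; the sampling rates and mode contents are assumptions for illustration, not the paper's settings:

    # Sample at low frequency in the ultralow-power mode; escalate to full
    # delineation at high frequency when arrhythmia is detected.
    MODES = {
        "ultralow_power": {"fs_hz": 64,  "points": ("QRS",)},
        "high_accuracy":  {"fs_hz": 250, "points": ("QRS", "P", "T",
                                                    "onsets", "ends")},
    }

    def select_mode(arrhythmia_detected):
        return MODES["high_accuracy" if arrhythmia_detected else "ultralow_power"]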
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
Poor quality negatively affects project makespan and total cost, but it can be recovered by repair works during construction. We construct a new nonlinear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair works. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to quality-cost analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem, based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping process combines the crossover operator of genetic algorithms with a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance and to assist decision making in searching for the appropriate makespan and quality threshold with minimal cost.
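For readers unfamiliar with the metaheuristic, a generic shuffled frog-leaping sketch for continuous minimization is shown below; the paper's discrete variant replaces the leap with a crossover operator and a permutation-based local search on activity lists, so this is only the skeleton:

    import numpy as np

    def sfla(f, dim=2, frogs=30, memeplexes=3, iters=100, bound=5.0):
        rng = np.random.default_rng(0)
        pop = rng.uniform(-bound, bound, (frogs, dim))
        for _ in range(iters):
            pop = pop[np.argsort([f(x) for x in pop])]   # rank by fitness
            best = pop[0]
            for m in range(memeplexes):                  # shuffle into memeplexes
                idx = np.arange(m, frogs, memeplexes)
                worst = idx[-1]
                leap = rng.random() * (pop[idx[0]] - pop[worst])
                cand = np.clip(pop[worst] + leap, -bound, bound)
                if f(cand) < f(pop[worst]):              # improve the worst frog
                    pop[worst] = cand
                else:                                    # else leap toward global best
                    pop[worst] = np.clip(
                        pop[worst] + rng.random() * (best - pop[worst]),
                        -bound, bound)
        return pop[0]

    print(sfla(lambda x: float(np.sum(x ** 2))))         # toy objective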
NASA Astrophysics Data System (ADS)
Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza
2012-06-01
It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. Decision-level fusion is arguably equally beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of the decision-fusion methodologies is a crucial step for successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performances analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their applications are extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods, and their performance is tested on decisions generated from synthetic data and from experimental data. Also in this paper, a modeling methodology, the cloud model, for generating synthetic decisions is presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide a fair performance comparison of the selected decision-fusion algorithms. For verification purposes, implementation results of the developed fusion algorithms on structural health monitoring data collected from experimental tests are reported in this paper.
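As a concrete instance of the evidential branch, Dempster's rule of combination can fuse two decision-level basic probability assignments; the mass values below are hypothetical:

    from itertools import product

    def dempster_combine(m1, m2):
        # Dempster's rule over frozenset-keyed mass functions.
        fused, conflict = {}, 0.0
        for (a, pa), (b, pb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
        return {k: v / (1.0 - conflict) for k, v in fused.items()}

    H, D = frozenset({"healthy"}), frozenset({"damaged"})
    m1 = {H: 0.7, D: 0.1, H | D: 0.2}     # decision-maker 1 (illustrative)
    m2 = {H: 0.6, D: 0.3, H | D: 0.1}     # decision-maker 2
    print(dempster_combine(m1, m2))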
Decision-making without a brain: how an amoeboid organism solves the two-armed bandit.
Reid, Chris R; MacDonald, Hannelore; Mann, Richard P; Marshall, James A R; Latty, Tanya; Garnier, Simon
2016-06-01
Several recent studies hint at shared patterns in decision-making between taxonomically distant organisms, yet few studies demonstrate and dissect mechanisms of decision-making in simpler organisms. We examine decision-making in the unicellular slime mould Physarum polycephalum using a classical decision problem adapted from human and animal decision-making studies: the two-armed bandit problem. This problem has previously only been used to study organisms with brains, yet here we demonstrate that a brainless unicellular organism compares the relative qualities of multiple options, integrates over repeated samplings to perform well in random environments, and combines information on reward frequency and magnitude in order to make correct and adaptive decisions. We extend our inquiry by using Bayesian model selection to determine the most likely algorithm used by the cell when making decisions. We deduce that this algorithm centres around a tendency to exploit environments in proportion to their reward experienced through past sampling. The algorithm is intermediate in computational complexity between simple, reactionary heuristics and calculation-intensive optimal performance algorithms, yet it has very good relative performance. Our study provides insight into ancestral mechanisms of decision-making and suggests that fundamental principles of decision-making, information processing and even cognition are shared among diverse biological systems.
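A toy sketch in the spirit of the deduced rule (not the paper's fitted model): sample each arm in proportion to the total reward experienced there so far, a matching-law style policy:

    import random

    def matching_bandit(arm_probs, pulls=1000, rng=random.Random(0)):
        totals = [1.0] * len(arm_probs)   # optimistic seed avoids early lock-in
        for _ in range(pulls):
            r, arm, acc = rng.random() * sum(totals), 0, totals[0]
            while r > acc:                # roulette-wheel arm selection
                arm += 1
                acc += totals[arm]
            totals[arm] += 1.0 if rng.random() < arm_probs[arm] else 0.0
        return totals

    print(matching_bandit([0.3, 0.6]))    # the richer arm accumulates more reward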
Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees
Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng
2015-01-01
In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory accesses directly affects the time consumption of labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and, during a raster scan, selects only the necessary pixels covered by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods.
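For contrast with the block-based method, the classic two-pass pixel-wise labeling that it accelerates can be written with union-find; this baseline is for orientation only and is not the paper's algorithm:

    import numpy as np

    def two_pass_label(img):
        parent = [0]                              # union-find forest
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]     # path halving
                a = parent[a]
            return a

        labels = np.zeros(img.shape, dtype=int)
        for r, c in np.argwhere(img):             # first pass (4-connectivity)
            up = labels[r - 1, c] if r else 0
            left = labels[r, c - 1] if c else 0
            if up or left:
                lab = min(l for l in (up, left) if l)
                labels[r, c] = lab
                for other in (up, left):          # record label equivalences
                    if other:
                        parent[find(other)] = find(lab)
            else:
                parent.append(len(parent))        # new provisional label
                labels[r, c] = len(parent) - 1
        for r, c in np.argwhere(img):             # second pass: resolve labels
            labels[r, c] = find(labels[r, c])
        return labels

    img = np.array([[1, 1, 0, 1],
                    [0, 1, 0, 1],
                    [0, 0, 0, 1]])
    print(two_pass_label(img))                    # two components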
Using Decision Trees for Estimating Mode Choice of Trips in Buca-Izmir
NASA Astrophysics Data System (ADS)
Oral, L. O.; Tecim, V.
2013-05-01
Decision makers develop transportation plans and models to provide sustainable transport systems in urban areas. Mode choice is one of the stages in transportation modelling. Data mining techniques can discover the factors affecting mode choice, and these techniques can be applied within a knowledge-discovery process approach. In this study, a data mining process model is applied to determine the factors affecting mode choice with decision tree techniques, considering individual trip behaviours from household survey data collected within the Izmir Transportation Master Plan. From this perspective, the transport mode choice problem is solved for a case in the district of Buca, Izmir, Turkey, with the CRISP-DM knowledge process model.
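A hedged sketch of the modelling step; the feature names and rows below are hypothetical stand-ins for household-survey variables, not the Izmir data:

    from sklearn.tree import DecisionTreeClassifier

    # Rows: [age, cars_in_household, trip_km, has_transit_pass] (assumed features)
    X = [[34, 1, 5.2, 0], [21, 0, 3.1, 1], [45, 2, 12.4, 0],
         [28, 0, 7.8, 1], [52, 1, 2.0, 0], [19, 0, 9.5, 1]]
    y = ["car", "bus", "car", "bus", "walk", "bus"]      # chosen mode

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(clf.predict([[30, 0, 6.0, 1]]))                # predicted mode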
Dijkman, B; Wellens, H J
2001-09-01
The 7250 Jewel AF Medtronic model of ICD is the first implantable device in which both therapies for atrial arrhythmias and pacing algorithms for atrial arrhythmia prevention are available. The feasibility of such extensive atrial arrhythmia management requires the correct and synergistic functioning of the different arrhythmia control algorithms. The ability of the new pacing algorithms to stabilize the atrial rate following termination of treated atrial arrhythmias was evaluated in the marker channel registration of 600 spontaneously occurring episodes in 15 patients with the Jewel AF. All patients (55+/-15 years) had structural heart disease and documented atrial and ventricular arrhythmias. Dual-chamber rate stabilization pacing was present in 245 (41%) of the episodes following arrhythmia termination and was part of the mode-switching operation, during which pacing was provided in the dynamic DDI mode. This algorithm could function as atrial rate stabilization pacing only when there was a slow spontaneous atrial rhythm or in the presence of atrial premature beats conducted to the ventricles with a normal AV time. In the case of atrial premature beats with delayed or absent conduction to the ventricles, and in the case of ventricular premature beats, the algorithm stabilized the ventricular rate. The rate stabilization pacing in DDI mode during sinus rhythm following atrial arrhythmia termination was often extended in time due to the device-based definition of arrhythmia termination. This was also the case in patients in whom the DDD mode with a true atrial rate stabilization algorithm was programmed. The rate stabilization algorithms in the Jewel AF applied after atrial arrhythmia termination provide pacing that is not based on the timing of atrial events. Only under certain circumstances can the algorithm function as atrial rate stabilization pacing. Adjustments to the availability and functioning of the rate stabilization algorithms might benefit the clinical performance of pacing as part of device therapy for atrial arrhythmias.
NASA Astrophysics Data System (ADS)
Zhao, Zhiguo; Lei, Dan; Chen, Jiayi; Li, Hangyu
2018-05-01
When a four-wheel-drive hybrid electric vehicle (HEV) equipped with a dry dual clutch transmission (DCT) is in the mode transition process from pure electric rear-wheel drive to front-wheel drive with engine or hybrid drive, the problem of vehicle longitudinal jerk is prominent. A robust mode transition control algorithm that resists external disturbances and model parameter fluctuations has been developed by taking full advantage of the fast and accurate torque (or speed) response of the three electrical power sources and by getting the DCT clutch fully involved in the mode transition process. First, models of the key components of the driveline system have been established, and a five-degrees-of-freedom model of vehicle longitudinal dynamics has been built using a Uni-Tire model. Next, a multistage optimal control method has been developed to determine the engine torque and the clutch-transmitted torque. A sliding-mode control strategy for measurable disturbances is proposed for the stage in which the engine speed is dragged up. Meanwhile, a double tracking control architecture that integrates model-based feedforward control with H∞ robust feedback control is presented for the speed synchronization stage. Finally, results from Matlab/Simulink software and hardware-in-the-loop tests both demonstrate that the proposed control strategy for mode transition can not only coordinate the torque among the different power sources and the clutch while minimizing vehicle longitudinal jerk, but also provide strong robustness to model uncertainties and external disturbance.
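The double tracking idea at the speed synchronization stage can be caricatured with a first-order example; the plant, gains, and inverse-model feedforward below are assumptions, not the paper's five-degrees-of-freedom design:

    J, b, dt = 0.2, 0.05, 1e-3                # inertia, damping (illustrative)
    Kp = 8.0                                  # robust feedback gain stand-in
    w, w_ref, dw_ref = 0.0, 100.0, 0.0        # speeds in rad/s
    for _ in range(5000):
        e = w_ref - w
        u_ff = J * dw_ref + b * w_ref         # model-based feedforward torque
        u = u_ff + Kp * e                     # feedback absorbs model mismatch
        w += dt * (u - b * w) / J             # plant: J*dw/dt = u - b*w
    print(round(w, 2))                        # converges toward 100 rad/s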
A Cross-Layer User Centric Vertical Handover Decision Approach Based on MIH Local Triggers
NASA Astrophysics Data System (ADS)
Rehan, Maaz; Yousaf, Muhammad; Qayyum, Amir; Malik, Shahzad
Vertical handover decision algorithms that are based on user preferences and coupled with Media Independent Handover (MIH) local triggers have not been explored much in the literature. We have developed a comprehensive cross-layer solution, called the Vertical Handover Decision (VHOD) approach, which consists of three parts: a mechanism for collecting and storing user preferences, the Vertical Handover Decision (VHOD) algorithm, and the MIH Function (MIHF). The MIHF triggers the VHOD algorithm, which operates on user preferences to issue handover commands to the mobility management protocol. The VHOD algorithm is an MIH user and therefore needs to subscribe to events and configure thresholds for receiving triggers from the MIHF. In this regard, we have performed experiments in WLAN to suggest thresholds for the Link Going Down trigger. We have also critically evaluated the handover decision process, proposed a just-in-time interface activation technique, compared our proposed approach with prominent user-centric approaches, and analyzed our approach from different aspects.
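A minimal sketch of a Link Going Down style trigger; the smoothing and the -70 dBm threshold are assumptions for illustration, whereas the paper derives its thresholds from WLAN experiments:

    def lgd_trigger(rssi_dbm, state, thresh=-70.0, alpha=0.2):
        prev = state.get("ewma", rssi_dbm)
        ewma = alpha * rssi_dbm + (1 - alpha) * prev   # smooth out fast fading
        state["ewma"] = ewma
        return ewma < thresh and ewma < prev           # low and still falling

    state = {}
    for sample in (-60, -65, -72, -76, -80, -83):
        if lgd_trigger(sample, state):
            print("MIH_Link_Going_Down -> run VHOD algorithm")
            break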
NASA Astrophysics Data System (ADS)
Kaur, Parneet; Singh, Sukhwinder; Garg, Sushil; Harmanpreet
2010-11-01
In this paper, we study classification algorithms for farm decision support systems (DSS). Results are obtained by applying the classification algorithms Limited Search, ID3, CHAID, C4.5, Improved C4.5, and One-vs-All Decision Tree to a common crop data set with a specified class. The tool used to derive the results is SPINA. The graphical results obtained from the tool are compared to suggest the best technique for developing a farm DSS. This analysis should help researchers design effective and fast DSS with which farmers can take decisions to enhance their yield.
Gong, Mali; Yuan, Yanyang; Li, Chen; Yan, Ping; Zhang, Haitao; Liao, Suying
2007-03-19
A model based on propagation-rate equations, with consideration of the transverse gain distribution, is built to describe transverse mode competition in strongly pumped multimode fiber lasers and amplifiers. An approximate, practical numerical algorithm based on a multilayer method is presented. Based on the model and the numerical algorithm, the behavior of multi-transverse-mode competition is demonstrated, and the output power distributions of the individual transverse modes are simulated numerically for both fiber lasers and amplifiers under various conditions.
NASA Astrophysics Data System (ADS)
Bäumer, Richard; Terrill, Richard; Wollnack, Simon; Werner, Herbert; Starossek, Uwe
2018-01-01
The twin rotor damper (TRD), an active mass damper, uses the centrifugal forces of two eccentrically rotating control masses. In the continuous rotation mode, the preferred mode of operation, the two eccentric control masses rotate with a constant angular velocity about two parallel axes, creating, under further operational constraints, a harmonic control force in a single direction. In previous theoretical work, it was shown that this mode of operation is effective for damping large, harmonic vibrations of a single-degree-of-freedom (SDOF) oscillator. In this paper, the SDOF oscillator is assumed to be affected by a stochastic excitation force and consequently responds at several frequencies. Therefore, the TRD must deviate from the continuous rotation mode to ensure the anti-phasing between the harmonic control force of the TRD and the velocity of the SDOF oscillator. It is found that the required deviation from the continuous rotation mode increases as the vibration amplitude decreases. An operation of the TRD in the continuous rotation mode is therefore no longer efficient below a specific vibration-amplitude threshold. To additionally dampen vibrations below this threshold, the TRD can switch to another, more energy-consuming mode of operation, the swinging mode, in which both control masses oscillate about certain angular positions. A power-efficient control algorithm is presented which uses the continuous rotation mode for large vibrations and the swinging mode for small vibrations. To validate the control algorithm, numerical and experimental investigations are performed for an SDOF oscillator under stochastic excitation. Using both modes of operation, it is shown that the control algorithm is effective for free and stochastically forced vibrations of arbitrary amplitude.
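The harmonic control force of the continuous rotation mode follows directly from the centrifugal forces of the two masses; the numbers below are illustrative:

    import numpy as np

    m, r, omega = 1.5, 0.1, 2 * np.pi        # kg, m, rad/s (assumed values)
    t = np.linspace(0.0, 2.0, 9)
    phi1, phi2 = omega * t, np.pi - omega * t  # mirror-symmetric rotations
    fx = m * r * omega**2 * (np.cos(phi1) + np.cos(phi2))  # cancels to zero
    fy = m * r * omega**2 * (np.sin(phi1) + np.sin(phi2))  # adds harmonically
    print(np.round(fx, 10))                  # ~0 in the transverse direction
    print(np.round(fy, 3))                   # 2*m*r*omega^2*sin(omega*t)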
Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.
Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai
2016-03-01
Tensor clustering is an important tool that exploits the intrinsically rich structure of real-world multiway (tensor) datasets. In dealing with such datasets, standard practice is to use subspace clustering based on vectorizing the multiway data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm that avoids any vectorization. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates; updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms based on tensor factorization.
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video content is compressed to save storage capacity and to reduce the bandwidth required for its transmission. If these video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of multimedia content is compressed using H.264/AVC, it cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in the baseline and main profiles, by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while coding efficiency is maintained.
Tappen, Ruth M; Elkins, Deborah; Worch, Sarah; Weglinski, MaryAnn
2016-11-01
The purpose of the current study was to characterize the decision-making processes used by nursing home (NH) residents and their families when confronted with an acute change in condition and the choice of transfer to the hospital or treatment in the NH. Using cognitive task analysis, 96 residents and 75 family members from 19 NHs were asked how they would make this choice. Fifty-one residents (53%) and 61 family members (81%) used a deliberative mode characterized by seeking information and weighing risks and benefits. Ten residents (10%) and five family members (7%) used a predominantly emotion-based mode characterized by references to feelings and prior experiences in these facilities. Thirty-six residents (38%) and nine family members (12%) delegated the decision to a family member or provider. Age and resident/family status were associated with the mode used; transfer choice, gender, religion, education, and ethnic group were not. Although classic theories of information processing posit two modes of decision making, deliberative and affective, the current data suggest a third mode, that of delegating the decision to trusted others, particularly family members and providers.
Ayal, Shahar; Rusou, Zohar; Zakay, Dan; Hochman, Guy
2015-01-01
A framework is presented to better characterize the role of individual differences in information processing style and their interplay with contextual factors in determining decision making quality. In Experiment 1, we show that individual differences in information processing style are flexible and can be modified by situational factors. Specifically, a situational manipulation that induced an analytical mode of thought improved decision quality. In Experiment 2, we show that this improvement in decision quality is highly contingent on the compatibility between the dominant thinking mode and the nature of the task. That is, encouraging an intuitive mode of thought led to better performance on an intuitive task but hampered performance on an analytical task. The reverse pattern was obtained when an analytical mode of thought was encouraged. We discuss the implications of these results for the assessment of decision making competence, and suggest practical directions to help individuals better adjust their information processing style to the situation at hand and make optimal decisions.
Correlation-coefficient-based fast template matching through partial elimination.
Mahmood, Arif; Khan, Sohaib
2012-04-01
Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best-match location. Due to the nonmonotonic growth pattern of correlation-based similarity measures, partial computation elimination techniques have traditionally been considered inapplicable to speeding up these measures. In this paper, we show that partial elimination techniques may be applied to the correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes, including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., they have exhaustive-equivalent accuracy, and are compared with existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases our proposed algorithms have been found to be significantly faster than the other algorithms.
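The elimination idea is easiest to see on a measure whose partial sums are naturally monotonic, such as the sum of absolute differences; the paper's contribution is precisely the monotonic reformulation that extends this trick to correlation coefficients:

    import numpy as np

    def match_sad_early_stop(image, tmpl):
        th, tw = tmpl.shape
        best, best_pos = np.inf, None
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                acc = 0.0
                for i in range(th):           # row-wise partial accumulation
                    acc += np.abs(image[r + i, c:c + tw] - tmpl[i]).sum()
                    if acc >= best:           # cannot beat the best: bail out
                        break
                if acc < best:
                    best, best_pos = acc, (r, c)
        return best_pos, best

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    tmpl = img[20:28, 30:38].copy()
    print(match_sad_early_stop(img, tmpl))    # recovers (20, 30) with cost 0.0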
2005-02-03
Aging Aircraft 2005, The 8th Joint NASA/FAA/DOD Conference on Aging Aircraft: Decision Algorithms for Electrical Wiring Interconnect Systems (EWIS). Performing organizations: NASA Langley Research Center, 8 W. Taylor St., M/S 190, Hampton, VA 23681, and NAVAIR.
Enzyme leaps fuel antichemotaxis
Jee, Ah-Young; Dutta, Sandipan; Cho, Yoon-Kyoung
2018-01-01
There is mounting evidence that enzyme diffusivity is enhanced when the enzyme is catalytically active. Here, using superresolution microscopy [stimulated emission-depletion fluorescence correlation spectroscopy (STED-FCS)], we show that active enzymes migrate spontaneously in the direction of lower substrate concentration (“antichemotaxis”) by a process analogous to the run-and-tumble foraging strategy of swimming microorganisms and our theory quantifies the mechanism. The two enzymes studied, urease and acetylcholinesterase, display two families of transit times through subdiffraction-sized focus spots, a diffusive mode and a ballistic mode, and the latter transit time is close to the inverse rate of catalytic turnover. This biochemical information-processing algorithm may be useful to design synthetic self-propelled swimmers and nanoparticles relevant to active materials. Executed by molecules lacking the decision-making circuitry of microorganisms, antichemotaxis by this run-and-tumble process offers the biological function to homogenize product concentration, which could be significant in situations when the reactant concentration varies from spot to spot.
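A toy 1-D illustration of the mechanism (not the paper's STED-FCS analysis): run-and-tumble walkers whose run speed grows with local substrate concentration accumulate where they move slowly, i.e., at low substrate:

    import random

    rng = random.Random(0)
    pos = [0.5] * 1000                        # walkers on [0, 1]
    dirn = [rng.choice((-1, 1)) for _ in pos]
    for _ in range(3000):
        for i in range(len(pos)):
            if rng.random() < 0.1:            # tumble: pick a fresh direction
                dirn[i] = rng.choice((-1, 1))
            speed = 0.0005 + 0.004 * pos[i]   # substrate gradient = x itself
            pos[i] = min(max(pos[i] + dirn[i] * speed, 0.0), 1.0)
    print(sum(p < 0.5 for p in pos) / len(pos))  # > 0.5: net antichemotaxis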
Improving family satisfaction and participation in decision making in an intensive care unit.
Huffines, Meredith; Johnson, Karen L; Smitz Naranjo, Linda L; Lissauer, Matthew E; Fishel, Marmie Ann-Michelle; D'Angelo Howes, Susan M; Pannullo, Diane; Ralls, Mindy; Smith, Ruth
2013-10-01
Background: Survey data revealed that families of patients in a surgical intensive care unit were not satisfied with their participation in decision making or with how well the multidisciplinary team worked together. Objectives: To develop and implement an evidence-based communication algorithm and evaluate its effect in improving satisfaction among patients' families. Methods: A multidisciplinary team developed an algorithm that included bundles of communication interventions at 24, 72, and 96 hours after admission to the unit. The algorithm included clinical triggers, which if present escalated the algorithm. A pre-post design using process improvement methods was used to compare families' satisfaction scores before and after implementation of the algorithm. Results: Satisfaction scores for participation in decision making (45% vs 68%; z = -2.62, P = .009) and how well the health care team worked together (64% vs 83%; z = -2.10, P = .04) improved significantly after implementation. Conclusions: Use of an evidence-based structured communication algorithm may be a way to improve satisfaction of families of intensive care patients with their participation in decision making and their perception of how well the unit's team works together.
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
Results are presented from the evaluation of the performance seeking control (PSC) optimization algorithm developed by Smith et al. (1990) for F-15 aircraft, which optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. Comparisons are presented between the load cell measurements, the PSC onboard model thrust calculations, and posttest state variable model computations. Actual performance improvements using the PSC algorithm are presented for its various modes. The results of using the PSC algorithm are compared with similar test case results using the HIDEC algorithm.
Secret Key Crypto Implementations
NASA Astrophysics Data System (ADS)
Bertoni, Guido Marco; Melzani, Filippo
This chapter presents the algorithm selected in 2001 as the Advanced Encryption Standard. This algorithm is the basis for implementing security and privacy based on symmetric key solutions in almost all new applications. Secret key algorithms are used in combination with modes of operation to provide different security properties, and the most used modes of operation are presented in this chapter. Finally, an overview of the different techniques of software and hardware implementation is given.
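As a usage illustration, the pyca/cryptography package pairs AES with the GCM mode of operation to provide confidentiality and integrity in one pass:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                    # 96-bit nonce; never reuse per key
    ct = aesgcm.encrypt(nonce, b"attack at dawn", b"header")  # ciphertext + tag
    assert aesgcm.decrypt(nonce, ct, b"header") == b"attack at dawn"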
Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS
NASA Astrophysics Data System (ADS)
Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan
2018-03-01
As a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, which also imposes new requirements on data processing. This paper presents a new dust concentration measurement system that contains a MEMS ultrasonic sensor and a MEMS capacitance sensor, and presents a new data fusion algorithm for this multi-mode dust concentration measurement system. After analyzing the relation between the data from the composite measurement methods, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data fusion model for dust concentration measurement. Test results show that the data fusion algorithm is able to realize rapid and exact concentration detection.
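A hedged sketch of the fusion idea; the noise figures and random-walk process model are assumptions, not the paper's sensor characterization:

    import numpy as np

    def kalman_fuse(z_ultra, z_cap, r_ultra=4.0, r_cap=9.0, q=0.5):
        # Scalar Kalman filter corrected in turn by the ultrasonic and
        # capacitive readings (measurement variances r_*, process noise q).
        x, p, out = z_ultra[0], 1.0, []
        for zu, zc in zip(z_ultra, z_cap):
            p += q                            # predict (random-walk model)
            for z, r in ((zu, r_ultra), (zc, r_cap)):
                k = p / (p + r)               # Kalman gain
                x += k * (z - x)              # correct with this sensor
                p *= 1 - k
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(1)
    zu = 50.0 + rng.normal(0, 2.0, 100)       # ultrasonic channel
    zc = 50.0 + rng.normal(0, 3.0, 100)       # capacitive channel
    print(kalman_fuse(zu, zc)[-1])            # settles near the true 50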
Hollinghurst, Sandra; Emmett, Clare; Peters, Tim J; Watson, Helen; Fahey, Tom; Murphy, Deirdre J; Montgomery, Alan
2010-01-01
Background: Maternal preferences should be considered in decisions about mode of delivery following a previous cesarean, but risks and benefits are unclear. Objective: Economic evaluation of 2 decision aids for women with 1 previous cesarean. Design: Cost-consequences analysis. Data sources: Self-reported resource use and outcome, and published national unit costs. Target population: Women with 1 previous cesarean. Time horizon: 37 weeks' gestation and 6 weeks postnatal. Perspective: Health care delivery system. Interventions: Usual care, usual care plus an information program, and usual care plus a decision analysis program. Outcome measures: Costs to the National Health Service (NHS) in the United Kingdom (UK), score on the Decisional Conflict Scale, and mode of delivery. Results of main analysis: Cost of delivery represented 84% of the total cost; mode of delivery was the most important determinant of cost differences across the groups. Mean (SD) total cost per mother and baby: £2033 (£677) for usual care, £2069 (£738) for the information program, and £2019 (£741) for the decision analysis program. Decision aids reduced decisional conflict. Women using the decision analysis program had the fewest cesarean deliveries. Results of sensitivity analysis: Applying a cost premium to emergency cesareans over electives had little effect on group comparisons; conclusions were unaffected. Limitations: Disparity in timing of outcomes and costs, data completeness, and quality. Conclusions: Decision aids can reduce decisional conflict in women with a previous cesarean section when deciding on mode of delivery. The information program could be implemented at no extra cost to the NHS. The decision analysis program might reduce the rate of cesarean sections without any increase in costs.
Fast GPU-based computation of spatial multigrid multiframe LMEM for PET.
Nassiri, Moulay Ali; Carrier, Jean-François; Després, Philippe
2015-09-01
Significant efforts were invested during the last decade to accelerate PET list-mode reconstructions, notably with GPU devices. However, the computation time per event is still relatively long, and the list-mode efficiency on the GPU is well below the histogram-mode efficiency. Since list-mode data are not arranged in any regular pattern, costly accesses to the GPU global memory can hardly be optimized and geometrical symmetries cannot be used. To overcome the obstacles that limit the acceleration of list-mode reconstruction on the GPU, a multigrid and multiframe approach to an expectation-maximization algorithm was developed. The reconstruction process is started during data acquisition, and calculations are executed concurrently on the GPU and the CPU, while the system matrix is computed on the fly. A new convergence criterion also was introduced, which is computationally more efficient on the GPU. The implementation was tested on a Tesla C2050 GPU device for a Gemini GXL PET system geometry. The results show that the proposed algorithm (multigrid and multiframe list-mode expectation-maximization, MGMF-LMEM) converges to the same solution as the LMEM algorithm more than three times faster. The execution time of the MGMF-LMEM algorithm was 1.1 s per million events on the Tesla C2050 hardware used, for a reconstructed space of 188 × 188 × 57 voxels of 2 × 2 × 3.15 mm³. For 17- and 22-mm simulated hot lesions, the MGMF-LMEM algorithm led on the first iteration to contrast recovery coefficients (CRC) of more than 75% of the maximum CRC while achieving a minimum in the relative mean square error. Therefore, the MGMF-LMEM algorithm can be used as a one-pass method to perform real-time reconstructions for low-count acquisitions, as in list-mode gated studies. The computation time for one iteration and 60 million events was approximately 66 s.
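At the core of any LMEM variant is the list-mode EM update; a toy dense-matrix version is sketched below (the real implementation computes sparse system-matrix rows on the fly on the GPU):

    import numpy as np

    def lm_mlem(rows, sens, n_iter=20):
        # lam_j <- lam_j / s_j * sum_i a_ij / (sum_k a_ik lam_k),
        # one row a_i per detected event.
        lam = np.ones(rows.shape[1])
        for _ in range(n_iter):
            fwd = rows @ lam                  # forward projection per event
            lam *= (rows.T @ (1.0 / fwd)) / sens
        return lam

    rng = np.random.default_rng(0)
    rows = rng.random((500, 16))              # 500 events, 16 voxels (toy)
    sens = rows.sum(axis=0)                   # toy sensitivity image
    print(lm_mlem(rows, sens).round(3))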
Interventions for supporting pregnant women's decision-making about mode of birth after a caesarean.
Horey, Dell; Kealy, Michelle; Davey, Mary-Ann; Small, Rhonda; Crowther, Caroline A
2013-07-30
Pregnant women who have previously had a caesarean birth and who have no contraindication for vaginal birth after caesarean (VBAC) may need to choose between a repeat caesarean birth and commencing labour with the intention of achieving a VBAC. Women need information about their options, and interventions designed to support decision-making may be helpful. Decision support interventions can be implemented independently, shared with health professionals during clinical encounters, or used in mediated social encounters with others, such as telephone decision coaching services. Decision support interventions can include decision aids, one-on-one counselling, group information or support sessions, and decision protocols or algorithms. This review considers any decision support intervention for pregnant women making birth choices after a previous caesarean birth. The objective is to examine the effectiveness of interventions to support decision-making about vaginal birth after a caesarean birth. Secondary objectives are to identify issues related to the acceptability of any interventions to parents and the feasibility of their implementation. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (30 June 2013), Current Controlled Trials (22 July 2013), the WHO International Clinical Trials Registry Platform Search Portal (ICTRP) (22 July 2013), and reference lists of retrieved articles. We also conducted citation searches of included studies to identify possible concurrent qualitative studies. Eligible studies were all published, unpublished, and ongoing randomised controlled trials (RCTs) and quasi-randomised trials with reported data of any intervention designed to support pregnant women who have previously had a caesarean birth in making decisions about their options for birth. Studies using a cluster-randomised design were eligible for inclusion, but none were identified. Studies using a cross-over design were not eligible for inclusion. Studies published in abstract form only would have been eligible for inclusion if data were able to be extracted. Two review authors independently applied the selection criteria and carried out data extraction and quality assessment of studies. Data were checked for accuracy. We contacted authors of included trials for additional information. All included interventions were classified as independent, shared, or mediated decision supports. Consensus was obtained for classifications. Verification of the final list of included studies was undertaken by three review authors. Three randomised controlled trials involving 2270 women from high-income countries were eligible for inclusion in the review. Outcomes were reported for 1280 infants in one study. The interventions assessed in the trials were designed to be used either independently by women or mediated through the involvement of independent support. No studies looked at shared decision supports, that is, interventions designed to facilitate shared decision-making with health professionals during clinical encounters. We found no difference in planned mode of birth: VBAC (risk ratio (RR) 1.03, 95% confidence interval (CI) 0.97 to 1.10; I² = 0%) or caesarean birth (RR 0.96, 95% CI 0.84 to 1.10; I² = 0%).
The proportion of women unsure about their preference did not change (RR 0.87, 95% CI 0.62 to 1.20; I² = 0%). There was no difference in adverse outcomes reported between intervention and control groups (one trial, 1275 women/1280 babies): permanent (RR 0.66, 95% CI 0.32 to 1.36); severe (RR 1.02, 95% CI 0.77 to 1.36); unclear (RR 0.66, 95% CI 0.27 to 1.61). Overall, 64.8% of those indicating a preference for VBAC achieved it, while 97.1% of those planning caesarean birth achieved this mode of birth. We found no difference in the proportion of women achieving congruence between preferred and actual mode of birth (RR 1.02, 95% CI 0.96 to 1.07) (three trials, 1921 women). More women had caesarean births (57.3%), including 535 women for whom it was unplanned (42.6% of all caesarean deliveries and 24.4% of all births). We found no difference in actual mode of birth between groups (average RR 0.97, 95% CI 0.89 to 1.06) (three trials, 2190 women). Decisional conflict about preferred mode of birth was lower (less uncertainty) for women with decisional support (standardised mean difference (SMD) -0.25, 95% CI -0.47 to -0.02; two trials, 787 women; I² = 48%). There was also a significant increase in knowledge among women with decision support compared with those in the control group (SMD 0.74, 95% CI 0.46 to 1.03; two trials, 787 women; I² = 65%). However, there was considerable heterogeneity between the two studies contributing to this outcome (I² = 65%), attrition was greater than 15%, and the evidence for this outcome is considered to be of moderate quality only. There was no difference in satisfaction between women with decision support and those without it (SMD 0.06, 95% CI -0.09 to 0.20; two trials, 797 women; I² = 0%). No study assessed decisional regret or whether women's information needs were met. Qualitative data gathered in interviews with women and health professionals provided information about the acceptability of the decision support and the feasibility of its implementation. While women liked the decision support, there was concern among health professionals about its impact on their time and workload. Evidence is limited to independent and mediated decision supports. Research is needed on shared decision support interventions for women considering mode of birth in a pregnancy after a caesarean birth to use with their care providers.
Research on AHP decision algorithms based on BP algorithm
NASA Astrophysics Data System (ADS)
Ma, Ning; Guan, Jianhe
2017-10-01
Decision making is the thinking activity by which people choose or judge, and scientific decision-making has always been a hot issue in the field of research. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines the quantitative and the qualitative and can express and compute subjective judgments in numerical form. In decision analysis using the AHP method, the rationality of the pairwise judgment matrix has a great influence on the decision result. However, in dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirement. The BP neural network algorithm is an adaptive nonlinear dynamic system. It has powerful collective computing ability and learning ability, and it can refine the data by constantly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to restore the consistency of the pairwise judgment matrix in the AHP.
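The consistency check that motivates the paper can be computed directly; Saaty's random indices and the CR < 0.1 rule are standard, while the example matrix is illustrative:

    import numpy as np

    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

    def consistency_ratio(A):
        # A: positive reciprocal pairwise judgment matrix (Saaty scale).
        n = A.shape[0]
        lam_max = max(np.linalg.eigvals(A).real)  # principal eigenvalue
        ci = (lam_max - n) / (n - 1)              # consistency index
        return ci / RI[n]                         # CR < 0.1 is acceptable

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 3.0],
                  [1/5, 1/3, 1.0]])
    print(round(consistency_ratio(A), 3))         # ~0.03: consistent enough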
Anomaly Detection in Test Equipment via Sliding Mode Observers
NASA Technical Reports Server (NTRS)
Solano, Wanda M.; Drakunov, Sergey V.
2012-01-01
Nonlinear observers were originally developed based on the ideas of variable structure control, and for the purpose of detecting disturbances in complex systems. In this anomaly detection application, these observers were designed for estimating the distributed state of fluid flow in a pipe described by a class of advection equations. The observer algorithm uses data collected in a piping system to estimate the distributed system state (pressure and velocity along a pipe containing liquid gas propellant flow) using only boundary measurements. These estimates are then used to further estimate and localize possible anomalies such as leaks or foreign objects, and instrumentation metering problems such as an incorrect flow meter orifice plate size. The observer algorithm has the following parts: a mathematical model of the fluid flow, an observer control algorithm, and an anomaly identification algorithm. The main functional operation of the algorithm is the creation of the sliding mode in the observer system, implemented as software. Once the sliding mode starts in the system, the equivalent value of the discontinuous function in sliding mode can be obtained by filtering out the high-frequency chattering component. In control theory, "observers" are dynamic algorithms for the online estimation of the current state of a dynamic system from measurements of an output of the system. Classical linear observers can provide optimal estimates of a system state in the case of uncertainty modeled by white noise. For nonlinear cases, the theory of nonlinear observers has been developed, and its success is mainly due to the sliding mode approach. Using the mathematical theory of variable structure systems with sliding modes, the observer algorithm is designed in such a way that it steers the output of the model to the output of the system obtained via a variety of sensors, in spite of possible mismatches between the assumed model and the actual system. The unique properties of sliding mode control allow not only driving the model's internal states to the states of the real-life system, but also identifying the disturbance or anomaly that may occur.
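A scalar caricature of the approach (a toy ODE, not the paper's distributed advection model): a sliding mode observer whose filtered injection term recovers an unknown disturbance such as a leak:

    import numpy as np

    a, k, dt, tau = -1.0, 5.0, 1e-3, 0.05     # plant pole, gain, step, filter
    x, xhat, d_filt = 1.0, 0.0, 0.0
    for step in range(20000):
        d = 0.8 if step * dt > 10.0 else 0.0  # hypothetical anomaly onset
        x += dt * (a * x + d)                 # true system x' = a*x + d(t)
        v = k * np.sign(x - xhat)             # discontinuous injection
        xhat += dt * (a * xhat + v)           # observer copy of the model
        d_filt += dt / tau * (v - d_filt)     # equivalent injection ~ d(t)
    print(round(d_filt, 2))                   # ~0.8: anomaly identified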
Photovoltaic pumping system - Comparative study analysis between direct and indirect coupling mode
NASA Astrophysics Data System (ADS)
Harrag, Abdelghani; Titraoui, Abdessalem; Bahri, Hamza; Messalti, Sabir
2017-02-01
In this paper, the P&O algorithm is used to improve the performance of a photovoltaic water pumping system in both its dynamic and static response. The efficiency of the proposed algorithm has been studied successfully using a DC motor-pump powered by thirty-six PV modules via a DC-DC boost converter driven by a P&O MPPT controller. Comparative results between the direct and indirect coupling modes confirm that the proposed algorithm can effectively and simultaneously improve accuracy, response speed, ripple, and overshoot.
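The P&O logic itself fits in a few lines; the step size, bounds, and state layout below are illustrative assumptions:

    def perturb_and_observe(v, i, state):
        # One P&O step: perturb the duty cycle, observe the power change,
        # keep the direction if power rose, otherwise reverse it.
        p = v * i
        if p < state["p_prev"]:
            state["step"] = -state["step"]    # wrong direction: reverse
        state["duty"] = min(max(state["duty"] + state["step"], 0.0), 1.0)
        state["p_prev"] = p
        return state["duty"]

    state = {"duty": 0.5, "step": 0.01, "p_prev": 0.0}
    # each control period: duty = perturb_and_observe(v_pv, i_pv, state)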
A Novel Energy Saving Algorithm with Frame Response Delay Constraint in IEEE 802.16e
NASA Astrophysics Data System (ADS)
Nga, Dinh Thi Thuy; Kim, Mingon; Kang, Minho
Sleep-mode operation of a Mobile Subscriber Station (MSS) in IEEE 802.16e effectively saves energy; however, it induces frame response delay. In this letter, we propose an algorithm to quickly find the optimal value of the final sleep interval in sleep mode, minimizing energy consumption with respect to a given frame response delay constraint. Validation of our proposed algorithm through analytical and simulation results suggests that it provides useful guidance for energy saving.
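A brute-force Monte Carlo sketch of the underlying trade-off, under an assumed model (Poisson frame arrivals, doubling sleep windows, listening intervals ignored), not the letter's analysis:

    import random

    T_MIN, LAM, D_MAX = 2, 0.02, 40.0     # base window, arrival rate, delay bound

    def mean_delay(t_final, n_frames=50000, rng=random.Random(1)):
        total = 0.0
        for _ in range(n_frames):
            arrival = rng.expovariate(LAM)    # time of next downlink frame
            t, win = 0.0, T_MIN
            while t + win <= arrival:         # advance through sleep windows
                t += win
                win = min(2 * win, t_final)
            total += (t + win) - arrival      # wait until the window ends
        return total / n_frames

    best = max((tf for tf in (2, 4, 8, 16, 32, 64, 128, 256)
                if mean_delay(tf) <= D_MAX), default=T_MIN)
    print("largest final sleep interval meeting the constraint:", best)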
The Impact of the Mode of Thought in Complex Decisions: Intuitive Decisions are Better
Usher, Marius; Russo, Zohar; Weyers, Mark; Brauner, Ran; Zakay, Dan
2011-01-01
A number of recent studies have reported that decision quality is enhanced under conditions of inattention or distraction (unconscious thought; Dijksterhuis, 2004; Dijksterhuis and Nordgren, 2006; Dijksterhuis et al., 2006). These reports have generated considerable controversy, for both experimental (problems of replication) and theoretical (interpretation) reasons. Here we report the results of four experiments. The first experiment replicates the unconscious thought effect under conditions that validate and control the subjective criterion of decision quality. The second and third experiments examine the impact of a mode-of-thought manipulation (without distraction) on decision quality in immediate decisions. Here we find that intuitive or affective manipulations improve decision quality compared to analytic/deliberation manipulations. The fourth experiment combines the two methods (distraction and mode-of-thought manipulations) and demonstrates enhanced decision quality in a situation that attempts to preserve ecological validity. The results are interpreted within a framework based on two interacting subsystems of decision-making: an affective/intuition-based system and an analytic/deliberation system.
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt reconfigurable fault diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of complex systems, this paper introduces multi-agent and related technology into complicated fault diagnosis, and an integrated intelligent control system is studied. Based on the idea of hierarchical diagnostic decision structures in modeling and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent, and decision agent are analyzed; the organization and evolution of agents in the system are proposed, and the corresponding conflict resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized based on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant, with particular advantages in distributed domains.
Automated Decision-Making and Big Data: Concerns for People With Mental Illness.
Monteith, Scott; Glenn, Tasha
2016-12-01
Automated decision-making by computer algorithms based on data from our behaviors is fundamental to the digital economy. Automated decisions impact everyone, occurring routinely in education, employment, health care, credit, and government services. Technologies that generate tracking data, including smartphones, credit cards, websites, social media, and sensors, offer unprecedented benefits. However, people are vulnerable to errors and biases in the underlying data and algorithms, especially people with mental illness. Algorithms based on big data from seemingly unrelated sources may create obstacles to community integration. Voluntary online self-disclosure and constant tracking blur traditional concepts of public versus private data, medical versus non-medical data, and human versus automated decision-making. In contrast to sharing sensitive information with a physician in a confidential relationship, there may be numerous readers of information revealed online; data may be sold repeatedly, used in proprietary algorithms, and are effectively permanent. Technological changes challenge traditional norms affecting privacy and decision-making, and continued discussion of new approaches to providing privacy protections is needed.
Modal split model considering carpool mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyles, R.W.
1979-03-01
Modal split remains a primary concern of transportation planners as the state of the art has developed from diversion curves to behavioral models. The approach taken here is to formulate the mode-choice decision for the work trip as a linear combination of real and perceived characteristics of the modes considered. The logit formulation is used with three modes: two automobile modes (drive-alone and carpool) and a public transit mode (bus). The final model provides insight into which factors are important in travel decisions among these three modes and into the importance of examining travelers' perceptions of the differences among modes relative to actual measurable differences.
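The logit formulation reduces to a softmax over linear mode utilities; the coefficients and attribute values below are illustrative, not the report's estimates:

    import numpy as np

    beta = np.array([-0.05, -0.10])           # weights for [cost, travel time]
    modes = {
        "drive_alone": np.array([4.0, 30.0]), # [cost in $, time in minutes]
        "carpool":     np.array([2.0, 38.0]),
        "bus":         np.array([1.5, 45.0]),
    }
    util = {m: beta @ x for m, x in modes.items()}
    denom = sum(np.exp(u) for u in util.values())
    shares = {m: float(np.exp(u) / denom) for m, u in util.items()}
    print(shares)                             # choice probabilities sum to 1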
McKinney, Mark C; Riley, Jeffrey B
2007-12-01
The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decision steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors for heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time < 450 seconds with > 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.
A new approach to enhance the performance of decision tree for classifying gene expression data.
Hassan, Md; Kotagiri, Ramamohanarao
2013-12-20
Gene expression data classification is a challenging task due to the large dimensionality and very small number of samples. The decision tree is one of the popular machine learning approaches for addressing such classification problems. However, existing decision tree algorithms use a single gene feature at each node to split the data into child nodes, and hence might suffer from poor performance, especially when classifying gene expression datasets. By using a new decision tree algorithm in which each node of the tree consists of more than one gene, we enhance the classification performance of traditional decision tree classifiers. Our method selects suitable genes that are combined using a linear function to form a derived composite feature. To determine the structure of the tree, we use the area under the Receiver Operating Characteristic curve (AUC). Experimental analysis demonstrates higher classification accuracy using the new decision tree compared to the other existing decision trees in the literature. We experimentally compare the effect of our scheme against other well-known decision tree techniques. Experiments show that our algorithm can substantially boost the classification performance of the decision tree.
NASA Astrophysics Data System (ADS)
Fanuel, Ibrahim Mwita; Mushi, Allen; Kajunguri, Damian
2018-03-01
This paper analyzes more than 40 papers in a restricted application area: the use of the Multi-Objective Genetic Algorithm, the Non-Dominated Sorting Genetic Algorithm-II, and Multi-Objective Differential Evolution (MODE) to solve multi-objective problems in agricultural water management. The paper focuses on different application aspects, including water allocation, irrigation planning, crop pattern, and allocation of available land. The performance and results of these techniques are discussed. The review finds that there is potential to use MODE to analyze multi-objective problems; its application is all the more significant given its advantage of being a simple and powerful technique compared with other Evolutionary Algorithms. The paper concludes with promising new research directions that demand effective use of MODE, such as the inclusion of benefits derived from farm byproducts and of production costs in the model.
A Fuzzy-Decision Based Approach for Composite Event Detection in Wireless Sensor Networks
Zhang, Shukui; Chen, Hao; Zhu, Qiaoming
2014-01-01
Event detection is one of the fundamental research problems in wireless sensor networks (WSNs). Because it considers the various properties that reflect an event's status, the composite event is more consistent with the objective world, and research on composite events is therefore more realistic. In this paper, we analyze the characteristics of the composite event; then we propose a criterion to determine the area of the composite event and put forward a dominating-set-based network topology construction algorithm under random deployment. To address the unreliability of partial data in the detection process and the inherent fuzziness of event definitions, we propose a cluster-based two-dimensional τ-GAS algorithm and a fuzzy-decision-based composite event decision mechanism. In the case that the sensory data of most nodes are normal, the two-dimensional τ-GAS algorithm can filter out faulty node data effectively and reduce the influence of erroneous data on the event determination. The composite event judgment mechanism based on fuzzy decision retains the advantages of fuzzy-logic-based algorithms; moreover, it does not need the support of a huge rule base, and its computational complexity is small. Compared to the CollECT algorithm and the CDS algorithm, this algorithm improves detection accuracy and reduces traffic.
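A hedged miniature of the fuzzy-decision step (not the paper's τ-GAS mechanism): ramp memberships for each property and a min (AND) aggregation against a decision threshold:

    def ramp(x, lo, hi):
        # membership rising linearly from 0 at lo to 1 at hi
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    def composite_fire_score(temp_c, smoke_ppm):
        return min(ramp(temp_c, 45.0, 70.0), ramp(smoke_ppm, 100.0, 300.0))

    if composite_fire_score(62.0, 260.0) >= 0.6:   # illustrative threshold
        print("composite event detected")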
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed; this also supports the reliability, stability and high transmission rate required for 5G mobile communication. The algorithm builds on the hard-decision algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB, respectively, at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.
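For orientation, hard-decision decoding is easiest to see in the much simpler binary case: Gallager-style bit flipping, in which the bit participating in the most unsatisfied parity checks is flipped each round. This is only a distant, minimal relative of the non-binary magnitude-sum algorithm above.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=50):
    # Flip the bit that touches the most unsatisfied parity checks.
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                  # all checks satisfied
        votes = H.T @ syndrome              # failed checks per bit
        x[np.argmax(votes)] ^= 1            # flip the worst bit
    return x, False

# (7,4) Hamming parity-check matrix and a single-bit channel error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
r = np.zeros(7, dtype=int)                  # all-zero codeword sent
r[3] ^= 1                                   # channel flips one bit
print(bit_flip_decode(H, r))                # recovers the codeword
```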
Evaluation of Algorithms for a Miles-in-Trail Decision Support Tool
NASA Technical Reports Server (NTRS)
Bloem, Michael; Hattaway, David; Bambos, Nicholas
2012-01-01
Four machine learning algorithms were prototyped and evaluated for use in a proposed decision support tool that would assist air traffic managers as they set Miles-in-Trail restrictions. The tool would display probabilities that each possible Miles-in-Trail value should be used in a given situation. The algorithms were evaluated with an expected Miles-in-Trail cost that assumes traffic managers set restrictions based on the tool-suggested probabilities. Basic Support Vector Machine, random forest, and decision tree algorithms were evaluated, as was a softmax regression algorithm that was modified to explicitly reduce the expected Miles-in-Trail cost. The algorithms were evaluated with data from the summer of 2011 for air traffic flows bound to the Newark Liberty International Airport (EWR) over the ARD, PENNS, and SHAFF fixes. The algorithms were provided with 18 input features that describe the weather at EWR, the runway configuration at EWR, the scheduled traffic demand at EWR and the fixes, and other traffic management initiatives in place at EWR. Features describing other traffic management initiatives at EWR and the weather at EWR achieved relatively high information gain scores, indicating that they are the most useful for estimating Miles-in-Trail. In spite of a high variance or over-fitting problem, the decision tree algorithm achieved the lowest expected Miles-in-Trail costs when the algorithms were evaluated using 10-fold cross validation with the summer 2011 data for these air traffic flows.
Development of a Two-Wheel Contingency Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; ODonnell, James R., Jr.; Bauer, Frank H. (Technical Monitor)
2002-01-01
In the event of a failure of one of MAP's three reaction wheel assemblies (RWAs), it is not possible to achieve three-axis, full-state attitude control using the remaining two wheels. Hence, two of the attitude control algorithms implemented on the MAP spacecraft will no longer be usable in their current forms: Inertial Mode, used for slewing to and holding inertial attitudes, and Observing Mode, which implements the nominal dual-spin science mode. This paper describes the effort to create a complete strategy for using software algorithms to cope with an RWA failure. The discussion of the design process will be divided into three main subtopics: performing orbit maneuvers to reach and maintain an orbit about the second Earth-Sun libration point in the event of an RWA failure, completing the mission using a momentum-bias two-wheel science mode, and developing a new thruster-based mode for adjusting the inertially fixed momentum bias. In this summary, the philosophies used in designing these changes are shown; the full paper will supplement these with algorithm descriptions and testing results.
V2.2 L2AS Detailed Release Description April 15, 2002
Atmospheric Science Data Center
2013-03-14
... 'optically thick atmosphere' algorithm. Implement new experimental aerosol retrieval algorithm over homogeneous surface types. ... Change values: cloud_mask_decision_matrix(1,1): .true. -> .false. cloud_mask_decision_matrix(2,1): .true. -> .false. ...
Pricing decisions from experience: the roles of information-acquisition and response modes.
Golan, Hagai; Ert, Eyal
2015-03-01
While pricing decisions that are based on experience are quite common, e.g., setting a selling price for a used car, this type of decision has been surprisingly overlooked in psychology and decision research. Previous studies have focused on either choice decisions from experience, or pricing decisions from description. Those studies revealed that pricing involves cognitive mechanisms other than choice, while experience-based decisions involve mechanisms that differ from description-based ones. Thus, the mutual effect of pricing and experience on decision-making remains unclear. To test this effect, we experimentally compared real-money pricing decisions from experience with those from description, and with choices from experience. The results show that the mode of acquiring information affects pricing: the tendency to underprice high-probability prospects and overprice low-probability ones is diminished when pricing is based on experience rather than description. The findings further reveal attenuation of the tendency to underweight rare events, which underlies choices from experience, in pricing decisions from experience. The difference occurs because the response mode affects the search effort and decision strategy in decisions from experience.
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
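A toy sketch of the last two stages of such a pipeline, scalarizing several objectives and minimizing the result with differential evolution, is given below using SciPy's differential_evolution. The weighted-sum scalarization merely stands in for physical programming (which maps each objective through preference class functions), and both objective functions are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Two invented logistics objectives (e.g., cost and risk) of a 2-D strategy.
def objectives(x):
    cost = (x[0] - 1.0) ** 2 + 0.5 * x[1]
    risk = (x[1] - 2.0) ** 2 + 0.1 * x[0]
    return np.array([cost, risk])

def scalarize(f, pref=np.array([1.0, 1.0])):
    # Weighted sum standing in for physical programming, which instead
    # maps each objective through class functions built from the
    # decision-maker's preference ranges.
    return float(pref @ f)

result = differential_evolution(
    lambda x: scalarize(objectives(x)),
    bounds=[(0.0, 3.0), (0.0, 3.0)],
    seed=1,
)
print(result.x, result.fun)
```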
Toward detecting deception in intelligent systems
NASA Astrophysics Data System (ADS)
Santos, Eugene, Jr.; Johnson, Gregory, Jr.
2004-08-01
Contemporary decision makers often must choose a course of action using knowledge from several sources. Knowledge may be provided from many diverse sources including electronic sources such as knowledge-based diagnostic or decision support systems or through data mining techniques. As the decision maker becomes more dependent on these electronic information sources, detecting deceptive information from these sources becomes vital to making a correct, or at least more informed, decision. This applies to unintentional disinformation as well as intentional misinformation. Our ongoing research focuses on employing models of deception and deception detection from the fields of psychology and cognitive science to these systems as well as implementing deception detection algorithms for probabilistic intelligent systems. The deception detection algorithms are used to detect, classify and correct attempts at deception. Algorithms for detecting unexpected information rely upon a prediction algorithm from the collaborative filtering domain to predict agent responses in a multi-agent system.
Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis
2012-05-01
In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
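A minimal sketch of the fusion rule, assuming an exponentiated-gradient (entropic-flavored) weight update in place of the paper's entropic projections onto convex sets: sub-algorithm confidences are linearly combined, compared with the oracle's feedback, and the weights are renormalized onto the simplex.

```python
import numpy as np

def fuse_and_update(w, d, oracle, eta=0.5):
    # Fused decision = weighted sum of sub-algorithm confidences
    # (real numbers around zero). The multiplicative update followed by
    # renormalization keeps the weights on the probability simplex,
    # a simplification of the paper's entropic projections.
    y = float(w @ d)
    w = w * np.exp(eta * (oracle - y) * d)
    return w / w.sum(), y

# Three sub-algorithms; the oracle (e.g., the lookout-tower guard) says +1.
w = np.ones(3) / 3
d = np.array([0.9, -0.2, 0.4])
for _ in range(20):
    w, y = fuse_and_update(w, d, oracle=1.0)
print(w, y)   # mass shifts toward the sub-algorithms agreeing with the oracle
```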
Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure
Park, Wookje; Jung, Sikhang
2014-01-01
A fault detection algorithm was developed based on a similarity measure and the random forest algorithm. The algorithm was applied to an unmanned aerial vehicle (UAV) prepared by our group. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. The fault decision was carried out by calculating a weighted similarity measure. Twelve available coefficients among the healthy- and faulty-status data groups were used to determine the decision. The similarity measure weights were obtained through the random forest algorithm (RFA), which provides data priorities. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between the detection rate and the amount of feature data was analyzed and illustrated. The amount of useful data was obtained by repeated trials of the similarity calculation.
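A compact sketch of the weighting idea, using scikit-learn random forest feature importances as coefficient priorities inside a generic distance-based similarity; the paper's own similarity measure and UAV data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 12))                    # 12 status coefficients
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # invented fault labels

# The random forest supplies per-coefficient priorities (importances).
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
w = rf.feature_importances_                       # non-negative, sums to 1

def weighted_similarity(a, b, weights):
    # Distance-based similarity, emphasizing the coefficients the
    # forest found most discriminative.
    return 1.0 / (1.0 + np.sqrt(np.sum(weights * (a - b) ** 2)))

healthy_ref = X[y == 0].mean(axis=0)
print(weighted_similarity(X[0], healthy_ref, w))
```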
Learners' choices and beliefs about self-testing.
Kornell, Nate; Son, Lisa K
2009-07-01
Students have to make scores of practical decisions when they study. We investigated the effectiveness of, and beliefs underlying, one such practical decision: the decision to test oneself while studying. Using a flashcards-like procedure, participants studied lists of word pairs. On the second of two study trials, participants either saw the entire pair again (pair mode) or saw the cue and attempted to generate the target (test mode). Participants were asked either to rate the effectiveness of each study mode (Experiment 1) or to choose between the two modes (Experiment 2). The results demonstrated a mismatch between metacognitive beliefs and study choices: Participants (incorrectly) judged that the pair mode resulted in the most learning, but chose the test mode most frequently. A post-experimental questionnaire suggested that self-testing was motivated by a desire to diagnose learning rather than a desire to improve learning.
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there is much research on data processing using classical pattern matching and its improved algorithms, but little on the statistics of link-layer bit-stream data. This paper adopts a frequent-statistics method for link-layer bit-stream data based on the AC-IM algorithm, because classical multi-pattern matching algorithms such as the Aho-Corasick (AC) algorithm have high computational complexity and low efficiency and cannot be applied directly to binary bit-stream data. The method's maximum jump distance over the pattern tree is the length of the shortest pattern string plus 3, without missing any matches. This paper first gives a theoretical analysis of the algorithm's construction principle; the experimental results then show that the algorithm can adapt to the binary bit-stream data environment and extract frequent sequences more accurately, with an obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and is less time-consuming.
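Since AC-IM builds on Aho-Corasick, a plain AC automaton for counting pattern frequencies in a bit string is sketched below; the IM jump heuristic itself is not reproduced.

```python
from collections import deque

def build_ac(patterns):
    # Goto trie, failure links, and output sets of a classical
    # Aho-Corasick automaton; AC-IM's jump optimization is omitted.
    trie, fail, out = [{}], [0], [set()]
    for p in patterns:
        node = 0
        for ch in p:
            if ch not in trie[node]:
                trie.append({}); fail.append(0); out.append(set())
                trie[node][ch] = len(trie) - 1
            node = trie[node][ch]
        out[node].add(p)
    queue = deque(trie[0].values())          # depth-1 nodes keep fail = root
    while queue:
        u = queue.popleft()
        for ch, v in trie[u].items():
            queue.append(v)
            f = fail[u]
            while f and ch not in trie[f]:
                f = fail[f]
            fail[v] = trie[f].get(ch, 0)
            out[v] |= out[fail[v]]           # inherit matches from suffix state
    return trie, fail, out

def count_frequencies(bits, patterns):
    trie, fail, out = build_ac(patterns)
    counts = {p: 0 for p in patterns}
    node = 0
    for ch in bits:
        while node and ch not in trie[node]:
            node = fail[node]
        node = trie[node].get(ch, 0)
        for p in out[node]:
            counts[p] += 1
    return counts

print(count_frequencies("110101101", ["101", "11"]))  # {'101': 3, '11': 2}
```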
Larsen, Louise Pape; Biering, Karin; Johnsen, Soren Paaske; Riiskjær, Erik; Schougaard, Liv Marit
2014-01-01
Background Patient-reported outcome (PRO) measures may be used at a group level for research and quality improvement and at the individual patient level to support clinical decision making and ensure efficient use of resources. The challenges involved in implementing PRO measures are mostly the same regardless of aims and diagnostic groups and include logistic feasibility, high response rates, robustness, and ability to adapt to the needs of patient groups and settings. If generic PRO systems can adapt to specific needs, advanced technology can be shared between medical specialties and for different aims. Objective We describe methodological, organizational, and practical experiences with a generic PRO system, WestChronic, which is in use among a range of diagnostic groups and for a range of purposes. Methods The WestChronic system supports PRO data collection, with integration of Web and paper PRO questionnaires (mixed-mode) and automated procedures that enable adherence to implementation-specific schedules for the collection of PRO. For analysis, we divided functionalities into four elements: basic PRO data collection and logistics, PRO-based clinical decision support, PRO-based automated decision algorithms, and other forms of communication. While the first element is ubiquitous, the others are optional and only applicable at a patient level. Methodological and organizational experiences were described according to each element. Results WestChronic has, to date, been implemented in 22 PRO projects within 18 diagnostic groups, including cardiology, neurology, rheumatology, nephrology, orthopedic surgery, gynecology, oncology, and psychiatry. The aims of the individual projects included epidemiological research, quality improvement, hospital evaluation, clinical decision support, efficient use of outpatient clinic resources, and screening for side effects and comorbidity. In total 30,174 patients have been included, and 59,232 PRO assessments have been collected using 92 different PRO questionnaires. Response rates of up to 93% were achieved for first-round questionnaires and up to 99% during follow-up. For 6 diagnostic groups, PRO data were displayed graphically to the clinician to facilitate flagging of important symptoms and decision support, and in 5 diagnostic groups PRO data were used for automatic algorithm-based decisions. Conclusions WestChronic has allowed the implementation of all proposed protocols for data collection and processing. The system has achieved high response rates, and longitudinal attrition is limited. The relevance of the questions, the mixed-mode principle, and automated procedures have contributed to the high response rates. Furthermore, the development and implementation of a number of approaches and methods for clinical use of PRO have been possible without challenging the generic property. Generic multipurpose PRO systems may enable sharing of automated and efficient logistics, optimal response rates, and other advanced options for PRO data collection and processing, while still allowing adaptation to specific aims and patient groups.
Automated modal parameter estimation using correlation analysis and bootstrap sampling
NASA Astrophysics Data System (ADS)
Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.
2018-02-01
The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to a three-dimensional feature space to assign a degree of physicalness to each cluster. The proposed algorithm is applied to two case studies: one with synthetic data and one with real test data obtained from a hammer impact test. The results indicate that the algorithm successfully clusters similar modes and gives a reasonable quantification of the extent to which each cluster is physical.
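The clustering hinges on correlation metrics between mode shapes. A standard metric of this family is the Modal Assurance Criterion (MAC), sketched below; the paper's metrics additionally handle spatial eigenvector aliasing and coalescent modes, which plain MAC does not.

```python
import numpy as np

def mac(phi_i, phi_j):
    # Modal Assurance Criterion: squared, normalized correlation between
    # two (possibly complex) mode-shape vectors; 1 means identical shape.
    num = np.abs(np.vdot(phi_i, phi_j)) ** 2
    den = np.vdot(phi_i, phi_i).real * np.vdot(phi_j, phi_j).real
    return num / den

rng = np.random.default_rng(3)
phi_a = np.array([1.0, 0.8, 0.3])
phi_b = 2.1 * phi_a + 0.01 * rng.normal(size=3)   # same mode, rescaled + noise
print(mac(phi_a, phi_b))                          # close to 1
```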
Improving serum calcium test ordering according to a decision algorithm.
Faria, Daniel K; Taniguchi, Leandro U; Fonseca, Luiz A M; Ferreira-Junior, Mario; Aguiar, Francisco J B; Lichtenstein, Arnaldo; Sumita, Nairo M; Duarte, Alberto J S; Sales, Maria M
2018-05-18
To detect differences in the pattern of serum calcium test ordering before and after the implementation of a decision algorithm. We studied patients admitted to an internal medicine ward of a university hospital in April 2013 and April 2016. Patients were classified as critical or non-critical on the day each test was performed. Adequacy of ordering was defined according to adherence to a decision algorithm implemented in 2014. Total and ionised calcium tests per patient-day of hospitalisation decreased significantly after the algorithm implementation, and duplication of tests (total and ionised calcium measured in the same blood sample) was reduced by 49%. Overall adequacy of ionised calcium determinations increased by 23% (P=0.0001) due to the increase in the adequacy of ionised calcium ordering in non-critical conditions. A decision algorithm can be a useful educational tool to improve the adequacy of the process of ordering serum calcium tests.
Software fault-tolerance by design diversity DEDIX: A tool for experiments
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
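The cross-check-and-mask step can be illustrated with a minimal majority voter; the bucketing tolerance is an assumed stand-in for DEDIX's comparison rule, which the abstract does not detail.

```python
from collections import Counter

def vote(results, quantum=1e-6):
    # Bucket each version's result to a tolerance so benign floating-point
    # differences between independently designed versions still agree.
    buckets = Counter(round(r / quantum) * quantum for r in results)
    value, count = buckets.most_common(1)[0]
    if count >= len(results) // 2 + 1:
        return value                       # majority masks the faulty version
    raise RuntimeError("no majority: versions disagree")

# Version 2 is faulty; the voter masks its erroneous output.
print(vote([3.1415929, 2.7182818, 3.1415927]))
```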
PAM-4 delivery based on pre-distortion and CMMA equalization in a ROF system at 40 GHz
NASA Astrophysics Data System (ADS)
Zhou, Wen; Zhang, Jiao; Han, Xifeng; Kong, Miao; Gou, Pengqi
2018-06-01
In this paper, we propose PAM-4 delivery in a ROF system at 40 GHz. The PAM-4 transmission data are generated via look-up table (LUT) pre-distortion and then delivered over 25-km single-mode fiber and a 0.5-m wireless link. At the receiver side, the received signal is processed with cascaded multi-modulus algorithm (CMMA) equalization to improve the decision precision. Our measured results show that 10-Gbaud PAM-4 transmission in a ROF system at 40 GHz can be achieved with a BER of 1.6 × 10−3. To our knowledge, this is the first introduction of LUT pre-distortion and CMMA equalization in a ROF system to improve signal performance.
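A sketch of the equalizer family that CMMA belongs to: a multi-modulus (CMA-style) adaptive FIR filter for real-valued PAM-4 that pulls each output toward the nearer of the signal's two moduli. CMMA proper refines this with cascaded reference circles; that refinement, and all parameters below, are illustrative assumptions.

```python
import numpy as np

def mma_equalize(x, num_taps=7, mu=1e-4, radii=(1.0, 3.0)):
    # Multi-modulus adaptive FIR equalizer for PAM-4: each output is
    # pulled toward the nearer of the two signal moduli (|a| = 1 or 3).
    w = np.zeros(num_taps)
    w[num_taps // 2] = 1.0                     # center-spike initialization
    radii = np.asarray(radii)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]
        y = w @ u
        R = radii[np.argmin(np.abs(radii - abs(y)))]
        w += mu * (R**2 - y**2) * y * u        # stochastic-gradient update
    return w

rng = np.random.default_rng(4)
sym = rng.choice([-3.0, -1.0, 1.0, 3.0], size=20000)
rx = np.convolve(sym, [1.0, 0.25, -0.1], mode="same") + 0.05 * rng.normal(size=20000)
print(mma_equalize(rx))   # taps approximate the channel inverse
```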
Analysis of modal behavior at frequency cross-over
NASA Astrophysics Data System (ADS)
Costa, Robert N., Jr.
1994-11-01
The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.
NASA Astrophysics Data System (ADS)
Muslim, M. A.; Herowati, A. J.; Sugiharti, E.; Prasetiyo, B.
2018-03-01
Data mining is a technique for extracting valuable information hidden in very large data collections in order to find interesting, previously unknown patterns. Data mining has been applied in the healthcare industry. One technique used in data mining is classification. The decision tree belongs to the classification techniques of data mining, and one algorithm developed for decision trees is the C4.5 algorithm. A classifier is designed by applying pessimistic pruning to the C4.5 algorithm for diagnosing chronic kidney disease. Pessimistic pruning is used to identify and remove branches that are not needed, in order to avoid overfitting in the decision tree generated by the C4.5 algorithm. In this paper, the results obtained using these classifiers are presented and discussed. Using pessimistic pruning increases the accuracy of the C4.5 algorithm by 1.5%, from 95% to 96.5%, in diagnosing chronic kidney disease.
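A sketch of the pessimistic test, assuming the classic 0.5 continuity correction and a one-standard-error rule (a simplification of C4.5's confidence-bound formulation):

```python
import math

def should_prune(leaf_stats, collapsed_errors):
    # Prune the subtree to a leaf when the leaf's continuity-corrected
    # error count does not exceed the subtree's corrected error count
    # plus one standard error.
    subtree_err = sum(e + 0.5 for e, n in leaf_stats)
    n_total = sum(n for _, n in leaf_stats)
    se = math.sqrt(subtree_err * (n_total - subtree_err) / n_total)
    return collapsed_errors + 0.5 <= subtree_err + se

# Three leaves as (misclassified, covered); collapsing misclassifies 4 of 45.
leaves = [(1, 20), (2, 15), (0, 10)]
print(should_prune(leaves, collapsed_errors=4))   # True -> replace by a leaf
```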
Mode Shape Estimation Algorithms Under Ambient Conditions: A Comparative Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dosiek, Luke; Zhou, Ning; Pierre, John W.
This paper provides a comparative review of five existing ambient electromechanical mode shape estimation algorithms, i.e., the Transfer Function (TF), Spectral, Frequency Domain Decomposition (FDD), Channel Matching, and Subspace Methods. It is also shown that the TF Method is a general approach to estimating mode shape and that the Spectral, FDD, and Channel Matching Methods are actually special cases of it. Additionally, some of the variations of the Subspace Method are reviewed and the Numerical algorithm for Subspace State Space System IDentification (N4SID) is implemented. The five algorithms are then compared using data simulated from a 17-machine model of the Western Electricity Coordinating Council (WECC) under ambient conditions with both low and high damping, as well as during the case where ambient data is disrupted by an oscillatory ringdown. The performance of the algorithms is compared using the statistics from Monte Carlo simulations and results from measured WECC data, and a discussion of the practical issues surrounding their implementation, including cases where power system probing is an option, is provided. The paper concludes with some recommendations as to the appropriate use of the various techniques. Index terms: electromechanical mode shape, small-signal stability, phasor measurement units (PMU), system identification, N4SID, subspace.
Electromagnetic MUSIC-type imaging of perfectly conducting, arc-like cracks at single frequency
NASA Astrophysics Data System (ADS)
Park, Won-Kwang; Lesselier, Dominique
2009-11-01
We propose a non-iterative MUSIC (MUltiple SIgnal Classification)-type algorithm for the time-harmonic electromagnetic imaging of one or more perfectly conducting, arc-like cracks found within a homogeneous space R². The algorithm is based on a factorization of the Multi-Static Response (MSR) matrix collected in the far field at a single, nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition), followed by the calculation of a MUSIC cost functional expected to exhibit peaks along the crack curves every half wavelength. Numerical experimentation with exact, noiseless and noisy data shows that this is indeed the case and that the proposed algorithm behaves in a robust manner, with better results in the TM mode than in the TE mode, for which one would have to estimate the normal to the crack to obtain optimal results.
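The imaging functional is straightforward to sketch: take the SVD of the MSR matrix, keep the trailing singular vectors as a noise subspace, and plot the reciprocal distance of a normalized test vector to that subspace. The 1-D geometry and steering vectors below are schematic stand-ins for the paper's far-field setup.

```python
import numpy as np

def music_map(K, grid, test_vec, n_signal):
    # Peaks of 1 / ||P_noise g(z)|| over the grid trace the scatterers.
    U, _, _ = np.linalg.svd(K)
    Un = U[:, n_signal:]                     # noise-subspace basis
    img = []
    for z in grid:
        g = test_vec(z)
        g = g / np.linalg.norm(g)
        img.append(1.0 / (1e-12 + np.linalg.norm(Un.conj().T @ g)))
    return np.array(img)

M = 16
thetas = np.linspace(0, np.pi, M, endpoint=False)   # incidence directions

def steering(z):                                     # schematic far-field pattern
    return np.exp(1j * 2 * np.pi * z * np.cos(thetas))

z0 = 0.37                                            # point scatterer location
K = np.outer(steering(z0), steering(z0))             # rank-1 toy MSR matrix
grid = np.linspace(0.0, 1.0, 201)
print(grid[np.argmax(music_map(K, grid, steering, n_signal=1))])   # ~0.37
```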
Concurrent approach for evolving compact decision rule sets
NASA Astrophysics Data System (ADS)
Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.
1999-02-01
The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data which were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.
NASA Technical Reports Server (NTRS)
Kitzis, J. L.; Kitzis, S. N.
1979-01-01
The brightness temperature data produced by the SMMR Antenna Pattern Correction algorithm are evaluated. The evaluation consists of: (1) a direct comparison of the outputs of the interim, cross, and nominal APC modes; (2) a refinement of the previously determined cos beta estimates; and (3) a comparison of the world brightness temperature (T sub B) map with actual SMMR measurements.
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ~8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
NASA Astrophysics Data System (ADS)
Becker, Matthew Rand
I present a new algorithm, CALCLENS, for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing, and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (~10,000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲ 1%) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogs to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
NASA Astrophysics Data System (ADS)
Becker, Matthew R.
2013-10-01
I present a new algorithm, Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS (CALCLENS), for efficiently computing weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. This new algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (˜10 000 square degrees) can be ray traced efficiently at high resolution using only a few hundred cores. Using this new algorithm and curved-sky calculations that only use a slower but more accurate spherical harmonic transform Poisson solver, I study the convergence, shear E-mode, shear B-mode and rotation mode power spectra. Employing full-sky E/B-mode decompositions, I confirm that the numerically computed shear B-mode and rotation mode power spectra are equal at high accuracy (≲1 per cent) as expected from perturbation theory up to second order. Coupled with realistic galaxy populations placed in large N-body light cone simulations, this new algorithm is ideally suited for the construction of synthetic weak lensing shear catalogues to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys. The implementation presented in this work, written in C and employing widely available software libraries to maintain portability, is publicly available at http://code.google.com/p/calclens.
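The harmonic-space core of such a spherical Poisson solver is simple to sketch: spherical harmonics are eigenfunctions of the Laplacian, so each multipole is rescaled by -1/[l(l+1)]. The snippet uses healpy for the transforms, which is an assumption for illustration (CALCLENS itself is written in C), and omits the multigrid part.

```python
import numpy as np
import healpy as hp

def sphere_poisson(density_map, lmax):
    # Solve lap(phi) = delta on the unit sphere in harmonic space:
    # phi_lm = -delta_lm / [l(l+1)], dropping the l = 0 monopole.
    alm = hp.map2alm(density_map, lmax=lmax)
    ell = np.arange(lmax + 1)
    fl = np.zeros(lmax + 1)
    fl[1:] = -1.0 / (ell[1:] * (ell[1:] + 1.0))
    alm = hp.almxfl(alm, fl)
    return hp.alm2map(alm, nside=hp.get_nside(density_map))

nside = 64
delta = np.random.default_rng(5).normal(size=hp.nside2npix(nside))
phi = sphere_poisson(delta, lmax=2 * nside)
print(phi.shape)
```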
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
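For reference, one affine projection update is sketched below; the selective-input-vector screening and state-decision logic described above are omitted, and the filter order, projection order and step size are illustrative.

```python
import numpy as np

def apa_step(w, X, d, mu=0.5, delta=1e-4):
    # One affine projection update over the K newest input vectors
    # (columns of X) and desired samples d.
    e = d - X.T @ w                               # a-priori errors
    G = X.T @ X + delta * np.eye(X.shape[1])      # regularized Gram matrix
    return w + mu * X @ np.linalg.solve(G, e)

rng = np.random.default_rng(6)
h = np.array([0.6, -0.3, 0.2, 0.1])               # unknown 4-tap system
w = np.zeros(4)
x = rng.normal(size=2000)
for n in range(7, len(x)):
    X = np.column_stack([x[n - k - 4:n - k][::-1] for k in range(3)])  # K = 3
    d = X.T @ h + 0.01 * rng.normal(size=3)
    w = apa_step(w, X, d)
print(w)                                          # converges close to h
```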
Water flow algorithm decision support tool for travelling salesman problem
NASA Astrophysics Data System (ADS)
Kamarudin, Anis Aklima; Othman, Zulaiha Ali; Sarim, Hafiz Mohd
2016-08-01
This paper discusses the role of a decision support tool (DST) for the travelling salesman problem (TSP) in helping researchers working in the same area obtain better results from a proposed algorithm. A study was conducted using the Rapid Application Development (RAD) model as the methodology, which includes requirement planning, user design, construction and cutover. The water flow algorithm (WFA) with an improved initialization technique is used as the proposed algorithm in this study and is evaluated for effectiveness against TSP cases. The DST evaluation comprised usability testing of system use, quality of information, quality of interface and overall satisfaction. The evaluation was needed to determine whether this tool can assist users in making decisions to solve TSP problems with the proposed algorithm. Statistical results show the ability of this tool to help researchers conduct experiments on the WFA with improved TSP initialization.
A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation.
D'Angelo, Robert; Wood, Richard; Lowry, Nathan; Freifeld, Geremy; Huang, Haiyao; Salthouse, Christopher D; Hollosi, Brent; Muresan, Matthew; Uy, Wes; Tran, Nhut; Chery, Armand; Poppe, Dorothy C; Sonkusale, Sameer
2018-06-27
Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain. The analog CMOS compatible formulation relies on a pulse width (i.e., time mode) encoding of the pixel data that is compatible with pulse-mode imagers and slope based converters often used in imager designs. This letter begins by discussing this time-mode encoding for implementing neuromorphic architectures. Next, the proposed algorithm is derived. Hardware-oriented optimizations and modifications to this algorithm are proposed and discussed. Next, a metric for quantifying saliency accuracy is proposed, and simulation results of this metric are presented. Finally, an analog synthesis approach for a time-mode architecture is outlined, and postsynthesis transistor-level simulations that demonstrate functionality of an implementation in a modern CMOS process are discussed.
Shorten, Allison; Shorten, Brett
2014-10-01
To help identify the optimal timing for provision of pregnancy decision-aids, this paper examines temporal patterns in women's preference for mode of birth after previous cesarean, prior to a decision-aid intervention. Pregnant women (n=212) with one prior cesarean responded to surveys regarding their preference for elective repeat cesarean delivery (ERCD) or trial of labor (TOL) at 12-18 weeks and again at 28 weeks gestation. Patterns of adherence or change in preference were examined. Women's preferences for birth were not set in early pregnancy. There was evidence of increasing uncertainty about preferred mode of birth during the first two trimesters of pregnancy (McNemar value=4.41, p=0.04), decrease in preference for TOL (McNemar value=3.79, p=0.05) and stability in preference for ERCD (McNemar value=0.31, p=0.58). Adherence to early pregnancy choice was associated with previous birth experience, maternal country of birth, emotional state and hospital site. Women's growing uncertainty about mode of birth prior to 28 weeks indicates potential readiness for a decision-aid earlier in pregnancy. Pregnancy decision-aids affecting mode of birth could be provided early in pregnancy to increase women's opportunity to improve knowledge, clarify personal values and reduce decision uncertainty.
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction
NASA Technical Reports Server (NTRS)
Juang, J. N.; Pappa, R. S.
1985-01-01
A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
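A compact single-input, single-output ERA sketch: form block-Hankel matrices from the Markov parameters, take an SVD, and read off a minimum-order realization; the paper's accuracy indicators are not reproduced.

```python
import numpy as np

def era(markov, order, rows=20, cols=20):
    # Block-Hankel matrices of the impulse-response (Markov) parameters.
    H0 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 2] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]       # minimum-order truncation
    S_isqrt = np.diag(1.0 / np.sqrt(s))
    A = S_isqrt @ U.T @ H1 @ Vt.T @ S_isqrt
    B = (np.diag(np.sqrt(s)) @ Vt)[:, 0]
    C = (U @ np.diag(np.sqrt(s)))[0, :]
    return A, B, C

# Simulated impulse response of one damped mode (1.5 Hz, 2% damping).
dt, zeta, wn = 0.1, 0.02, 2 * np.pi * 1.5
t = np.arange(60) * dt
y = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)

A, _, _ = era(y, order=2)
lam = np.log(np.linalg.eigvals(A).astype(complex)) / dt  # continuous-time poles
print(abs(lam.imag[0]) / (2 * np.pi), "Hz")              # recovers ~1.5 Hz
```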
Fischer, Peter; Fischer, Julia; Weisweiler, Silke; Frey, Dieter
2010-12-01
We investigated whether different modes of decision making (deliberate, intuitive, distracted) affect subsequent confirmatory processing of decision-consistent and inconsistent information. Participants showed higher levels of confirmatory information processing when they made a deliberate or an intuitive decision versus a decision under distraction (Studies 1 and 2). As soon as participants have a cognitive (i.e., deliberate cognitive analysis) or affective (i.e., intuitive and gut feeling) reason for their decision, the subjective confidence in the validity of their decision increases, which results in increased levels of confirmatory information processing (Study 2). In contrast, when participants are distracted during decision making, they are less certain about the validity of their decision and thus are subsequently more balanced in the processing of decision-relevant information.
Jia, Zhensheng; Chien, Hung-Chang; Cai, Yi; Yu, Jianjun; Zhang, Chengliang; Li, Junjie; Ma, Yiran; Shang, Dongdong; Zhang, Qi; Shi, Sheping; Wang, Huitao
2015-02-09
We experimentally demonstrate a quad-carrier 1-Tb/s solution with a 37.5-GBaud PM-16QAM signal over a 37.5-GHz optical grid at 6.7-b/s/Hz net spectral efficiency. Digital Nyquist pulse shaping at the transmitter and post-equalization at the receiver are employed to mitigate the joint impairments of inter-symbol interference (ISI) and inter-channel interference (ICI). The post-equalization algorithms consist of a one-sample-per-symbol decision-directed least-mean-square (DD-LMS) adaptive filter, a digital post filter and maximum-likelihood sequence estimation (MLSE), with a positive iterative process among them. By combining these algorithms, an improvement of as much as 4 dB in OSNR (0.1 nm) at the SD-FEC limit (Q² = 6.25, corresponding to BER = 2.0 × 10−2) is obtained compared to the case without such post-equalization, and transmission over an 820-km EDFA-only standard single-mode fiber (SSMF) link is achieved for two 1.2-Tb/s signals with an averaged Q² factor larger than 6.5 dB for all sub-channels. Additionally, 50-GBaud 16QAM operating at 1.28 samples/symbol in a DAC is also investigated, and successful transmission over a 410-km SSMF link is achieved at a 62.5-GHz optical grid.
Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Orme, John S.
1992-01-01
The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation: the minimum fuel flow mode minimizes fuel consumption, the minimum temperature mode minimizes fan turbine inlet temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degrees R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode cause substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.
Implementation of Data Mining to Analyze Drug Cases Using C4.5 Decision Tree
NASA Astrophysics Data System (ADS)
Wahyuni, Sri
2018-03-01
Data mining is the process of finding useful information in large databases. One of the techniques in data mining is classification. The method used here is the decision tree method, and the algorithm used is the C4.5 algorithm. The decision tree method transforms a very large set of facts into a decision tree that presents the underlying rules; it is useful for exploring data and for finding hidden relationships between a number of potential input variables and a target variable. The decision tree of the C4.5 algorithm is constructed in several stages, including selecting an attribute as the root, creating a branch for each value and dividing the cases among the branches. These stages are repeated for each branch until all the cases on the branch have the same class, and from the resulting decision tree the rules for a case are derived. In this work, the researcher classified data on prisoners at the Labuhan Deli prison to identify the factors behind detainees committing drug crimes. By applying the C4.5 algorithm, knowledge was obtained that can serve as information to help minimize drug crime. The findings show that the most influential factor in a detainee's drug offence was the address variable.
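The attribute-selection stage can be illustrated with C4.5's gain-ratio criterion. The toy records below are invented and only hint at why a highly predictive attribute such as the address would be chosen as the root.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    # C4.5's criterion: information gain normalized by split information,
    # so many-valued attributes are not unfairly favoured.
    n = len(labels)
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr], []).append(lab)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - remainder
    split_info = -sum(len(g) / n * math.log2(len(g) / n) for g in groups.values())
    return gain / split_info if split_info > 0 else 0.0

# Toy detainee records: (address_zone, age_band) -> drug case yes/no.
rows = [("A", "young"), ("A", "old"), ("B", "young"), ("B", "old"),
        ("A", "young"), ("B", "young")]
labels = ["yes", "yes", "no", "no", "yes", "no"]
print(gain_ratio(rows, labels, 0), gain_ratio(rows, labels, 1))  # 1.0 vs 0.0
```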
Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu
2012-02-01
In this paper, to precisely obtain the spatial distribution characteristics of regional soil quality, some main factors that affect soil quality, such as soil type, land use pattern, lithology type, topography, road, and industry type, were used; mutual information theory was adopted to select the main environmental factors, and the See5.0 decision tree algorithm was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables, and for the former model the prediction accuracy, whether of the decision tree or of the decision rules, was higher than 80%. Based on the continuous and categorical data, the method of mutual information theory integrated with the decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
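A small sketch of the two-stage scheme, mutual-information ranking followed by a tree trained on the selected factors, using scikit-learn; the tree here is CART rather than the paper's See5.0, and the data are synthetic.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the soil data: 8 environmental factors, quality grade.
rng = np.random.default_rng(7)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.8 * X[:, 2] + 0.1 * rng.normal(size=300) > 0).astype(int)

# Rank factors by mutual information with the soil-quality grade,
# then train the tree only on the most informative ones.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-3:]
clf = DecisionTreeClassifier(random_state=0).fit(X[:, keep], y)
print(keep, clf.score(X[:, keep], y))
```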
Montgomery, Alan A; Emmett, Clare L; Fahey, Tom; Jones, Claire; Ricketts, Ian; Patel, Roshni R; Peters, Tim J; Murphy, Deirdre J
2007-06-23
To determine the effects of two computer based decision aids on decisional conflict and mode of delivery among pregnant women with a previous caesarean section. Randomised trial, conducted from May 2004 to August 2006. Four maternity units in south west England, and Scotland. 742 pregnant women with one previous lower segment caesarean section and delivery expected at ≥37 weeks. Non-English speakers were excluded. Usual care: standard care given by obstetric and midwifery staff. Information programme: women navigated through descriptions and probabilities of clinical outcomes for mother and baby associated with planned vaginal birth, elective caesarean section, and emergency caesarean section. Decision analysis: mode of delivery was recommended based on utility assessments performed by the woman combined with probabilities of clinical outcomes within a concealed decision tree. Both interventions were delivered via a laptop computer after brief instructions from a researcher. Total score on decisional conflict scale, and mode of delivery. Women in the information programme (adjusted difference -6.2, 95% confidence interval -8.7 to -3.7) and the decision analysis (-4.0, -6.5 to -1.5) groups had reduced decisional conflict compared with women in the usual care group. The rate of vaginal birth was higher for women in the decision analysis group compared with the usual care group (37% v 30%, adjusted odds ratio 1.42, 0.94 to 2.14), but the rates were similar in the information programme and usual care groups. Decision aids can help women who have had a previous caesarean section to decide on mode of delivery in a subsequent pregnancy. The decision analysis approach might substantially affect national rates of caesarean section. Trial registration: Current Controlled Trials ISRCTN84367722.
Frequency-domain-independent vector analysis for mode-division multiplexed transmission
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Hu, Guijun; Li, Jiao
2018-04-01
In this paper, we propose a demultiplexing method based on the frequency-domain independent vector analysis (FD-IVA) algorithm for mode-division multiplexing (MDM) systems. FD-IVA extends frequency-domain independent component analysis (FD-ICA) from univariate to multivariate variables and provides an efficient means to eliminate the permutation ambiguity. In order to verify the performance of the FD-IVA algorithm, a 6 × 6 MDM system is simulated. The simulation results show that the FD-IVA algorithm has essentially the same bit-error-rate (BER) performance as the FD-ICA algorithm and the frequency-domain least-mean-squares (FD-LMS) algorithm, and the convergence speed of FD-IVA is the same as that of FD-ICA. However, compared with FD-ICA and FD-LMS, FD-IVA has an obviously lower computational complexity.
Suner, Aslı; Karakülah, Gökhan; Dicle, Oğuz
2014-01-01
Statistical hypothesis testing is an essential component of biological and medical studies for making inferences and estimations from the data collected in the study; however, misuse of statistical tests is widespread. To prevent errors in selecting a suitable statistical test, users can currently consult the available test-selection algorithms developed for various purposes. However, the lack of an algorithm presenting the most common statistical tests used in biomedical research in a single flowchart causes several problems, such as shifting users among algorithms, poor decision support in test selection, and dissatisfaction among potential users. Herein, we demonstrate a unified flowchart that covers the statistical tests most commonly used in the biomedical domain, to provide decision aid to non-statistician users in choosing the appropriate statistical test for their hypothesis. We also discuss some of our findings from integrating the flowcharts into a single, more comprehensive decision algorithm.
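One branch of such a flowchart translates directly into code; the thresholds and branches below are illustrative assumptions, not the published algorithm.

```python
def choose_two_group_test(paired, normal, n):
    # Tiny slice of a test-selection flowchart: comparing two groups on
    # a continuous outcome. A full algorithm covers many more branches
    # (counts, more than two groups, correlations, and so on).
    if paired:
        return "paired t-test" if (normal or n >= 30) else "Wilcoxon signed-rank"
    return "independent t-test" if (normal or n >= 30) else "Mann-Whitney U"

print(choose_two_group_test(paired=False, normal=False, n=12))  # Mann-Whitney U
```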
NASA Astrophysics Data System (ADS)
Kang, Sang-Won; Suh, Tae-Suk; Chung, Jin-Beom; Eom, Keun-Yong; Song, Changhoon; Kim, In-Ah; Kim, Jae-Sung; Lee, Jeong-Woo; Cho, Woong
2017-02-01
The purpose of this study was to evaluate the impact of dosimetric and radiobiological parameters on treatment plans generated with different dose-calculation algorithms and delivery-beam modes for prostate stereotactic body radiation therapy using an endorectal balloon. For 20 patients with prostate cancer, stereotactic body radiation therapy (SBRT) plans were generated using a 10-MV photon beam in flattening filter (FF) and flattening-filter-free (FFF) modes. The total treatment dose prescribed was 42.7 Gy in 7 fractions to cover at least 95% of the planning target volume (PTV) with 95% of the prescribed dose. The dose computation was initially performed using an anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) and was then re-calculated using Acuros XB (AXB V. 11.0.34) with the same monitor units and multileaf collimator files. The dosimetric and the radiobiological parameters for the PTV and organs at risk (OARs) were analyzed from the dose-volume histogram. An obvious difference in dosimetric parameters between the AAA and the AXB plans was observed in the PTV and rectum. Doses to the PTV, excluding the maximum dose, were always higher in the AAA plans than in the AXB plans. However, doses to the other OARs were similar in both algorithm plans. In addition, no difference was observed in the dosimetric parameters for different delivery-beam modes when the same algorithm was used to generate the plans. Consistent with the dosimetric parameters, the radiobiological parameters for the two algorithm plans presented an apparent difference in the PTV and the rectum. The average tumor control probability of the AAA plans was higher than that of the AXB plans. The average normal tissue complication probability (NTCP) for the rectum was lower in the AXB plans than in the AAA plans. The AAA and the AXB plans yielded very similar NTCPs for the other OARs. In plans using the same algorithm, the NTCPs for the different delivery-beam modes showed no differences. This study demonstrated that the choice of dose-calculation algorithm affected the dosimetric and the radiobiological parameters for the PTV and the rectum in prostate SBRT using an endorectal balloon. However, the dosimetric and the radiobiological parameters in the AAA and the AXB plans for the other OARs were similar. Furthermore, no differences between the dosimetric and the radiobiological parameters for different delivery-beam modes were found when the same algorithm was used to generate the treatment plan.
Gu, Chunyi; Zhu, Xinli; Ding, Yan; Setterberg Simone; Wang, Xiaojiao; Tao, Hua; Zhang, Yu
2018-07-01
To explore nulliparous women's perceptions of decision making regarding mode of delivery under China's two-child policy. Qualitative descriptive design with in-depth semi-structured interviews. Postnatal wards at a tertiary specialized women's hospital in Shanghai, China. 21 nulliparous women 2-3 days postpartum were purposively sampled until data saturation. In-depth semi-structured interviews were conducted between October 8th, 2015 and January 31st, 2016. Two overarching descriptive categories were identified: (1) women's decision-making process: stability versus variability, and (2) factors affecting decision making: variety versus interactivity. Four key themes emerged from each category: (1) initial decision making with certainty: anticipated trial of labour, failed trial of labour, 'shy away' and compromise, anticipated caesarean delivery; (2) initial decision making with uncertainty: anticipated trial of labour, failed trial of labour, 'shy away' and compromise; (3) internal factors affecting decision making: knowledge and attitude, and childbirth self-efficacy; and (4) external factors affecting decision making: social support, and the situational environment. In the initial period of China's two-child policy, nulliparous women perceived their decision-making process regarding mode of delivery as complex and uncertain, influenced by both internal and external factors. This has implications for obstetric settings to develop a well-designed decision support system for pregnant women throughout pregnancy. It is recommended that care providers assess women's preferences for mode of delivery from early pregnancy and provide adequate perinatal support and continuity of care.
Intermediate Levels of Autonomy within the SSM/PMAD Breadboard
NASA Technical Reports Server (NTRS)
Dugal-Whitehead, Norma R.; Walls, Bryan
1995-01-01
The Space Station Module Power Management and Distribution (SSM/PMAD) breadboard is a test bed for the development of advanced power system control and automation. Software control in the SSM/PMAD breadboard is through cooperating systems, called Autonomous Agents. Agents can be a mixture of algorithmic software and expert systems. The early SSM/PMAD system was envisioned as being completely autonomous. It soon became apparent, though, that there would always be a need for human intervention, at least as long as a human interacts with the system in any way. In a system designed only for autonomous operation, manual intervention meant taking full control of the whole system and losing whatever expertise was in the system. Several methods were therefore developed to allow humans to interact at an appropriate level of control. This paper examines some of these intermediate modes of autonomy. The least intrusive mode is simple monitoring. The ability to modify future behavior by altering a schedule involves high-level interaction. Modification of operating activities comes next. The coarsest mode of control is unplanned operation of individual power system components. Each of these levels is integrated into the SSM/PMAD breadboard, with support for the user (such as warnings of the consequences of control decisions) at every level.
Manticore and CS mode: parallelizable encryption with joint cipher-state authentication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2004-10-01
We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant against some IV reuse. We offer a Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which support variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption that uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.
NASA Astrophysics Data System (ADS)
Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan
2017-02-01
Sonic Infrared imaging (SIR) is a relatively new NDE technique that has gained significant acceptance in the NDE community. SIR is a fast, wide-area NDE method that uses short pulses of ultrasonic excitation together with infrared imaging to detect defects in the structures under inspection. Defects become visible to the IR camera when the temperature in the crack vicinity increases due to various heating mechanisms in the specimen. Defect detection is strongly affected by noise levels as well as by mode patterns in the image. Mode patterns result from the superposition of sonic waves interfering within the specimen during the application of the sound pulse. Mode patterns can be a serious concern, especially in composite structures: they can either mimic real defects in the specimen or, alternatively, hide defects that they overlap. At last year's QNDE we presented algorithms to improve defect detectability in severe noise. In this paper, we present our development of defect-extraction algorithms that specifically target mode patterns in SIR images.
NASA Astrophysics Data System (ADS)
Shi, Binkai; Qiao, Pizhong
2018-03-01
Vibration-based nondestructive testing is an area of growing interest and worthy of exploring new and innovative approaches. The displacement mode shape is often chosen to identify damage due to its local detailed characteristic and lower sensitivity to surrounding noise. The requirement for a baseline mode shape in most vibration-based damage identification limits the application of such a strategy. In this study, a new surface fractal dimension called edge perimeter dimension (EPD) is formulated, from which an EPD-based window dimension locus (EPD-WDL) algorithm for irregularity or damage identification of plate-type structures is established. An analytical notch-type damage model of simply supported plates is proposed to evaluate the notch effect on plate vibration performance, and a sub-domain of notch cases with lesser effect is selected to investigate the robustness of the proposed damage identification algorithm. Fundamental aspects of the EPD-WDL algorithm in terms of notch localization, notch quantification, and noise immunity are then assessed. A mathematical solution called isomorphism is implemented to remove false peaks caused by inflexions of mode shapes when applying the EPD-WDL algorithm to higher mode shapes. The effectiveness and practicability of the EPD-WDL algorithm are demonstrated by an experimental procedure on damage identification of an artificially induced notched aluminum cantilever plate using a measurement system of a piezoelectric lead zirconate titanate (PZT) actuator and a scanning laser Doppler vibrometer (SLDV). As demonstrated in both the analytical and experimental evaluations, the new surface fractal dimension technique is capable of effectively identifying damage in plate-type structures.
Data mining for multiagent rules, strategies, and fuzzy decision tree structure
NASA Astrophysics Data System (ADS)
Smith, James F., III; Rhyne, Robert D., II; Fisher, Kristin
2002-03-01
A fuzzy logic based resource manager (RM) has been developed that automatically allocates electronic attack resources in real-time over many dissimilar platforms. Two different data mining algorithms have been developed to determine rules, strategies, and fuzzy decision tree structure. The first data mining algorithm uses a genetic algorithm as a data mining function and is called from an electronic game. The game allows a human expert to play against the resource manager in a simulated battlespace, with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining process: the game automatically creates a database reflecting the domain expert's knowledge, calls a data mining function, a genetic algorithm, for data mining of the database as required, and allows easy evaluation of the information mined in the second step. The criterion for re-optimization is discussed as well as experimental results. A second data mining algorithm that uses a genetic program as a data mining function is then introduced to automatically discover fuzzy decision tree structures. Finally, a fuzzy decision tree generated through this process is discussed.
Creating ensembles of oblique decision trees with evolutionary algorithms and sampling
Cantu-Paz, Erick [Oakland, CA; Kamath, Chandrika [Tracy, CA
2006-06-13
A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object-oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.
Bayesian design of decision rules for failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.
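The suboptimal sequential rules described above reduce, in the simplest Gaussian case, to threshold tests on a running log-likelihood ratio. The sketch below shows a generic Wald-style sequential test for a mean shift in measurement residuals; it illustrates the class of rules discussed, not the paper's specific design, and all parameters are assumptions.

```python
import numpy as np

def sprt_failure_detector(residuals, mu0=0.0, mu1=1.0, sigma=1.0,
                          a=-4.6, b=4.6):
    """Sequential log-likelihood-ratio test on Gaussian residuals.

    Declares 'failure' when the cumulative LLR crosses b and
    'no failure' when it falls below a. The thresholds correspond
    roughly to 1% error probabilities; all values are illustrative.
    """
    llr = 0.0
    for k, r in enumerate(residuals):
        # log-likelihood ratio of one sample: N(mu1, s) vs N(mu0, s)
        llr += ((r - mu0) ** 2 - (r - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= b:
            return "failure", k
        if llr <= a:
            return "no failure", k
    return "undecided", len(residuals) - 1

rng = np.random.default_rng(0)
print(sprt_failure_detector(rng.normal(1.0, 1.0, 100)))
```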
Hybrid algorithms for fuzzy reverse supply chain network design.
Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multiphase, multiproduct reverse supply chain network, the paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced modes of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
NASA Astrophysics Data System (ADS)
Azzali, F.; Ghazali, O.; Omar, M. H.
2017-08-01
The design of next generation networks across various technologies under the "Anywhere, Anytime" paradigm offers seamless connectivity across different coverage areas. A conventional algorithm such as the RSSThreshold algorithm, which uses only the received signal strength (RSS) as a metric, degrades handover performance in terms of handover latency, delay, packet loss, and handover failure probability. Moreover, an RSS-based algorithm is only suitable for horizontal handover decisions with respect to quality of service (QoS), rather than the vertical handover decisions required in advanced technologies. In next generation networks, a vertical handover can be initiated based on the user's convenience or choice rather than for connectivity reasons. This study proposes a vertical handover decision algorithm based on Fuzzy Logic (FL) to increase QoS performance in heterogeneous vehicular ad-hoc networks (VANET). The study uses network simulator 2.29 (NS 2.29) along with a mobility traffic network and generator to implement simulation scenarios and topologies, which helps the simulation achieve a realistic VANET mobility scenario; the required analysis of QoS performance in the vertical handover can thus be conducted. The proposed Fuzzy Logic algorithm improves over the conventional (RSSThreshold) algorithm in the average percentage of handover QoS, achieving 20%, 21%, and 13% improvements in handover latency, delay, and packet loss respectively. This is achieved through a triggering process in layers two and three that enhances the handover performance.
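For illustration, a fuzzy handover decision can be as simple as aggregating triangular membership degrees of a few link metrics into a score. The membership functions, weights, and threshold below are invented for the sketch and do not come from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def handover_score(rss_dbm, latency_ms, loss_pct):
    """Toy fuzzy score for the current link: higher means 'hand over'.

    Memberships and rule weights are illustrative, not the paper's.
    """
    weak_rss   = tri(rss_dbm, -100, -90, -75)
    high_delay = tri(latency_ms, 50, 150, 300)
    high_loss  = tri(loss_pct, 2, 10, 25)
    # Simple weighted aggregation of the rule activations
    return 0.5 * weak_rss + 0.3 * high_delay + 0.2 * high_loss

# Trigger a vertical handover when the fuzzy score crosses a threshold
print(handover_score(-92, 180, 12) > 0.5)
```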
Effect of feedback mode and task difficulty on quality of timing decisions in a zero-sum game.
Tikuisis, Peter; Vartanian, Oshin; Mandel, David R
2014-09-01
The objective was to investigate the interaction between the mode of performance outcome feedback and task difficulty on timing decisions (i.e., when to act). Feedback is widely acknowledged to affect task performance; however, the extent to which the effect of feedback display mode on timing decisions is moderated by task difficulty remains largely unknown. Participants repeatedly engaged in a zero-sum game involving silent duels with a computerized opponent and were given visual performance feedback after each engagement. They were sequentially tested on three levels of task difficulty (low, intermediate, and high) in counterbalanced order. Half received relatively simple "inside view" binary outcome feedback, and the other half received complex "outside view" hit rate probability feedback. The key dependent variables were response time (i.e., time taken to make a decision) and survival outcome. When task difficulty was low to moderate, participants were more likely to learn and perform better from hit rate probability feedback than from binary outcome feedback. However, the better performance with hit rate feedback exacted a higher cognitive cost, manifested as longer decision response times. The beneficial effect of hit rate probability feedback on timing decisions is thus partially moderated by task difficulty. The performance feedback mode should be chosen judiciously in relation to task difficulty for optimal performance in tasks involving timing decisions.
Competitive learning with pairwise constraints.
Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep
2013-01-01
Constrained clustering has been an active research topic over the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results, in terms of the normalized mutual information (NMI), from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
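The rival-penalized update at the heart of RPCL is compact enough to show directly. The sketch below gives the plain on-line update (winner attracted, nearest rival repelled); the constraint handling that distinguishes C-RPCL is noted in the comments but omitted.

```python
import numpy as np

def rpcl_step(centers, x, alpha=0.05, beta=0.005):
    """One on-line rival-penalized competitive learning update.

    The winner moves toward sample x; its closest rival is pushed away
    with the smaller de-learning rate beta. Constraint handling (as in
    C-RPCL) would additionally bias the winner selection using
    must-link/cannot-link pairs; that part is omitted here.
    """
    d = np.linalg.norm(centers - x, axis=1)
    order = np.argsort(d)
    winner, rival = order[0], order[1]
    centers[winner] += alpha * (x - centers[winner])
    centers[rival]  -= beta * (x - centers[rival])
    return winner

rng = np.random.default_rng(1)
C = rng.normal(size=(3, 2))          # three prototype vectors in 2-D
for x in rng.normal(size=(200, 2)):  # stream of samples
    rpcl_step(C, x)
print(C)
```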
Analysis of data mining classification by comparison of C4.5 and ID3 algorithms
NASA Astrophysics Data System (ADS)
Sudrajat, R.; Irianingsih, I.; Krisnawan, D.
2017-01-01
The rapid development of information technology has triggered its intensive use; for example, data mining is widely used in investment. Among the many techniques that can assist in investment, the method used here for classification is the decision tree. Decision trees can be built by a variety of algorithms, such as C4.5 and ID3, and the two algorithms can generate different models, with different accuracies, from similar data sets. The C4.5 and ID3 algorithms with discrete data achieve accuracies of 87.16% and 99.83%, respectively, and the C4.5 algorithm with numerical data achieves 89.69%. The C4.5 and ID3 algorithms with discrete data classify 520 and 598 customers, respectively, and the C4.5 algorithm with numerical data classifies 546 customers. From this analysis, both algorithms classify quite well, with error rates of less than 15%.
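The difference between the two algorithms starts from a shared split criterion: ID3 selects the attribute with the highest information gain, while C4.5 extends this with the gain ratio and numeric-attribute handling. A minimal sketch of the information-gain computation:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """ID3 split criterion: entropy reduction from splitting on one attribute."""
    base = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return base - remainder

# Toy data: attribute 0 perfectly predicts the label, attribute 1 does not
rows   = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = ["yes", "yes", "no", "no"]
print(information_gain(rows, labels, 0))  # 1.0
print(information_gain(rows, labels, 1))  # 0.0
```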
Analysis and design of second-order sliding-mode algorithms for quadrotor roll and pitch estimation.
Chang, Jing; Cieslak, Jérôme; Dávila, Jorge; Zolghadri, Ali; Zhou, Jun
2017-11-01
The problem addressed in this paper is that of quadrotor roll and pitch estimation without any assumption about the knowledge of perturbation bounds when Inertial Measurement Unit (IMU) data or position measurements are available. A Smooth Sliding Mode (SSM) algorithm is first designed to provide reliable estimation under a smooth disturbance assumption. This assumption is then relaxed with the second proposed Adaptive Sliding Mode (ASM) algorithm, which deals with disturbances of unknown bounds. In addition, the analysis of the observers is extended to the case where measurements are corrupted by bias and noise. The gains of the proposed algorithms are deduced from a Lyapunov function, and some useful guidelines are provided for the selection of the observer tuning parameters. The performance of the two approaches is evaluated using a nonlinear simulation model considering either accelerometer or position measurements. The simulation results demonstrate the benefits of the proposed solutions.
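A standard member of the second-order sliding-mode family discussed above is Levant's super-twisting differentiator. The sketch below shows that generic algorithm with textbook gain choices; it is not the SSM or ASM observer of the paper, and the Lipschitz bound L is an assumption.

```python
import numpy as np

def super_twisting_diff(y, dt, L=30.0):
    """Levant's robust first-order differentiator, a standard
    second-order sliding-mode algorithm, with textbook gains
    1.5*sqrt(L) and 1.1*L. L must bound the second derivative of
    the true signal (here |d^2/dt^2 sin(5t)| <= 25 < 30).
    """
    z0, z1 = y[0], 0.0
    dyhat = np.zeros_like(y)
    for k in range(len(y)):
        e = z0 - y[k]
        v = z1 - 1.5 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e)
        z0 += dt * v                      # estimate of the signal
        z1 += -dt * 1.1 * L * np.sign(e)  # integral (twisting) term
        dyhat[k] = v
    return dyhat

dt = 1e-4
t = np.arange(0.0, 2.0, dt)
est = super_twisting_diff(np.sin(5.0 * t), dt)
print(est[-1], 5.0 * np.cos(5.0 * t[-1]))  # estimate vs true derivative
```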
Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning
2012-01-01
In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate.
Edge Pushing is Equivalent to Vertex Elimination for Computing Hessians
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Mu; Pothen, Alex; Hovland, Paul
We prove the equivalence of two different Hessian evaluation algorithms in AD. The first is the Edge Pushing algorithm of Gower and Mello, which may be viewed as a second-order Reverse mode algorithm for computing the Hessian. In earlier work, we have derived the Edge Pushing algorithm by exploiting a Reverse mode invariant based on the concept of live variables in compiler theory. The second algorithm is based on eliminating vertices in a computational graph of the gradient, in which intermediate variables are successively eliminated from the graph, and the weights of the edges are updated suitably. We prove that if the vertices are eliminated in a reverse topological order while preserving symmetry in the computational graph of the gradient, then the Vertex Elimination algorithm and the Edge Pushing algorithm perform identical computations. In this sense, the two algorithms are equivalent. This insight, which unifies two seemingly disparate approaches to Hessian computations, could lead to improved algorithms and implementations for computing Hessians.
Design and implementation of intelligent electronic warfare decision making algorithm
NASA Astrophysics Data System (ADS)
Peng, Hsin-Hsien; Chen, Chang-Kuo; Hsueh, Chi-Shun
2017-05-01
The volume of electromagnetic signals, and the requirement for timely responses to them, have grown rapidly in modern electronic warfare. Because jammers are limited resources, the best electronic warfare efficiency must be achieved through tactical decisions. This paper proposes an intelligent electronic warfare decision support system. In this work, we develop a novel hybrid algorithm, Digital Pheromone Particle Swarm Optimization, based on Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and the Shuffled Frog Leaping Algorithm (SFLA). We use PSO to solve the problem and combine it with the pheromone concept from ACO to accumulate more useful information in the spatial solving process and speed up finding the optimal solution. The proposed algorithm finds the optimal solution in reasonable computation time by using the matrix conversion method of SFLA. The results indicate that jammer allocation is more effective. The system based on the hybrid algorithm provides electronic warfare commanders with critical information to assist them in effectively managing the complex electromagnetic battlefield.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.
2010-02-28
Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate an on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and in a timely manner to the ringdown data, so that reliable mode estimates are available promptly. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.
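The core of batch Prony analysis, which the recursive algorithm above updates sample by sample, is a linear-prediction fit followed by root finding. A minimal sketch, with the recursion itself omitted:

```python
import numpy as np

def prony_modes(y, p):
    """Batch Prony analysis: fit p damped-sinusoid modes to ringdown data.

    Returns per-sample eigenvalues lambda = sigma + j*omega; multiply by
    the sample rate to get continuous-time values. The on-line variant
    in the paper updates this linear-prediction solution recursively.
    """
    # Linear prediction: y[n] = -a1*y[n-1] - ... - ap*y[n-p]
    N = len(y)
    A = np.column_stack([y[p - 1 - k:N - 1 - k] for k in range(p)])
    b = y[p:]
    a, *_ = np.linalg.lstsq(A, -b, rcond=None)
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.log(roots)

fs = 100.0
t = np.arange(0.0, 3.0, 1.0 / fs)
y = np.exp(-0.3 * t) * np.cos(2 * np.pi * 0.8 * t)  # one damped mode
print(prony_modes(y, 2) * fs)  # expect ~ -0.3 +/- j*2*pi*0.8
```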
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shaobu; Huang, Renke; Huang, Zhenyu
The objective of this research work is to develop decoupled modulation control methods for damping inter-area oscillations with low frequencies, so the damping control can be more effective and easier to design with less interference among different oscillation modes in the power system. A signal-decoupling algorithm was developed that enables separation of multiple oscillation frequency contents and extraction of a "pure" oscillation frequency mode that is fed into Power System Stabilizers (PSSs) as the modulation input signal. As a result, instead of introducing interference between different oscillation modes as in traditional approaches, the output of the new PSS modulation control signal mainly affects only one oscillation mode of interest. The new decoupled modulation damping control algorithm has been successfully developed and tested on the standard IEEE 4-machine 2-area test system and a minniWECC system. The results are compared against traditional modulation controls, which demonstrates the validity and effectiveness of the newly developed decoupled modulation damping control algorithm.
The application of Markov decision process with penalty function in restaurant delivery robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Hu, Zhen; Wang, Ying
2017-05-01
The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into the aisles and customers coming and going. The traditional Markov decision process path planning algorithm is not safe in this setting: the planned path can pass very close to tables and chairs. To solve this problem, this paper extends the traditional Markov decision process (MDP) with a penalty term, yielding the MDPPT path planning algorithm. In the MDP, if the restaurant delivery robot bumps into an obstacle, the reward it receives is simply the reward of the current state; in the MDPPT, the reward comprises not only the current state reward but also a negative constant penalty term. Simulation results show that the MDPPT algorithm can plan a more secure path.
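To make the penalty idea concrete, here is a minimal value-iteration sketch contrasting a plain MDP with an MDPPT-style variant that adds a negative constant when a transition reaches an obstacle state. The corridor world, rewards, and penalty value are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

# 1-D corridor: state 0 is an obstacle (table), state 4 is the goal.
n_states, gamma = 5, 0.9
actions = (-1, +1)                      # move left / move right
reward = np.array([-1.0, 0, 0, 0, 10])  # goal reward at state 4
penalty = -5.0                          # MDPPT extra term for collisions

def value_iteration(extra_penalty=0.0, iters=100):
    V = np.zeros(n_states)
    for _ in range(iters):
        for s in range(n_states):
            q = []
            for a in actions:
                s2 = min(max(s + a, 0), n_states - 1)
                r = reward[s2] + (extra_penalty if s2 == 0 else 0.0)
                q.append(r + gamma * V[s2])
            V[s] = max(q)
    return V

print(value_iteration(0.0))      # plain MDP values
print(value_iteration(penalty))  # MDPPT: obstacle state strongly avoided
```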
Safety of the Wearable Cardioverter Defibrillator (WCD) in Patients with Implanted Pacemakers.
Schmitt, Joern; Abaci, Guezine; Johnson, Victoria; Erkapic, Damir; Gemein, Christopher; Chasan, Ritvan; Weipert, Kay; Hamm, Christian W; Klein, Helmut U
2017-03-01
The wearable cardioverter defibrillator (WCD) is an important approach for better risk stratification, applied to patients considered to be at high risk of sudden arrhythmic death. Patients with implanted pacemakers may also become candidates for use of the WCD. However, there is a potential risk that pacemaker signals may mislead the WCD detection algorithm and cause inappropriate WCD shock delivery. The aim of the study was to test the impact of different types of pacing, various right ventricular (RV) lead positions, and pacing modes on potential misleading of the WCD detection algorithm. Sixty patients with implanted pacemakers received the WCD for a short time, and each pacing mode (AAI, VVI, and DDD) was tested for at least 30 seconds in unipolar and bipolar pacing configurations. In case of triggering of the WCD detection algorithm and start of the sequence of arrhythmia alarms, shock delivery was prevented by pushing the response buttons. In six of 60 patients (10%), continuous unipolar pacing in DDD mode triggered the WCD detection algorithm. Triggering did not occur in any patient during bipolar DDD pacing or during unipolar or bipolar AAI and VVI pacing, and it was independent of pacing amplitude, RV pacing lead position, and pulse generator implantation site. Unipolar DDD pacing thus bears a high risk of falsely triggering the WCD detection algorithm, whereas other types of unipolar pacing and all bipolar pacing modes do not seem to mislead it. Therefore, patients whose unipolar DDD pacing cannot be reprogrammed should not become candidates for the WCD.
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the error probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
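As a point of reference for the comparison described above, the following is a minimal hard-decision Viterbi decoder for a standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal). It illustrates only the baseline algorithm; the paper's minimal-bit-error-probability decoder and its fixed-delay variant are not reproduced here.

```python
import numpy as np

G = [(1, 1, 1), (1, 0, 1)]  # generator taps, (7, 5) octal

def encode(bits):
    """Rate-1/2 convolutional encoder; state holds the last two inputs."""
    state = (0, 0)
    out = []
    for b in bits:
        reg = (b,) + state
        out += [sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding by Hamming-distance path metrics."""
    n_states, INF = 4, float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # start in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1
            for b in (0, 1):
                reg = (b, s1, s0)
                exp = [sum(x * g for x, g in zip(reg, gen)) % 2 for gen in G]
                dist = sum(e != x for e, x in zip(exp, r))
                ns = (b << 1) | s1
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

bits = [1, 0, 1, 1, 0, 0, 1]
rx = encode(bits)
rx[3] ^= 1                  # inject one channel error
print(viterbi(rx) == bits)  # True: the error is corrected
```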
Comprehensive analysis of statistical and model-based overlay lot disposition methods
NASA Astrophysics Data System (ADS)
Crow, David A.; Flugaur, Ken; Pellegrini, Joseph C.; Joubert, Etienne L.
2001-08-01
Overlay lot disposition algorithms in lithography occupy some of the highest-leverage decision points in the microelectronic manufacturing process. In a typical large-volume sub-0.18-micrometer fab, the lithography lot disposition decision is made about 500 times per day. Each decision will send a lot of wafers either to the next irreversible process step or back to rework in an attempt to improve unacceptable overlay performance. In the case of rework, the intention is that the reworked lot will represent better yield (and thus more value) than the original lot and that the enhanced lot value will exceed the cost of rework. Given that the estimated cost of reworking a critical-level lot is around $10,000 (based upon the opportunity cost of consuming time on a state-of-the-art DUV scanner), we are faced with the implication that the lithography lot disposition decision process impacts up to $5 million per day in decisions. That means that a 1% error rate in this decision process represents over $18 million per year in lost profit for a representative site. Remarkably, despite this huge leverage, the lithography lot disposition decision algorithm usually receives minimal attention. In many cases, this lack of attention has resulted in the retention of sub-optimal algorithms from earlier process generations and a significant negative impact on the economic output of many high-volume manufacturing sites. An ideal lot-dispositioning algorithm would result in the best economic decision being made every time: lots would only be reworked where the expected value (EV) of the reworked lot minus the expected value of the original lot exceeds the cost of the rework: EV(reworked lot) - EV(original lot) > COST(rework process). Calculating the above expected values in real time has generally been deemed too complicated and maintenance-intensive to be practical for fab operations, so a simplified rule is typically used.
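The quoted dispositioning rule itself is trivial to encode; the hard part, as the authors note, is estimating the expected values in real time. A sketch with illustrative numbers:

```python
def should_rework(ev_reworked, ev_original, rework_cost):
    """The ideal dispositioning rule quoted above: rework only when the
    expected value gained exceeds the cost of the rework itself."""
    return ev_reworked - ev_original > rework_cost

# Illustrative numbers only: a lot predicted to gain $25k in yield value
# from rework, at a $10k rework cost, should be sent back.
print(should_rework(ev_reworked=125_000, ev_original=100_000,
                    rework_cost=10_000))
```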
Project Delivery System Mode Decision Based on Uncertain AHP and Fuzzy Sets
NASA Astrophysics Data System (ADS)
Kaishan, Liu; Huimin, Li
2017-12-01
The project delivery system mode determines the contract pricing type, the project management mode, and the risk allocation among all participants. Different project delivery system modes have different characteristics and scopes of application. For the owner, the selection of the delivery mode is key to deciding whether the project can achieve the expected benefits; it relates to the success or failure of project construction. Considering comprehensively the factors that influence the delivery mode, a project delivery system mode decision model was set up on the basis of uncertain AHP and fuzzy sets, which can properly account for the uncertainty and fuzziness in index evaluation and weight confirmation, so as to rapidly and effectively identify the most suitable delivery mode according to project characteristics. The effectiveness of the model has been verified through an actual case analysis, in order to provide a reference for construction project delivery system mode decisions.
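The crisp AHP step underlying such a model computes criteria weights from a pairwise-comparison matrix via its principal eigenvector and checks consistency. The sketch below shows only that conventional core; the paper's interval (uncertain) AHP and fuzzy-set extensions are not reproduced, and the example judgments are invented.

```python
import numpy as np

def ahp_weights(pairwise):
    """Crisp AHP core: criteria weights from a pairwise-comparison
    matrix via the principal eigenvector, plus the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # random index table
    return w, ci / ri

# Three criteria, e.g. cost, schedule, risk (judgments are illustrative)
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print(w, cr)  # acceptably consistent when CR < 0.1
```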
De Simone, Antonio; Senatore, Gaetano; Donnici, Giovanni; Turco, Pietro; Romano, Enrico; Gazzola, Carlo; Stabile, G
2007-01-01
The impact of new algorithms to consistently pace the atrium on the prevention of atrial fibrillation (AF) remains unclear. Our randomized, crossover study compared the efficacy of single- and dual-site atrial pacing, with versus without dynamic atrial overdrive pacing, in preventing AF. We studied 72 patients (mean age = 69.6 +/- 6.5 years, 34 men) with sick sinus syndrome (SSS) and paroxysmal or persistent AF, who received dual-chamber pacemakers (PM) equipped with an AF prevention algorithm and two atrial leads, placed in the right atrial appendage (RAA) by passive fixation and in the coronary sinus ostium (CS) by active fixation, respectively. At implant, the patients were randomly assigned to unipolar CS versus RAA pacing. The PM was programmed in DDDR mode 1 month after implant. Each patient underwent four study phases of equal duration: (1) unipolar, single-site (CS or RAA) pacing with the AF algorithm ON (atrial lower rate = 0 ppm); (2) unipolar, single-site pacing with the AF algorithm OFF (atrial lower rate = 70 bpm); (3) bipolar, dual-site pacing with the AF algorithm ON; (4) bipolar, dual-site pacing with the AF algorithm OFF. Among the 40 patients (56%) who completed the follow-up (15 +/- 4 months), no difference was observed in the mean number of automatic mode switches (AMS) corrected for the duration of follow-up, in unipolar (5.6 +/- 22.8 vs 2.6 +/- 5.5) or bipolar mode (3.3 +/- 12.7 vs 2.1 +/- 4.9), with the algorithm OFF or ON, respectively. With the AF prevention algorithm ON, the percentage of atrial pacing increased significantly from 78.7 +/- 22.1% to 92.4 +/- 4.9% (P < 0.001), while the average ventricular heart rate was significantly lower with the algorithm ON (62.4 +/- 17.5 vs 79.9 +/- 3 bpm, P < 0.001). The AF prevention algorithm increased the percentage of atrial pacing significantly, regardless of the atrial pulse configuration and pacing site, while maintaining a slower ventricular heart rate. It had no impact on the number of AMS in the unipolar and bipolar modes in patients with SSS.
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
NASA Astrophysics Data System (ADS)
Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter
2016-10-01
We introduce the MUSIQUE algorithm and apply it to seismic wavefield recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors and is based on the MUSIC and quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied in order to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wavefield recordings of the San Jose Dense Seismic Array. This array was installed in 1999 in the Evergreen Basin, a sedimentary basin in the Eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicentres between 40 and 600 km from the array and covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wavefield for the vast majority of the events. For close events in the north, the wavefield is dominated by the first harmonic mode of Love waves; for farther events, the fundamental mode dominates. The energy distribution is different for earthquakes occurring northwest and southeast of the array. In both cases, the waves crossing the array mostly arrive from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and the first harmonic mode. The particle motion of the fundamental mode of Rayleigh waves is retrograde, and for the first harmonic mode it is prograde. For both modes, we can also retrieve dispersion and ellipticity curves. Wave motion simulations for two earthquakes are in good agreement with the real data results and confirm the identification of the wave scattering formations to the south of the array, which generate the scattered Love waves visible for all earthquakes.
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Morimoto, S.; Takenaka, H.
2014-12-01
We have developed a new satellite remote sensing algorithm to retrieve aerosol optical characteristics using multi-wavelength and multi-pixel information from satellite imagers (the MWP method). In this algorithm, the inversion method is a combination of the maximum a posteriori (MAP) method (Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, with the progress of computing techniques, this method has been combined with direct radiative transfer calculation, numerically solved at each iteration step of the non-linear inverse problem, without using a look-up table (LUT), under several constraints. The retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine and coarse mode particles, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. The results showed that the AOTs of the fine and coarse modes, the soot fraction, and the ground surface albedo are successfully retrieved within the expected accuracy. We discuss the accuracy of the algorithm for various land surface types. We then applied the algorithm to GOSAT/CAI imager data and compared retrieved and surface-observed AOTs at the CAI pixel closest to an AERONET (Aerosol Robotic Network) or SKYNET site in each region. Comparison at several sites in urban areas indicated that the AOTs retrieved by our method agree with surface-observed AOTs within ±0.066. Our future work is to extend the algorithm to the analysis of ADEOS-II/GLI and GCOM-C/SGLI data.
Data quality system using reference dictionaries and edit distance algorithms
NASA Astrophysics Data System (ADS)
Karbarz, Radosław; Mulawka, Jan
2015-09-01
In the real art of management it is important to make smart decisions, which in most cases is not a trivial task. Such decisions may determine production levels, funds allocation for investments, etc. Most of the parameters in the decision-making process, such as interest rates, goods values, or exchange rates, may change. It is well known that these decision-making parameters are based on the data contained in data marts or a data warehouse. If the information derived from the processed data sets is the basis for the most important management decisions, the data must be accurate, complete, and current. In order to achieve high-quality data and to gain measurable business benefits from them, a data quality system should be used. The article describes the approach to the problem, shows the algorithms in detail, and explains their usage. Finally, the test results are provided. The test results show the best algorithms (in terms of quality and quantity) for different parameters and data distributions.
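The two ingredients named in the title combine naturally: an edit distance measures how far a dirty value is from each reference-dictionary entry, and values within a tolerance are standardized to the closest entry. A minimal sketch, with an invented distance threshold:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def standardize(value, dictionary, max_dist=2):
    """Map a dirty field value to its closest reference-dictionary entry,
    provided the match is close enough. The threshold is illustrative."""
    best = min(dictionary,
               key=lambda ref: levenshtein(value.lower(), ref.lower()))
    return best if levenshtein(value.lower(), best.lower()) <= max_dist else value

cities = ["Warszawa", "Krakow", "Gdansk"]
print(standardize("Warsawa", cities))  # -> "Warszawa"
```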
Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V
2018-04-17
Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, the data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of the processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three candidates for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). The algorithms did not produce any false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are further discussed.
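While the ULSA itself is more elaborate, the basic library-search step it builds on can be illustrated with a peak-matched dot-product (cosine) score between a query spectrum and a library spectrum. Everything below, including the m/z tolerance and the square-root intensity scaling, is a generic illustration rather than the published algorithm.

```python
import numpy as np

def dot_product_score(query, library_entry, tol=0.01):
    """Toy spectral match score: align peaks by m/z within a tolerance
    and compute a cosine (dot-product) similarity on intensities.
    Spectra are (mz_array, intensity_array) pairs.
    """
    q_mz, q_i = query
    l_mz, l_i = library_entry
    qv, lv = [], []
    for mz, inten in zip(q_mz, q_i):
        match = np.where(np.abs(l_mz - mz) <= tol)[0]
        qv.append(inten)
        lv.append(l_i[match[0]] if match.size else 0.0)
    qv, lv = np.sqrt(qv), np.sqrt(lv)   # sqrt scaling damps base peaks
    denom = np.linalg.norm(qv) * np.linalg.norm(lv)
    return float(qv @ lv / denom) if denom else 0.0

q = (np.array([77.04, 105.03, 182.08]), np.array([30.0, 100.0, 55.0]))
lib = (np.array([77.04, 105.03, 182.08]), np.array([28.0, 100.0, 60.0]))
print(dot_product_score(q, lib))  # close to 1 for a good match
```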
PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons.
Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Zhao, Guang-Yu; Xu, Guo-Qiang; He, Long; Mao, Xi-Wang; Dong, Wei
2016-09-02
Locomotion mode identification is essential for the control of robotic rehabilitation exoskeletons. This paper proposes an online support vector machine (SVM) optimized by particle swarm optimization (PSO) to identify different locomotion modes and realize a smooth and automatic locomotion transition. A PSO algorithm is used to obtain the optimal parameters of the SVM for a better overall performance. Signals measured by the foot pressure sensors integrated in the insoles of wearable shoes and by the MEMS-based attitude and heading reference systems (AHRS) attached to the shoes and the shanks of the leg segments are fused together as the input information for the SVM. Based on a chosen window size of 200 ms (with a sampling frequency of 40 Hz), a three-layer wavelet packet analysis (WPA) is used for feature extraction, after which kernel principal component analysis (kPCA) is utilized to reduce the dimension of the feature set and thus the computation cost of the SVM. Since the signals come from two different types of sensors, normalization is conducted to scale the input into the interval [0, 1]. Five-fold cross validation is adopted to train the classifier, which prevents classifier over-fitting. Based on the SVM model obtained offline in MATLAB, an online SVM algorithm is constructed for locomotion mode identification. Experiments are performed for different locomotion modes, and the results show the effectiveness of the proposed algorithm, with an accuracy of 96.00% ± 2.45%. To improve the accuracy, a majority vote algorithm (MVA) is used for post-processing, with which the identification accuracy is better than 98.35% ± 1.65%. The proposed algorithm can be extended and employed in the field of robotic rehabilitation and assistance.
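The offline tuning step, PSO searching the SVM parameter space with cross-validated accuracy as the fitness, can be sketched compactly. The authors worked in MATLAB; the Python sketch below uses scikit-learn, and the swarm settings, parameter ranges, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pso_svm(X, y, n_particles=10, iters=20, seed=0):
    """Minimal PSO search over SVM (C, gamma) in log10 space, scored by
    5-fold cross-validation. Swarm sizes and ranges are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])  # log10 bounds
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def score(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=5).mean()

    pbest = pos.copy()
    pbest_val = np.array([score(p) for p in pos])
    g = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        for i, p in enumerate(pos):
            v = score(p)
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = p, v
        g = pbest[np.argmax(pbest_val)].copy()
    return 10 ** g, pbest_val.max()

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
print(pso_svm(X, y))  # best (C, gamma) and its cross-validated accuracy
```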
YANA – a software tool for analyzing flux modes, gene-expression and enzyme activities
Schwarz, Roland; Musch, Patrick; von Kamp, Axel; Engels, Bernd; Schirmer, Heiner; Schuster, Stefan; Dandekar, Thomas
2005-01-01
Background: A number of algorithms for steady state analysis of metabolic networks have been developed over the years. Of these, Elementary Mode Analysis (EMA) has proven especially useful. Despite its low user-friendliness, METATOOL, as a reliable high-performance implementation of the algorithm, has been the instrument of choice up to now. As reported here, the analysis of metabolic networks has been improved by an editor and analyzer of metabolic flux modes. Analysis routines for expression levels and for the most central, well-connected metabolites and their metabolic connections are of particular interest. Results: YANA features a platform-independent, dedicated toolbox for metabolic networks with a graphical user interface to calculate (integrating METATOOL), edit (including support for the SBML format), visualize, centralize, and compare elementary flux modes. Further, YANA calculates expected flux distributions for a given Elementary Mode (EM) activity pattern and vice versa. Moreover, a dissection algorithm, a centralization algorithm, and an average diameter routine can be used to simplify and analyze complex networks. Proteomics or gene expression data give a rough indication of some individual enzyme activities, whereas the complete flux distribution in the network is often not known. As such data are noisy, YANA features a fast evolutionary algorithm (EA) for the prediction of EM activities with minimum error, including alerts for inconsistent experimental data. We offer the possibility to include further known constraints (e.g. growth constraints) in the EA calculation process. The redox metabolism around glutathione reductase serves as an illustrative example. All software and documentation are available for download. Conclusion: A graphical toolbox and an editor for METATOOL, as well as a series of additional routines for metabolic network analyses, constitute a new user-friendly software package for such efforts.
NASA Astrophysics Data System (ADS)
Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.
2013-08-01
Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided with accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR to the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32-bit packets, where averaging of lines-of-response (LORs) by axial span and angular mashing reduces the number of LORs so that 32 bits are sufficient to address all sinogram bins; this degrades spatial resolution. In this work, we proposed a probabilistic LOR (pLOR) positioning technique that addresses axial and transaxial LOR grouping in 32-bit data. Second, two simplified approaches for 3D time-of-flight (TOF) scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + TOF (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm, which uses the average 32-bit LOR sinogram positioning. A moving phantom and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion. We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction.
On-line, adaptive state estimator for active noise control
NASA Technical Reports Server (NTRS)
Lim, Tae W.
1994-01-01
Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of an active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept for active noise control. An algorithm has been developed that can be used to estimate displacement and velocity responses at any location on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency content corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is kept as simple as possible to maximize the sampling frequency. The weights obtained through neural network training are then used to construct the transfer function of a mode in the z-domain and to identify the modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any location on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed for mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to varying structural dynamic characteristics. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach a sampling frequency range of about 10 kHz to 20 kHz, which needs to be maintained for a typical active noise control application.
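A linear neuron with a few weights trained on-line is essentially an LMS adaptive filter. The sketch below fits a three-weight, second-order z-domain mode model of the kind described, on synthetic data; the model form, step size, and signals are assumptions made for illustration.

```python
import numpy as np

def lms_mode_identifier(u, y, mu=0.01):
    """LMS training of a three-weight linear neuron for one structural
    mode, y[n] ~ w0*y[n-1] + w1*y[n-2] + w2*u[n], i.e. a second-order
    z-domain transfer function."""
    w = np.zeros(3)
    for n in range(2, len(y)):
        x = np.array([y[n - 1], y[n - 2], u[n]])
        e = y[n] - w @ x    # prediction error
        w += mu * e * x     # LMS weight update
    return w

# Simulate one lightly damped mode driven by white noise
rng = np.random.default_rng(2)
u = rng.normal(size=5000)
y = np.zeros(5000)
a1, a2, b0 = 1.8, -0.9025, 0.1  # poles at 0.95*exp(+/- j*0.32)
for n in range(2, 5000):
    y[n] = a1 * y[n - 1] + a2 * y[n - 2] + b0 * u[n]
print(lms_mode_identifier(u, y))  # should approach [1.8, -0.9025, 0.1]
```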
A modern control theory based algorithm for control of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1987-01-01
A digital computer-based state variable controller was designed and applied to the 70-m antenna axis servos. The general equations and structure of the algorithm, and provisions for alternate position error feedback modes to accommodate intertarget slew, encoder-referenced tracking, and precision tracking modes, are described. Development of the discrete time domain control model and computation of estimator and control gain parameters based on closed loop pole placement criteria are discussed. The new algorithm was successfully implemented and tested in the 70-m antenna at Deep Space Network station 63 in Spain.
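As a rough illustration of the pole-placement workflow described above, the sketch below designs discrete-time control and estimator gains for a two-state axis model. The model, sample time, and pole locations are invented for illustration and are not the JPL 70-m servo parameters.

```python
import numpy as np
from scipy.signal import place_poles, cont2discrete

# Hypothetical axis model: position + rate states driven by motor torque.
Ts = 0.02                                  # 50 Hz control update (assumed)
Ac = np.array([[0.0, 1.0], [0.0, -0.5]])   # integrator + first-order rate lag
Bc = np.array([[0.0], [2.0]])
Ad, Bd, *_ = cont2discrete((Ac, Bc, np.eye(2), np.zeros((2, 1))), Ts)

# Control gains: place closed-loop poles (discrete, inside the unit circle).
K = place_poles(Ad, Bd, [0.80, 0.85]).gain_matrix

# Estimator gains by duality: place poles of (A - L C) via (A^T, C^T).
C = np.array([[1.0, 0.0]])                 # only the position encoder is read
L = place_poles(Ad.T, C.T, [0.40, 0.50]).gain_matrix.T

def controller_step(x_hat, y_meas, u_prev, r):
    """One update: predict, correct with the encoder, then state feedback."""
    x_pred = Ad @ x_hat + Bd.flatten() * u_prev          # model prediction
    x_hat = x_pred + (L @ (y_meas - C @ x_pred)).flatten()  # measurement update
    u = float(-K @ (x_hat - r))                          # position-error feedback
    return x_hat, u

# r is the commanded state, e.g. a target position with zero commanded rate.
x_hat, u = controller_step(np.zeros(2), y_meas=0.1, u_prev=0.0, r=np.array([1.0, 0.0]))
```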
Cargo Logistics Airlift Systems Study (CLASS). Volume 2: Case study approach and results
NASA Technical Reports Server (NTRS)
Burby, R. J.; Kuhlman, W. H.
1978-01-01
Models of transportation mode decision making were developed. The user's view of the present and future air cargo systems is discussed. Issues summarized include: (1) organization of the distribution function; (2) mode choice decision making; (3) air freight system; and (4) the future of air freight.
Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm
NASA Astrophysics Data System (ADS)
Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.
2011-12-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral infrared (IR) cloud-top observations and self-calibrates them to a set of predictands from microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently available to the NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration, with continuous updates of coefficients as new MW rain rates arrive, and calibration using static coefficients derived from past IR-MW observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product and the National Oceanographic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product). The algorithm has several potential future applications, such as improving the accuracy of hydrologic forecasting models over the Nile Basin and using the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability.
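A compact way to see the two-step structure (discriminant-analysis rain/no-rain separation followed by regression of rain rate) is the sketch below. It substitutes ordinary least squares for SCaMPR's stepwise regression and uses synthetic stand-ins for the IR predictors and MW predictands; real-time calibration would amount to refitting both models on a moving window of recent IR-MW matchups.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins: X = IR-derived predictors per pixel (e.g. brightness
# temperatures, gradients); y = collocated MW rain rate used as the target.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))
rain = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0.8
y = np.where(rain, np.exp(0.8 * X[:, 0]), 0.0)   # rain rates; zero where dry

# Step 1: rain / no-rain separation by discriminant analysis.
lda = LinearDiscriminantAnalysis().fit(X, rain)

# Step 2: rain-rate regression, calibrated on raining pixels only.
reg = LinearRegression().fit(X[rain], y[rain])

# Applying the calibrated pair to new IR predictors:
X_new = rng.normal(size=(10, 4))
estimate = np.where(lda.predict(X_new), reg.predict(X_new).clip(min=0.0), 0.0)
```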
Probability Distributions over Cryptographic Protocols
2009-06-01
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target-silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images into a target-enhanced map. Basic belief assignment is then used to transform this map into a belief map. The detection results of the sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map at the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by the SE-WORKBENCH simulator and is compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false alarm rate than the conventional algorithms.
Color enhancement for portable LCD displays in low-power mode
NASA Astrophysics Data System (ADS)
Shih, Kuang-Tsu; Huang, Tai-Hsiang; Chen, Homer H.
2011-09-01
Switching the backlight of handheld devices to low-power mode saves energy but affects the color appearance of an image. In this paper, we consider the resulting chroma degradation problem and propose an enhancement algorithm that incorporates the CIECAM02 appearance model to quantitatively characterize the problem. In the proposed algorithm, we enhance the color appearance of the image in low-power mode by weighted linear superposition of the chroma of the image and that of the estimated dim-backlight image. Subjective tests are carried out to determine the perceptually optimal weighting and demonstrate the effectiveness of our framework.
Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.
Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei
2018-06-15
Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of the Internet of Things (IoT). This work introduces a cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Because of the constrained computation resources of cloudlets and the limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, a theoretical analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem is formulated as a computation offloading game, and the condition for a Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm scales well with an increasing number of IoT sensors.
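The finite improvement property the COD algorithm exploits can be illustrated with a toy congestion-style offloading game: each sensor repeatedly switches to its cheaper strategy given the others' current choices, and in a potential game the loop is guaranteed to stop at a Nash equilibrium. The cost model below is invented for illustration and is not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
local_cost = rng.uniform(2.0, 6.0, N)     # energy+delay cost of local execution
base_offload = rng.uniform(0.5, 2.0, N)   # transmission cost to the cloudlet
congestion = 0.25                         # extra delay per competing offloader

def cost(i, choice, n_offload_others):
    if choice == 0:                       # 0 = compute locally
        return local_cost[i]
    return base_offload[i] + congestion * (n_offload_others + 1)  # 1 = offload

choice = np.zeros(N, dtype=int)           # start: everyone computes locally
improved = True
while improved:                           # finite improvement property:
    improved = False                      # this loop halts at a Nash equilibrium
    for i in range(N):
        others = int(choice.sum() - choice[i])
        best = min((0, 1), key=lambda c: cost(i, c, others))
        if best != choice[i]:
            choice[i] = best
            improved = True

print("offloading sensors:", np.flatnonzero(choice))
```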
Managing and learning with multiple models: Objectives and optimization algorithms
Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.
2011-01-01
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
A controllable sensor management algorithm capable of learning
NASA Astrophysics Data System (ADS)
Osadciw, Lisa A.; Veeramacheneni, Kalyan K.
2005-03-01
Sensor management technology progress is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. This approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high-level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom-up evolutionary algorithm, which optimizes the sensor network's operation as determined by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network, the AI algorithm component, and a swarm optimization algorithm, the evolutionary algorithm. Thus, the algorithm can change its own performance goals in real time and will modify its own decisions based on observed measures within the sensor network. The definition of the measures, as well as the Bayesian network, determines the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.
Otsuka, Momoka; Uchida, Yuki; Kawaguchi, Takumi; Taniguchi, Eitaro; Kawaguchi, Atsushi; Kitani, Shingo; Itou, Minoru; Oriishi, Tetsuharu; Kakuma, Tatsuyuki; Tanaka, Suiko; Yagi, Minoru; Sata, Michio
2012-10-01
Dietary habits are involved in the development of chronic inflammation; however, the impact of the dietary profiles of hepatitis C virus carriers with persistently normal alanine transaminase levels (HCV-PNALT) remains unclear. The decision-tree algorithm is a data-mining statistical technique which uncovers meaningful profiles of factors from a data collection. We aimed to investigate dietary profiles associated with HCV-PNALT using a decision-tree algorithm. Twenty-seven patients with HCV-PNALT and 41 patients with chronic hepatitis C were enrolled in this study. Dietary habit was assessed using a validated semiquantitative food frequency questionnaire. A decision-tree algorithm was created from dietary variables and was evaluated by area under the receiver operating characteristic curve analysis (AUROC). In multivariate analysis, fish to meat ratio, dairy product and cooking oils were identified as independent variables associated with HCV-PNALT. The decision-tree algorithm was created with two variables: the fish to meat ratio and cooking oils/ideal bodyweight. When subjects showed a fish to meat ratio of 1.24 or more, 68.8% of the subjects were HCV-PNALT. On the other hand, 11.5% of the subjects were HCV-PNALT when they showed a fish to meat ratio of less than 1.24 and a cooking oil/ideal bodyweight of less than 0.23 g/kg. The difference in the proportion of HCV-PNALT between these groups is significant (odds ratio 16.87, 95% CI 3.40-83.67, P = 0.0005). Fivefold cross-validation of the decision-tree algorithm showed an AUROC of 0.6947 (95% CI 0.5656-0.8238, P = 0.0067). The decision-tree algorithm disclosed that fish to meat ratio and cooking oil/ideal bodyweight were associated with HCV-PNALT. © 2012 The Japan Society of Hepatology.
Fast algorithm for bilinear transforms in optics
NASA Astrophysics Data System (ADS)
Ostrovsky, Andrey S.; Martinez-Niconoff, Gabriel C.; Ramos Romero, Obdulio; Cortes, Liliana
2000-10-01
A fast algorithm for calculating bilinear transforms in optical systems is proposed. The algorithm is based on the coherent-mode representation of the cross-spectral density function of the illumination, and it is computationally efficient when the illumination is partially coherent. Numerical examples are studied and compared with the theoretical results.
He, Xiao-Ou; D'Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan
2015-03-12
Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. We examined how two different SIAs may influence decision making among primary-care physicians. Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a 'normal' interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently.
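To make the role of the decision nodes concrete, the sketch below encodes the thresholds quoted in the abstract (FEV1/FVC 0.70; reversibility >12% and >200 ml) as an algorithm-2-style interpreter that includes a post-bronchodilator FEV1/FVC node. It is an illustrative toy, not a clinical tool, and the exact branch ordering of the published SIAs is assumed.

```python
def interpret(pre_ratio, post_ratio, fev1_pre, fev1_post):
    """Toy SIA with a post-bronchodilator FEV1/FVC decision node.
    Thresholds come from the abstract; illustration only, not a clinical tool."""
    gain = fev1_post - fev1_pre
    reversible = gain / fev1_pre > 0.12 and gain > 0.200   # >12% and >200 ml
    if pre_ratio > 0.70:
        # An algorithm-1-style reading stops here with "normal"; an
        # algorithm-2-style reading still checks the bronchodilator response.
        return "asthma suspected" if reversible else "normal"
    if post_ratio < 0.70:                                  # persistent obstruction
        return "COPD suspected (asthma also possible)" if reversible else "COPD suspected"
    return "asthma suspected" if reversible else "further testing needed"

print(interpret(pre_ratio=0.65, post_ratio=0.66, fev1_pre=2.00, fev1_post=2.35))
```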
Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano
2016-07-07
Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for head-and-neck (H&N) phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
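The ATLAAS idea, as described, is to learn one accuracy-prediction tree per segmentation method and then pick the method with the highest predicted DSC. A minimal sketch of that selection logic, with synthetic training data and made-up method names, might look like this:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
methods = ["thresh40", "region_grow", "clustering"]     # made-up method names

# Per-scan features (normalized): [tumour volume, peak/background SUV, texture]
X = rng.uniform(size=(100, 3))                          # 100 training scans
coef = {m: rng.normal(scale=0.15, size=3) for m in methods}
dsc = {m: np.clip(0.75 + X @ coef[m] + rng.normal(scale=0.03, size=100), 0, 1)
       for m in methods}                                # synthetic per-method DSC

# One accuracy-prediction tree per segmentation method.
trees = {m: DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, dsc[m])
         for m in methods}

def select_method(features):
    """Pick the PET-AS method with the highest predicted DSC for this tumour."""
    f = np.asarray(features).reshape(1, -1)
    return max(methods, key=lambda m: float(trees[m].predict(f)[0]))

print(select_method([0.4, 0.7, 0.2]))
```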
NASA Astrophysics Data System (ADS)
Feigenbaum, Eyal; Hiszpanski, Anna M.
2017-07-01
A phase accumulation tracking (PAT) algorithm is proposed and demonstrated for retrieving the effective index of fishnet metamaterials (FMMs) while avoiding the multi-branch uncertainty problem. The algorithm tracks the phase and amplitude of the dominant propagation mode across the FMM slab. The suggested PAT algorithm applies to resonant guided wave networks having only one mode that carries the light between the two slab ends, of which the FMM is one example. The effective index is a net effect of positive and negative accumulated phase in the alternating FMM metal and dielectric layers, with a negative effective index occurring when negative phase accumulation dominates.
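A minimal numerical sketch of phase accumulation tracking: sample the dominant mode's complex amplitude at successive planes through the slab, unwrap the accumulated phase so no 2π branch is ever chosen, and divide by k0 times the thickness. The wavelength, thickness, and loss below are assumed values.

```python
import numpy as np

wavelength = 1.55e-6                      # assumed operating wavelength [m]
k0 = 2 * np.pi / wavelength

# Complex amplitude of the dominant mode at successive planes through the slab
# (synthetic example with net negative phase accumulation).
z = np.linspace(0.0, 600e-9, 121)         # slab thickness 600 nm (assumed)
n_eff_true = -1.2
field = np.exp(1j * n_eff_true * k0 * z) * np.exp(-0.2e6 * z)  # phase + loss

phase = np.unwrap(np.angle(field))        # track accumulated phase, no 2π jumps
n_eff = (phase[-1] - phase[0]) / (k0 * (z[-1] - z[0]))
print(f"retrieved effective index: {n_eff:.3f}")   # ~ -1.2, branch-free
```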
Automatic computation of 2D cardiac measurements from B-mode echocardiography
NASA Astrophysics Data System (ADS)
Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin
2012-03-01
We propose a robust and fully automatic algorithm that computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the structure of the left ventricle, including the mitral and aortic valves. It employs a pseudo anatomic M-mode image, generated by accumulating line images from the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results on a large volume of data show that the algorithm runs fast and achieves robustness comparable to an expert's.
Bouslimi, D; Coatrieux, G; Roux, Ch
2011-01-01
In this paper, we propose a new joint watermarking/encryption algorithm for verifying the reliability of medical images in both the encrypted and the spatial domain. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in CBC mode of operation. The proposed solution gives access to the outcomes of checks on the image's integrity and origin even though the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with, or transparent to, the DICOM standard.
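The combination can be sketched in a few lines: QIM embeds one bit per pixel by selecting between two interleaved quantizers, and the marked image is then encrypted with AES in CBC mode (here via the Python cryptography package). The quantization step and key handling are illustrative, and only spatial-domain extraction after decryption is shown, not the encrypted-domain verification of the full scheme.

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def qim_embed(pixels, bits, delta=8):
    """QIM: quantize each pixel to an even or odd multiple of delta/2
    according to the watermark bit (step delta assumed for illustration)."""
    q = np.round(pixels / delta) * delta               # coarse quantizer
    return (q + (delta // 2) * np.asarray(bits)).astype(np.uint8)

def qim_extract(pixels, delta=8):
    return ((np.asarray(pixels).astype(int) % delta) >= delta // 2).astype(int)

# Watermark an 8-bit "image", then encrypt it with AES in CBC mode.
rng = np.random.default_rng(4)
image = rng.integers(0, 240, size=(16, 16), dtype=np.uint8)
bits = rng.integers(0, 2, size=image.size)
marked = qim_embed(image.ravel(), bits).reshape(image.shape)

key, iv = os.urandom(16), os.urandom(16)               # AES-128 key, CBC IV
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(marked.tobytes()) + enc.finalize()   # 256 B = 16 blocks

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
restored = np.frombuffer(dec.update(ciphertext) + dec.finalize(),
                         dtype=np.uint8).reshape(image.shape)
assert (qim_extract(restored.ravel()) == bits).all()   # watermark survives decrypt
```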
Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.
Wang, Tao; Xie, Wenfang; Zhang, Youmin
2012-05-01
In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known; different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first, developed to accommodate uncertainties and faults without exact bound information. The stability of the overall control systems is proved using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
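The second (adaptive) algorithm's key idea, raising the switching gain with |s| so that no uncertainty bound is needed, can be illustrated on a toy double integrator with a matched disturbance and a partial actuator fault. All gains and fault values below are assumed, and a tanh boundary layer stands in for the discontinuous sign function to limit chattering.

```python
import numpy as np

# Toy plant: double integrator with matched uncertainty d(t) and an actuator
# whose effectiveness rho drops at t = 2 s (unknown to the controller).
dt, T = 1e-3, 5.0
lam, gamma, phi = 2.0, 5.0, 0.05        # sliding slope, adaptation rate, boundary layer
x, xd = np.zeros(2), np.array([1.0, 0.0])   # state, constant reference
K_hat = 0.0                                 # adaptive switching gain

for k in range(int(T / dt)):
    t = k * dt
    e, ed = x[0] - xd[0], x[1] - xd[1]
    s = ed + lam * e                        # sliding variable
    K_hat += gamma * abs(s) * dt            # adapt gain: no bound knowledge needed
    u = -lam * ed - K_hat * np.tanh(s / phi)  # equivalent control + robust term
    rho = 0.6 if t > 2.0 else 1.0           # 40% loss of actuator effectiveness
    d = 0.8 * np.sin(3 * t)                 # bounded modeling uncertainty
    x = x + dt * np.array([x[1], rho * u + d])

print("final tracking error:", x[0] - xd[0])
```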
Analysis and an image recovery algorithm for ultrasonic tomography system
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1994-01-01
The problem of ultrasonic reflectivity tomography is similar to that of a spotlight-mode aircraft synthetic aperture radar (SAR) system. The analysis of a circular-path spotlight-mode SAR in this paper leads to insight into the system characteristics. It indicates that such a system, when operated over a wide bandwidth, is capable of achieving the ultimate resolution: one quarter of the wavelength of the carrier frequency. An efficient processing algorithm based on the exact two-dimensional spectrum is presented. Simulation results indicate that the impulse responses meet the predicted resolution performance. Compared to an algorithm previously developed for ultrasonic reflectivity tomography, the throughput rate of this algorithm is about ten times higher.
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
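A compact version of the decoder for a small code: the search tree is over information bits, the cost-so-far uses the systematic positions already fixed, and the optimistic heuristic credits every undecided position with its best achievable per-symbol score, which keeps the heuristic admissible. This sketch for the (7,4) Hamming code is illustrative, not the implementation described above (which handles codes up to length 64 and visits the most reliable symbols first).

```python
import heapq
import numpy as np

# Systematic generator matrix of the (7,4) Hamming code; BPSK over AWGN, so
# ML decoding maximizes the correlation  sum_i r_i * (1 - 2*c_i).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
K, N = G.shape

def a_star_decode(r):
    slack = np.abs(r)                 # best achievable per-symbol score
    heap = [(-slack.sum(), ())]       # (negated optimistic score, fixed info bits)
    while heap:
        neg_f, info = heapq.heappop(heap)
        if len(info) == K:            # leaf scores are exact, so first leaf = ML
            return np.array(info), np.array(info) @ G % 2
        for b in (0, 1):
            new = info + (b,)
            if len(new) == K:         # complete codeword: exact metric, h = 0
                c = np.array(new) @ G % 2
                f = float(r @ (1 - 2 * c))
            else:                     # partial: exact systematic part + bound
                j = len(new)
                f = float(r[:j] @ (1 - 2 * np.array(new))) + slack[j:].sum()
            heapq.heappush(heap, (-f, new))

rng = np.random.default_rng(5)
msg = rng.integers(0, 2, K)
tx = 1.0 - 2.0 * (msg @ G % 2)        # BPSK mapping: 0 -> +1, 1 -> -1
r = tx + rng.normal(scale=0.6, size=N)
info_hat, cw_hat = a_star_decode(r)
print("sent:", msg, "decoded:", info_hat)
```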
A Brightness-Referenced Star Identification Algorithm for APS Star Trackers
Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning
2014-01-01
Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes the k-vector search theory and adds imaged stars' intensities to narrow the search scope and thereby increase the efficiency of the matching process. Based on different imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If reference bright stars (stars brighter than magnitude three) show up, the algorithm runs the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods, the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog composed of 1500 stars, the results show that without false stars the efficiency of this new method is 4-5 times that of the pyramid method and 35-37 times that of the geometric method. PMID:25299950
Decision tree methods: applications for classification and prediction.
Song, Yan-Yan; Lu, Ying
2015-04-25
Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that construct an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset is used to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces frequently used algorithms for developing decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
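The train/validate sizing step has a direct analogue in the CART-style trees available in scikit-learn, where candidate subtrees along the cost-complexity pruning path are scored on a held-out validation set. The sketch below is an illustration of that workflow (the paper itself describes SPSS and SAS programs).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Split into training (build the tree) and validation (choose the tree size).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Candidate subtrees along the CART cost-complexity pruning path.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_tr, y_tr)
     for a in path.ccp_alphas),
    key=lambda tree: tree.score(X_val, y_val),   # validation accuracy picks size
)
print("leaves:", best.get_n_leaves(), "validation acc:", best.score(X_val, y_val))
```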
Decision Accuracy in Computer-Mediated versus Face-to-Face Decision-Making Teams.
Hedlund; Ilgen; Hollenbeck
1998-10-01
Changes in the way organizations are structured and advances in communication technologies are two factors that have altered the conditions under which group decisions are made. Decisions are increasingly made by teams that have a hierarchical structure and whose members have different areas of expertise. In addition, many decisions are no longer made via strictly face-to-face interaction. The present study examines the effects of two modes of communication, face-to-face (FtF) and computer-mediated (CM), on the accuracy of teams' decisions. The teams are characterized by a hierarchical structure, and their members differ in expertise, consistent with the framework outlined in the Multilevel Theory of team decision making presented by Hollenbeck, Ilgen, Sego, Hedlund, Major, and Phillips (1995). Sixty-four four-person teams worked for 3 h on a computer simulation, interacting either face-to-face or over a computer network. The communication mode had mixed effects on team processes: members of FtF teams were better informed and made recommendations that were more predictive of the correct team decision, but leaders of CM teams were better able to differentiate staff members on the quality of their decisions. Controlling for the negative impact of FtF communication on staff member differentiation increased the beneficial effect of the FtF mode on overall decision making accuracy. Copyright 1998 Academic Press.
Robust and real-time rotor control with magnetic bearings
NASA Technical Reports Server (NTRS)
Sinha, A.; Wang, K. W.; Mease, K. L.
1991-01-01
This paper deals with the sliding mode control of a rigid rotor via radial magnetic bearings. The digital control algorithm and the results from numerical simulations are presented for an experimental rig. The experimental system which has been set up to digitally implement and validate the sliding mode control algorithm is described. Two methods for the development of the control software are presented. Experimental results for the individual rotor axes are discussed.
Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery are required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, with tracking of the fading parameters performed during the data cycle by the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
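Of the Bussgang-type blind equalizers compared above, the constant modulus algorithm (CMA) is the canonical example; a minimal sketch of it on a QPSK signal through an assumed ISI channel is given below, with all parameters chosen for illustration.

```python
import numpy as np

# CMA (a Bussgang-type blind equalizer) for QPSK through an unknown ISI channel.
rng = np.random.default_rng(6)
n = 20000
sym = (rng.integers(0, 2, n) * 2 - 1 + 1j * (rng.integers(0, 2, n) * 2 - 1)) / np.sqrt(2)
h = np.array([1.0, 0.35 + 0.2j, 0.15])          # ISI channel (assumed, unknown to Rx)
x = np.convolve(sym, h, mode="full")[:n] + 0.01 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))

L, mu, R2 = 11, 1e-3, 1.0                       # taps, step size, CM radius (|QPSK|=1)
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0                                 # centre-spike initialization
for k in range(L, n):
    u = x[k - L + 1:k + 1][::-1]                # regressor (most recent sample first)
    y = w @ u                                   # equalizer output
    e = y * (np.abs(y) ** 2 - R2)               # CMA error: penalize |y|^2 != R2
    w -= mu * e * np.conj(u)                    # stochastic-gradient update

y_out = np.array([w @ x[k - L + 1:k + 1][::-1] for k in range(n - 2000, n)])
print("residual modulus error:", np.mean((np.abs(y_out) ** 2 - R2) ** 2))
```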
An efficient mode-splitting method for a curvilinear nearshore circulation model
Shi, Fengyan; Kirby, James T.; Hanes, Daniel M.
2007-01-01
A mode-splitting method is applied to the quasi-3D nearshore circulation equations in generalized curvilinear coordinates. The gravity wave mode and the vorticity wave mode of the equations are derived using the two-step projection method. Using an implicit algorithm for the gravity mode and an explicit algorithm for the vorticity mode, we combine the two modes to derive a mixed difference–differential equation with respect to surface elevation. McKee et al.'s [McKee, S., Wall, D.P., and Wilson, S.K., 1996. An alternating direction implicit scheme for parabolic equations with mixed derivative and convective terms. J. Comput. Phys., 126, 64–76.] ADI scheme is then used to solve the parabolic-type equation in dealing with the mixed derivative and convective terms from the curvilinear coordinate transformation. Good convergence rates are found in two typical cases which represent respectively the motions dominated by the gravity mode and the vorticity mode. Time step limitations imposed by the vorticity convective Courant number in vorticity-mode-dominant cases are discussed. Model efficiency and accuracy are verified in model application to tidal current simulations in San Francisco Bight.
Automated aberration correction of arbitrary laser modes in high numerical aperture systems.
Hering, Julian; Waller, Erik H; Von Freymann, Georg
2016-12-12
Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for amplitude and phase of the pupil function our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography also applications like optical tweezers and microscopy might benefit from the method presented.
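For reference, the core of the unmodified Gerchberg-Saxton iteration, alternating between the pupil and focal planes while imposing the known amplitudes, looks as follows. The pupil, target doughnut profile, and iteration count are assumed, and the paper's modifications (circularly polarized measurement, first-guess amplitude and phase initialization) are not reproduced.

```python
import numpy as np

n = 256
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = xx**2 + yy**2
pupil_amp = (r2 <= 1.0).astype(float)               # uniform circular pupil (assumed)
target_amp = np.sqrt(r2 * np.exp(-r2 / 0.02))       # doughnut-like focal target (assumed)

rng = np.random.default_rng(7)
phase = rng.uniform(0, 2 * np.pi, (n, n))           # random first guess
for _ in range(50):
    # Propagate SLM plane -> focal plane (centred with fftshift).
    focal = np.fft.fftshift(np.fft.fft2(pupil_amp * np.exp(1j * phase)))
    focal = target_amp * np.exp(1j * np.angle(focal))   # impose target amplitude
    back = np.fft.ifft2(np.fft.ifftshift(focal))        # propagate back
    phase = np.angle(back)                              # keep phase, reset amplitude
# `phase` is the hologram to display on the SLM; aberration terms would be
# added to the pupil before re-running, as in automated compensation schemes.
```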
Selection of experimental modal data sets for damage detection via model update
NASA Technical Reports Server (NTRS)
Doebling, S. W.; Hemez, F. M.; Barlow, M. S.; Peterson, L. D.; Farhat, C.
1993-01-01
When using a finite element model update algorithm for detecting damage in structures, it is important that the experimental modal data sets used in the update be selected in a coherent manner. In the case of a structure with extremely localized modal behavior, it is necessary to use both low and high frequency modes, but many of the modes in between may be excluded. In this paper, we examine two different mode selection strategies based on modal strain energy, and compare their success to the choice of an equal number of modes based merely on lowest frequency. Additionally, some parameters are introduced to enable a quantitative assessment of the success of our damage detection algorithm when using the various set selection criteria.
A Wave Diagnostics in Geophysics: Algorithmic Extraction of Atmosphere Disturbance Modes
NASA Astrophysics Data System (ADS)
Leble, S.; Vereshchagin, S.
2018-04-01
The problem of diagnostics in geophysics is discussed, and a proposal based on the dynamic projecting operators technique is formulated. The general exposition is demonstrated by an example of a symbolic algorithm for the wave and entropy modes in an exponentially stratified atmosphere. The novel technique is developed as a discrete version of the evolution operator and the corresponding projectors via the discrete Fourier transformation. Its explicit realization for directed modes in an exponential one-dimensional atmosphere is presented via the corresponding projection operators in discrete form, in terms of matrices with a prescribed action on arrays formed from observation tables. A simulation based on an oppositely directed (upward and downward) wave train solution is performed, and the extraction of the modes from a mixture is illustrated.
Range Sensor-Based Efficient Obstacle Avoidance through Selective Decision-Making.
Shim, Youngbo; Kim, Gon-Woo
2018-03-29
In this paper, we address a collision avoidance method for mobile robots. Many conventional obstacle avoidance methods have focused solely on avoiding obstacles. However, this can cause instability when passing through a narrow passage and can also generate zig-zag motions. We define two strategies for obstacle avoidance, known as Entry mode and Bypass mode. Entry mode is a pattern for passing through the gap between obstacles, while Bypass mode is a pattern for making a detour around obstacles safely. With these two modes, we propose an efficient obstacle avoidance method based on the Expanded Guide Circle (EGC) method with selective decision-making. Simulation and experimental results show the validity of the proposed method.
Application of majority voting and consensus voting algorithms in N-version software
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.
2018-05-01
N-version programming is one of the most common techniques used to improve the reliability of software by building in fault tolerance and redundancy and by decreasing common cause failures. N different equivalent software versions are developed by N different, isolated workgroups from the same software specification. The versions solve the same task and return results that have to be compared to determine the correct result. The decisions of the N versions are evaluated by a voting algorithm, the so-called voter. In this paper, two of the most commonly used software voting algorithms, the majority voting algorithm and the consensus voting algorithm, are studied. The distinctive features of N-version programming with majority voting and N-version programming with consensus voting are described. These two algorithms make a decision about the correct result on the basis of the agreement matrix. However, if the equivalence relation on the agreement matrix is not satisfied, it is impossible to make a decision. It is shown that the agreement matrix can be transformed into an appropriate form, by using Boolean compositions, so that the equivalence relation is satisfied.
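When version outputs can be compared by exact equality (so the equreement grouping trivially satisfies the equivalence relation), the two voters reduce to a few lines; the version results below are invented for illustration:

```python
from collections import Counter

def majority_vote(outputs):
    """Accept a result only if more than half of the N versions agree."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None   # None: no decision

def consensus_vote(outputs):
    """Accept the result of the largest agreement group (plurality)."""
    return Counter(outputs).most_common(1)[0][0]

outputs = [42, 42, 41, 42, 7]          # results returned by 5 equivalent versions
print("majority :", majority_vote(outputs))   # 42 (3 of 5 agree)
print("consensus:", consensus_vote(outputs))  # 42 (largest agreement group wins)
```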
Investigation of effective decision criteria for multiobjective optimization in IMRT.
Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H
2011-06-01
To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans, and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose (EUD(alpha,beta)) formula derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anticorrelated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. Target dose range, (D(max) - D(min)), decision criteria were found to be most effective for keeping targets uniform. Using target gEUD decision criteria resulted in much lower OAR doses but much higher target dose variation. EUD(alpha,beta)-based decision criteria focused on a region of plan space that was a compromise between target and OAR objectives. None of these target decision criteria dominated plans using other criteria; each merely focused on approaching a different area of the Pareto front. The choice of decision criteria implemented in the MOEA had a significant impact on the region explored and the rate of convergence toward the Pareto front. When more decision criteria, anticorrelated decision criteria, or decision criteria with insufficient information were implemented, inferior populations resulted. When more informative decision criteria were used, such as gEUD, EUD(alpha,beta), target dose range, and mean dose, MOEA optimizations focused on approaching different regions of the Pareto front, but did not dominate each other. Using simple OAR decision criteria and target EUD(alpha,beta) decision criteria demonstrated the potential to generate IMRT plans that significantly reduce dose to OARs while achieving the same or better tumor control when clinical requirements on target dose variance can be met or relaxed.
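Several of the decision criteria above are one-line formulas; for instance the generalized equivalent uniform dose used for the OARs, gEUD = (mean_i d_i^a)^(1/a), which reduces to the mean dose at a = 1 and approaches the maximum dose as a grows. A sketch with made-up voxel doses:

```python
import numpy as np

def gEUD(dose, a):
    """Generalized equivalent uniform dose of a voxel dose array:
    gEUD = (mean(d_i ** a)) ** (1/a)."""
    d = np.asarray(dose, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

oar_dose = np.array([12.0, 18.0, 25.0, 31.0, 40.0])   # illustrative voxel doses [Gy]
print(gEUD(oar_dose, a=1))    # equals the mean dose
print(gEUD(oar_dose, a=4))    # the OAR decision criterion used in the study
print(gEUD(oar_dose, a=40))   # approaches the maximum dose
```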
NASA Astrophysics Data System (ADS)
Siomos, Nikolaos; Filoglou, Maria; Poupkou, Anastasia; Liora, Natalia; Dimopoulos, Spyros; Melas, Dimitris; Chaikovsky, Anatoli; Balis, Dimitris
2015-04-01
Vertical profiles of the aerosol mass concentration derived by a retrieval algorithm that uses combined sunphotometer and LIDAR data (LIRIC) were used to validate the mass concentration profiles estimated by the air quality model CAMx. LIDAR and CIMEL measurements of the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki were used for this validation. The aerosol mass concentration profiles of the fine and coarse modes derived by CAMx were compared with the respective profiles derived by the retrieval algorithm. For the coarse mode particles, forecasts of the Saharan dust transport model BSC-DREAM8bV2 were also taken into account. Each of the retrieval algorithm's profiles was matched to the model profile with the best agreement within a time window of four hours before and after the central measurement. OPAC, a software package that provides optical properties of aerosol mixtures, was also employed to calculate the Angstrom exponent and lidar ratio values at 355 nm and 532 nm for each of the model's profiles, enabling a comparison with the Angstrom exponent and lidar ratio values derived by the retrieval algorithm for each measurement. The comparisons between the fine mode aerosol concentration profiles showed good agreement between CAMx and the retrieval algorithm, with the vertical mean bias error never exceeding 7 μg/m3. Concerning the coarse mode aerosol concentration profiles, both the CAMx and BSC-DREAM8bV2 values are severely underestimated, although in cases of Saharan dust transport events there is agreement between the profiles of the BSC-DREAM8bV2 model and the retrieval algorithm.
Multi-test decision tree and its application to microarray data classification.
Czajkowski, Marcin; Grześ, Marek; Kretowski, Marek
2014-05-01
A desirable property of tools used to investigate biological data is that they produce easy-to-understand models and predictive decisions. Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees have a tendency to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity. We propose a multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions. Experimental validation was performed on several real-life gene expression datasets. Comparison results with eight classifiers show that MTDT has a statistically significantly higher accuracy than popular decision tree classifiers, and it was highly competitive with ensemble learning algorithms. The proposed solution outperformed its baseline algorithm on 14 datasets by an average of 6%. A study performed on one of the datasets showed that the genes used in the MTDT classification model are supported by biological evidence in the literature. This paper introduces a new type of decision tree which is more suitable for solving biological problems. MTDTs are relatively easy to analyze and much more powerful in modeling high-dimensional microarray data than their popular counterparts. Copyright © 2014 Elsevier B.V. All rights reserved.
Objective consensus from decision trees.
Putora, Paul Martin; Panje, Cedric M; Papachristofilou, Alexandros; Dal Pra, Alan; Hundsberger, Thomas; Plasswilm, Ludwig
2014-12-05
Consensus-based approaches provide an alternative to evidence-based decision making, especially in situations where high-level evidence is limited. Our aim was to demonstrate a novel source of information: objective consensus based on recommendations in decision tree format from multiple sources. Based on nine sample recommendations in decision tree format, a representative analysis was performed. The most common (mode) recommendations for each eventuality (each permutation of parameters) were determined. The same procedure was applied to real clinical recommendations for primary radiotherapy for prostate cancer. Data were collected from 16 radiation oncology centres, converted into decision tree format and analyzed in order to determine the objective consensus. Based on information from multiple sources in decision tree format, treatment recommendations can be assessed for every parameter combination. An objective consensus can be determined by means of mode recommendations without compromise or confrontation among the parties. In the clinical example involving prostate cancer therapy, three parameters were used with two cut-off values each (Gleason score, PSA, T-stage), resulting in a total of 27 possible combinations per decision tree. Despite significant variations among the recommendations, a mode recommendation could be found for specific combinations of parameters. Recommendations represented as decision trees can serve as a basis for objective consensus among multiple parties.
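The mode-recommendation computation itself is simple once every source is in decision-tree form: enumerate all parameter combinations and take the most common recommendation for each. The three "centre" trees below are invented stand-ins, not the study's actual recommendations.

```python
from collections import Counter
from itertools import product

# Hypothetical recommendations from three centres, each a decision tree mapping
# (gleason, psa, t) bands to a treatment; the band labels are illustrative.
params = {"gleason": ["<7", "7", ">7"], "psa": ["<10", "10-20", ">20"],
          "t": ["<=T2a", "T2b-T2c", ">=T3"]}

def centre_a(g, p, t): return "RT+longADT" if g == ">7" or t == ">=T3" else "RT"
def centre_b(g, p, t): return "RT+longADT" if g == ">7" else ("RT+shortADT" if p == ">20" else "RT")
def centre_c(g, p, t): return "RT" if g == "<7" and p == "<10" else "RT+shortADT"
centres = [centre_a, centre_b, centre_c]

# Objective consensus: the mode recommendation per parameter combination.
for combo in product(*params.values()):               # all 27 eventualities
    votes = Counter(f(*combo) for f in centres)
    rec, n = votes.most_common(1)[0]
    print(combo, "->", rec if n > 1 else "no mode (all differ)")
```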
NASA Astrophysics Data System (ADS)
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB over fiber system. To compensate for the power fading and chromatic dispersion affecting the high-frequency multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. A negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10-3 after 50 km of SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km of SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km of SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
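As a rough illustration of SNR-adaptive bit-loading with a code-rate switch (not the paper's exact ALCR rule), the sketch below allocates bits per subcarrier with a gap approximation and picks an LDPC rate from the mean SNR margin; the roll-off profile, gap, and thresholds are all assumed.

```python
import numpy as np

gap_db = 6.0                                      # SNR gap to capacity (assumed)
snr_db = 22 - 12 * np.linspace(0, 1, 128) ** 2    # dispersion-induced roll-off
snr_eff = 10 ** ((snr_db - gap_db) / 10)
bits = np.clip(np.floor(np.log2(1 + snr_eff)), 0, 7).astype(int)  # up to 128-QAM

margin_db = snr_db.mean() - 16.0                  # margin above a nominal threshold
rates = [(3.0, 0.75), (1.5, 0.66), (-1e9, 0.50)]  # (min margin dB, LDPC code rate)
code_rate = next(rate for m, rate in rates if margin_db >= m)
print("bits/OFDM symbol:", bits.sum(), "| LDPC code rate:", code_rate)
```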
Task Performance with List-Mode Data
NASA Astrophysics Data System (ADS)
Caucci, Luca
This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support
Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.
2014-01-01
Background: Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm-based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective: To develop a toolbox for improving clinical decision-support algorithms. Methods: The toolbox has three main components. 1) Data preparation: data from several heterogeneous sources is extracted, cleaned and stored in a uniform data format. 2) Simulation: the effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: algorithm performance is measured, analyzed and simulated using data from three clinical trials with a total of 166 patients. Results: Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion: These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768
Vehicle Mode and Driving Activity Detection Based on Analyzing Sensor Data of Smartphones.
Lu, Dang-Nhac; Nguyen, Duc-Nhan; Nguyen, Thi-Hau; Nguyen, Ha-Nam
2018-03-29
In this paper, we present a flexible combined system, the Vehicle mode-driving Activity Detection System (VADS), that is capable of detecting either the current vehicle mode or the current driving activity of travelers. Our proposed system is designed to be lightweight in computation and very fast in responding to changes in travelers' vehicle modes or driving events. The vehicle mode detection module is responsible for recognizing both motorized vehicles, such as cars, buses, and motorbikes, and non-motorized modes, for instance walking and bikes. It relies only on accelerometer data in order to minimize the energy consumption of smartphones. By contrast, the driving activity detection module uses the data collected from the accelerometer, gyroscope, and magnetometer of a smartphone to detect various driving activities, i.e., stopping, going straight, turning left, and turning right. Furthermore, we propose a method to compute the optimized data window size and the optimized overlapping ratio for each vehicle mode and each driving event from the training datasets. The experimental results show that this strategy significantly increases the overall prediction accuracy. Additionally, numerous experiments are carried out to compare the impact of different feature sets (time domain features, frequency domain features, Hjorth features) as well as the impact of various classification algorithms (Random Forest, Naïve Bayes, Decision tree J48, K Nearest Neighbor, Support Vector Machine) on the prediction accuracy. Our system achieves an average accuracy of 98.33% in detecting the vehicle modes and an average accuracy of 98.95% in recognizing the driving events of motorcyclists when using the Random Forest classifier and a feature set containing time domain features, frequency domain features, and Hjorth features. Moreover, on a public dataset from the HTC company in New Taipei, Taiwan, our framework obtains an overall accuracy of 97.33%, which is considerably higher than that of the state of the art.
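A minimal sketch of the feature pipeline plus Random Forest classifier described above, using synthetic accelerometer windows; the feature list (time-domain statistics, a dominant-frequency term, and the three Hjorth parameters) follows the abstract, while the window length and data are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hjorth(x):
    """Hjorth parameters of a 1-D window: activity, mobility, complexity."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def features(window):
    """Time-domain + frequency-domain + Hjorth features of one window."""
    mag = np.abs(np.fft.rfft(window))
    return [window.mean(), window.std(), window.min(), window.max(),  # time domain
            mag[1:].argmax(), mag[1:].max(),                          # dominant frequency
            *hjorth(window)]

# Synthetic stand-in for labelled accelerometer windows (real data would come
# from the phone sensors, windowed with the optimized size/overlap per mode).
rng = np.random.default_rng(8)
modes = ["walk", "bike", "motorbike", "car", "bus"]
X = [features(rng.standard_normal(256) * (i + 1))
     for i, m in enumerate(modes) for _ in range(40)]
y = [m for m in modes for _ in range(40)]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([features(rng.standard_normal(256) * 3)]))  # likely "motorbike"
```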
Shalom, Erez; Shahar, Yuval; Parmet, Yisrael; Lunenfeld, Eitan
2015-04-01
To quantify the effect of a new continuous-care guideline (GL)-application engine, the Picard decision support system (DSS) engine, on the correctness and completeness of clinicians' decisions relative to an established clinical GL, and to assess the clinicians' attitudes towards a specific DSS. Thirty-six clinicians, including residents at different training levels and board-certified specialists at an academic OB/GYN department that handles around 15,000 deliveries annually, agreed to evaluate our continuous-care guideline-based DSS and to perform a cross-over assessment of the effects of using it. We generated electronic patient records that realistically simulated the longitudinal course of six different clinical scenarios of the preeclampsia/eclampsia/toxemia (PET) GL, encompassing 60 different decision points in total. Each clinician managed three scenarios manually without the Picard DSS engine (non-DSS mode) and three scenarios assisted by the Picard DSS engine (DSS mode). The main measures in both modes were correctness and completeness of actions relative to the PET GL. Correctness was further decomposed into necessary and redundant actions, relative to the guideline and the actual patient data. At the end of the assessment, a questionnaire was administered to the clinicians to assess their perceptions regarding use of the DSS. With respect to completeness, the clinicians applied approximately 41% of the GL's recommended actions in the non-DSS mode; completeness increased to approximately 93% of the recommended actions in the DSS mode. With respect to correctness, approximately 94.5% of the clinicians' decisions in the non-DSS mode were correct; however, these comprised 68% of actions that were correct but redundant given the patient's data (e.g., repeating tests that had already been performed) and 27% that were necessary in the context of the GL and of the given scenario. Only 5.5% of the decisions were definite errors. In the DSS mode, 94% of the clinicians' decisions were correct, of which 3% were correct but redundant and 91% were correct and necessary in the context of the GL and of the given scenario. Only 6% of the DSS-mode decisions were erroneous. The DSS was assessed by the clinicians as potentially useful. Support from the GL-based DSS led to uniformity in the quality of the decisions, regardless of the particular clinician, clinical scenario, decision point, or decision type within the scenarios. Using the DSS dramatically enhances completeness (i.e., performance of guideline-based recommendations) and seems to prevent the performance of most of the redundant actions, but does not seem to affect the rate of performance of incorrect actions. The redundancy-rate finding is supported by similar results in recent studies. Clinicians mostly find this support to be potentially useful for their daily practice. A continuous-care GL-based DSS, such as the Picard DSS engine, has the potential to prevent most errors of omission, by ensuring uniformly high quality of clinical decision making (relative to a GL-based norm) through increased adherence (i.e., completeness) to the GL, and most of the errors of commission that increase therapy costs, by reducing the rate of redundant actions.
However, to prevent clinical errors of commission, the DSS needs to be accompanied by additional modules, such as automated control of the quality of the physician's actual actions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Model-Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions, and its implementation in a model checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete-state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds on the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker intended to combine in one formalism the best techniques currently available across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
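To illustrate the edge-valued idea itself (a toy sketch, not the authors' library): the function value is a dangling constant plus the sum of edge values along the path selected by the variable assignment, and isomorphic subgraphs are shared.

```python
class Node:
    """One level of an edge-valued decision diagram: for each value of its
    variable it holds an (edge_value, child) pair; child None is the terminal."""
    def __init__(self, var, edges):
        self.var = var
        self.edges = edges  # list of (edge_value, child_or_None)

def evaluate(constant, node, assignment):
    """Function value = dangling constant + sum of edge values on the path."""
    total = constant
    while node is not None:
        value, child = node.edges[assignment[node.var]]
        total += value
        node = child
    return total

# Encode f(x, y) = 2*x + 3*y for x, y in {0, 1}:
leaf_y = Node("y", [(0, None), (3, None)])
root_x = Node("x", [(0, leaf_y), (2, leaf_y)])  # subgraph sharing, as in EVMDDs

assert evaluate(0, root_x, {"x": 1, "y": 1}) == 5
```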
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Markov chain Monte Carlo parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) that implements this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
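The core trick, expanding a log-sampled power spectrum into complex power laws via an FFT, can be sketched in a few lines (a toy stand-in spectrum and bias exponent, not the FAST-PT code itself):

```python
import numpy as np

N = 64
nu = -2.0                                  # power-law bias, an assumed tuning choice
k = np.logspace(-3, 1, N)                  # log-spaced wavenumbers
delta = np.log(k[1] / k[0])
P = k**0.96 / (1.0 + (k / 0.1)**2)**2      # stand-in "power spectrum"

# Fourier coefficients of the biased spectrum on the log grid.
c = np.fft.fft(P * k**(-nu)) / N
m = np.fft.fftfreq(N, d=1.0) * N           # integer mode numbers
eta = 2.0 * np.pi * m / (N * delta)

# Reconstruct P(k) as a superposition of complex power laws k**(nu + i*eta_m);
# the reconstruction is exact at the sample points by the inverse DFT identity.
P_rec = np.real(sum(c_m * (k / k[0])**(1j * e) for c_m, e in zip(c, eta))) * k**nu
assert np.allclose(P, P_rec)
```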
A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.
Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem
2017-12-30
Transportation planning and solutions have an enormous impact on city life. To minimize transport duration, urban planners must understand and characterize the mobility of a city. Thus, researchers look toward monitoring people's daily activities, including transportation types and durations, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture that improves the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine learning-based solutions. Our real-life test results show that the Healing algorithm can achieve up to a 40% improvement in the classification results. As a result, the implemented mobile application can predict eight classes, including stationary, walking, car, bus, tram, train, metro and ferry, with a success rate of 95%, thanks to the proposed multi-tier architecture and Healing algorithm.
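The paper's Healing algorithm is not reproduced here; as a hedged stand-in, a sliding majority vote over neighboring window labels illustrates the kind of post-processing that suppresses short, implausible mode flips:

```python
from collections import Counter

def heal(labels, window=5):
    """Replace each predicted transport-mode label by the majority label in a
    centered window, suppressing short spurious flips (e.g. bus->walk->bus)."""
    half = window // 2
    healed = []
    for i in range(len(labels)):
        neighborhood = labels[max(0, i - half): i + half + 1]
        healed.append(Counter(neighborhood).most_common(1)[0][0])
    return healed

raw = ["bus", "bus", "walk", "bus", "bus", "walk", "walk", "walk"]
# The isolated 'walk' at index 2 is corrected:
print(heal(raw))  # ['bus', 'bus', 'bus', 'bus', 'walk', 'walk', 'walk', 'walk']
```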
Multiparty Quantum Key Agreement Based on Quantum Search Algorithm
Cao, Hao; Ma, Wenping
2017-01-01
Quantum key agreement is an important topic in which the shared key must be negotiated equally by all participants, and no nontrivial subset of participants can fully determine the shared key. To date, the subkey embedding modes of all previously proposed quantum key agreement protocols have been based on either BB84 or entangled states; quantum key agreement protocols based on quantum search algorithms have not yet been studied. In this paper, on the basis of investigating the properties of quantum search algorithms, we propose the first quantum key agreement protocol whose subkey embedding mode is based on a quantum search algorithm, namely Grover's algorithm. A novel five-party protocol example is presented. The efficiency analysis shows that our protocol is superior to existing multiparty quantum key agreement (MQKA) protocols. Furthermore, it is secure against both external and internal attacks. PMID:28332610
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. To develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
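As a minimal sketch of how the output of such an evaluation is used (made-up costs and effectiveness values, not from the paper): pick the intervention with maximal net monetary benefit as the willingness-to-pay varies, and report the intervals and the thresholds where the optimum switches.

```python
# Net monetary benefit: NMB(intervention, lam) = lam * effectiveness - cost.
interventions = {
    "no treatment": (0.0, 1.0),      # (cost in euros, effectiveness in QALYs)
    "drug A":       (5000.0, 1.5),   # all numbers are made up for illustration
    "drug B":       (9000.0, 1.7),
}

def optimal(lam):
    return max(interventions,
               key=lambda name: lam * interventions[name][1] - interventions[name][0])

# Scan willingness-to-pay on a 1-euro grid; thresholds are found to the nearest euro.
current, start = optimal(0.0), 0
for lam in range(0, 40001):
    best = optimal(float(lam))
    if best != current:
        print(f"[{start}, {lam}) euros/QALY -> {current}")
        current, start = best, lam
print(f"[{start}, inf) euros/QALY -> {current}")
```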
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
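A hedged numpy sketch of the block-Krylov recurrence follows (random stand-ins for the stiffness and mass matrices and for the boundary flexibility seed vectors; not the MSC/NASTRAN DMAP implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, block, n_blocks = 50, 3, 4

# Stand-in stiffness and mass matrices (symmetric positive definite).
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)
M = np.diag(rng.uniform(1.0, 2.0, n))

X = rng.standard_normal((n, block))  # stand-in for boundary flexibility seeds
basis = []
for _ in range(n_blocks):
    # Orthogonalize the current block against itself and the kept blocks.
    X, _ = np.linalg.qr(X)
    for B in basis:
        X -= B @ (B.T @ X)
    X, _ = np.linalg.qr(X)
    basis.append(X)
    X = np.linalg.solve(K, M @ X)    # block-Krylov recurrence: X_{k+1} = K^-1 M X_k

Q = np.hstack(basis)                 # Ritz basis replacing component normal modes
K_red, M_red = Q.T @ K @ Q, Q.T @ M @ Q   # reduced-order component model
```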
Multi-class Mode of Action Classification of Toxic Compounds Using Logic Based Kernel Methods.
Lodhi, Huma; Muggleton, Stephen; Sternberg, Mike J E
2010-09-17
Toxicity prediction is essential for drug design and the development of effective therapeutics. In this paper we present an in silico strategy for identifying the mode of action of toxic compounds, based on a novel logic-based kernel method. The technique uses support vector machines in conjunction with kernels constructed from first-order rules induced by an Inductive Logic Programming system. It constructs multi-class models by using a divide-and-conquer reduction strategy that splits multi-class problems into binary groups and solves each individual problem recursively, hence generating an underlying decision-list structure. In order to evaluate the effectiveness of the approach for chemoinformatics problems like predictive toxicology, we apply it to toxicity classification in aquatic systems. The method is used to identify and classify 442 compounds with respect to their mode of action. The experimental results show that the technique successfully classifies toxic compounds and can be useful in assessing environmental risks. Experimental comparison of the performance of the proposed multi-class scheme with the standard multi-class Inductive Logic Programming algorithm and multi-class Support Vector Machine yields statistically significant results and demonstrates the potential power and benefits of the approach in identifying compounds of various toxic mechanisms. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fuzzy bilevel programming with multiple non-cooperative followers: model, algorithm and application
NASA Astrophysics Data System (ADS)
Ke, Hua; Huang, Hu; Ralescu, Dan A.; Wang, Lei
2016-04-01
In centralized decision problems, it is not complicated for decision-makers to select modelling techniques under uncertainty. When a decentralized decision problem is considered, however, choosing appropriate models is no longer easy, owing to the difficulty of estimating the other decision-makers' inconclusive decision criteria. These decision criteria may vary between decision-makers because of their particular risk tolerances and management requirements. Considering the general differences among the decision-makers in decentralized systems, we propose a general framework of fuzzy bilevel programming including hybrid models (integrating different modelling methods at different levels). Specifically, we discuss two of these models, which may have wide applications in many fields. Furthermore, we apply the two proposed models to formulate a pricing decision problem in a decentralized supply chain with fuzzy coefficients. In order to solve these models, a hybrid intelligent algorithm integrating fuzzy simulation, a neural network and particle swarm optimization based on a penalty-function approach is designed. Some suggestions on the applications of these models are also presented.
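As a hedged sketch of the particle-swarm piece with a penalty function (the fuzzy simulation and neural network components are omitted, and the objective and constraint below are toy stand-ins, not the supply-chain model):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):             # toy objective to be minimized
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def violation(x):             # toy constraint g(x) <= 0, here x0 + x1 <= 2
    return max(0.0, x[0] + x[1] - 2.0)

def penalized(x, rho=100.0):  # penalty-function approach
    return objective(x) + rho * violation(x)**2

n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5     # inertia and acceleration coefficients

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros((n_particles, dim))
pbest, pbest_val = x.copy(), np.array([penalized(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([penalized(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # approaches the constrained optimum near (1.5, 0.5)
```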
Finite time control for MIMO nonlinear system based on higher-order sliding mode.
Liu, Xiangjie; Han, Yaozhen
2014-11-01
Considering a class of MIMO uncertain nonlinear systems, a novel finite-time stable control algorithm is proposed based on the higher-order sliding mode concept. The higher-order sliding mode control problem of a MIMO nonlinear system is first transformed into a finite-time stability problem for a multivariable system. A continuous control law, which guarantees finite-time stabilization of the nominal integral chain system, is then employed. The second-order sliding mode is used to overcome the system uncertainties. The high-frequency chattering phenomenon of sliding mode control is greatly weakened, and arbitrarily fast convergence is achieved. Finite-time stability is proved using a quadratic-form Lyapunov function. Examples concerning a triple integral chain system with uncertainty and hovercraft trajectory tracking are simulated to verify the effectiveness and robustness of the proposed algorithm. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach
NASA Astrophysics Data System (ADS)
Demirer, Nazli
The control of systems with autonomous mobile agents has been a point of interest recently, with many applications such as surveillance, coverage, searching over an area with probabilistic target locations, or exploring an area. In all of these applications, the main goal of the swarm is to distribute itself over an operational space to achieve mission objectives specified by the density of the swarm. This research focuses on the problem of controlling the distribution of multi-agent systems under a hierarchical control structure, in which whole-swarm coordination is achieved at the high level and individual vehicle/agent control is managed at the low level. High-level coordination algorithms use macroscopic models that describe the collective behavior of the whole swarm and specify the agent motion commands whose execution will lead to the desired swarm behavior. The low-level control laws execute the motion to follow these commands at the agent level. The main objective of this research is to develop high-level decision control policies and algorithms that achieve physically realizable commanding of the agents by imposing mission constraints on the distribution. We also make some connections with decentralized low-level motion control. This dissertation proposes a Markov chain-based method to control the density distribution of the whole system, where the implementation can be achieved in a decentralized manner with no communication between agents, since establishing communication with a large number of agents is highly challenging. The ultimate goal is to guide the overall density distribution of the system to a prescribed steady-state desired distribution while satisfying desired transition and safety constraints. Here, the desired distribution is determined by the mission requirements; for example, in an area-search application, the desired distribution should match closely with the probabilistic target locations. The proposed method is applicable both to systems with a single agent and to systems with a large number of agents due to its probabilistic nature, in which the probability distribution of each agent's state evolves according to a finite-state, discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem has the form of a Linear Matrix Inequality (LMI) problem, with LMI formulations of the constraints. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on real-time density feedback. Both problems are convex optimization problems that can be solved in a reliable and tractable way using existing tools. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions; that is, there exists a set of actions for the ON mode and another set for the OFF mode.
We give a formulation for a new Markov chain synthesis problem, with additional measurements for the state transitions, in which a policy is designed to ensure desired safety and convergence properties for the underlying Markov chain.
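The LMI/QP synthesis itself is not reproduced here; a hedged Metropolis-Hastings construction illustrates the underlying idea: a Markov matrix that respects motion constraints (graph adjacency) and whose ensemble density converges to a prescribed distribution, implementable by each agent without inter-agent communication.

```python
import numpy as np

# Operational space: 4 bins on a line; agents may stay or move to adjacent bins.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=float)
pi = np.array([0.4, 0.3, 0.2, 0.1])  # desired steady-state density

n = len(pi)
deg = adj.sum(axis=1) - 1            # number of neighbors (excluding self)
M = np.zeros((n, n))                 # M[j, i] = P(move to j | currently in i)
for i in range(n):
    for j in range(n):
        if i != j and adj[i, j]:
            proposal = 1.0 / deg[i]  # propose a neighbor uniformly
            accept = min(1.0, (pi[j] * deg[i]) / (pi[i] * deg[j]))
            M[j, i] = proposal * accept
    M[i, i] = 1.0 - M[:, i].sum()    # stay put with the remaining probability

x = np.full(n, 0.25)                 # initial uniform swarm density
for _ in range(200):
    x = M @ x                        # ensemble density propagation
print(np.round(x, 3))                # -> approximately [0.4, 0.3, 0.2, 0.1]
```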
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes it difficult to decide the procurement volume accurately. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics methods is clearly better than using general random distribution functions. The objective is to establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The optimal management and decision model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm for improving the fuzzy inference and hence effectively reduces the computational complexity of the optimal management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
Phase retrieval in generalized optical interferometry systems.
Farriss, Wesley E; Fienup, James R; Malhotra, Tanya; Vamivakas, A Nick
2018-02-05
Modal analysis of an optical field via generalized interferometry (GI) is a novel technique that treats the field as a linear superposition of transverse modes and recovers the amplitudes of the modal weighting coefficients. We use phase retrieval by nonlinear optimization to recover the phases of these modal weighting coefficients. Information diversity increases the robustness of the algorithm by better constraining the solution. Additionally, multiple sets of random starting phase values help the algorithm overcome local minima. The algorithm was able to recover nearly all coefficient phases for simulated fields consisting of up to 21 superposed Hermite-Gaussian modes and proved to be resilient to shot noise.
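A hedged toy version of this phase retrieval (discrete stand-in modes rather than Hermite-Gaussian beams): with known modal amplitudes, recover the coefficient phases by minimizing the mismatch between modeled and measured intensity, using several random restarts against local minima.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_modes, n_pix = 6, 128

# Stand-in real orthonormal transverse "modes" sampled on a 1-D detector grid.
modes, _ = np.linalg.qr(rng.standard_normal((n_pix, n_modes)))

amps = rng.uniform(0.5, 1.5, n_modes)            # modal amplitudes, assumed known
true_phases = rng.uniform(-np.pi, np.pi, n_modes)
field = modes @ (amps * np.exp(1j * true_phases))
measured = np.abs(field)**2                      # measured intensity

def residual(phases):
    model = modes @ (amps * np.exp(1j * phases))
    return np.sum((np.abs(model)**2 - measured)**2)

# Multiple random starting phase vectors help escape local minima.
best = min(
    (minimize(residual, rng.uniform(-np.pi, np.pi, n_modes), method="BFGS")
     for _ in range(10)),
    key=lambda r: r.fun,
)
print(best.fun)  # near zero when a global minimum is found
```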
Torsional anharmonicity in the conformational thermodynamics of flexible molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F., III; Clary, David C.
We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.
Linear system identification via backward-time observer models
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1992-01-01
Presented here is an algorithm to compute the Markov parameters of a backward-time observer for a backward-time model from experimental input and output data. The backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) for the backward-time system identification. The identified backward-time system Markov parameters are used in the Eigensystem Realization Algorithm to identify a backward-time state-space model, which can be easily converted to the usual forward-time representation. If one reverses time in the model to be identified, what were damped true system modes become modes with negative damping, growing as the reversed time increases. On the other hand, the noise modes in the identification still maintain the property that they are stable. The shift from positive damping to negative damping of the true system modes allows one to distinguish these modes from noise modes. Experimental results are given to illustrate when and to what extent this concept works.
Decentralized learning in Markov games.
Vrancx, Peter; Verbeeck, Katja; Nowé, Ann
2008-08-01
Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games--a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.
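For concreteness, here is a sketch of the linear reward-inaction update that such automata typically use (a single automaton against stationary payoffs; the Markov-game extension of the paper couples many of these):

```python
import numpy as np

rng = np.random.default_rng(0)

def lri_update(p, action, reward, lr=0.05):
    """Linear reward-inaction: on success, shift probability mass toward the
    chosen action; on failure, leave the action probabilities unchanged."""
    if reward > 0:
        p = p - lr * reward * p   # shrink all action probabilities...
        p[action] += lr * reward  # ...and give the mass back to the winner
    return p

# One automaton with three actions; action 1 pays off most often.
payoff = [0.2, 0.8, 0.4]
p = np.full(3, 1.0 / 3.0)
for _ in range(2000):
    a = rng.choice(3, p=p)
    r = float(rng.random() < payoff[a])
    p = lri_update(p, a, r)
print(np.round(p, 2))  # probability mass concentrates on action 1
```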
Single Point vs. Mapping Approach for Spectral Cytopathology (SCP)
Schubert, Jennifer M.; Mazur, Antonella I.; Bird, Benjamin; Miljković, Miloš; Diem, Max
2011-01-01
In this paper we describe the advantages of collecting infrared microspectral data in imaging mode as opposed to point mode. Imaging data are processed using the PapMap algorithm, which co-adds pixel spectra that have been scrutinized for R-Mie scattering effects as well as other constraints. The signal-to-noise quality of PapMap spectra is compared to that of point spectra for oral mucosa cells deposited onto low-e slides. The effects of software atmospheric correction are also discussed. Combined with the PapMap algorithm, data collection in imaging mode proves to be a superior method for spectral cytopathology. PMID:20449833
Eliminating the zero spectrum in Fourier transform profilometry using empirical mode decomposition.
Li, Sikun; Su, Xianyu; Chen, Wenjing; Xiang, Liqun
2009-05-01
Empirical mode decomposition is introduced into Fourier transform profilometry to extract the zero spectrum included in the deformed fringe pattern without the need for capturing two fringe patterns with pi phase difference. The fringe pattern is subsequently demodulated using a standard Fourier transform profilometry algorithm. With this method, the deformed fringe pattern is adaptively decomposed into a finite number of intrinsic mode functions that vary from high frequency to low frequency by means of an algorithm referred to as a sifting process. Then the zero spectrum is separated from the high-frequency components effectively. Experiments validate the feasibility of this method.
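A hedged sketch of the pipeline follows, assuming the third-party PyEMD package for the sifting step; how many leading (high-frequency) IMFs to keep is a heuristic choice here, not the paper's exact criterion.

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD ("EMD-signal") package is installed

x = np.linspace(0, 1, 2048)
phase = 4.0 * np.exp(-((x - 0.5) / 0.2)**2)       # stand-in object phase
fringe = 128 + 100 * np.cos(2 * np.pi * 64 * x + phase)  # fringe + background

imfs = EMD()(fringe)                               # sifting process
# High-frequency IMFs carry the fringe; the last IMFs and the residue carry
# the zero spectrum (slowly varying background), so keep only the first few.
carrier_only = imfs[:2].sum(axis=0)

# Standard Fourier-transform profilometry demodulation on the cleaned signal.
spec = np.fft.fft(carrier_only)
freqs = np.fft.fftfreq(len(x), d=x[1] - x[0])
bandpass = (freqs > 32) & (freqs < 96)             # isolate the +64 cycles/unit lobe
analytic = np.fft.ifft(np.where(bandpass, spec, 0.0))
wrapped_phase = np.angle(analytic)                 # unwrap and subtract carrier next
```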
He, Xiao-Ou; D’Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan
2015-01-01
Background: Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. Aims: We examined how two different SIAs may influence decision making among primary-care physicians. Methods: Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. Results: We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a ‘normal’ interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. Conclusions: This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently. PMID:25763716
NASA Astrophysics Data System (ADS)
Kotelnikov, E. V.; Milov, V. R.
2018-05-01
Rule-based learning algorithms offer higher transparency and easier interpretation than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively for descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. This article compares the quality of solutions to binary and multiclass classification problems, based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees) and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5, although the latter two generate more compact rule sets.
Aid decision algorithms to estimate the risk in congenital heart surgery.
Ruiz-Fernández, Daniel; Monsalve Torra, Ana; Soriano-Payá, Antonio; Marín-Alonso, Oscar; Triana Palencia, Eddy
2016-04-01
In this paper, we have tested the suitability of different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classification of surgical risk provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. We evaluated four machine learning algorithms: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three types of surgical risk: low complexity, medium complexity and high complexity. Accuracy outcomes achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk related to congenital heart disease surgery. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
A fast-initializing digital equalizer with on-line tracking for data communications
NASA Technical Reports Server (NTRS)
Houts, R. C.; Barksdale, W. J.
1974-01-01
A theory is developed for a digital equalizer for reducing intersymbol interference (ISI) on high-speed data communications channels. The equalizer is initialized with a single isolated transmitter pulse, provided the signal-to-noise ratio (SNR) is not unusually low, and then switches to a decision-directed, on-line mode of operation that allows tracking of channel variations. Conditions for optimal tap-gain settings are obtained first for a transversal equalizer structure by using a mean-squared-error (MSE) criterion, a first-order gradient algorithm to determine the adjustable equalizer tap gains, and a sequence of isolated initializing pulses. Since the rate of tap-gain convergence depends on the eigenvalues of a channel output correlation matrix, convergence can be improved by applying a linear transformation to the channel output to obtain a new correlation matrix.
Design Analysis Kit for Optimization and Terascale Applications 6.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-19
Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers, enabling them to: (1) enhance understanding of risk, (2) improve products, and (3) assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a computational model. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. The algorithms implemented in Dakota aim to address the challenges of performing these analyses with complex science and engineering models, from desktops to high performance computers.
Stützer, Paul Philipp; Berlit, Sebastian; Lis, Stefanie; Schmahl, Christian; Sütterlin, Marc; Tuschy, Benjamin
2017-05-01
To investigate sociopsychological factors of women undergoing a caesarean section on maternal request (CSMR). Twenty-eight women who underwent CSMR and 29 women with vaginal delivery (VD) filled in standardized questionnaires concerning psychological burden (SCL-R 90), fear of childbirth (W-DEQ, STAI), personality structure (HEXACO-Pi-R) and social support (F-SozU), as well as one questionnaire assessing potential factors influencing their mode of delivery. Women with CSMR were older (36.5 ± 5.4 vs. 30.6 ± 5.2 years; p < 0.001), suffered more from fear of childbirth (W-DEQ 4.3 ± 0.8 vs. 3.7 ± 1.2; p = 0.041) and concerns for their child (W-DEQ 2.0 ± 1.5 vs. 1.3 ± 0.7; p = 0.026), and appraised the birth less negatively (W-DEQ 2.0 ± 0.7 vs. 2.7 ± 1.1; p = 0.008). The majority of parturients had chosen their preferred mode of delivery before pregnancy (CS 61% vs. VD 82%, p = 0.328). In the decision-making process for the mode of delivery, the advice of the partner (85 and 90%) played an important role. 82% of the women who delivered via CSMR did not regret the decision for this mode of delivery. Women who underwent CS had greater fear of childbirth and appraised the birth less negatively. The majority did not regret the decision for the CS and would even choose this mode of delivery for their next pregnancy. Although the partner and the physician seem to be important in the decision process for the mode of delivery, the reasons for choosing CSMR appear to be multifactorial.
Network-centric decision architecture for financial or 1/f data models
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Massey, Stoney; Case, Carl T.; Songy, Claude G.
2002-12-01
This paper presents a decision architecture algorithm for training neural-equation-based networks to make autonomous, multi-goal-oriented, multi-class decisions. These architectures make decisions based on their individual goals and draw from the same network-centric feature set. Traditionally, such architectures are composed of neural networks that offer marginal performance due to lack of convergence on the training set. We present an approach for autonomously extracting sample points as I/O exemplars for the generation of multi-branch, multi-node decision architectures populated by adaptively derived neural equations. To test the robustness of this architecture, open-source datasets in the form of financial time series were used, requiring a three-class decision space analogous to the lethal, non-lethal, and clutter discrimination problem. The algorithm and the results of its application are presented here.
Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees
NASA Astrophysics Data System (ADS)
Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.
2017-05-01
A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in other than the frontal position through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results showed a high processing rate for CEDT on standard-size images. The algorithm increases the area under the ROC curve by 13% compared to a standard Viola-Jones face detection algorithm. The final realization of the algorithm consists of five different cascades for frontal/non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded-system and mobile-device industries because it can reduce the cost of hardware and extend battery life.
Artificial intelligence in cardiology.
Bonderman, Diana
2017-12-01
Decision-making is complex in modern medicine and should ideally be based on available data, structured knowledge and proper interpretation in the context of an individual patient. Automated algorithms, also termed artificial intelligence, that are able to extract meaningful patterns from data collections and build decisions upon the identified patterns may be useful assistants in clinical decision-making processes. In this article, artificial intelligence-based studies in clinical cardiology are reviewed. The text also touches on the ethical issues involved and speculates on the future roles of automated algorithms versus clinicians in cardiology and medicine in general.
Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing
2017-12-26
The sound signal of a ship obtained by sensors, called ship-radiated noise (SN), contains many significant characteristics of the ship, so research into denoising algorithms and their application is of great significance. Exploiting the advantages of variational mode decomposition (VMD) combined with the correlation coefficient (CC) for denoising, a hybrid secondary denoising algorithm is proposed that applies VMD twice in combination with the CC. First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the number of decompositions by VMD is set equal to the number obtained by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated. The noise IMFs are identified by a CC threshold and the remaining IMFs are reconstructed, realizing the first denoising pass. Secondary denoising of the simulation signal is then accomplished by repeating the above steps of decomposition, screening and reconstruction, with the final denoising result determined according to the CC threshold. The denoising effect is compared for different signal-to-noise ratios and numbers of VMD decompositions. Experimental results show the validity of the proposed secondary-VMD (2VMD) denoising algorithm combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two recently presented denoising algorithms. The proposed denoising algorithm is applied to feature extraction and classification of SN signals, effectively improving the recognition rate for different kinds of ships.
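A hedged sketch of one VMD-plus-CC denoising pass applied twice follows, assuming the third-party vmdpy package; the VMD settings and the CC threshold are illustrative choices, not the paper's.

```python
import numpy as np
from vmdpy import VMD  # assumes the vmdpy package is installed

def vmd_cc_denoise(signal, K=6, cc_threshold=0.1):
    """One denoising pass: decompose into K band-limited IMFs with VMD, keep
    only the IMFs whose correlation coefficient with the signal exceeds the
    threshold, and reconstruct from the kept IMFs."""
    alpha, tau, DC, init, tol = 2000, 0.0, 0, 1, 1e-7  # typical VMD settings
    u, _, _ = VMD(signal, alpha, tau, K, DC, init, tol)
    keep = [imf for imf in u
            if abs(np.corrcoef(imf, signal[:len(imf)])[0, 1]) > cc_threshold]
    if not keep:
        return signal
    return np.sum(keep, axis=0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = clean + 0.8 * rng.standard_normal(t.size)

once = vmd_cc_denoise(noisy)   # first pass
twice = vmd_cc_denoise(once)   # secondary denoising (2VMD)
```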
Zhang, Yao; Tang, Shengjing; Guo, Jie
2017-11-01
In this paper, a novel adaptive-gain fast super-twisting (AGFST) sliding mode attitude control synthesis is carried out for a reusable launch vehicle (RLV) subject to actuator faults and unknown disturbances. Based on a fast nonsingular terminal sliding mode surface (FNTSMS) and an adaptive-gain fast super-twisting algorithm, an adaptive fault-tolerant control law for attitude stabilization is derived to protect against actuator faults and unknown uncertainties. First, a second-order nonlinear control-oriented model for the RLV is established by the feedback linearization method, and on this basis a fast nonsingular terminal sliding mode (FNTSM) manifold is designed, which provides fast finite-time global convergence and avoids the singularity problem as well as the chattering phenomenon. Drawing on the merits of the standard super-twisting (ST) algorithm and a fast reaching law with adaptation, a novel AGFST algorithm is proposed for the finite-time fault-tolerant attitude control problem of the RLV without any knowledge of the bounds of the uncertainties and actuator faults. The important features of the AGFST algorithm include non-overestimation of the control gains and faster convergence than the standard ST algorithm. A formal proof of the finite-time stability of the closed-loop system is derived using the Lyapunov function technique, and an estimate of the convergence time and an accurate expression for the convergence region are also provided. Finally, simulations are presented to illustrate the effectiveness and superiority of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
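For reference, the standard fixed-gain super-twisting law on a scalar sliding variable looks as follows (a hedged toy simulation; the paper's contribution, gain adaptation plus the added fast linear terms, is not reproduced):

```python
import numpy as np

# Plant: s_dot = u + d(t), with a bounded unknown disturbance d.
dt, T = 1e-3, 5.0
k1, k2 = 1.5, 1.1   # fixed ST gains, chosen assuming |d_dot| <= 1
s, v = 1.0, 0.0     # sliding variable and integral state
history = []
for step in range(int(T / dt)):
    t = step * dt
    d = 0.5 * np.sin(2.0 * t)                    # disturbance, d_dot bounded by 1
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v   # continuous ST control
    v += -k2 * np.sign(s) * dt                   # discontinuity hidden in the integral
    s += (u + d) * dt
    history.append(s)
print(abs(np.array(history[-1000:])).max())      # s stays near zero after convergence
```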
One Way of Thinking About Decision Making.
ERIC Educational Resources Information Center
Dalis, Gus T.; Strasser, Ben B.
The authors present the DALSTRA model of decision making, a descriptive statement of ways individuals or groups respond to different kinds of decision-making problems they encounter. Decision making is viewed in two phases: the decision-making antecedents (whether to decide, how to decide) and the modes of decision making (Chance/Impulse,…
NASA Astrophysics Data System (ADS)
Marr-Lyon, Mark J.; Thiessen, David B.; Blonigen, Florian J.; Marston, Philip L.
2000-05-01
Electrically conducting, cylindrical liquid bridges in a density-matched, electrically insulating bath were stabilized beyond the Rayleigh-Plateau (RP) limit using electrostatic stresses applied by concentric ring electrodes. A circular liquid cylinder of length L and radius R in real or simulated zero gravity becomes unstable when the slenderness S=L/2R exceeds π. The initial instability involves the growth of the so-called (2, 0) mode of the bridge in which one side becomes thin and the other side rotund. A mode-sensing optical system detects the growth of the (2, 0) mode and an analog feedback system applies the appropriate voltages to a pair of concentric ring electrodes positioned near the ends of the bridge in order to counter the growth of the (2, 0) mode and prevent breakup of the bridge. The conducting bridge is formed between metal disks which are grounded. Three feedback algorithms were tested and each found capable of stabilizing a bridge well beyond the RP limit. All three algorithms stabilized bridges having S as great as 4.3 and the extended bridges broke immediately when feedback was terminated. One algorithm was suitable for stabilization approaching S=4.493… where the (3, 0) mode is predicted to become unstable for cylindrical bridges. For that algorithm the equilibrium shapes of bridges that were slightly under or over inflated corresponded to solutions of the Young-Laplace equation with negligible electrostatic stresses. The electrical conductivity of the bridge liquid need not be large. The conductivity was associated with salt added to the aqueous bridge liquid.
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope.
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-04-20
A Hemispherical Resonator Gyro (HRG) is a Coriolis Vibratory Gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on a vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification, simply by changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modelling and signal processing must first be performed accurately. Therefore, the equations of motion for the HRG resonator with switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module and charge amplifier is performed by considering the mode shape of a thin hemispherical shell, and the signal processing and control algorithms are then designed. The multi-flexing scheme of sensing and driving cycles and x-, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme can easily reject the common-mode errors of the x-, y-axis signals and switch to the rate-integrating mode. In the rate gyro mode, the controller is composed of Phase-Locked Loop (PLL), amplitude, quadrature and rate control loops, all designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board implementing these algorithms is verified through experiments.
Possibility expectation and its decision making algorithm
NASA Technical Reports Server (NTRS)
Keller, James M.; Yan, Bolin
1992-01-01
The fuzzy integral has been shown to be an effective tool for the aggregation of evidence in decision making. Of primary importance in the development of a fuzzy integral pattern recognition algorithm is the choice (construction) of the measure which embodies the importance of subsets of sources of evidence. Sugeno fuzzy measures have received the most attention due to the recursive nature of the fabrication of the measure on nested sequences of subsets. Possibility measures exhibit an even simpler generation capability, but usually require that one of the sources of information possess complete credibility. In real applications, such normalization may not be possible, or even desirable. In this report, both the theory and a decision making algorithm for a variation of the fuzzy integral are presented. This integral is based on a possibility measure where it is not required that the measure of the universe be unity. A training algorithm for the possibility densities in a pattern recognition application is also presented with the results demonstrated on the shuttle-earth-space training and testing images.
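A minimal sketch of such a fuzzy integral with respect to a possibility measure whose densities need not contain a 1 (i.e., the measure of the universe may be less than unity):

```python
def possibility_integral(h, g):
    """Sugeno-style fuzzy integral with respect to a possibility measure.
    h: evidence from each source, in [0, 1]; g: possibility densities, which
    here are not required to be normalized (no density has to equal 1)."""
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    best, g_so_far = 0.0, 0.0
    for i in order:
        g_so_far = max(g_so_far, g[i])         # possibility measure of top-i set
        best = max(best, min(h[i], g_so_far))  # Sugeno integral recursion
    return best

# Three sources of evidence for one class; no source is fully credible.
print(possibility_integral(h=[0.9, 0.6, 0.3], g=[0.5, 0.8, 0.4]))  # -> 0.6
```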
Semanjski, Ivana; Gautama, Sidharta
2015-07-03
Mobility management represents one of the most important parts of the smart city concept. The way we travel, at what time of day, for what purposes and with what transportation modes has a direct impact on the overall quality of life in cities. To manage this process, detailed and comprehensive information on individuals' behaviour is needed, as well as effective feedback/communication channels. In this article, we explore the applicability of crowdsourced data for this purpose. We apply a gradient boosting trees algorithm to model individuals' mobility decision-making processes (particularly, which transportation mode they are likely to use). To accomplish this we rely on data collected from three sources: a dedicated smartphone application, a geographic information systems-based web interface, and weather forecast data collected over a period of six months. The developed model is seen as a potential platform for personalized mobility management in smart cities and as a communication tool between the city (which can steer users towards more sustainable behaviour by additionally weighting preferred suggestions) and users (who can give feedback on the acceptability of the provided suggestions, by accepting or rejecting them, providing an additional input to the learning process).
NASA Technical Reports Server (NTRS)
Xu, Xiaoguang; Wang, Jun; Zeng, Jing; Spurr, Robert; Liu, Xiong; Dubovik, Oleg; Li, Li; Li, Zhengqiang; Mishchenko, Michael I.; Siniuk, Aliaksandr;
2015-01-01
A new research algorithm is presented here as the second part of a two-part study to retrieve aerosol microphysical properties from the multispectral and multiangular photopolarimetric measurements taken by the Aerosol Robotic Network's (AERONET's) new-generation Sun photometer. The algorithm uses an advanced UNified and Linearized Vector Radiative Transfer Model and incorporates a statistical optimization approach. While the new algorithm has heritage from the AERONET operational inversion algorithm in constraining a priori information and retrieval smoothness, it has two new features. First, the new algorithm retrieves the effective radius, effective variance, and total volume of aerosols associated with a continuous bimodal particle size distribution (PSD) function, while the AERONET operational algorithm retrieves aerosol volume over 22 size bins. Second, our algorithm retrieves complex refractive indices for both fine and coarse modes, while the AERONET operational algorithm assumes a size-independent aerosol refractive index. Mode-resolved refractive indices can improve the estimate of the single-scattering albedo (SSA) for each aerosol mode and thus facilitate the validation of satellite products and chemistry transport models. We applied the algorithm to a suite of real cases over the Beijing_RADI site and found that our retrievals are overall consistent with AERONET operational inversions but can offer mode-resolved refractive index and SSA with acceptable accuracy for aerosols composed of spherical particles. Along with the retrieval using both radiance and polarization, we also performed radiance-only retrieval to demonstrate the improvement gained by adding polarization to the inversion. Contrast analysis indicates that with polarization, retrieval error can be reduced by over 50% in PSD parameters, 10-30% in the refractive index, and 10-40% in SSA, which is consistent with the theoretical analysis presented in the companion paper of this two-part study.
USDA-ARS?s Scientific Manuscript database
A density-independent algorithm for moisture content determination in sawdust, based on a one-port reflection measurement technique, is proposed for the first time. Performance of this algorithm is demonstrated through measurement of the dielectric properties of sawdust with an open-ended half-mode s...
xEMD procedures as a data - Assisted filtering method
NASA Astrophysics Data System (ADS)
Machrowska, Anna; Jonak, Józef
2018-01-01
The article presents the possibility of using the Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. The results of applying the xEMD procedures to vibration signals of a system in different states of wear are presented.
Uncertain decision tree inductive inference
NASA Astrophysics Data System (ADS)
Zarban, L.; Jafari, S.; Fakhrahmad, S. M.
2011-10-01
Induction is the process of reasoning in which general rules are formulated based on limited observations of recurring phenomenal patterns. Decision tree learning is one of the most widely used and practical inductive methods, which represents the results in a tree scheme. Various decision tree algorithms have been proposed, such as CLS, ID3, Assistant, C4.5, REPTree and Random Tree, but these algorithms suffer from some major shortcomings. In this article, after discussing the main limitations of the existing methods, we introduce a new decision tree induction algorithm which overcomes all of the problems existing in its counterparts. The new method uses bit strings and maintains the important information on them; the use of bit strings and logical operations on them yields high speed during the induction process. The method therefore has several important features: it deals with inconsistencies in the data, avoids overfitting and handles uncertainty. We also illustrate further advantages and new features of the proposed method. The experimental results show the effectiveness of the method in comparison with other methods in the literature.
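A toy illustration of the bit-string idea (not the paper's algorithm): the set of examples reaching a node is one integer bitmask, so evaluating a split reduces to bitwise AND plus popcount, with no data copying.

```python
from math import log2

# Toy training set: (outlook, windy) -> play
examples = [("sunny", 0, "yes"), ("sunny", 1, "no"),
            ("rain",  0, "yes"), ("rain",  1, "no"),
            ("sunny", 0, "yes"), ("rain",  1, "yes")]

def mask(pred):
    """Bit i is set iff example i satisfies the predicate."""
    m = 0
    for i, e in enumerate(examples):
        if pred(e):
            m |= 1 << i
    return m

popcount = lambda m: bin(m).count("1")

def entropy(node_mask):
    total = popcount(node_mask)
    if total == 0:
        return 0.0
    h = 0.0
    for cls in ("yes", "no"):
        k = popcount(node_mask & mask(lambda e, c=cls: e[2] == c))
        if k:
            h -= (k / total) * log2(k / total)
    return h

node = (1 << len(examples)) - 1  # root node: all examples, as one bitmask
for name, pred in [("outlook=sunny", lambda e: e[0] == "sunny"),
                   ("windy", lambda e: e[1] == 1)]:
    left = node & mask(pred)     # split via bitwise AND
    right = node & ~left
    gain = entropy(node) - (popcount(left) * entropy(left)
                            + popcount(right) * entropy(right)) / popcount(node)
    print(name, round(gain, 3))  # "windy" gives the larger information gain
```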
Attitude control system of the Delfi-n3Xt satellite
NASA Astrophysics Data System (ADS)
Reijneveld, J.; Choukroun, D.
2013-12-01
This work is concerned with the development of the attitude control algorithms to be implemented on board the Delfi-n3Xt nanosatellite, which is to be launched in 2013. One of the mission objectives is to demonstrate Sun pointing and three-axis stabilization. The attitude control modes and the associated algorithms are described. The control authority is shared between three body-mounted magnetorquers (MTQs) and three orthogonal reaction wheels. The attitude information is retrieved from Sun vector measurements, Earth magnetic field measurements, and gyro measurements. The design of the control is a trade-off between simplicity and performance. Stabilization and Sun pointing are achieved via the successive application of the classical Bdot control law and a quaternion feedback control. For the purpose of Sun pointing, a simple quaternion estimation scheme based on geometric arguments is implemented, which alleviates the need for a costly optimal filtering algorithm and requires only a single line-of-sight (LoS) measurement - here the Sun vector. Beyond the three-axis Sun pointing mode, spinning Sun pointing modes are also described and used as demonstration modes. The three-axis Sun pointing mode requires reaction wheels and magnetic control, while the spinning control modes are implemented with magnetic control only. In addition, a simple scheme for angular rate estimation using Sun vector and Earth magnetic field measurements is tested for the case of gyro failures. The performance of the various control modes is illustrated via extensive simulations over time spans of several orbits. The simulated models of the dynamical space environment, the attitude hardware, and the onboard controller logic use realistic assumptions. All control modes satisfy the minimal Sun pointing requirements for power generation.
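For reference, the classical Bdot detumbling law mentioned above is simple enough to sketch (illustrative gain and field samples, not the Delfi-n3Xt flight code):

```python
import numpy as np

def bdot_dipole(b_now, b_prev, dt, k=1e4):
    """Classical Bdot law: command magnetic dipole m = -k * dB/dt (body frame).
    The resulting torque m x B removes rotational kinetic energy."""
    b_dot = (b_now - b_prev) / dt
    return -k * b_dot

# One control step with made-up magnetometer samples (Tesla).
b_prev = np.array([2.0e-5, -1.0e-5, 4.0e-5])
b_now  = np.array([2.1e-5, -1.2e-5, 3.9e-5])
m = bdot_dipole(b_now, b_prev, dt=0.1)
torque = np.cross(m, b_now)  # applied by the three magnetorquers
print(m, torque)
```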
van der Lee, J H; Svrcek, W Y; Young, B R
2008-01-01
Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases, but since MPCs can differ significantly, these tuning methods often become inapplicable, and a time-consuming trial-and-error tuning approach must be used, which can result in non-optimal tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. The approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem-specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC, and that multi-objective fuzzy decision-making can handle qualitative statements of what optimal control is, in addition to being able to use multiple inputs to determine the tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases, where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, showing that the tuning parameters vary to meet each definition of optimum control and thus that the generalized automated tuning approach for MPCs is feasible.
Szlosek, Donald A; Ferrett, Jonathan
2016-01-01
As the number of clinical decision support systems (CDSSs) incorporated into electronic medical records (EMRs) increases, so does the need to evaluate their effectiveness. The use of medical record review and similar manual methods for evaluating decision rules is laborious and inefficient. The authors use machine learning and natural language processing (NLP) algorithms to accurately evaluate a clinical decision support rule through an EMR system, and they compare this against manual evaluation. Modeled after the EMR system EPIC at Maine Medical Center, we developed a dummy data set containing physician notes in free text for 3,621 artificial patient records undergoing a head computed tomography (CT) scan for mild traumatic brain injury after the incorporation of an electronic best practice approach. We validated the accuracy of the Best Practice Advisories (BPA) using three machine learning algorithms, C-Support Vector Classification (SVC), Decision Tree Classifier (DecisionTreeClassifier), and k-nearest neighbors classifier (KNeighborsClassifier), by comparing their accuracy for adjudicating the occurrence of a mild traumatic brain injury against manual review. We then used the best of the three algorithms to evaluate the effectiveness of the BPA, and we compared the algorithm's evaluation of the BPA to that of manual review. The electronic best practice approach was found to have a sensitivity of 98.8 percent (96.83-100.0), specificity of 10.3 percent, PPV = 7.3 percent, and NPV = 99.2 percent when reviewed manually by abstractors. Though all the machine learning algorithms were observed to have a high level of prediction, the SVC displayed the highest, with a sensitivity of 93.33 percent (92.49-98.84), specificity of 97.62 percent (96.53-98.38), PPV = 50.00, and NPV = 99.83. The SVC algorithm was observed to have a sensitivity of 97.9 percent (94.7-99.86), specificity of 10.30 percent, PPV of 7.25 percent, and NPV of 99.2 percent for evaluating the best practice approach, after accounting for 17 cases (0.66 percent) where the patient records had to be reviewed manually due to the NLP system's inability to capture the proper diagnosis. CDSSs incorporated into EMRs can be evaluated in an automatic fashion by using NLP and machine learning techniques.
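As an illustration of the kind of classifier used above, the sketch below trains a linear SVC on TF-IDF features of free-text notes; the two toy notes and labels are invented stand-ins for the de-identified physician notes and manually adjudicated mild-TBI outcomes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for free-text physician notes and adjudicated labels (1 = mTBI).
notes = ["head ct negative for acute hemorrhage",
         "small acute intracranial hemorrhage identified"]
labels = [0, 1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), SVC(kernel="linear"))
clf.fit(notes, labels)
print(clf.predict(["no acute intracranial abnormality"]))
```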
Active control for stabilization of neoclassical tearing modes
NASA Astrophysics Data System (ADS)
Humphreys, D. A.; Ferron, J. R.; La Haye, R. J.; Luce, T. C.; Petty, C. C.; Prater, R.; Welander, A. S.
2006-05-01
This work describes active control algorithms used by DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] to stabilize and maintain suppression of 3/2 or 2/1 neoclassical tearing modes (NTMs) by application of electron cyclotron current drive (ECCD) at the rational q surface. The DIII-D NTM control system can determine the correct q-surface/ECCD alignment and stabilize existing modes within 100-500 ms of activation, or prevent mode growth with preemptive application of ECCD, in both cases enabling stable operation at normalized beta values above 3.5. Because NTMs can limit performance or cause plasma-terminating disruptions in tokamaks, their stabilization is essential to the high performance operation of ITER [R. Aymar et al., ITER Joint Central Team, ITER Home Teams, Nucl. Fusion 41, 1301 (2001)]. The DIII-D NTM control system has demonstrated many elements of an eventual ITER solution, including general algorithms for robust detection of q-surface/ECCD alignment and for real-time maintenance of alignment following the disappearance of the mode. This latter capability, unique to DIII-D, is based on real-time reconstruction of q-surface geometry by a Grad-Shafranov solver using external magnetics and internal motional Stark effect measurements. Alignment is achieved by varying either the plasma major radius (and the rational q surface) or the toroidal field (and the deposition location). The requirement to achieve and maintain q-surface/ECCD alignment with accuracy on the order of 1 cm is routinely met by the DIII-D Plasma Control System and these algorithms. We discuss the integrated plasma control design process used for developing these and other general control algorithms, which includes physics-based modeling and testing of the algorithm implementation against simulations of actuator and plasma responses. This systematic design/test method and modeling environment enabled successful mode suppression by the NTM control system upon first-time use in an experimental discharge.
Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli
2014-08-01
Diffuse optical tomography (DOT) is a variant of functional near-infrared spectroscopy and has the capability of mapping or reconstructing three-dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations because the selection of the activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies in the face of risk. We used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making in 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. In this work, we demonstrate the advantages of using voxel-wise GLM analysis with DOT to image and study cognitive functions in response to risk decision-making. Results show significant hemodynamic changes in the dorsolateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with the published functional magnetic resonance imaging (fMRI) and fNIRS literature. Copyright © 2014 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
Dehghani Soufi, Mahsa; Samad-Soltani, Taha; Shams Vahdati, Samad; Rezaei-Hachesu, Peyman
2018-06-01
Fast and accurate patient triage is a critical first step of the response process in emergency situations. This process is often performed using a paper-based mode, which intensifies workload and difficulty, wastes time, and is at risk of human error. This study aims to design and evaluate a decision support system (DSS) to determine the triage level. A combination of the rule-based reasoning (RBR) and fuzzy logic classifier (FLC) approaches was used to predict the triage level of patients according to the triage specialists' opinions and Emergency Severity Index (ESI) guidelines. RBR was applied for modeling the first to fourth decision points of the ESI algorithm. The data relating to vital signs were used as input variables and modeled using fuzzy logic. Narrative knowledge was converted to if-then rules using XML. The extracted rules were then used to create the rule-based engine and predict the triage levels. Fourteen RBR and 27 fuzzy rules were extracted and used in the rule-based engine. The performance of the system was evaluated using three methods with real triage data. The accuracy of the clinical decision support system (on the test data) was 99.44%. The evaluation of the error rate revealed that, when using the traditional method, 13.4% of the patients were mis-triaged, which is statistically significant. The completeness of the documentation also improved from 76.72% to 98.5%. The designed system was effective in determining the triage level of patients and proved helpful for nurses as they made decisions and generated nursing diagnoses based on triage guidelines. The hybrid approach can reduce triage misdiagnosis in a highly accurate manner and improve triage outcomes. Copyright © 2018 Elsevier B.V. All rights reserved.
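A minimal sketch of how one vital-sign rule of the kind described might look, assuming triangular fuzzy membership functions; the breakpoints and the rule itself are invented for illustration, not the paper's calibrated rule base.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tachycardia_degree(heart_rate):
    # Illustrative breakpoints only; a real rule base would be clinician-tuned.
    return tri_membership(heart_rate, 100, 130, 220)

# Hypothetical if-then rule in the spirit of the ESI vital-signs decision point:
# IF heart rate is strongly tachycardic THEN consider escalation to ESI level 2.
if tachycardia_degree(125) > 0.5:
    print("consider escalation toward ESI level 2")
```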
De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat
2010-03-01
Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step-gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed, allowing SOS-LC to be applied as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally confirmed.
AdaBoost-based algorithm for network intrusion detection.
Hu, Weiming; Hu, Wei; Maybank, Steve
2008-04-01
Network intrusion detection aims at distinguishing attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack methods, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
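A compact sketch of the core construction, boosting decision stumps, using scikit-learn on synthetic data; the real experiments use benchmark intrusion records with mixed categorical and continuous features, which this toy example does not reproduce.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for preprocessed connection records (attack vs. normal).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

stump = DecisionTreeClassifier(max_depth=1)            # decision stump = weak learner
model = AdaBoostClassifier(stump, n_estimators=100).fit(X[:800], y[:800])
print("holdout accuracy:", model.score(X[800:], y[800:]))
```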
Development and validation of an algorithm for laser application in wound treatment
da Cunha, Diequison Rite; Salomé, Geraldo Magela; Massahud, Marcelo Renato; Mendes, Bruno; Ferreira, Lydia Masako
2017-01-01
ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review was performed in the Health Sciences databases of the past ten years. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach's alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals' responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants' opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire demonstrated reliability by the Cronbach's alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds. PMID:29211197
Chen, Shu-Wen; Hutchinson, Alison M; Nagle, Cate; Bucknall, Tracey K
2018-01-17
Vaginal birth after caesarean (VBAC) is an alternative option for women who have had a previous caesarean section (CS); however, uptake is limited because of concern about the risks of uterine rupture. The aim of this study was to explore women's decision-making processes and the influences on their mode of birth following a previous CS. A qualitative approach was used. The research comprised three stages. Stage I consisted of naturalistic observation at 33-34 weeks' gestation. Stage II involved interviews with pregnant women at 35-37 weeks' gestation. Stage III consisted of interviews with the same women postnatally, 1 month after birth. The research was conducted in a private medical centre in northern Taiwan. Using purposive sampling, 21 women and 9 obstetricians were recruited. Data collection involved in-depth interviews, observation, and field notes. Constant comparative analysis was employed for data analysis. Ensuring the safety of mother and baby was the focus of women's decisions. Women's decision-making influences included previous birth experience, concern about the risks of vaginal birth, evaluation of mode of birth, current pregnancy situation, information resources, and health insurance. In communicating with obstetricians, some women complied with obstetricians' recommendations for repeat caesarean section (RCS) without being informed of alternatives. Others used a four-step decision-making process that included searching for information, listening to obstetricians' professional judgement, evaluating alternatives, and making a decision regarding mode of birth. After birth, women reflected on their decisions in three aspects: reflection on birth choices, reflection on factors influencing decisions, and reflection on outcomes of decisions. The health and wellbeing of mother and baby were the major concerns for women. In response to the decision-making influences, women's interactions with obstetricians regarding birth choices varied from passive decision-making to shared decision-making. All women have the right to be informed of alternative birthing options. Routine provision of explanations by obstetricians regarding risks associated with alternative birth options, in addition to financial coverage for RCS from National Health Insurance, would assist women's decision-making. Establishment of a website to provide women with reliable information about birthing options may also assist women's decision-making.
Multigrid solutions to quasi-elliptic schemes
NASA Technical Reports Server (NTRS)
Brandt, A.; Taasan, S.
1985-01-01
Quasi-elliptic schemes arise from central differencing or finite element discretization of elliptic systems with odd order derivatives on non-staggered grids. They are somewhat unstable and less accurate than corresponding staggered-grid schemes. When usual multigrid solvers are applied to them, the asymptotic algebraic convergence is necessarily slow. Nevertheless, it is shown by mode analyses and numerical experiments that the usual FMG algorithm is very efficient in solving quasi-elliptic equations to the level of truncation errors. Also, a new type of multigrid algorithm is presented, mode analyzed and tested, for which even the asymptotic algebraic convergence is fast. The essence of that algorithm is applicable to other kinds of problems, including highly indefinite ones.
Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market
NASA Astrophysics Data System (ADS)
Cui, Ling-xiao; Long, Wen
2016-11-01
Dynamic mode decomposition (DMD) is an effective method to capture the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover the evolutionary patterns in the stock market and apply it to the Chinese A-share stock market. We design two strategies based on the DMD algorithm. The strategy that considers only timing can make reliable profits in a choppy market with no prominent trend but fails to beat the benchmark moving-average strategy in a bull market. After considering the spatial information from the spatial-temporal coherent structure of the DMD modes, we improved the trading strategy remarkably. The profitability of the DMD strategies is then quantitatively evaluated by performing the SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model the market patterns well in a sideways market.
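For reference, the core DMD computation on a snapshot matrix can be sketched as follows (standard exact DMD via a truncated SVD); the trading rules built on top of the extracted modes are the paper's own and are not reproduced here.

```python
import numpy as np

def dmd(X, r=10):
    """Exact DMD of a snapshot matrix X (features x time).
    Returns eigenvalues and dynamic modes of the best-fit linear operator."""
    X1, X2 = X[:, :-1], X[:, 1:]                    # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]              # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s     # operator projected onto POD modes
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (X2 @ Vh.conj().T / s) @ W              # exact DMD modes
    return eigvals, modes
```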
Two modular neuro-fuzzy system for mobile robot navigation
NASA Astrophysics Data System (ADS)
Bobyr, M. V.; Titov, V. S.; Kulabukhov, S. A.; Syryamkin, V. I.
2018-05-01
The article considers a fuzzy model for the navigation of a mobile robot operating in two modes. In the first mode, the mobile robot moves along a line. In the second mode, the mobile robot searches for a target in unknown space. The structure and circuit schematic of the four-wheel mobile robot are presented. The article describes the movement of the mobile robot based on the two-module neuro-fuzzy system and gives the neuro-fuzzy inference algorithm used in the two-module control system for the robot's movement. The experimental model of the mobile robot and the simulation of the neuro-fuzzy algorithm used for its control are also presented.
1990-05-01
1988) or ACI 318-83 (1983). Actual calculations for section strength are made using subroutines taken from the CASE program CSTR (Hamby and Price...validity of the design of their particular structure. Thus, it is essential that the user of the program understand the design algorithm included...modes. However, several restrictions were placed on the design mode to avoid unnecessary complications of the design algorithm for cases rarely
NASA Technical Reports Server (NTRS)
Chen, D. Y.; Owen, H. A., Jr.; Wilson, T. G.
1980-01-01
This paper presents an algorithm and equations for designing the energy-storage reactor for dc-to-dc converters which are constrained to operate in the discontinuous-reactor-current mode. The design procedure applies to the three widely used single-winding configurations: the voltage step-up, the current step-up, and the voltage-or-current step-up converters. A numerical design example is given to illustrate the use of the design algorithm and design equations.
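As a worked instance of the kind of design equation involved, the textbook boundary condition for the voltage step-up (boost) configuration gives the largest inductance that still keeps the converter in the discontinuous mode; this is a standard result, not necessarily the paper's exact formulation.

```python
def boost_dcm_max_inductance(duty, load_ohms, f_switch_hz):
    """Largest L keeping a boost converter in discontinuous conduction:
    L < D * (1 - D)**2 * R / (2 * fs)  (standard DCM boundary condition)."""
    return duty * (1 - duty) ** 2 * load_ohms / (2 * f_switch_hz)

# Example: D = 0.4, R = 50 ohm, fs = 100 kHz  ->  3.6e-05 H (36 uH)
print(boost_dcm_max_inductance(0.4, 50.0, 100e3))
```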
Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods
NASA Astrophysics Data System (ADS)
Koreň, Milan; Mokroš, Martin; Bucha, Tomáš
2017-12-01
This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in the single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
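For context, a common algebraic circle fit (the Kåsa least-squares method) can serve as a simple refining step; this is a generic sketch, and the paper's Monte Carlo and optimal-circle refinements differ in detail.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Kåsa algebraic least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 for the centre and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, 2 * r        # centre and estimated diameter (DBH)
```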
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
ERIC Educational Resources Information Center
Klapproth, Florian
2015-01-01
Two objectives guided this research. First, this study examined how well teachers' tracking decisions contribute to the homogenization of their students' achievements. Second, the study explored whether teachers' tracking decisions would be outperformed in homogenizing the students' achievements by statistical models of tracking decisions. These…
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the constraint of the Nyquist sampling theorem, provides a strong theoretical basis for compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution can also be greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information well. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify its stability, and typical reconstruction algorithms are compared under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with the traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
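To make the optimization concrete, the sketch below minimizes the usual TV-regularized least-squares objective by plain gradient descent on a smoothed TV term; it is a simplified stand-in for the augmented-Lagrangian/alternating-direction solver discussed above, with step size and iteration count as illustrative choices.

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term on a 2-D image."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    return -(px - np.roll(px, 1, axis=1)) - (py - np.roll(py, 1, axis=0))

def cs_tv_reconstruct(A, b, shape, lam=0.1, step=1e-3, iters=500):
    """Minimize ||A u - b||^2 + lam * TV(u) by gradient descent.
    A: measurement matrix acting on the flattened image; b: measurements."""
    u = np.zeros(shape)
    for _ in range(iters):
        residual = A @ u.ravel() - b
        u -= step * (2 * (A.T @ residual).reshape(shape) + lam * tv_gradient(u))
    return u
```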
Quantum metrology with a transmon qutrit
NASA Astrophysics Data System (ADS)
Shlyakhov, A. R.; Zemlyanov, V. V.; Suslov, M. V.; Lebedev, A. V.; Paraoanu, G. S.; Lesovik, G. B.; Blatter, G.
2018-02-01
Making use of coherence and entanglement as metrological quantum resources allows us to improve the measurement precision from the shot-noise or quantum limit to the Heisenberg limit. Quantum metrology then relies on the availability of quantum engineered systems that involve controllable quantum degrees of freedom which are sensitive to the measured quantity. Sensors operating in the qubit mode and exploiting their coherence in a phase-sensitive measurement have been shown to approach the Heisenberg scaling in precision. Here, we show that this result can be further improved by operating the quantum sensor in the qudit mode, i.e., by exploiting d rather than two levels. Specifically, we describe the metrological algorithm for using a superconducting transmon device operating in a qutrit mode as a magnetometer. The algorithm is based on the base-3 semiquantum Fourier transformation and enhances the quantum theoretical performance of the sensor by a factor of 2. Even more, the practical gain of our qutrit implementation is found in a reduction of the number of iteration steps of the quantum Fourier transformation by the factor ln(2 )/ln(3 )≈0.63 compared to the qubit mode. We show that a two-tone capacitively coupled radio-frequency signal is sufficient for implementation of the algorithm.
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC) under imperfect channel state information (CSI). Two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas, and the IC operates in time-division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of the signal-to-interference-plus-noise ratio (SINR). The second algorithm, on the other hand, tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. A Taylor expansion is exploited to approximate the effect of CSI imperfection on the mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate the sum-rate performance of the proposed algorithms and the advantage of incorporating variance minimization into the transceiver design.
VLSI implementation of flexible architecture for decision tree classification in data mining
NASA Astrophysics Data System (ADS)
Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak
2017-07-01
Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search, and security domains. In recent years, there has been a tremendous increase in the size of the data being collected and analyzed. Classification is a principal challenge in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.
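As a reminder of what C4.5 computes at each split, the sketch below evaluates the gain ratio (information gain normalized by split information) for a categorical feature; this is the standard criterion, independent of the hardware architecture proposed above.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gain_ratio(feature, labels):
    """C4.5 split criterion: information gain divided by split information."""
    total, n = entropy(labels), len(labels)
    cond = split_info = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        w = len(subset) / n
        cond += w * entropy(subset)
        split_info -= w * np.log2(w)
    return (total - cond) / split_info if split_info else 0.0

print(gain_ratio(["tcp", "tcp", "udp", "udp"], [1, 1, 0, 1]))
```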
Using multicriteria decision analysis during drug development to predict reimbursement decisions.
Williams, Paul; Mauskopf, Josephine; Lebiecki, Jake; Kilburg, Anne
2014-01-01
Pharmaceutical companies design clinical development programs to generate the data that they believe will support reimbursement for the experimental compound. The objective of the study was to present a process for using multicriteria decision analysis (MCDA) by a pharmaceutical company to estimate the probability of a positive recommendation for reimbursement for a new drug given drug and environmental attributes. The MCDA process included 1) selection of decision makers who were representative of those making reimbursement decisions in a specific country; 2) two pre-workshop questionnaires to identify the most important attributes and their relative importance for a positive recommendation for a new drug; 3) a 1-day workshop during which participants undertook three tasks: i) they agreed on a final list of decision attributes and their importance weights, ii) they developed level descriptions for these attributes and mapped each attribute level to a value function, and iii) they developed profiles for hypothetical products 'just likely to be reimbursed'; and 4) use of the data from the workshop to develop a prediction algorithm based on a logistic regression analysis. The MCDA process is illustrated using case studies for three countries: the United Kingdom, Germany, and Spain. The extent to which the prediction algorithms for each country captured the decision processes of the workshop participants in our case studies was tested using a post-meeting questionnaire that asked the participants to make recommendations for a set of hypothetical products. The data collected in the case study workshops resulted in a prediction algorithm: 1) for the United Kingdom, the probability of a positive recommendation for different ranges of cost-effectiveness ratios; 2) for Spain, the probability of a positive recommendation at the national and regional levels; and 3) for Germany, the probability of a determination of clinical benefit. The results from the post-meeting questionnaire revealed a high predictive value for the algorithm developed using MCDA. Prediction algorithms developed using MCDA could be used by pharmaceutical companies when designing their clinical development programs to estimate the likelihood of a favourable reimbursement recommendation for different product profiles and for different positions in the treatment pathway.
Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 2; Adaptive Control
NASA Technical Reports Server (NTRS)
Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III
2006-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. Past input-output data and an estimate of the open-loop pulse response sequence are all that is needed to implement the algorithm for application at fixed Mach numbers. Transient measurements made during controller adaptation revealed that the controller coefficients converged to a steady state in the mean, and this implies that adaptation can be turned off at some point with no degradation in control performance. When converged, the control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. However, as in the case of fixed-gain GPC, the adaptive GPC performance was limited by spillover in sidebands around the suppressed Rossiter modes. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Beyond this range, stable operation of the control algorithm was not possible due to the fixed plant model in the algorithm.
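The per-time-step coefficient adaptation can be pictured with a generic LMS-style gradient step, shown below under stated assumptions; it illustrates gradient descent on a squared tracking error over a regressor of past input-output data, not the paper's exact GPC coefficient update.

```python
import numpy as np

def adapt_coefficients(theta, phi, error, mu=1e-3):
    """One gradient-descent step on J = error**2 for linear-in-parameters
    control coefficients theta, with regressor phi of past input-output data.
    With prediction theta @ phi and error = reference - prediction,
    dJ/dtheta = -2 * error * phi, so the step moves against the gradient."""
    return theta + 2.0 * mu * error * phi
```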
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated by a series of experiments. The experimental results show that the thread count and layer count are two significant factors for the speedup ratio. Speedup increases with the number of threads, in good agreement with Amdahl's law, and also increases with the number of layers, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A concluding case study further demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the pipeline-parallel algorithm achieves a much higher speedup ratio and efficiency.
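A minimal sketch of the pipeline organization using Python threads and queues: each stage consumes items from an input queue and feeds the next stage, with a sentinel to shut the pipeline down. The two stage functions are hypothetical placeholders for facet-plane intersection and contour linking.

```python
import queue
import threading

def stage(worker, q_in, q_out):
    """Run one pipeline stage until the None sentinel arrives."""
    while True:
        item = q_in.get()
        if item is None:
            q_out.put(None)         # propagate shutdown downstream
            break
        q_out.put(worker(item))

# Hypothetical stage workers: intersect facets with a slicing plane, then
# link the resulting segments into closed contours via topological adjacency.
def intersect(layer_z):     return ("segments", layer_z)
def link_contours(segs):    return ("contour", segs)

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
for worker, qi, qo in [(intersect, q1, q2), (link_contours, q2, q3)]:
    threading.Thread(target=stage, args=(worker, qi, qo)).start()
for z in range(100):
    q1.put(z)                       # enqueue layer heights
q1.put(None)
contours = list(iter(q3.get, None))
```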
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2018-05-01
Methods for increasing the adaptability of gas-turbine aircraft engines (GTE) to interference, based on extending the capabilities of their automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes near the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms to provide the detection functions of the compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequency. The method is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme that provides the best filtering quality. The BPF response to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
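The construction described, a high-pass obtained from a low-pass prototype by spectral inversion and then cascaded into a band-pass, can be sketched directly; the windowed-sinc prototype, tap count, and cutoffs below are illustrative assumptions.

```python
import numpy as np

def lowpass_fir(cutoff_hz, fs_hz, n_taps=101):
    """Windowed-sinc low-pass FIR prototype (odd tap count, unity DC gain)."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs_hz * n) * np.hamming(n_taps)
    return h / h.sum()

def bandpass_by_spectral_inversion(f_lo, f_hi, fs_hz, n_taps=101):
    """Band-pass kernel: low-pass at f_hi cascaded with a high-pass at f_lo,
    where the high-pass is the spectrally inverted low-pass."""
    h_lp = lowpass_fir(f_hi, fs_hz, n_taps)      # passes below f_hi
    h_hp = -lowpass_fir(f_lo, fs_hz, n_taps)     # spectral inversion ...
    h_hp[(n_taps - 1) // 2] += 1.0               # ... delta at the centre tap
    return np.convolve(h_lp, h_hp)               # passes f_lo .. f_hi
```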
Sliding-mode control of single input multiple output DC-DC converter
NASA Astrophysics Data System (ADS)
Zhang, Libo; Sun, Yihan; Luo, Tiejian; Wan, Qiyang
2016-10-01
Various voltage levels are required in the vehicle-mounted power system. A conventional solution is to utilize an independent multiple-output DC-DC converter, whose cost is high and whose control scheme is complicated. In this paper, we design a novel SIMO DC-DC converter with a sliding mode controller. The proposed converter can boost the voltage of a low-voltage input power source to a controllable high-voltage DC bus and middle-voltage output terminals, which endows the converter with a simple structure, low cost, and convenient control. In addition, the sliding mode control (SMC) technique applied in our converter can enhance the performance of a certain SIMO DC-DC converter topology. The high-voltage DC bus can be regarded as the main power source for the high-voltage facilities of the vehicle-mounted power system, and the middle-voltage output terminals can supply power to the low-voltage equipment of an automobile. With respect to the control algorithm, the SMC-PID (Proportion Integration Differentiation) control algorithm is proposed for the first time, in which PID control is appended to the conventional SMC algorithm. The PID control increases the dynamic capability of the SMC algorithm by establishing the corresponding SMC surface and introducing an attached integral of the voltage error, which endows the sliding-control system with excellent dynamic performance. Finally, we established a MATLAB/SIMULINK simulation model, tested the performance of the system, and built a hardware prototype based on a Digital Signal Processor (DSP). Results show that the sliding mode control is able to track a required trajectory with robustness against uncertainties and disturbances.
Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations
NASA Astrophysics Data System (ADS)
Merckelbach, Lucas
2016-12-01
Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow, tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow-water equations for the tidal component. A comparison of data from a glider experiment with current data from an acoustic Doppler current profiler deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s-1 in near-real-time mode and improve to better than 6 cm s-1 in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
Evaluation of the operational SAR based Baltic sea ice concentration products
NASA Astrophysics Data System (ADS)
Karvonen, Juha
Sea ice concentration is an important ice parameter for both weather and climate modeling and for sea ice navigation. We have developed a fully automated algorithm for sea ice concentration retrieval using dual-polarized ScanSAR wide mode RADARSAT-2 data. RADARSAT-2 is a C-band SAR instrument enabling dual-polarized acquisition in ScanSAR mode. The swath width of the RADARSAT-2 ScanSAR mode is about 500 km, making it very suitable for operational sea ice monitoring. The polarization combination used in our concentration estimation is HH/HV. The SAR data are first preprocessed; the preprocessing consists of georectification to the Mercator projection, incidence-angle correction for both polarization channels, and SAR mosaicking. After preprocessing, a segmentation is performed on the SAR mosaics, and single-channel and dual-channel features are computed for each SAR segment. Finally, the SAR-based concentration is estimated from these segment-wise features. The algorithm is similar to that introduced in Karvonen (2014). The ice concentration is computed daily using a daily RADARSAT-2 SAR mosaic as input, and it thus gives the concentration estimated at each Baltic Sea location based on the most recent SAR data at that location. The algorithm has been run in an operational test mode since January 2014. We present an evaluation of the SAR-based concentration estimates for the Baltic ice season 2014 by comparing the SAR results with gridded Finnish Ice Service ice charts and with ice concentration estimates from a radiometer algorithm (AMSR-2 Bootstrap algorithm results). Reference: J. Karvonen, Baltic Sea Ice Concentration Estimation Based on C-Band Dual-Polarized SAR Data, IEEE Transactions on Geoscience and Remote Sensing, DOI: 10.1109/TGRS.2013.2290331, 2014.
ERIC Educational Resources Information Center
Prinsloo, Paul
2017-01-01
In the socio-technical imaginary of higher education, algorithmic decision-making offers huge potential, but we also cannot deny the risks and ethical concerns. In fleeing from Frankenstein's monster, there is a real possibility that we will meet Kafka on our path, and not find our way out of the maze of ethical considerations in the nexus between…
Analysis of stock investment selection based on CAPM using covariance and genetic algorithm approach
NASA Astrophysics Data System (ADS)
Sukono; Susanti, D.; Najmia, M.; Lesmana, E.; Napitupulu, H.; Supian, S.; Putra, A. S.
2018-03-01
Investment is one of the economic growth factors of countries, especially Indonesia. Stocks are a liquid form of investment. A key consideration for investors in making stock investment decisions is to choose stocks that can generate maximum return at a minimum level of risk. Therefore, we need to know how to allocate capital so as to obtain the optimal benefit. This study discusses stock investment based on the CAPM, estimated using covariance and genetic algorithm approaches. It is assumed that the stocks analyzed follow the CAPM model. The beta parameter of the CAPM equation is estimated in two ways: first with the covariance approach, and second with genetic algorithm optimization. As a numerical illustration, ten stocks traded on the Indonesian capital market are analyzed. The results of the analysis show that estimating the beta parameter using the covariance and genetic algorithm approaches gives the same decision: six underpriced stocks with a buying decision, and four overpriced stocks with a selling decision. Based on the analysis, it can be concluded that the results can be used as a consideration for investors to buy the six underpriced stocks and sell the four overpriced stocks.
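The covariance estimate of beta is a one-liner; the sketch below also shows the CAPM expected return used to flag a stock as under- or over-priced. The decision rule noted in the comment is a common textbook convention, not necessarily the paper's exact procedure.

```python
import numpy as np

def capm_beta(asset_returns, market_returns):
    """beta = Cov(R_i, R_m) / Var(R_m), estimated from return samples."""
    c = np.cov(asset_returns, market_returns)
    return c[0, 1] / c[1, 1]

def capm_expected_return(risk_free, beta, market_mean):
    """Security market line: E[R_i] = rf + beta * (E[R_m] - rf)."""
    return risk_free + beta * (market_mean - risk_free)

# Textbook decision rule: a stock whose average realized return exceeds its
# CAPM expected return plots above the security market line (under-priced, buy).
```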
Smart algorithms and adaptive methods in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Tinsley Oden, J.
1989-05-01
A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed. The basic components of adaptive methods are: (1) data structures, that is what approaches are available for modifying data structures of an approximation so as to reduce errors; (2) error estimation, that is what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, what algorithms are available which can function in changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.
Tayefi, Maryam; Tajfard, Mohammad; Saffar, Sara; Hanachi, Parichehr; Amirabadizadeh, Ali Reza; Esmaeily, Habibollah; Taghipour, Ali; Ferns, Gordon A; Moohebati, Mohsen; Ghayour-Mobarhan, Majid
2017-04-01
Coronary heart disease (CHD) is an important public health problem globally. Algorithms incorporating the assessment of clinical biomarkers together with several established traditional risk factors can help clinicians to predict CHD and support clinical decision making with respect to interventions. The decision tree (DT) is a data mining model for extracting hidden knowledge from large databases. We aimed to establish a predictive model for coronary heart disease using a decision tree algorithm. We used a dataset of 2346 individuals, including 1159 healthy participants and 1187 participants who had undergone coronary angiography (405 with negative angiography and 782 with positive angiography). We entered 10 of a total of 12 variables into the DT algorithm (age, sex, FBG, TG, hs-CRP, TC, HDL, LDL, SBP, and DBP). Our model could identify the associated risk factors of CHD with a sensitivity, specificity, and accuracy of 96%, 87%, and 94%, respectively. Serum hs-CRP level was at the top of the tree in our model, followed by FBG, gender, and age. Our model appears to be an accurate, specific, and sensitive model for identifying the presence of CHD, but will require validation in prospective studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Khalkhali, Hamid Reza; Lotfnezhad Afshar, Hadi; Esnaashari, Omid; Jabbari, Nasrollah
2016-01-01
Breast cancer survival has been analyzed by many standard data mining algorithms, a group of which belong to the decision tree category. The ability of decision tree algorithms to visualize and formulate hidden patterns among study variables was the main reason to apply an algorithm from this category in the current study, one that had not been applied to this problem before. Classification and regression trees (CART) were applied to a breast cancer database containing information on 569 patients in 2007-2010. The Gini impurity measure, used for categorical target variables, was utilized. The classification error, which is a function of tree size, was measured by 10-fold cross-validation experiments. The performance of the created model was evaluated by criteria such as accuracy, sensitivity, and specificity. The CART model produced a decision tree with 17 nodes, 9 of which were associated with a set of rules. The rules were clinically meaningful and showed, in if-then format, that Stage was the most important variable for predicting breast cancer survival. The scores of accuracy, sensitivity, and specificity were 80.3%, 93.5%, and 53%, respectively. The current model, the first created with CART for this problem, was able to extract useful hidden rules from a relatively small dataset.
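For reference, the Gini impurity that CART minimizes at each split is simple to state; the example value below is arithmetic on toy labels, not study data.

```python
import numpy as np
from collections import Counter

def gini_impurity(labels):
    """CART impurity for a categorical target: G = 1 - sum_k p_k**2."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return 1.0 - float((p**2).sum())

print(gini_impurity(["survived"] * 9 + ["died"]))   # 1 - (0.81 + 0.01) = 0.18
```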
A new algorithm for construction of coarse-grained sites of large biomolecules.
Li, Min; Zhang, John Z H; Xia, Fei
2016-04-05
The development of coarse-grained (CG) models for large biomolecules remains a challenge in multiscale simulations, including a rigorous definition of CG representations for them. In this work, we proposed a new stepwise optimization imposed with the boundary-constraint (SOBC) algorithm to construct the CG sites of large biomolecules, based on the scheme of essential dynamics coarse-graining. By means of SOBC, we can rigorously derive the CG representations of biomolecules with less computational cost. The SOBC is particularly efficient for the CG definition of large systems with thousands of residues. The resulting CG sites can be parameterized as a CG model using the normal mode analysis based fluctuation matching method. Through normal mode analysis, the obtained modes of the CG model can accurately reflect the functionally related slow motions of biomolecules. The SOBC algorithm can be used for the construction of CG sites of large biomolecules such as F-actin and for the study of mechanical properties of biomaterials. © 2015 Wiley Periodicals, Inc.
A Robust Automatic Ionospheric O/X Mode Separation Technique for Vertical Incidence Sounders
NASA Astrophysics Data System (ADS)
Harris, T. J.; Pederick, L. H.
2017-12-01
The sounding of the ionosphere by a vertical incidence sounder (VIS) is the oldest and most common technique for determining the state of the ionosphere. The automatic extraction of relevant ionospheric parameters from the ionogram image, referred to as scaling, is important for the effective utilization of data from large ionospheric sounder networks. Due to the Earth's magnetic field, the ionosphere is birefringent at radio frequencies, so a VIS will typically see two distinct returns for each frequency. For the automatic scaling of ionograms, it is highly desirable to be able to separate the two modes. Defence Science and Technology Group has developed a new VIS solution which is based on direct digital receiver technology and includes an algorithm to separate the O and X modes. This algorithm can provide high-quality separation even in difficult ionospheric conditions. In this paper we describe the algorithm and demonstrate its consistency and reliability in successfully separating 99.4% of the ionograms during a 27 day experimental campaign under sometimes demanding ionospheric conditions.
Step Detection Robust against the Dynamics of Smartphones
Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin
2015-01-01
A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
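A minimal sketch of the peak-valley idea with adaptive thresholds, assuming a sampled acceleration-magnitude signal; the threshold constants and the running-average update are illustrative, not the paper's tuned values.

```python
import numpy as np

def count_steps(acc_mag, fs_hz, k=1.0, t_min_s=0.25):
    """Count steps as magnitude peaks exceeding an adaptive threshold and
    separated by at least t_min_s seconds (adaptive temporal threshold)."""
    mu, sigma = float(acc_mag.mean()), float(acc_mag.std())
    steps, last_idx = 0, -10**9
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i - 1] < acc_mag[i] > acc_mag[i + 1]
        if is_peak and acc_mag[i] > mu + k * sigma and (i - last_idx) / fs_hz > t_min_s:
            steps += 1
            last_idx = i
            mu = 0.9 * mu + 0.1 * acc_mag[i]   # adapt the magnitude threshold
    return steps
```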
Banerjee, Saswatee; Hoshino, Tetsuya; Cole, James B
2008-08-01
We introduce a new implementation of the finite-difference time-domain (FDTD) algorithm with recursive convolution (RC) for first-order Drude metals. We implemented RC for both Maxwell's equations for light polarized in the plane of incidence (TM mode) and the wave equation for light polarized normal to the plane of incidence (TE mode). We computed the Drude parameters at each wavelength using the measured value of the dielectric constant as a function of the spatial and temporal discretization to ensure both the accuracy of the material model and algorithm stability. For the TE mode, where Maxwell's equations reduce to the wave equation (even in a region of nonuniform permittivity) we introduced a wave equation formulation of RC-FDTD. This greatly reduces the computational cost. We used our methods to compute the diffraction characteristics of metallic gratings in the visible wavelength band and compared our results with frequency-domain calculations.
NASA Astrophysics Data System (ADS)
Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong
2018-06-01
The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Since a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which contains a driving pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. This multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, in light of the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified on a dynamometer test bench. Simulation results show that the proposed strategy obtains better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
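scikit-learn has no LVQ estimator, so the sketch below implements the basic LVQ1 update rule directly; the driving features (window statistics such as mean speed and acceleration) are assumptions for illustration, not the paper's trained network:

```python
import numpy as np

def train_lvq1(X, y, n_proto_per_class=2, lr=0.05, epochs=30, seed=0):
    """Minimal LVQ1 sketch: prototypes move toward same-class samples
    and away from other-class samples."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], n_proto_per_class, replace=False)
        protos.append(X[idx]); labels += [c] * n_proto_per_class
    W, wy = np.vstack(protos).astype(float), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((W - X[i]) ** 2).sum(axis=1))   # winner prototype
            sign = 1.0 if wy[j] == y[i] else -1.0
            W[j] += sign * lr * (X[i] - W[j])
    return W, wy

def classify(x, W, wy):
    return wy[np.argmin(((W - x) ** 2).sum(axis=1))]

# Toy usage: two driving conditions separable by window statistics.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, wy = train_lvq1(X, y)
print(classify(np.array([2.8, 3.1]), W, wy))   # -> 1
```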
Algorithm for designing smart factory Industry 4.0
NASA Astrophysics Data System (ADS)
Gurjanov, A. V.; Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The task of designing the production division of an Industry 4.0 item-designing company is studied. The authors propose an algorithm based on the modified V. L. Volkovich method. The algorithm generates options for arranging production with robotized technological equipment operating in automatic mode. The algorithm is based on solving a multi-criteria optimization task for some additive criteria.
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-01-01
A Hemispherical Resonator Gyro (HRG) is a Coriolis Vibratory Gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on a vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification by simply changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modeling and signal processing must first be performed accurately. Therefore, the equations of motion for the HRG resonator with switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module, and charge amplifier is performed by considering the mode shape of a thin hemispherical shell. Further, signal processing and control algorithms are designed. The multi-flexing scheme of sensing and driving cycles and x-, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme can easily reject the common-mode errors of the x-, y-axis signals and switch to the rate-integrating mode. In the rate gyro mode, the controller is composed of phase-locked loop (PLL), amplitude, quadrature, and rate control loops. All controllers are designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board implementing these algorithms is verified through experiments. PMID:27104539
NASA Astrophysics Data System (ADS)
Leonard, Kevin Raymond
This dissertation concentrates on the development of two new tomographic techniques that enable wide-area inspection of pipe-like structures. By envisioning a pipe as a plate wrapped around on itself, the previous Lamb Wave Tomography (LWT) techniques are adapted to cylindrical structures. Helical Ultrasound Tomography (HUT) uses Lamb-like guided wave modes transmitted and received by two circumferential arrays in a single crosshole geometry. Meridional Ultrasound Tomography (MUT) creates the same crosshole geometry with a linear array of transducers along the axis of the cylinder. However, even though these new scanning geometries are similar to plates, additional complexities arise because they are cylindrical structures. First, because it is a single crosshole geometry, the wave vector coverage is poorer than in the full LWT system. Second, since waves can travel in both directions around the circumference of the pipe, modes can constructively and destructively interfere with each other. These complexities necessitate improved signal processing algorithms to produce accurate and unambiguous tomographic reconstructions. Consequently, this work also describes a new algorithm for improving the extraction of multi-mode arrivals from guided wave signals. Previous work has relied solely on the first arriving mode for the time-of-flight measurements. In order to improve the reconstructions of the LWT, HUT, and MUT systems, improved signal processing methods are needed to extract information about the arrival times of the later arriving modes. Because each mode has different through-thickness displacement values, the modes are sensitive to different types of flaws, and the information gained from the multi-mode analysis improves understanding of the structural integrity of the inspected material. Both tomographic frequency compounding and mode sorting algorithms are introduced. It is also shown that each of these methods improves the reconstructed images both qualitatively and quantitatively.
Algorithmic Case Pedagogy, Learning and Gender
ERIC Educational Resources Information Center
Bromley, Robert; Huang, Zhenyu
2015-01-01
Great investment has been made in developing algorithmically-based cases within online homework management systems. This has been done because publishers are convinced that textbook adoption decisions are influenced by the incorporation of these systems within their products. These algorithmic assignments are thought to promote learning while…
NASA Astrophysics Data System (ADS)
Zaiwani, B. E.; Zarlis, M.; Efendi, S.
2018-03-01
Earlier research hybridized the Fuzzy Analytic Hierarchy Process (FAHP) with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS) to select the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve the performance of that research, a hybridization of FAHP with the Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) algorithm was adopted, applying FAHP to the weighting process and SAW to the ranking process to determine the promotion of an employee at a government institution. The improved average Efficiency Rate (ER) is 85.24%, which means that this research succeeded in improving on the previous result of 77.82%. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
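The SAW ranking stage is simple enough to sketch. In the fragment below, the criteria, weights, and candidate scores are invented placeholders; in the paper, the weights would come from the FAHP step:

```python
import numpy as np

# Illustrative SAW ranking: rows = candidate employees, columns = criteria.
scores = np.array([[70, 80, 4, 3000],
                   [85, 75, 6, 3500],
                   [90, 60, 5, 2800]], dtype=float)
weights = np.array([0.4, 0.3, 0.2, 0.1])       # assumed FAHP output, sums to 1
benefit = np.array([True, True, True, False])  # last column is a cost criterion

# Normalize: benefit criteria scaled by column max, cost criteria by min/value.
norm = np.where(benefit, scores / scores.max(axis=0),
                scores.min(axis=0) / scores)
ranking = (norm * weights).sum(axis=1)
print(np.argsort(-ranking))                    # candidate indices, best first
```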
A novel clinical decision support algorithm for constructing complete medication histories.
Long, Ju; Yuan, Michael Juntao
2017-07-01
A patient's complete medication history is a crucial element for physicians to develop a full understanding of the patient's medical conditions and treatment options. However, due to the fragmented nature of medical data, assembling such a history can be very time-consuming, and for complex patients it is often impossible for physicians to construct one. In this paper, we describe an accurate, computationally efficient, and scalable algorithm to construct a medication history timeline. The algorithm is developed and validated based on 1 million random prescription records from a large national prescription data aggregator. Our evaluation shows that the algorithm can be scaled horizontally on demand, making it suitable for future delivery in a cloud-computing environment. We also propose that this cloud-based medication history computation algorithm could be integrated into Electronic Medical Records, enabling informed clinical decision-making at the point of care. Copyright © 2017 Elsevier B.V. All rights reserved.
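The paper's implementation is not published, but the core timeline construction can be sketched as interval merging over per-drug fill records. The record schema and gap tolerance below are assumptions for illustration only:

```python
from datetime import date, timedelta

def medication_timeline(prescriptions, gap_days=7):
    """For each drug, merge fill records (fill_date, days_supply) into
    continuous therapy intervals, tolerating small refill gaps."""
    by_drug = {}
    for rx in prescriptions:
        by_drug.setdefault(rx["drug"], []).append(rx)
    timeline = {}
    for drug, fills in by_drug.items():
        fills.sort(key=lambda r: r["fill_date"])
        intervals = []
        for rx in fills:
            start = rx["fill_date"]
            end = start + timedelta(days=rx["days_supply"])
            if intervals and start <= intervals[-1][1] + timedelta(days=gap_days):
                # Overlaps or nearly abuts the previous interval: extend it.
                intervals[-1] = (intervals[-1][0], max(intervals[-1][1], end))
            else:
                intervals.append((start, end))
        timeline[drug] = intervals
    return timeline

rx = [{"drug": "metformin", "fill_date": date(2017, 1, 1), "days_supply": 30},
      {"drug": "metformin", "fill_date": date(2017, 2, 2), "days_supply": 30}]
print(medication_timeline(rx))  # one merged interval despite the 2-day gap
```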
A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network
NASA Astrophysics Data System (ADS)
Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.
The location of the CDC within a supply chain management (SCM) network has become a matter of high concern. Existing methods for CDC location have mainly relied on manual spreadsheet calculations aimed at minimizing logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of present methods and examines the propriety of this algorithm through a case study. The suggested algorithm is based on optimization over a directed graph of the SCM model and utilizes traditionally introduced methods such as minimum spanning tree (MST) and shortest-path finding. The results of this study help to assess the suitability of the present SCM network and can serve as a criterion in the decision-making process for building an optimal SCM network for future demand prospects.
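As a sketch of the shortest-path primitive the authors combine with MST over the directed SCM graph, here is a standard Dijkstra implementation on a toy factory-CDC-city network; the node names and costs are invented:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances over a directed graph.
    graph: {node: [(neighbor, transport_cost), ...]} (assumed structure)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network: factory -> candidate CDCs -> demand points.
g = {"factory": [("cdc1", 4), ("cdc2", 6)],
     "cdc1": [("city_a", 3), ("city_b", 7)],
     "cdc2": [("city_a", 5), ("city_b", 2)]}
print(dijkstra(g, "factory"))
```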
A novel association rule mining approach using TID intermediate itemset.
Aqra, Iyad; Herawan, Tutut; Abdul Ghani, Norjihan; Akhunzada, Adnan; Ali, Akhtar; Bin Razali, Ramdan; Ilahi, Manzoor; Raymond Choo, Kim-Kwang
2018-01-01
Designing an efficient association rule mining (ARM) algorithm for multilevel knowledge-based transactional databases that is appropriate for real-world deployments is of paramount concern. However, dynamic decision making that needs to modify the threshold, either to minimize or maximize the output knowledge, forces the extant state-of-the-art algorithms to rescan the entire database. This process incurs heavy computation cost and is not feasible for real-time applications. This paper efficiently addresses the problem of dynamic threshold updating for a given purpose. The paper contributes by presenting a novel ARM approach that creates an intermediate itemset and applies a threshold to extract categorical frequent itemsets with diverse threshold values, improving overall efficiency because the whole database no longer needs to be rescanned. After the entire itemset structure is built, the real support can be obtained without rebuilding it (e.g., itemset TID lists are intersected to obtain the actual support). Moreover, the algorithm supports extracting many frequent itemsets according to a predetermined minimum support with an independent purpose. Additionally, the experimental results of our proposed approach demonstrate that it can be deployed in any mining system in a fully parallel mode, consequently increasing the efficiency of the real-time association rule discovery process. The proposed approach outperforms the extant state of the art and shows promising results that reduce computation cost, increase accuracy, and produce all possible itemsets.
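The TID-list idea at the heart of the approach can be illustrated in a few lines: build the intermediate item-to-TID index once, then answer support queries under any later threshold by set intersection. This is a minimal sketch of the principle, not the authors' implementation:

```python
# Toy database: transaction ID -> items.
transactions = {1: {"a", "b", "c"}, 2: {"a", "c"}, 3: {"b", "c"}, 4: {"a", "b", "c"}}

# Intermediate structure: item -> TID set, built in a single pass.
tid = {}
for t, items in transactions.items():
    for it in items:
        tid.setdefault(it, set()).add(t)

def support(itemset):
    """Actual support of an itemset via TID-list intersection,
    with no rescan of the underlying database."""
    tids = set.intersection(*(tid[i] for i in itemset))
    return len(tids) / len(transactions)

print(support({"a", "c"}))   # 0.75
print(support({"b", "c"}))   # 0.75, valid under any later threshold choice
```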
NASA Astrophysics Data System (ADS)
Gou, Pengqi; Wang, Kaihui; Qin, Chaoyi; Yu, Jianjun
2017-03-01
We experimentally demonstrate a 16-ary quadrature amplitude modulation (16QAM) DFT-spread optical orthogonal frequency division multiplexing (OFDM) transmission system utilizing a cost-effective directly modulated laser (DML) and direct detection. For a 20-Gbaud 16QAM-OFDM signal, with the aid of a nonlinear equalization (NLE) algorithm, we obtain 6.2-dB and 5.2-dB receiver sensitivity improvements under the hard-decision forward-error-correction (HD-FEC) threshold of 3.8×10-3 for the back-to-back (BTB) case and after transmission over 10-km standard single-mode fiber (SSMF), respectively, relative to adopting only the post-equalization scheme. To our knowledge, this is the first use of a dynamic nonlinear equalizer based on the summation of the squared differences between samples in an IM/DD OFDM system with a DML to mitigate nonlinear distortion.
Research on crude oil storage and transportation based on optimization algorithm
NASA Astrophysics Data System (ADS)
Yuan, Xuhua
2018-04-01
At present, optimization theory and methods are widely used in the optimal scheduling and operation of complex production systems. In this work, the theoretical results are implemented in software on the C++Builder 6 development platform, and a simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes, and it optimizes the scheduling scheme of the crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating-roof oil storage tank is also developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling cost of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of the crude oil storage and transportation system.
Schöfer, Helmut; Tatti, Silvio; Lynde, Charles W; Skerlev, Mihael; Hercogová, Jana; Rotaru, Maria; Ballesteros, Juan; Calzavara-Pinton, Piergiacomo
2017-12-01
This review of the proactive sequential therapy (PST) of external genital and perianal warts (EGW) is based on the most current available clinical literature and on the broad clinical experience of a group of international experts, physicians who are well versed in the treatment of human papillomavirus-associated diseases. It provides a practical guide for the treatment of EGW, including the epidemiology, etiology, clinical appearance, and diagnostic procedures for these viral infections. Furthermore, the treatment goals and current treatment options, elucidating provider- and patient-applied therapies, and the parameters driving treatment decisions are summarized. Specifically, the mode of action of the topical treatments sinecatechins and imiquimod, as well as the PST for EGW to achieve rapid and sustained clearance, is discussed. The group of experts has developed a treatment algorithm giving healthcare providers a practical tool for the treatment of EGW, which is especially valuable given the many different treatment options.
Decoding algorithm for vortex communications receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2018-01-01
Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
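The Pearson-correlation baseline the authors compare against is straightforward to sketch: correlate the measured detector-matrix intensity with one stored template per LG-mode symbol and declare the best match. The templates and detector size below are toy stand-ins, not the paper's receiver model:

```python
import numpy as np

def pearson_decode(measurement, templates):
    """Declare the symbol whose template best matches the measured
    intensity pattern. templates: dict symbol -> 2-D intensity array."""
    m = measurement.ravel()
    best, best_r = None, -2.0
    for symbol, tpl in templates.items():
        r = np.corrcoef(m, tpl.ravel())[0, 1]   # Pearson correlation
        if r > best_r:
            best, best_r = symbol, r
    return best

# Toy 4x4 detector with two made-up "mode" patterns plus sensor noise.
rng = np.random.default_rng(1)
t0 = np.outer(np.hanning(4), np.hanning(4))
t1 = np.roll(t0, 2, axis=1)
noisy = t1 + 0.1 * rng.standard_normal((4, 4))
print(pearson_decode(noisy, {0: t0, 1: t1}))    # -> 1
```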
NASA Technical Reports Server (NTRS)
Csank, Jeffrey; Stueber, Thomas
2012-01-01
An inlet system is being tested to evaluate methodologies for a turbine based combined cycle propulsion system to perform a controlled inlet mode transition. Prior to wind tunnel based hardware testing of controlled mode transitions, simulation models are used to test, debug, and validate potential control algorithms. One candidate simulation package for this purpose is the High Mach Transient Engine Cycle Code (HiTECC). The HiTECC simulation package models the inlet system, propulsion systems, thermal energy, geometry, nozzle, and fuel systems. This paper discusses the modification and redesign of the simulation package and control system to represent the NASA large-scale inlet model for Combined Cycle Engine mode transition studies, mounted in NASA Glenn's 10-foot by 10-foot Supersonic Wind Tunnel. This model will be used for designing and testing candidate control algorithms before implementation.
Deep learning and model predictive control for self-tuning mode-locked lasers
NASA Astrophysics Data System (ADS)
Baumeister, Thomas; Brunton, Steven L.; Nathan Kutz, J.
2018-03-01
Self-tuning optical systems are of growing importance in technological applications such as mode-locked fiber lasers. Such self-tuning paradigms require intelligent algorithms capable of inferring approximate models of the underlying physics and discovering appropriate control laws in order to maintain robust performance for a given objective. In this work, we demonstrate the first integration of a deep learning (DL) architecture with model predictive control (MPC) in order to self-tune a mode-locked fiber laser. Not only can our DL-MPC algorithmic architecture approximate the unknown fiber birefringence, it also builds a dynamical model of the laser and an appropriate control law for maintaining robust, high-energy pulses despite a stochastically drifting birefringence. We demonstrate the effectiveness of this method on a fiber laser which is mode-locked by nonlinear polarization rotation. The method advocated can be broadly applied to a variety of optical systems that require robust controllers.
Durham, Erin-Elizabeth A; Yu, Xiaxia; Harrison, Robert W
2014-12-01
Effective machine learning depends on handling large datasets efficiently. One key feature of handling large data is the use of databases such as MySQL. The freeware fuzzy decision tree induction tool, FDT, is a scalable supervised-classification software tool implementing fuzzy decision trees. It is based on an optimized fuzzy ID3 (FID3) algorithm. FDT 2.0 improves upon FDT 1.0 by bridging the gap between data science and data engineering: it combines a robust decision tool with data retention for future decisions, so that the tool does not need to be recalibrated from scratch every time a new decision is required. In this paper we briefly review the analytical capabilities of the freeware FDT tool and its major features and functionalities; examples of large biological datasets from HIV, microRNAs, and sRNAs are included. This work shows how to integrate fuzzy decision algorithms with modern database technology. In addition, we show that integrating the fuzzy decision tree induction tool with database storage allows for optimal user satisfaction in today's Data Analytics world.
Interior Noise Reduction by Adaptive Feedback Vibration Control
NASA Technical Reports Server (NTRS)
Lim, Tae W.
1998-01-01
The objective of this project is to investigate the possible use of adaptive digital filtering techniques in simultaneous, multiple-mode identification of the modal parameters of a vibrating structure in real time. It is intended that the results obtained from this project will be used for the state estimation needed in adaptive structural acoustics control. The work done in this project is basically an extension of the work on real-time single-mode identification, which was performed successfully using a digital signal processor (DSP) at NASA Langley. Initially, the single-mode identification work was duplicated on a different processor, namely the Texas Instruments TMS320C40 DSP. The system identification results for the single-mode case were very good. An algorithm for simultaneous two-mode identification was then developed and tested using analytical simulation. When it successfully performed the expected tasks, it was implemented in real time on the DSP system to identify the first two modes of vibration of a cantilever aluminum beam. The results of the simultaneous two-mode case were good, but some problems were identified related to frequency warping and spurious mode identification. The frequency warping problem was found to be due to the bilinear transformation used in the algorithm to convert the system transfer function from the continuous-time domain to the discrete-time domain, and an alternative approach was developed to rectify the problem. The spurious mode identification problem was found to be associated with high sampling rates. Noise in the signal is suspected to be the cause of this problem, but further investigation will be needed to clarify the cause. For simultaneous identification of more than two modes, it was found that theoretically an adaptive digital filter can be designed to identify the required number of modes, but the algebra became so complex that it was impossible to implement on the DSP system used in this study. The on-line identification algorithm developed in this research will be useful in constructing a state estimator for feedback vibration control.
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
Rothermundt, Christian; Bailey, Alexandra; Cerbone, Linda; Eisen, Tim; Escudier, Bernard; Gillessen, Silke; Grünwald, Viktor; Larkin, James; McDermott, David; Oldenburg, Jan; Porta, Camillo; Rini, Brian; Schmidinger, Manuela; Sternberg, Cora; Putora, Paul M
2015-09-01
With the advent of targeted therapies, many treatment options in the first-line setting of metastatic clear cell renal cell carcinoma (mccRCC) have emerged. Guidelines and randomized trial reports usually do not elucidate the decision criteria for the different treatment options. In order to extract the decision criteria for the optimal therapy for patients, we performed an analysis of treatment algorithms from experts in the field. Treatment algorithms for the treatment of mccRCC from experts of 11 institutions were obtained, and decision trees were deduced. Treatment options were identified and a list of unified decision criteria determined. The final decision trees were analyzed with a methodology based on diagnostic nodes, which allows for an automated cross-comparison of decision trees. The most common treatment recommendations were determined, and areas of discordance were identified. The analysis revealed heterogeneity in most clinical scenarios. The recommendations selected for first-line treatment of mccRCC included sunitinib, pazopanib, temsirolimus, interferon-α combined with bevacizumab, high-dose interleukin-2, sorafenib, axitinib, everolimus, and best supportive care. The criteria relevant for treatment decisions were performance status, Memorial Sloan Kettering Cancer Center risk group, only or mainly lung metastases, cardiac insufficiency, hepatic insufficiency, age, and "zugzwang" (composite of multiple, related criteria). In the present study, we used diagnostic nodes to compare treatment algorithms in the first-line treatment of mccRCC. The results illustrate the heterogeneity of the decision criteria and treatment strategies for mccRCC and how available data are interpreted and implemented differently among experts. The data provided in the present report should not be considered to serve as treatment recommendations for the management of treatment-naïve patients with multiple metastases from metastatic clear cell renal cell carcinoma outside a clinical trial; however, the data highlight the different treatment options and the criteria used to select them. The diversity in decision making and how results from phase III trials can be interpreted and implemented differently in daily practice are demonstrated. ©AlphaMed Press.
Villiger, Martin; Zhang, Ellen Ziyi; Nadkarni, Seemantini K.; Oh, Wang-Yuhl; Vakoc, Benjamin J.; Bouma, Brett E.
2013-01-01
Polarization mode dispersion (PMD) has been recognized as a significant barrier to sensitive and reproducible birefringence measurements with fiber-based, polarization-sensitive optical coherence tomography systems. Here, we present a signal processing strategy that reconstructs the local retardation robustly in the presence of system PMD. The algorithm uses a spectral binning approach to limit the detrimental impact of system PMD and benefits from the final averaging of the PMD-corrected retardation vectors of the spectral bins. The algorithm was validated with numerical simulations and experimental measurements of a rubber phantom. When applied to the imaging of human cadaveric coronary arteries, the algorithm was found to yield a substantial improvement in the reconstructed birefringence maps. PMID:23938487
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
2017-10-01
hypothesis that a computer machine learning algorithm can analyze and classify burn injuries using multispectral imaging within 5% of an expert clinician...morbidity. In response to these challenges, the USAISR developed and obtained FDA 510(k) clearance of the Burn Navigator™, a computer decision support... computer decision support software (CDSS), can significantly change the CDSS algorithm's recommendations and thus the total fluid administered to a
Accelerated decomposition techniques for large discounted Markov decision processes
NASA Astrophysics Data System (ADS)
Larach, Abdelhadi; Chafik, S.; Daoui, C.
2017-12-01
Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels they belong to. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions of discounted MDPs using a value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
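For reference, the classic Tarjan procedure that the proposed variant extends can be sketched as follows. The paper's simultaneous level computation is not reproduced here, but the SCCs emerge in reverse topological order of the condensation DAG, from which levels can then be assigned:

```python
import sys

def tarjan_scc(graph):
    """Classic Tarjan SCC sketch (not the paper's variant).
    graph: {u: [v, ...]} adjacency lists of a directed graph."""
    sys.setrecursionlimit(10000)
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def dfs(u):
        index[u] = low[u] = counter[0]; counter[0] += 1
        stack.append(u); on_stack.add(u)
        for v in graph.get(u, []):
            if v not in index:
                dfs(v)
                low[u] = min(low[u], low[v])
            elif v in on_stack:
                low[u] = min(low[u], index[v])
        if low[u] == index[u]:            # u is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == u:
                    break
            sccs.append(comp)

    for u in list(graph):
        if u not in index:
            dfs(u)
    return sccs  # emitted in reverse topological order of the condensation

g = {0: [1], 1: [2], 2: [0, 3], 3: [4], 4: [3]}
print(tarjan_scc(g))  # [[4, 3], [2, 1, 0]]; levels follow this ordering
```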
Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control
NASA Technical Reports Server (NTRS)
Bernstein, Daniel S.; Zilberstein, Shlomo
2003-01-01
Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.
A Recommendation Algorithm for Automating Corollary Order Generation
Klann, Jeffrey; Schadow, Gunther; McCoy, JM
2009-01-01
Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875
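A minimal sketch of the item-based collaborative filtering core, with rule confidence standing in for the paper's interestingness measures; the paper mines 866,445 real orders, whereas the binary order matrix below is a toy:

```python
import numpy as np

# Rows = patient encounters, columns = orderable items (toy data).
orders = np.array([[1, 1, 0, 1],
                   [1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [1, 1, 0, 1]], dtype=float)

# Cosine similarity between item columns (co-order strength).
norms = np.linalg.norm(orders, axis=0)
sim = (orders.T @ orders) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)

# Association-rule confidence as an interestingness filter:
# P(item j | item i) = co-occurrences / occurrences of i.
counts = orders.sum(axis=0)
confidence = (orders.T @ orders) / counts[:, None]

def suggest_corollaries(item, k=2, min_conf=0.5):
    """Top-k co-ordered items for `item`, filtered by rule confidence."""
    candidates = np.argsort(-sim[item])
    return [j for j in candidates
            if j != item and confidence[item, j] >= min_conf][:k]

print(suggest_corollaries(0))  # items most often co-ordered with item 0
```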
A vertical handoff decision algorithm based on ARMA prediction model
NASA Astrophysics Data System (ADS)
Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan
2012-01-01
With the development of computer technology and the increasing demand for mobile communications, next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks, and during the vertical handoff procedure, the handoff decision is a crucial issue for efficient mobility. Based on the auto regression moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm that aims to improve the performance of vertical handoff and avoid unnecessary handoffs. From the current received signal strength (RSS) and the previous RSS values, the proposed approach adopts the ARMA model to predict the next RSS, and then, according to the predicted RSS, determines whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and the number of handoffs.
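To keep a sketch self-contained, the fragment below fits only the AR part of the ARMA model by least squares (the MA term is omitted) and applies an assumed handoff threshold; the RSS trace and trigger level are illustrative, not the paper's values:

```python
import numpy as np

def fit_ar(x, p=3):
    """Least-squares fit of an AR(p) model: x_t ~ sum_k a_k * x_{t-k}."""
    rows = [x[i:i + p][::-1] for i in range(len(x) - p)]  # lags, newest first
    X, y = np.array(rows), x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(x, coeffs):
    p = len(coeffs)
    return float(coeffs @ x[-1:-p - 1:-1])   # last p samples, newest first

# Toy RSS trace (dBm) decaying as the terminal leaves WiFi coverage.
rss = np.array([-60, -61, -63, -64, -66, -67, -69, -71], dtype=float)
next_rss = predict_next(rss, fit_ar(rss, p=3))
HANDOFF_THRESHOLD = -72.0                     # assumed trigger level
if next_rss < HANDOFF_THRESHOLD:
    print("trigger link-layer event, start vertical handoff")
else:
    print(f"stay on current network (predicted RSS {next_rss:.1f} dBm)")
```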
Automated Vectorization of Decision-Based Algorithms
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.
Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization
NASA Astrophysics Data System (ADS)
Liu, Zexi
2018-01-01
Indoor positioning technology gives human beings positional perception in architectural space, but single-network coverage is limited and location data are redundant. This article therefore studies data clustering and intelligent decision algorithms for indoor localization. It designs the basic framework of multi-source indoor positioning technology and analyzes fingerprint localization based on distance measurement and the integration of position and orientation from inertial devices. By optimizing the clustering of massive indoor location data, data normalization preprocessing, multi-dimensional controllable clustering centers, and multi-factor clustering are realized, reducing the redundancy of the location data. In addition, path planning based on neural network inference and decision-making is proposed, with a sparse-data input layer, a dynamic feedback hidden layer, and an output layer; the low-dimensional results improve intelligent navigation path planning.
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
NASA Astrophysics Data System (ADS)
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Oftentimes, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
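The two-stage view translates directly into code. The sketch below follows the TDMD recipe as described above (project both snapshot matrices onto the leading right-singular subspace of the augmented matrix, then apply standard DMD); the toy data and rank choice are illustrative:

```python
import numpy as np

def dmd_eigs(X, Y, r):
    """Standard DMD eigenvalues from snapshot pairs X -> Y."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    Atil = U.conj().T @ Y @ V / s          # divide column j by s[j]
    return np.linalg.eigvals(Atil)

def tdmd_eigs(X, Y, r):
    """De-biasing sketch: project BOTH snapshot matrices onto the leading
    right-singular subspace of the augmented matrix [X; Y], treating X
    and Y symmetrically, then apply standard DMD."""
    Z = np.vstack([X, Y])
    _, _, Vh = np.linalg.svd(Z, full_matrices=False)
    P = Vh[:r].conj().T @ Vh[:r]           # projector onto the clean row space
    return dmd_eigs(X @ P, Y @ P, r)

# Toy test: a single decaying oscillation observed with sensor noise.
rng = np.random.default_rng(0)
t = np.arange(200) * 0.1
clean = np.array([np.cos(t), np.sin(t)]) * np.exp(-0.05 * t)
data = clean + 0.05 * rng.standard_normal(clean.shape)
X, Y = data[:, :-1], data[:, 1:]
print("DMD :", dmd_eigs(X, Y, r=2))
print("TDMD:", tdmd_eigs(X, Y, r=2))       # closer to exp((-0.05 + 1j) * 0.1)
```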
Ozçift, Akin
2011-05-01
Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification performance point of view. In general, multiclass datasets with unbalanced distributions of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample sizes, and it is therefore a suitable test of our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is a quite high diagnosis performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography, and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
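A compact sketch of the training strategy with scikit-learn, using synthetic stand-in data; the UCI arrhythmia set and the paper's correlation-based feature selection step are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((452, 20))
y = rng.integers(0, 5, size=452)          # imbalanced multiclass stand-in
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Simple random sampling with replacement to rebalance small classes
# before fitting the RF ensemble.
idx = np.concatenate([rng.choice(np.where(ytr == c)[0], size=60, replace=True)
                      for c in np.unique(ytr)])
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(Xtr[idx], ytr[idx])
print("held-out accuracy:", rf.score(Xte, yte))
```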
NASA Astrophysics Data System (ADS)
Qiang, Jiang; Meng-wei, Liao; Ming-jie, Luo
2018-03-01
The control performance of a Permanent Magnet Synchronous Motor (PMSM) is affected by fluctuations or changes of mechanical parameters when the PMSM is applied as the driving motor in an actual electric vehicle, and external disturbance influences control robustness. To improve the dynamic control quality and robustness of the PMSM speed control system, a new second-order integral sliding mode control algorithm is introduced into PMSM vector control. The simulation results show that, compared with traditional PID control, the optimized control scheme has better control precision and dynamic response and performs with stronger robustness against external disturbance; it can also effectively solve the chattering problem of traditional sliding mode variable structure control.
Vibration suppression in flexible structures via the sliding-mode control approach
NASA Technical Reports Server (NTRS)
Drakunov, S.; Oezguener, Uemit
1994-01-01
Sliding mode control has recently become very popular because it makes the closed-loop system highly insensitive to external disturbances and parameter variations. Sliding algorithms for flexible structures have been used previously, but these were based on finite-dimensional models. An extension of this approach for differential-difference systems is obtained, which makes it possible to apply sliding-mode control algorithms to the variety of nondispersive flexible structures that can be described as differential-difference systems. The main idea of using this technique for dispersive structures is to reduce the order of the controlled part of the system by applying an integral transformation. We can say that the transformation 'absorbs' the dispersive properties of the flexible structure as the controlled part becomes dispersive.
Empirical mode decomposition-based facial pose estimation inside video sequences
NASA Astrophysics Data System (ADS)
Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing
2010-03-01
We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
Image-driven Population Analysis through Mixture Modeling
Sabuncu, Mert R.; Balci, Serdar K.; Shenton, Martha E.; Golland, Polina
2009-01-01
We present iCluster, a fast and efficient algorithm that clusters a set of images while co-registering them using a parameterized, nonlinear transformation model. The output of the algorithm is a small number of template images that represent different modes in a population. This is in contrast with traditional, hypothesis-driven computational anatomy approaches that assume a single template to construct an atlas. We derive the algorithm based on a generative model of an image population as a mixture of deformable template images. We validate and explore our method in four experiments. In the first experiment, we use synthetic data to explore the behavior of the algorithm and inform a design choice on parameter settings. In the second experiment, we demonstrate the utility of having multiple atlases for the application of localizing temporal lobe brain structures in a pool of subjects that contains healthy controls and schizophrenia patients. Next, we employ iCluster to partition a data set of 415 whole brain MR volumes of subjects aged 18 through 96 years into three anatomical subgroups. Our analysis suggests that these subgroups mainly correspond to age groups. The templates reveal significant structural differences across these age groups that confirm previous findings in aging research. In the final experiment, we run iCluster on a group of 15 patients with dementia and 15 age-matched healthy controls. The algorithm produces two modes, one of which contains dementia patients only. These results suggest that the algorithm can be used to discover sub-populations that correspond to interesting structural or functional “modes.” PMID:19336293
NASA Astrophysics Data System (ADS)
Grubov, V. V.; Runnova, A. E.; Hramov, A. E.
2018-05-01
A new method for adaptive filtration of experimental EEG signals in humans and for removal of different physiological artifacts has been proposed. The algorithm of the method includes empirical mode decomposition of the EEG, determination of the number of empirical modes to be considered, analysis of the empirical modes and search for modes that contain artifacts, removal of these modes, and reconstruction of the EEG signal. The method was tested on experimental human EEG signals and demonstrated high efficiency in the removal of different types of physiological EEG artifacts.
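The pipeline (decompose, flag artifact modes, reconstruct from the rest) can be sketched with the PyEMD package (pip name "EMD-signal", assumed available). The slow-drift criterion below is a placeholder for the paper's mode-analysis step, which is not reproduced:

```python
import numpy as np
from PyEMD import EMD   # assumed available via the "EMD-signal" package

def remove_artifact_modes(eeg, fs, bad_mode_test):
    """Decompose an EEG channel into IMFs, drop the modes flagged as
    artifacts, and reconstruct from the remaining modes."""
    imfs = EMD().emd(eeg)
    keep = [imf for imf in imfs if not bad_mode_test(imf, fs)]
    if not keep:
        return np.zeros_like(eeg)
    return np.sum(keep, axis=0)

# Toy example: 10 Hz "EEG" contaminated by a slow ocular-like drift.
fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 3 * np.sin(2 * np.pi * 0.3 * t)

def is_slow_drift(imf, fs):
    # Assumed criterion: flag very-low-frequency modes as artifacts,
    # estimating the dominant frequency from zero crossings.
    zero_crossings = np.sum(np.diff(np.sign(imf)) != 0)
    dominant_freq = zero_crossings * fs / (2 * len(imf))
    return dominant_freq < 1.0

cleaned = remove_artifact_modes(eeg, fs, is_slow_drift)
```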
An Isometric Mapping Based Co-Location Decision Tree Algorithm
NASA Astrophysics Data System (ADS)
Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.
2018-05-01
Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcoming of existing DT algorithms that create a node for each value of a given attribute, and it has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric-mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping (Isomap) and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distances between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks with this method is of high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared with Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
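The Isomap-plus-tree idea, minus the co-location rules, can be demonstrated with scikit-learn on a synthetic non-linearly distributed dataset; the dataset, labels, and hyperparameters are illustrative choices:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Embed with Isomap (geodesic rather than Euclidean distances), then grow
# a decision tree in the embedded space.
X, t = make_swiss_roll(n_samples=600, noise=0.3, random_state=0)
y = (t > t.mean()).astype(int)   # synthetic 2-class labels along the roll

plain_tree = DecisionTreeClassifier(max_depth=5, random_state=0)
isomap_tree = make_pipeline(Isomap(n_neighbors=10, n_components=2),
                            DecisionTreeClassifier(max_depth=5, random_state=0))
for name, model in [("tree", plain_tree), ("isomap+tree", isomap_tree)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```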
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Li; Jiao, Yong-Chang
2016-07-01
This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued or continuous decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and then the smoothing technique is applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem only with inequality constraints is handled by using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
Lo, Benjamin W Y; Fukuda, Hitoshi; Angle, Mark; Teitelbaum, Jeanne; Macdonald, R Loch; Farrokhyar, Forough; Thabane, Lehana; Levine, Mitchell A H
2016-01-01
Classification and regression tree analysis involves the creation of a decision tree by recursive partitioning of a dataset into more homogeneous subgroups. Thus far, there is scarce literature on using this technique to create clinical prediction tools for aneurysmal subarachnoid hemorrhage (SAH). The classification and regression tree analysis technique was applied to the multicenter Tirilazad database (3551 patients) in order to create the decision-making algorithm. In order to elucidate prognostic subgroups in aneurysmal SAH, neurologic, systemic, and demographic factors were taken into account. The dependent variable used for analysis was the dichotomized Glasgow Outcome Score at 3 months. Classification and regression tree analysis revealed seven prognostic subgroups. Neurological grade, occurrence of post-admission stroke, occurrence of post-admission fever, and age represented the explanatory nodes of this decision tree. Split sample validation revealed classification accuracy of 79% for the training dataset and 77% for the testing dataset. In addition, the occurrence of fever at 1-week post-aneurysmal SAH is associated with increased odds of post-admission stroke (odds ratio: 1.83, 95% confidence interval: 1.56-2.45, P < 0.01). A clinically useful classification tree was generated, which serves as a prediction tool to guide bedside prognostication and clinical treatment decision making. This prognostic decision-making algorithm also shed light on the complex interactions between a number of risk factors in determining outcome after aneurysmal SAH.
Expanded envelope concepts for aircraft control-element failure detection and identification
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.; Hsu, John Y.
1988-01-01
The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic-redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single flight condition operation, design and analysis methods are developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aero-models, over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual generation and decision-making designs have been developed. The application of the residual generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI-F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.
Artificial Neural Network Approach in Laboratory Test Reporting: Learning Algorithms.
Demirci, Ferhat; Akan, Pinar; Kume, Tuncay; Sisman, Ali Riza; Erbayraktar, Zubeyde; Sevinc, Suleyman
2016-08-01
In the field of laboratory medicine, minimizing errors and establishing standardization is only possible by predefined processes. The aim of this study was to build an experimental decision algorithm model open to improvement that would efficiently and rapidly evaluate the results of biochemical tests with critical values by evaluating multiple factors concurrently. The experimental model was built by Weka software (Weka, Waikato, New Zealand) based on the artificial neural network method. Data were received from Dokuz Eylül University Central Laboratory. "Training sets" were developed for our experimental model to teach the evaluation criteria. After training the system, "test sets" developed for different conditions were used to statistically assess the validity of the model. After developing the decision algorithm with three iterations of training, no result was verified that was refused by the laboratory specialist. The sensitivity of the model was 91% and specificity was 100%. The estimated κ score was 0.950. This is the first study based on an artificial neural network to build an experimental assessment and decision algorithm model. By integrating our trained algorithm model into a laboratory information system, it may be possible to reduce employees' workload without compromising patient safety. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A
2015-01-01
Decision making in the patient with chronic advanced disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add any more damage to that of the disease itself. The adequacy of clinical interventions consists of offering only those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient and performing only those allowed by the patient or their representative. In this article, the use of an algorithm is proposed that should serve to help health professionals in this decision making process. Copyright © 2014 SECA. Published by Elsevier España. All rights reserved.
Sociality Mental Modes Modulate the Processing of Advice-Giving: An Event-Related Potentials Study
Li, Jin; Zhan, Youlong; Fan, Wei; Liu, Lei; Li, Mei; Sun, Yu; Zhong, Yiping
2018-01-01
People have different motivations to get along with others in different sociality mental modes (i.e., communal mode and market mode), which might affect social decision-making. The present study examined how these two types of sociality mental modes affect the processing of advice-giving using event-related potentials (ERPs). After being primed with the communal mode or market mode, participants were instructed to decide whether or not to give advice (profitable or damnous) to a stranger without any feedback. The behavioral results showed that participants preferred to give the profitable advice to the stranger more slowly compared with the damnous advice, but this difference was only observed in the market mode condition. The ERP results indicated that participants demonstrated more negative N1 amplitude for the damnous advice compared with the profitable advice, and larger P300 was elicited in the market mode relative to both the communal mode and the control group. More importantly, participants in the market mode demonstrated larger P300 for the profitable advice than the damnous advice, whereas this difference was not observed in the communal mode or the control group. These findings are consistent with the dual-process system of decision-making and suggest that the market mode may lead to deliberate calculation of costs and benefits when giving profitable advice to others. PMID:29467689
A Simulation Tool for Distributed Databases.
1981-09-01
Reed's multiversion system [RE1T8] may also be viewed as updating only copies until the commit is made. The decision to make the changes...distributed voting, and Ellis' ring algorithm. Other, significantly different algorithms not covered in his work include Reed's multiversion algorithm, the
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
... transactions. Transactions that originate from unrelated algorithms or separate and distinct trading strategies... transactions were undertaken for manipulative or other fraudulent purposes. Algorithms or trading strategies... activity and the use of algorithms by firms to make trading decisions, FINRA has observed an increase in...
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
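The report's empirical formula is not given in the abstract; the sketch below assumes the common reading of coding gain as the reduction in required Eb/N0, in dB, at a target bit error rate, and interpolates two illustrative BER curves.

```python
# Assumption: coding gain read as the dB difference in required Eb/N0 between
# uncoded and coded links at a target BER. Both BER curves are illustrative
# tabulated data, not the report's empirical formula.
import numpy as np

ebn0_db = np.arange(0.0, 10.5, 0.5)
ber_uncoded = 0.5 * np.exp(-1.2 * ebn0_db)   # illustrative curve
ber_coded = 0.5 * np.exp(-2.0 * ebn0_db)     # illustrative curve

def required_ebn0(ber_curve, target_ber):
    # interpolate Eb/N0 against log10(BER); curves are monotone decreasing
    return np.interp(np.log10(target_ber),
                     np.log10(ber_curve[::-1]), ebn0_db[::-1])

target = 1e-5
gain_db = required_ebn0(ber_uncoded, target) - required_ebn0(ber_coded, target)
print(f"coding gain at BER {target:g}: {gain_db:.2f} dB")
```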
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kain, Jaan-Henrik; Soederberg, Henriette
2008-01-15
The vision of sustainable development entails new and complex planning situations, confronting local policy makers with changing political conditions, different content in decision making and planning and new working methods. Moreover, the call for sustainable development has been a major driving force towards an increasingly multi-stakeholder planning system. This situation requires competence in working in, and managing, groups of actors, including not only experts and project owners but also other categories of stakeholders. Among other qualities, such competence requires a working strategy aimed at integrating various, and sometimes incommensurable, forms of knowledge to construct a relevant and valid knowledge base prior to decision making. Consequently, there lies great potential in methods that facilitate the evaluation of strategies for infrastructural development across multiple knowledge areas, so-called multi-criteria decision aids (MCDAs). In the present article, observations from six case studies are discussed, where the common denominators are infrastructural planning, multi-stakeholder participation and the use of MCDAs as interactive decision support. Three MCDAs are discussed - NAIADE, SCA and STRAD - with an emphasis on how they function in their procedural context. Accordingly, this is not an analysis of MCDA algorithms, of software programming aspects or of MCDAs as context-independent 'decision machines'; the focus is on MCDAs as actor systems, not as expert systems. The analysis is carried out across four main themes: (a) symmetrical management of different forms of knowledge; (b) management of heterogeneity, pluralism and conflict; (c) functionality and ease of use; and (d) transparency and trust. It shows that STRAD, by far, seems to be the most useful MCDA in interactive settings. NAIADE and SCA are roughly equivalent but have their strengths and weaknesses in different areas. Moreover, it was found that some MCDA issues require further attention, namely transparency and understandability; qualitative/quantitative knowledge input; switching between different modes of weighting; software flexibility; as well as graphic and user interfaces.
NASA Astrophysics Data System (ADS)
Erickson, Kyle J.; Ross, Timothy D.
2007-04-01
Decision-level fusion is an appealing extension to automatic/assisted target recognition (ATR) as it is a low-bandwidth technique bolstered by a strong theoretical foundation that requires no modification of the source algorithms. Despite the relative simplicity of decision-level fusion, there are many options for fusion application and fusion algorithm specifications. This paper describes a tool that allows trade studies and optimizations across these many options, by feeding an actual fusion algorithm via models of the system environment. Models and fusion algorithms can be specified and then exercised many times, with accumulated results used to compute performance metrics such as probability of correct identification. Performance differences between the best of the contributing sources and the fused result constitute examples of "gain." The tool, constructed as part of the Fusion for Identifying Targets Experiment (FITE) within the Air Force Research Laboratory (AFRL) Sensors Directorate ATR Thrust, finds its main use in examining the relationships among conditions affecting the target, prior information, fusion algorithm complexity, and fusion gain. ATR as an unsolved problem provides the main challenges to fusion in its high cost and relative scarcity of training data, its variability in application, the inability to produce truly random samples, and its sensitivity to context. This paper summarizes the mathematics underlying decision-level fusion in the ATR domain and describes a MATLAB-based architecture for exploring the trade space thus defined. Specific dimensions within this trade space are delineated, providing the raw material necessary to define experiments suitable for multi-look and multi-sensor ATR systems.
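As a toy model of the trade studies described, the sketch below fuses two synthetic sources at the decision level under an independence assumption (posteriors multiplied and renormalized) and reports the "gain" of the fused result over the best single source. The source models are invented and far simpler than FITE's.

```python
# Minimal decision-level fusion sketch: each source reports a posterior over
# target classes; fusion multiplies the posteriors and renormalizes (assumes
# source independence). "Gain" = fused accuracy minus best source accuracy.
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_trials = 4, 5000

def simulate_source(true_cls, sharpness):
    # draw a noisy posterior that tends to peak at the true class
    logits = rng.normal(size=n_classes)
    logits[true_cls] += sharpness
    p = np.exp(logits)
    return p / p.sum()

correct = np.zeros(3)  # source A, source B, fused
for _ in range(n_trials):
    truth = rng.integers(n_classes)
    pa = simulate_source(truth, 1.5)
    pb = simulate_source(truth, 1.0)
    fused = pa * pb
    fused /= fused.sum()
    for i, p in enumerate((pa, pb, fused)):
        correct[i] += p.argmax() == truth

acc = correct / n_trials
print("accuracies A, B, fused:", acc)
print("fusion gain over best source:", acc[2] - acc[:2].max())
```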
Modeling paradigms for medical diagnostic decision support: a survey and future directions.
Wagholikar, Kavishwar B; Sundararajan, Vijayraghavan; Deshpande, Ashok W
2012-10-01
Use of computer-based decision tools to aid clinical decision making has been a primary goal of research in biomedical informatics. Research in the last five decades has led to the development of Medical Decision Support (MDS) applications using a variety of modeling techniques, for a diverse range of medical decision problems. This paper surveys literature on modeling techniques for diagnostic decision support, with a focus on decision accuracy. Trends and shortcomings of research in this area are discussed and future directions are provided. The authors suggest that: (i) improvement in the accuracy of MDS applications may be possible through modeling of vague and temporal data, research on inference algorithms, integration of patient information from diverse sources, and improvement in gene profiling algorithms; (ii) MDS research would be facilitated by public release of de-identified medical datasets and development of open-source data-mining tool kits; (iii) comparative evaluations of different modeling techniques are required to understand characteristics of the techniques, which can guide developers in the choice of technique for a particular medical decision problem; and (iv) evaluations of MDS applications in clinical settings are necessary to foster physicians' utilization of these decision aids.
Schoorel, E N C; Vankan, E; Scheepers, H C J; Augustijn, B C C; Dirksen, C D; de Koning, M; van Kuijk, S M J; Kwee, A; Melman, S; Nijhuis, J G; Aardenburg, R; de Boer, K; Hasaart, T H M; Mol, B W J; Nieuwenhuijze, M; van Pampus, M G; van Roosmalen, J; Roumen, F J M E; de Vries, R; Wouters, M G A J; van der Weijden, T; Hermens, R P M G
2014-01-01
Objective: To develop a patient decision aid (PtDA) for mode of delivery after caesarean section that integrates personalised prediction of vaginal birth after caesarean (VBAC) with the elicitation of patient preferences and evidence-based information. Design: A PtDA was developed and pilot tested using the International Patient Decision Aid Standards (IPDAS) criteria. Setting: Obstetric health care in the Netherlands. Population: A multidisciplinary steering group, an expert panel, and 25 future users of the PtDA, i.e. women with a previous caesarean section. Methods: The development consisted of a construction phase (definition of scope and purpose, and selection of content, framework, and format) and a pilot-testing phase by interview. The process was supervised by a multidisciplinary steering group. Main outcome measures: Usability, clarity, and relevance. Results: The construction phase resulted in a booklet including unbiased, balanced information on mode of birth after caesarean section, a preference elicitation exercise, and tailored risk information, including a prediction model for successful VBAC. During pilot testing, visualisation of risks and clarity formed the main basis for revisions. Pilot testing showed the availability of tailored structured information to be the main factor involving women in decision-making. The PtDA meets 39 out of 50 IPDAS criteria (78%): 23 out of 23 criteria for content (100%) and 16 out of 20 criteria for the development process (80%). Criteria for effectiveness (n = 7) were not evaluated. Conclusions: An evidence-based PtDA was developed, with the probability of successful VBAC and the availability of structured information as key items. It is likely that the PtDA enhances the quality of decision-making on mode of birth after caesarean section. © 2013 Royal College of Obstetricians and Gynaecologists.
Remote Sensing Applications to Water Quality Management in Florida
NASA Astrophysics Data System (ADS)
Lehrter, J. C.; Schaeffer, B. A.; Hagy, J.; Spiering, B.; Barnes, B.; Hu, C.; Le, C.; McEachron, L.; Underwood, L. W.; Ellis, C.; Fisher, B.
2013-12-01
Optical datasets from estuarine and coastal systems are increasingly available for remote sensing algorithm development, validation, and application. With validated algorithms, the data streams from satellite sensors can provide unprecedented spatial and temporal data for local and regional coastal water quality management. Our presentation will highlight two recent applications of optical data and remote sensing to water quality decision-making in coastal regions of the state of Florida: (1) informing the development of estuarine and coastal nutrient criteria for the state of Florida and (2) informing the rezoning of the Florida Keys National Marine Sanctuary. These efforts involved building up the underlying science to demonstrate the applicability of satellite data as well as an outreach component to educate decision-makers about the use, utility, and uncertainties of remote sensing data products. Scientific developments included testing existing algorithms and generating new algorithms for water clarity and chlorophyll-a in case II (CDOM- or turbidity-dominated) estuarine and coastal waters and demonstrating the accuracy of remote sensing data products in comparison to traditional field-based measurements. Including members from decision-making organizations on the research team and interacting with decision-makers early and often in the process were key factors for the success of the outreach efforts and the eventual adoption of satellite data into the data records and analyses used in decision-making. [Figure captions: Florida coastal water bodies (black boxes) for which remote sensing imagery was applied to derive numeric nutrient criteria, and in situ observations (black dots) used to validate imagery; Florida ocean color applied to the development of numeric nutrient criteria.]
A fuzzy Petri-net-based mode identification algorithm for fault diagnosis of complex systems
NASA Astrophysics Data System (ADS)
Propes, Nicholas C.; Vachtsevanos, George
2003-08-01
Complex dynamical systems such as aircraft, manufacturing systems, chillers, motor vehicles, submarines, etc. exhibit continuous and event-driven dynamics. These systems undergo several discrete operating modes from startup to shutdown. For example, a certain shipboard system may be operating at half load or full load or may be at start-up or shutdown. Of particular interest are extreme or "shock" operating conditions, which tend to severely impact fault diagnosis or the progression of a fault leading to a failure. Fault conditions are strongly dependent on the operating mode. Therefore, it is essential that in any diagnostic/prognostic architecture, the operating mode be identified as accurately as possible so that such functions as feature extraction, diagnostics, prognostics, etc. can be correlated with the predominant operating conditions. This paper introduces a mode identification methodology that incorporates both time- and event-driven information about the process. A fuzzy Petri net is used to represent the possible successive mode transitions and to detect events from processed sensor signals signifying a mode change. The operating mode is initialized and verified by analysis of the time-driven dynamics through a fuzzy logic classifier. An evidence combiner module is used to combine the results from both the fuzzy Petri net and the fuzzy logic classifier to determine the mode. Unlike most event-driven mode identifiers, this architecture will provide automatic mode initialization through the fuzzy logic classifier and robustness through the combining of evidence of the two algorithms. The mode identification methodology is applied to an AC Plant typically found as a component of a shipboard system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakowatz, C.V. Jr.; Wahl, D.E.; Thompson, P.A.
1996-12-31
Wavefront curvature defocus effects can occur in spotlight-mode SAR imagery when reconstructed via the well-known polar formatting algorithm (PFA) under certain scenarios that include imaging at close range, use of very low center frequency, and/or imaging of very large scenes. The range migration algorithm (RMA), also known as seismic migration, was developed to accommodate these wavefront curvature effects. However, the along-track upsampling of the phase history data required of the original version of range migration can in certain instances represent a major computational burden. A more recent version of migration processing, the Frequency Domain Replication and Downsampling (FReD) algorithm, obviates the need to upsample, and is accordingly more efficient. In this paper the authors demonstrate that the combination of traditional polar formatting with appropriate space-variant post-filtering for refocus can be as efficient or even more efficient than FReD under some imaging conditions, as demonstrated by the computer-simulated results in this paper. The post-filter can be pre-calculated from a theoretical derivation of the curvature effect. The conclusion is that the new polar formatting with post filtering algorithm (PF2) should be considered as a viable candidate for a spotlight-mode image formation processor when curvature effects are present.
Sparse principal component analysis in medical shape modeling
NASA Astrophysics Data System (ADS)
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
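A minimal sketch of the proposed reordering, using scikit-learn's SparsePCA as a stand-in for the article's algorithm: modes are sorted by the variance of the data projected onto each sparse loading.

```python
# Sketch of the reordering idea: compute sparse loadings, then order the
# modes by decreasing variance of the projected data. scikit-learn's
# SparsePCA is a stand-in for the article's algorithm; data are synthetic.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
X -= X.mean(axis=0)

spca = SparsePCA(n_components=5, alpha=0.5, random_state=0)
scores = spca.fit_transform(X)          # data projected onto sparse modes
variances = scores.var(axis=0)
order = np.argsort(variances)[::-1]     # decreasing-variance ordering
loadings = spca.components_[order]
print("mode variances (sorted):", variances[order])
```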
Security Criteria for Distributed Systems: Functional Requirements.
1995-09-01
Open Company Limited. Ziv, J. and A. Lempel. 1977. A Universal Algorithm for Sequential Data Compression. IEEE Transactions on Information Theory Vol...3, SCF-5 DCF-7. Configurable Cryptographic Algorithms (a) It shall be possible to configure the system such that the data confidentiality functions...use different cryptographic algorithms for different protocols (e.g., mail or interprocess communication data). (b) The modes of encryption
A Comparison of Hybrid Approaches for Turbofan Engine Gas Path Fault Diagnosis
NASA Astrophysics Data System (ADS)
Lu, Feng; Wang, Yafan; Huang, Jinquan; Wang, Qihang
2016-09-01
A hybrid diagnostic method utilizing an Extended Kalman Filter (EKF) and an Adaptive Genetic Algorithm (AGA) is presented for performance degradation estimation and sensor anomaly detection in turbofan engines. The EKF is used to estimate engine component performance degradation for gas path fault diagnosis. The AGA is introduced into the integrated architecture and applied to sensor bias detection. The contributions of this work are the comparisons of Kalman Filter (KF)-AGA algorithms and Neural Network (NN)-AGA algorithms within a unified framework for gas path fault diagnosis. The NN needs to be trained off-line with a large amount of prior fault mode data. When a new fault mode occurs, the estimation accuracy of the NN decreases markedly, whereas the application of the Linearized Kalman Filter (LKF) and EKF is not restricted in such cases. The crossover factor and the mutation factor are adapted to the fitness function at each generation in the AGA, and it takes less time to search for the optimal sensor bias value than the Genetic Algorithm (GA). We conclude that the hybrid EKF-AGA algorithm is the best choice for gas path fault diagnosis of turbofan engines among the algorithms discussed.
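The sketch below illustrates the adaptive-GA ingredient only, on a toy sensor-bias problem: mutation strength is adapted from each child's fitness relative to the population. The adaptation law and all constants are illustrative, not the paper's.

```python
# Toy adaptive-GA sketch for sensor-bias estimation: mutation strength adapts
# to each child's fitness relative to the population (illustrative law only).
import numpy as np

rng = np.random.default_rng(3)
true_bias = 0.8
measurements = 5.0 + true_bias + 0.05 * rng.normal(size=50)
reference = 5.0  # e.g., the model-predicted sensor value

def fitness(bias):
    return -np.mean((measurements - reference - bias) ** 2)

pop = rng.uniform(-2.0, 2.0, size=40)
for _ in range(60):
    f = np.array([fitness(b) for b in pop])
    parents = pop[f.argsort()[::-1]][:20]        # keep the fittest half
    f_max, f_mean = f.max(), f.mean()
    children = []
    for _ in range(20):
        a, b = rng.choice(parents, size=2)
        child = 0.5 * (a + b)                    # arithmetic crossover
        # adaptive mutation: weak for near-optimal children, strong otherwise
        raw = 0.3 * max(f_max - fitness(child), 0.0) / (f_max - f_mean + 1e-12)
        children.append(child + rng.normal(0.0, float(np.clip(raw, 1e-3, 0.5))))
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(b) for b in pop])]
print(f"estimated bias: {best:.3f} (true {true_bias})")
```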
NASA Astrophysics Data System (ADS)
Datta, Arjun
2018-03-01
We present a suite of programs that implement decades-old algorithms for computation of seismic surface wave reflection and transmission coefficients at a welded contact between two laterally homogeneous quarter-spaces. For Love as well as Rayleigh waves, the algorithms are shown to be capable of modelling multiple mode conversions at a lateral discontinuity, which was not shown in the original publications or in the subsequent literature. Only normal incidence at a lateral boundary is considered, so there is no Love-Rayleigh coupling, but incidence of any mode and coupling to any (other) mode can be handled. The code is written in Python and makes use of SciPy's Simpson's rule integrator and NumPy's linear algebra solver for its core functionality. Transmission-side results from this code are found to be in good agreement with those from finite-difference simulations. In today's research environment of extensive computing power, the coded algorithms are arguably redundant, but SWRT can be used as a valuable testing tool for the ever-evolving numerical solvers of seismic wave propagation. SWRT is available via GitHub (https://github.com/arjundatta23/SWRT.git).
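The abstract names SciPy's Simpson integrator and NumPy's linear solver as the computational core. The schematic below shows that pattern on invented depth eigenfunctions: Simpson-rule overlap integrals fill a linear system whose solution plays the role of transmission coefficients. It is not SWRT's actual formulation.

```python
# Schematic of the numerical core described in the abstract: modal overlap
# integrals (Simpson's rule) feed a linear system solved for coefficient
# vectors. Eigenfunctions are toy stand-ins, not surface-wave eigenfunctions.
import numpy as np
from scipy.integrate import simpson  # named "simps" in very old SciPy

z = np.linspace(0.0, 100.0, 401)     # depth grid
n_modes = 3
# toy depth eigenfunctions for the incidence and transmission media
phi_in = np.array([np.exp(-(m + 1.0) * z / 50.0) for m in range(n_modes)])
phi_out = np.array([np.exp(-(m + 1.3) * z / 50.0) for m in range(n_modes)])

# Gram-type matrix of depth-overlap integrals between transmitted modes
A = np.array([[simpson(phi_out[i] * phi_out[j], x=z) for j in range(n_modes)]
              for i in range(n_modes)])
# right-hand side: overlap of the incident fundamental mode with each mode
b = np.array([simpson(phi_out[i] * phi_in[0], x=z) for i in range(n_modes)])

coeffs = np.linalg.solve(A, b)       # stand-in "transmission coefficients"
print("toy transmission coefficients:", coeffs)
```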
Mitigation of crosstalk based on CSO-ICA in free space orbital angular momentum multiplexing systems
NASA Astrophysics Data System (ADS)
Xing, Dengke; Liu, Jianfei; Zeng, Xiangye; Lu, Jia; Yi, Ziyao
2018-09-01
Orbital angular momentum (OAM) multiplexing has attracted considerable attention and research in recent years because of its great spectral efficiency, and many free-space OAM systems have been demonstrated. However, due to atmospheric turbulence, the power of OAM beams diffuses to beams with neighboring topological charges, and inter-mode crosstalk emerges in these systems, rendering them unusable in severe cases. In this paper, we introduce independent component analysis (ICA), a popular method of signal separation, to mitigate inter-mode crosstalk effects; furthermore, to address the fixed iteration step size of the traditional ICA algorithm, we propose a joint algorithm, CSO-ICA, which improves the search for the separation matrix by exploiting the fast convergence rate and high convergence precision of chicken swarm optimization (CSO). The optimal separation matrix is obtained by adjusting the step size according to the previous iteration in CSO-ICA. Simulation results indicate that the proposed algorithm performs well in inter-mode crosstalk mitigation: the optical signal-to-noise ratio (OSNR) requirement of the received signals (OAM+2, OAM+4, OAM+6, OAM+8) is reduced by about 3.2 dB at a bit error ratio (BER) of 3.8 × 10⁻³. Meanwhile, convergence is much faster than the traditional ICA algorithm, reducing the number of iterations by about an order of magnitude.
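A stand-in sketch of the separation step using scikit-learn's FastICA on synthetic crosstalk-mixed channels; the CSO-driven search for the separation matrix is not reproduced here.

```python
# Stand-in sketch: plain FastICA unmixing crosstalk-corrupted channels,
# standing in for the paper's CSO-tuned separation-matrix search.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 4000)
# three toy "channel" waveforms standing in for demultiplexed OAM signals
s = np.vstack([np.sign(np.sin(2 * np.pi * f * t)) for f in (40, 90, 150)]).T
A = rng.uniform(0.2, 1.0, size=(3, 3))   # crosstalk mixing matrix (invented)
x = s @ A.T                               # observed, crosstalk-corrupted channels

ica = FastICA(n_components=3, random_state=0)
s_hat = ica.fit_transform(x)              # recovered sources (up to order/scale)
print("recovered shape:", s_hat.shape)
```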
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second-order statistics in analyzing high dimensional data is recognized. By investigating the characteristics of high dimensional data, we suggest why second-order statistics must be taken into account in high dimensional data. Given this importance, there is a need to represent the second-order statistics. A method to visualize statistics using a color code is proposed. By representing statistics using color coding, one can easily extract and compare the first- and second-order statistics.
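A toy version of the multistage truncation idea: at each stage a cheap per-class score is updated, and classes falling more than a fixed log-likelihood margin below the leader are eliminated, so later stages consider fewer classes. Scores, features, and the threshold are invented.

```python
# Toy multistage classification with class pruning: classes whose running
# log-posterior trails the leader by more than a margin are truncated.
import numpy as np
from scipy.stats import norm

n_classes = 10
means = np.linspace(-3.0, 3.0, n_classes)   # invented per-class feature means
x_stages = [0.4, 0.5, 0.45]                 # one scalar feature per stage (toy)

alive = np.arange(n_classes)
log_post = np.zeros(n_classes)
for x in x_stages:
    log_post[alive] += norm.logpdf(x, loc=means[alive], scale=1.0)
    rel = log_post[alive] - log_post[alive].max()
    alive = alive[rel > -4.0]               # truncation criterion (margin -4)
    print("classes kept:", alive)

print("decision:", alive[np.argmax(log_post[alive])])
```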
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogenous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
An algorithm for the design and tuning of RF accelerating structures with variable cell lengths
NASA Astrophysics Data System (ADS)
Lal, Shankar; Pant, K. K.
2018-05-01
An algorithm is proposed for the design of a π mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach for the design of the structure with desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure employing a freely available electromagnetic code like SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve desired RF parameters for the structure, which has been qualified by the successful tuning of a 7-cell buncher to π mode frequency of 2856 MHz with field flatness <3% and RF coupling coefficient close to unity. The proposed design algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure for desired RF parameters with a relatively relaxed machining tolerance of ∼ 25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.
Structural system identification based on variational mode decomposition
NASA Astrophysics Data System (ADS)
Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.
2018-03-01
In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on modal response data. Finally, after extracting modal responses from available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
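The sketch below reproduces only the damping-identification step on a synthetic single-mode response (the VMD decomposition itself is not shown): the log of the analytic-signal envelope is fit with a line, and the damping ratio follows from the decay rate.

```python
# Damping estimation from one decomposed modal response: fit a line to the
# log of the Hilbert envelope; the damping ratio follows from the decay rate.
# The modal response below is a synthetic decaying sinusoid.
import numpy as np
from scipy.signal import hilbert

fs, fn, zeta = 200.0, 5.0, 0.02          # sample rate, modal freq, true damping
t = np.arange(0.0, 10.0, 1.0 / fs)
wn = 2.0 * np.pi * fn
y = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)

env = np.abs(hilbert(y))
mask = (t > 0.2) & (t < 8.0)             # avoid end effects of the transform
slope, _ = np.polyfit(t[mask], np.log(env[mask]), 1)
zeta_hat = -slope / wn                   # since envelope ~ exp(-zeta*wn*t)
print(f"identified damping ratio: {zeta_hat:.4f} (true {zeta})")
```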
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
NASA Astrophysics Data System (ADS)
Jing, Hailong; Su, Xianyu; You, Zhisheng
2017-03-01
A uniaxial three-dimensional shape measurement system with multioperation modes for different modulation algorithms is proposed. To provide a general measurement platform that satisfies the specific measurement requirements in different application scenarios, a measuring system with multioperation modes based on modulation measuring profilometry (MMP) is presented. Unlike previous solutions, vertical scanning by focusing control of an electronic focus (EF) lens is implemented. The projection of a grating pattern is based on a digital micromirror device, which enables fast phase-shifting with high precision. A field programmable gate array-based master control center board acts as the coordinator of the MMP system; it harmonizes the workflows, such as grating projection, focusing control of the EF lens, and fringe pattern capture. Fourier transform, the phase-shifting technique, and the temporal Fourier transform are used for modulation analysis in different operation modes. The proposed system features focusing control, speed, programmability, compactness, and availability. This paper details the principle of MMP for multioperation modes and the design of the proposed system. The performances of different operation modes are analyzed and compared, and a workpiece with steep holes is measured to verify this multimode MMP system.
Technology transfer by means of fault tree synthesis
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.
2012-12-01
Since Fault Tree Analysis (FTA) attempts to model and analyze failure processes of engineering systems, it forms a common technique for good industrial practice. By contrast, fault tree synthesis (FTS) refers to the methodology of constructing complex trees either from dendritic modules built ad hoc or from fault trees already used and stored in a Knowledge Base. In both cases, technology transfer takes place in a quasi-inductive mode, from partial to holistic knowledge. In this work, an algorithmic procedure, including 9 activity steps and 3 decision nodes, is developed for performing this transfer effectively when the fault under investigation occurs within one of the latter stages of an industrial procedure with several stages in series. The main parts of the algorithmic procedure are: (i) the construction of a local fault tree within the corresponding production stage, where the fault has been detected, (ii) the formation of an interface made of input faults that might occur upstream, (iii) the fuzzy (to account for uncertainty) multicriteria ranking of these faults according to their significance, and (iv) the synthesis of an extended fault tree based on the construction of part (i) and on the local fault tree of the first-ranked fault in part (iii). An implementation is presented, referring to 'uneven sealing of Al anodic film', thus proving the functionality of the developed methodology.
Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano
2007-11-01
We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
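As a rough sketch of the distance ingredient, the snippet below builds a kernel between two non-Gaussian input distributions from a 1-D optimal-transport cost, with SciPy's Wasserstein distance standing in for the paper's transport solver and kernel construction.

```python
# Sketch of one ingredient only: a kernel between two (possibly non-Gaussian)
# input distributions derived from an optimal-transport cost. SciPy's 1-D
# Wasserstein distance stands in for the paper's transport-based kernel.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(5)
u = rng.standard_t(df=3, size=2000)      # heavy-tailed input samples, model 1
v = rng.normal(size=2000)                # Gaussian input samples, model 2

w = wasserstein_distance(u, v)
kernel = np.exp(-w)                      # a positive similarity from the cost
print(f"W1 = {w:.3f}, kernel value = {kernel:.3f}")
```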
NASA Astrophysics Data System (ADS)
Wang, Ping; Wu, Guangqiang
2013-03-01
Multidisciplinary design optimization (MDO) has gradually been adopted to balance lightweight design, noise, vibration and harshness (NVH) performance, and safety for instrument panel (IP) structures in automotive development. Nevertheless, the plastic constitutive relation of polypropylene (PP) under different strain rates has not been taken into consideration in current reliability-based and collaborative IP MDO design. In this paper, the constitutive relation of polypropylene is studied based on tensile tests under different strain rates. Impact simulations for the head and knee bolster are carried out to meet the requirements of FMVSS 201 and FMVSS 208, respectively. NVH analysis is performed to obtain the natural frequencies and corresponding mode shapes, while crashworthiness analysis is employed to examine the crash behavior of the IP structure. With lightweight design, NVH, and head and knee bolster impact performance taken into account, design of experiments (DOE), response surface models (RSM), and collaborative optimization (CO) are applied to realize the deterministic and reliability-based optimizations, respectively. Furthermore, based on a multi-objective genetic algorithm (MOGA), the optimal Pareto sets are computed to solve the multi-objective optimization (MOO) problem. The proposed approach ensures the smoothness of the Pareto set, enhances the ability of engineers to make a comprehensive decision about multiple objectives and choose the optimal design, and improves the quality and efficiency of MDO.
A Theoretical Analysis of Why Hybrid Ensembles Work.
Hsu, Kuo-Wei
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use the mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting using different algorithms to accuracy gain. We also conduct experiments on classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
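A minimal example of a hybrid ensemble in the paper's sense, mixing the two named algorithms via soft voting in scikit-learn:

```python
# Hybrid ensemble sketch: a committee mixing a decision tree and naive Bayes,
# the two algorithm families named in the paper, combined by soft voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=15, random_state=0)
hybrid = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",   # average predicted class probabilities
)
print("hybrid CV accuracy:", cross_val_score(hybrid, X, y, cv=5).mean())
```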
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and that the new pheromone updating strategy exhibits good time performance in the optimization process.
Automated Identification of MHD Mode Bifurcation and Locking in Tokamaks
NASA Astrophysics Data System (ADS)
Riquezes, J. D.; Sabbagh, S. A.; Park, Y. S.; Bell, R. E.; Morton, L. A.
2017-10-01
Disruption avoidance is critical in reactor-scale tokamaks such as ITER to maintain steady plasma operation and avoid damage to device components. A key physical event chain that leads to disruptions is the appearance of rotating MHD modes, their slowing by resonant field drag mechanisms, and their locking. An algorithm has been developed that automatically detects bifurcation of the mode toroidal rotation frequency due to loss of torque balance under resonant braking, and mode locking for a set of shots using spectral decomposition. The present research examines data from NSTX, NSTX-U and KSTAR plasmas which differ significantly in aspect ratio (ranging from A = 1.3 - 3.5). The research aims to examine and compare the effectiveness of different algorithms for toroidal mode number discrimination, such as phase matching and singular value decomposition approaches, and to examine potential differences related to machine aspect ratio (e.g. mode eigenfunction shape variation). Simple theoretical models will be compared to the dynamics found. Main goals are to detect or potentially forecast the event chain early during a discharge. This would serve as a cue to engage active mode control or a controlled plasma shutdown. Supported by US DOE Contracts DE-SC0016614 and DE-AC02-09CH11466.
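A toy sketch of the detection idea on a synthetic spin-down signal: the dominant frequency is tracked in a spectrogram and locking is flagged when it falls below a threshold. The signal model, threshold, and all constants are invented, and no toroidal mode number discrimination is attempted.

```python
# Synthetic sketch: track the dominant magnetics frequency in a spectrogram
# and flag locking when it drops below a threshold (all values invented).
import numpy as np
from scipy.signal import spectrogram

fs = 20000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f_inst = 3000.0 * np.exp(-4.0 * t)            # mode spinning down
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs
sig = np.sin(phase) + 0.1 * np.random.default_rng(6).normal(size=t.size)

f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=1024)
dominant = f[np.argmax(Sxx, axis=0)]          # ridge of the spectrogram
lock_times = tt[dominant < 200.0]             # "locked" below 200 Hz (toy)
print("lock flagged near t =", lock_times[0] if lock_times.size else None)
```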
Think, blink or sleep on it? The impact of modes of thought on complex decision making.
Newell, Ben R; Wong, Kwan Yao; Cheung, Jeremy C H; Rakow, Tim
2009-04-01
This paper examines controversial claims about the merit of "unconscious thought" for making complex decisions. In four experiments, participants were presented with complex decisions and were asked to choose the best option immediately, after a period of conscious deliberation, or after a period of distraction (said to encourage "unconscious thought processes"). In all experiments the majority of participants chose the option predicted by their own subjective attribute weighting scores, regardless of the mode of thought employed. There was little evidence for the superiority of choices made "unconsciously", but some evidence that conscious deliberation can lead to better choices. The final experiment suggested that the task is best conceptualized as one involving "online judgement" rather than one in which decisions are made after periods of deliberation or distraction. The results suggest that we should be cautious in accepting the advice to "stop thinking" about complex decisions.
Combined monitoring, decision and control model for the human operator in a command and control desk
NASA Technical Reports Server (NTRS)
Muralidharan, R.; Baron, S.
1978-01-01
A report is given on ongoing efforts to model the human operator in the context of the enroute/return phases of the ground-based control of multiple flights of remotely piloted vehicles (RPV). The approach employed here uses models that have their analytical bases in control theory and in statistical estimation and decision theory. In particular, it draws heavily on the models and concepts of the optimal control model (OCM) of the human operator. The OCM is being extended into a combined monitoring, decision, and control model (DEMON) of the human operator by infusing decision-theoretic notions that make it suitable for application to problems in which human control actions are infrequent and in which monitoring and decision-making are the operator's main activities. Some results obtained with a specialized version of DEMON for the RPV control problem are included.
Jungreuthmayer, Christian; Ruckerbauer, David E.; Gerstl, Matthias P.; Hanscho, Michael; Zanghellini, Jürgen
2015-01-01
Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information into the analysis of metabolic networks. Taking into account gene regulation dramatically reduces the solution space and allows the presented algorithm to constantly eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, computational costs, such as runtime, memory usage, and disk space, are extremely reduced. Moreover, we show that the application of transcriptional rules identifies non-trivial system-wide effects on metabolism. Using the presented algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new and much higher limits without the loss of predictive quality. This makes unbiased, system-wide predictions in large scale metabolic networks possible without resorting to any optimization principle. PMID:26091045
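A toy illustration of the pruning idea: candidate modes are discarded as soon as their reaction support violates a Boolean transcriptional rule. Reaction names and the rule are invented.

```python
# Toy sketch of regulatory pruning: flux modes whose reaction support
# violates a Boolean transcriptional rule are eliminated early.
candidate_modes = [
    {"rxnA", "rxnC"},
    {"rxnA", "rxnB"},          # violates the rule below
    {"rxnB", "rxnC"},
]

def satisfies_rules(support):
    # invented rule: rxnB's gene is repressed whenever rxnA is active
    if "rxnA" in support and "rxnB" in support:
        return False
    return True

feasible = [m for m in candidate_modes if satisfies_rules(m)]
print("feasible modes:", feasible)
```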
Quantum lattice representations for vector solitons in external potentials
NASA Astrophysics Data System (ADS)
Vahala, George; Vahala, Linda; Yepez, Jeffrey
2006-03-01
A quantum lattice algorithm is developed to examine the effect of an external potential well on exactly integrable vector Manakov solitons. It is found that the exact solutions to the coupled nonlinear Schrodinger equations act like quasi-solitons in weak potentials, leading to mode-locking, trapping and untrapping. Stronger potential wells will lead to the emission of radiation modes from the quasi-soliton initial conditions. If the external potential is applied to that particular mode polarization, then the radiation will be trapped within the potential well. The algorithm developed leads to a finite difference scheme that is unconditionally stable. The Manakov system in an external potential is very closely related to the Gross-Pitaevskii equation for the ground state wave functions of a coupled BEC state at T=0 K.
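The paper's quantum lattice scheme is not reproduced here; as a hedged numerical sketch of the same physics, the snippet below integrates the Manakov system with a potential well applied to one polarization using a standard split-step Fourier method.

```python
# Hedged sketch: split-step Fourier integration of the Manakov system with an
# external potential well on one polarization. This is a swapped-in standard
# method, not the paper's quantum lattice / finite-difference scheme.
import numpy as np

n, L = 512, 40.0
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
dt, steps = 0.002, 2000

V = -2.0 * np.exp(-x**2)                      # well acting on component 1 only
psi1 = 1.0 / np.cosh(x) * np.exp(0.3j * x)    # moving soliton component
psi2 = 0.5 / np.cosh(x)

disp = np.exp(-0.5j * k**2 * dt)              # dispersion propagator
for _ in range(steps):
    # nonlinear + potential phase rotation in real space, from
    # i dpsi/dt = -0.5 psi_xx + V psi - (|psi1|^2 + |psi2|^2) psi
    I = np.abs(psi1)**2 + np.abs(psi2)**2
    psi1 *= np.exp(1j * (I - V) * dt)
    psi2 *= np.exp(1j * I * dt)
    # dispersion step in Fourier space
    psi1 = np.fft.ifft(disp * np.fft.fft(psi1))
    psi2 = np.fft.ifft(disp * np.fft.fft(psi2))

print("norms:", (np.abs(psi1)**2).sum() * dx, (np.abs(psi2)**2).sum() * dx)
```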
Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.
Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M
2016-07-14
Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.
Aziz, H. M. Abdul; Nagle, Nicholas N.; Morton, April M.; ...
2017-02-06
Here, this study finds the effects of traffic safety, walk-bike network facilities, and land use attributes on walk and bicycle mode choice decisions in New York City for home-to-work commutes. Applying the flexible econometric structure of random parameter models, we capture heterogeneity in the decision-making process and simulate scenarios considering improvements in walk-bike infrastructure such as sidewalk width and length of bike lanes. Our results indicate that increasing sidewalk width, total length of bike lanes, and the proportion of protected bike lanes will increase the likelihood of more people taking active transportation modes. This suggests that local authorities and planning agencies should invest more in building and maintaining infrastructure for pedestrians. Furthermore, improvement in traffic safety by reducing traffic crashes involving pedestrians and bicyclists will increase the likelihood of taking active transportation modes. Our results also show a positive correlation between the number of non-motorized trips by other family members and the likelihood of choosing an active transportation mode. The findings will help to make smart investment decisions in the context of building sustainable transportation systems that account for active transportation.
H₂ optimal control techniques for resistive wall mode feedback in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clement, Mitchell; Hanson, Jeremy; Bialek, Jim
DIII-D experiments show that a new, advanced algorithm improves resistive wall mode (RWM) stability control in high performance discharges using external coils. DIII-D can excite strong, locked or nearly locked external kink modes whose rotation frequencies and growth rates are on the order of the magnetic flux diffusion time of the vacuum vessel wall. The VALEN RWM model has been used to gauge the effectiveness of RWM control algorithms in tokamaks. Simulations and experiments have shown that modern control techniques like Linear Quadratic Gaussian (LQG) control will perform better, using 77% less current, than classical techniques when using control coils external to DIII-D's vacuum vessel. Experiments were conducted to develop control of a rotating n = 1 perturbation using an LQG controller derived from VALEN and external coils. Feedback using this LQG algorithm outperformed a proportional-gain-only controller in these perturbation experiments over a range of frequencies. Results from high-βN experiments also show that advanced feedback techniques using external control coils may be as effective as internal control coil feedback using classical control techniques.
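A minimal sketch of the LQG construction on a toy unstable mode (not the VALEN model): the LQR and Kalman gains follow from the two algebraic Riccati equations via SciPy. All system matrices are invented single-state stand-ins.

```python
# Minimal LQG construction sketch on a toy unstable mode (not VALEN):
# LQR and Kalman gains from the two algebraic Riccati equations.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[50.0]])       # unstable RWM-like growth rate (1/s), toy value
B = np.array([[1.0]])        # coil input
C = np.array([[1.0]])        # magnetic sensor
Q, R = np.array([[10.0]]), np.array([[1.0]])    # state/control weights
W, Vn = np.array([[1.0]]), np.array([[0.1]])    # process/sensor noise

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                 # LQR feedback gain
S = solve_continuous_are(A.T, C.T, W, Vn)       # dual (filter) Riccati equation
Lg = S @ C.T @ np.linalg.inv(Vn)                # Kalman filter gain
print("LQR gain:", K, "Kalman gain:", Lg)
```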
A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems
NASA Technical Reports Server (NTRS)
Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon
2009-01-01
Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes, Effects and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators and, furthermore, the robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between them and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electro-mechanical actuators.
Chung, King
2004-01-01
This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225
HUMAN DECISIONS AND MACHINE PREDICTIONS.
Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil
2018-02-01
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals. JEL Codes: C10 (Econometric and statistical methods and methodology), C55 (Large datasets: Modeling and analysis), K40 (Legal procedure, the legal system, and illegal behavior).
HUMAN DECISIONS AND MACHINE PREDICTIONS
Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil
2018-01-01
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those the judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals. JEL Codes: C10 (Econometric and statistical methods and methodology), C55 (Large datasets: Modeling and analysis), K40 (Legal procedure, the legal system, and illegal behavior). PMID:29755141
Bayesian Decision Theoretical Framework for Clustering
ERIC Educational Resources Information Center
Chen, Mo
2011-01-01
In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view addresses two important questions: what is a cluster, and what should a clustering algorithm optimize? We prove that the spectral clustering (to be specific, the…
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. The preference information is then further incorporated into the preference vectors via the boundary intersection approach, and the MCDM problem with interval-valued fuzzy preferences is reformulated as a series of single-objective optimisation sub-problems, each corresponding to a decomposed preference vector. Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process, guiding the search towards a more promising subset of the efficient solutions that matches the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
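The decomposition step above can be made concrete. A minimal sketch, assuming a two-objective problem and an illustrative interval-valued preference on the first objective; the Tchebycheff scalarisation shown is the common MOEA/D form, standing in here for the paper's boundary intersection approach:

```python
import numpy as np

def decompose_preference(lo, hi, n):
    """Split an interval-valued preference [lo, hi] on objective 1 into
    n evenly distributed preference vectors (two-objective case)."""
    w1 = np.linspace(lo, hi, n)
    return np.stack([w1, 1.0 - w1], axis=1)

def tchebycheff(f, w, z_star):
    """Scalarising function of one single-objective sub-problem."""
    return np.max(w * np.abs(np.asarray(f) - z_star))

# Example: preference "objective 1 matters 40-70%" -> 4 sub-problems.
for w in decompose_preference(0.4, 0.7, 4):
    print(w, tchebycheff([0.3, 0.8], w, z_star=np.zeros(2)))
```

Each resulting weight vector defines one sub-problem that a decomposition-based evolutionary algorithm can then optimise in parallel within a single run.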
Yu, Xiaobing; Yu, Xianrui; Lu, Yiqun
2018-01-01
The evaluation of a meteorological disaster can be regarded as a multiple-criteria decision making problem because it involves many indexes. First, a comprehensive indexing system for agricultural meteorological disasters is proposed, which includes the disaster rate, the inundated rate, and the complete loss rate. The relative weights of the three criteria are then acquired using a novel evolutionary algorithm, which combines a differential evolution algorithm and an evolution strategy. Finally, a novel evaluation model, based on the proposed algorithm and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), is presented to estimate the agricultural meteorological disaster of 2008 in China. The geographic information system (GIS) technique is employed to depict the disaster. The experimental results demonstrated that the agricultural meteorological disaster of 2008 was very serious, especially in Hunan and Hubei provinces. Some useful suggestions are provided to relieve agricultural meteorological disasters. PMID:29597243
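The TOPSIS ranking step itself is short in code. A minimal sketch, assuming vector normalisation and treating all three rates as cost-type criteria; the weight vector and data are illustrative, and the evolutionary weight-derivation stage is outside this snippet:

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives: X is (m, n), w sums to 1, and benefit marks
    larger-is-better criteria (False for cost criteria such as loss rates)."""
    V = X / np.linalg.norm(X, axis=0) * w          # weighted normalised matrix
    ideal = np.where(benefit, V.max(0), V.min(0))  # positive ideal solution
    anti = np.where(benefit, V.min(0), V.max(0))   # negative ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness: higher = closer to ideal

# Hypothetical regions x (disaster, inundated, complete-loss) rates.
X = np.array([[0.30, 0.20, 0.05], [0.45, 0.35, 0.12], [0.10, 0.08, 0.01]])
print(topsis(X, w=np.array([0.5, 0.3, 0.2]), benefit=np.array([False] * 3)))
```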
Identification of significant intrinsic mode functions for the diagnosis of induction motor fault.
Cho, Sangjin; Shahriar, Md Rifat; Chong, Uipil
2014-08-01
For the analysis of non-stationary signals generated by a non-linear process like fault of an induction motor, empirical mode decomposition (EMD) is the best choice as it decomposes the signal into its natural oscillatory modes known as intrinsic mode functions (IMFs). However, some of these oscillatory modes obtained from a fault signal are not significant as they do not bear any fault signature and can cause misclassification of the fault instance. To solve this issue, a novel IMF selection algorithm is proposed in this work.
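The abstract does not state the selection criterion itself; as an assumption, the sketch below keeps only IMFs whose correlation with the raw signal exceeds a threshold, a common baseline for this kind of significance screening:

```python
import numpy as np

def select_significant_imfs(imfs, signal, threshold=0.2):
    """Keep IMFs whose absolute Pearson correlation with the raw signal
    exceeds `threshold` (a stand-in criterion; the paper's own
    significance test may differ)."""
    keep = []
    for i, imf in enumerate(imfs):
        r = np.corrcoef(imf, signal)[0, 1]
        if abs(r) >= threshold:
            keep.append(i)
    return keep

# Toy demo with two synthetic "IMFs" (in practice these would come from
# an EMD implementation such as the PyEMD package).
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 3 * t)
imfs = np.array([np.sin(2 * np.pi * 50 * t), 0.1 * np.sin(2 * np.pi * 3 * t)])
print(select_significant_imfs(imfs, signal))  # dominant component survives
```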
Application of mid-frequency ventilation in an animal model of lung injury: a pilot study.
Mireles-Cabodevila, Eduardo; Chatburn, Robert L; Thurman, Tracy L; Zabala, Luis M; Holt, Shirley J; Swearingen, Christopher J; Heulitt, Mark J
2014-11-01
Mid-frequency ventilation (MFV) is a mode of pressure control ventilation based on an optimal targeting scheme that maximizes alveolar ventilation and minimizes tidal volume (VT). This study was designed to compare the effects of conventional mechanical ventilation using a lung-protective strategy with MFV in a porcine model of lung injury. Our hypothesis was that MFV can maximize ventilation at higher frequencies without adverse consequences. We compared ventilation and hemodynamic outcomes between conventional ventilation and MFV. This was a prospective study of 6 live Yorkshire pigs (10 ± 0.5 kg). The animals were subjected to lung injury induced by saline lavage and injurious conventional mechanical ventilation. Baseline conventional pressure control continuous mandatory ventilation was applied with VT = 6 mL/kg and PEEP determined using a decremental PEEP trial. A manual decision support algorithm was used to implement MFV using the same conventional ventilator. We measured P(aCO2), P(aO2), end-tidal carbon dioxide, cardiac output, arterial and venous blood oxygen saturation, pulmonary and systemic vascular pressures, and lactic acid. The MFV algorithm produced the same minute ventilation as conventional ventilation but with lower VT (-1 ± 0.7 mL/kg) and higher frequency (32.1 ± 6.8 vs 55.7 ± 15.8 breaths/min, P < .002). There were no differences between conventional ventilation and MFV for mean airway pressures (16.1 ± 1.3 vs 16.4 ± 2 cm H2O, P = .75) even when auto-PEEP was higher (0.6 ± 0.9 vs 2.4 ± 1.1 cm H2O, P = .02). There were no significant differences in any hemodynamic measurements, although heart rate was higher during MFV. In this pilot study, we demonstrate that MFV allows the use of higher breathing frequencies and lower VT than conventional ventilation to maximize alveolar ventilation. We describe the ventilatory and hemodynamic effects of MFV. We also demonstrate that the application of a decision support algorithm to manage MFV is feasible. Copyright © 2014 by Daedalus Enterprises.
Sugimoto, Katsutoshi; Shiraishi, Junji; Moriyasu, Fuminori; Doi, Kunio
2009-04-01
To develop a computer-aided diagnostic (CAD) scheme for classifying focal liver lesions (FLLs) by use of physicians' subjective classification of echogenic patterns of FLLs on baseline and contrast-enhanced ultrasonography (US). A total of 137 hepatic lesions in 137 patients were evaluated with B-mode and NC100100 (Sonazoid)-enhanced pulse-inversion US; lesions included 74 hepatocellular carcinomas (HCCs) (23: well-differentiated, 36: moderately differentiated, 15: poorly differentiated HCCs), 33 liver metastases, and 30 liver hemangiomas. Three physicians evaluated single images at B-mode and arterial phases with a cine mode. Physicians were asked to classify each lesion into one of eight B-mode and one of eight enhancement patterns, but did not make a diagnosis. To classify five types of FLLs, we employed a decision tree model with four decision nodes and four artificial neural networks (ANNs). The results of the physicians' pattern classifications were used successively for four different ANNs in making decisions at each of the decision nodes in the decision tree model. The classification accuracies for the 137 FLLs were 84.8% for metastasis, 93.3% for hemangioma, and 98.6% for all HCCs. In addition, the classification accuracies for histological differentiation types of HCCs were 65.2% for well-differentiated HCC, 41.7% for moderately differentiated HCC, and 80.0% for poorly differentiated HCC. This CAD scheme has the potential to improve the diagnostic accuracy of liver lesions. However, the accuracy in the histologic differential diagnosis of HCC based on baseline and contrast-enhanced US is still limited.
MADM-based smart parking guidance algorithm
Li, Bo; Pei, Yijian; Wu, Hao; Huang, Dijiang
2017-01-01
In smart parking environments, how to choose suitable parking facilities with various attributes to satisfy certain criteria is an important decision issue. Based on the multiple attributes decision making (MADM) theory, this study proposed a smart parking guidance algorithm by considering three representative decision factors (i.e., walk duration, parking fee, and the number of vacant parking spaces) and various preferences of drivers. In this paper, the expected number of vacant parking spaces is regarded as an important attribute to reflect the difficulty degree of finding available parking spaces, and a queueing theory-based theoretical method was proposed to estimate this expected number for candidate parking facilities with different capacities, arrival rates, and service rates. The effectiveness of the MADM-based parking guidance algorithm was investigated and compared with a blind search-based approach in comprehensive scenarios with various distributions of parking facilities, traffic intensities, and user preferences. Experimental results show that the proposed MADM-based algorithm is effective to choose suitable parking resources to satisfy users’ preferences. Furthermore, it has also been observed that this newly proposed Markov Chain-based availability attribute is more effective to represent the availability of parking spaces than the arrival rate-based availability attribute proposed in existing research. PMID:29236698
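The availability attribute can be made concrete with a small queueing computation. A sketch assuming an Erlang-loss (M/M/C/C) view of a parking facility; the paper's Markov-chain formulation may differ in detail, and the rates below are illustrative:

```python
import math

def expected_vacant_spaces(arrival_rate, service_rate, capacity):
    """Stationary expectation of vacant spaces in an M/M/C/C facility,
    where p_n is proportional to (lambda/mu)^n / n! for n = 0..C occupied."""
    a = arrival_rate / service_rate
    weights = [a ** n / math.factorial(n) for n in range(capacity + 1)]
    total = sum(weights)
    return sum((capacity - n) * w for n, w in enumerate(weights)) / total

# Hypothetical facility: 30 arrivals/hour, mean stay of 1 hour, 40 spaces.
print(round(expected_vacant_spaces(30.0, 1.0, 40), 2))
```

Ranking candidate facilities by this expectation captures how hard it will be to find a space on arrival, which is the role the availability attribute plays in the MADM scoring.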
An exploration of crowdsourcing citation screening for systematic reviews
Mortensen, Michael L.; Adam, Gaelen P.; Trikalinos, Thomas A.; Kraska, Tim
2017-01-01
Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17 days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews. PMID:28677322
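The aggregation rules compared in the study are simple to state in code. A sketch of the most inclusive "any worker includes" rule described above, with a majority vote shown for contrast; the worker responses are hypothetical booleans (True = include):

```python
def any_include(votes):
    """Most inclusive rule: keep the citation if any worker includes it."""
    return any(votes)

def majority_include(votes):
    """Stricter rule: keep the citation only on a majority of includes."""
    return sum(votes) > len(votes) / 2

citations = {"c1": [True, False, False], "c2": [False, False, False]}
for cid, votes in citations.items():
    print(cid, any_include(votes), majority_include(votes))
```

The trade-off reported in the abstract follows directly: the inclusive rule maximises sensitivity at the cost of excluding fewer irrelevant citations.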
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro-air-vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and timestamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external timestamp processing, respectively.
NASA Astrophysics Data System (ADS)
Kharchenko, K. S.; Vitkovskii, I. L.
2014-02-01
The performance of the secondary coolant circuit rupture algorithm in different operating modes of Novovoronezh NPP Unit 5 is examined through studies on a full-scale training simulator. Shortcomings of the algorithm that cause excessive actuations of the protection are identified, and recommendations for eliminating them are outlined.
Evaluation of a treatment-based classification algorithm for low back pain: a cross-sectional study.
Stanton, Tasha R; Fritz, Julie M; Hancock, Mark J; Latimer, Jane; Maher, Christopher G; Wand, Benedict M; Parent, Eric C
2011-04-01
Several studies have investigated criteria for classifying patients with low back pain (LBP) into treatment-based subgroups. A comprehensive algorithm was created to translate these criteria into a clinical decision-making guide. This study investigated the translation of the individual subgroup criteria into a comprehensive algorithm by studying the prevalence of patients meeting the criteria for each treatment subgroup and the reliability of the classification. This was a cross-sectional, observational study. Two hundred fifty patients with acute or subacute LBP were recruited from the United States and Australia to participate in the study. Trained physical therapists performed standardized assessments on all participants. The researchers used these findings to classify participants into subgroups. Thirty-one participants were reassessed to determine interrater reliability of the algorithm decision. Based on individual subgroup criteria, 25.2% (95% confidence interval [CI]=19.8%-30.6%) of the participants did not meet the criteria for any subgroup, 49.6% (95% CI=43.4%-55.8%) of the participants met the criteria for only one subgroup, and 25.2% (95% CI=19.8%-30.6%) of the participants met the criteria for more than one subgroup. The most common combination of subgroups was manipulation + specific exercise (68.4% of the participants who met the criteria for 2 subgroups). Reliability of the algorithm decision was moderate (kappa=0.52, 95% CI=0.27-0.77, percentage of agreement=67%). Due to a relatively small patient sample, reliability estimates are somewhat imprecise. These findings provide important clinical data to guide future research and revisions to the algorithm. The finding that 25% of the participants met the criteria for more than one subgroup has important implications for the sequencing of treatments in the algorithm. Likewise, the finding that 25% of the participants did not meet the criteria for any subgroup provides important information regarding potential revisions to the algorithm's bottom table (which guides unclear classifications). Reliability of the algorithm is sufficient for clinical use.
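The reliability statistic quoted above is Cohen's kappa. For reference, a minimal computation from two raters' subgroup assignments; the labels below are toy data, not the study's:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' category labels."""
    labels = sorted(set(rater_a) | set(rater_b))
    idx = {c: i for i, c in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)))
    for a, b in zip(rater_a, rater_b):
        m[idx[a], idx[b]] += 1
    m /= m.sum()
    p_obs = np.trace(m)                     # observed agreement
    p_exp = float(m.sum(1) @ m.sum(0))      # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

a = ["manip", "stab", "manip", "exercise", "stab", "manip"]
b = ["manip", "stab", "exercise", "exercise", "manip", "manip"]
print(round(cohens_kappa(a, b), 2))
```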
Ezeome, I V; Ezugworie, J O; Udealor, P C
2018-04-01
Through the process of socialization, women and men are conditioned to behave and play different roles in society. While the African culture "rewards" women who have vaginal birth despite the cost to their health, the burden of reproductive decision-making is placed on the menfolk. However, these norms seem to be changing. Our aim was to assess the beliefs and perceptions of pregnant women about cesarean section (CS), including their views regarding decision-making on the mode of delivery, in Enugu, Southeast Nigeria. This was a cross-sectional descriptive study. A structured questionnaire was administered to 200 pregnant women following oral informed consent. Data were analyzed with the Statistical Package for the Social Sciences version 17, using descriptive statistics of frequencies and percentages. All the respondents believed that CS is done for the safety of the mother/baby. Thirteen percent rejected the procedure for themselves no matter the circumstance. Joint decision-making was the view of two-thirds of the women. The majority would accept CS if their husbands consented. Younger women were of the view that husbands decide on the delivery mode (P = 0.019). Culture remains an impediment to CS uptake. Most women preferred joint decision-making on the mode of delivery.
Huang, Vivian W; Prosser, Connie; Kroeker, Karen I; Wang, Haili; Shalapay, Carol; Dhami, Neil; Fedorak, Darryl K; Halloran, Brendan; Dieleman, Levinus A; Goodman, Karen J; Fedorak, Richard N
2015-06-01
Infliximab is an effective therapy for inflammatory bowel disease (IBD). However, more than 50% of patients lose response. Empiric dose intensification is not effective for all patients because not all patients have objective disease activity or a subtherapeutic drug level. The aim was to determine how an objective marker of disease activity or therapeutic drug monitoring affects clinical decisions regarding maintenance infliximab therapy in outpatients with IBD. Consecutive patients with IBD on maintenance infliximab therapy were invited to participate by providing preinfusion stool and blood samples. Fecal calprotectin (FCP) and infliximab trough levels (ITLs) were measured by enzyme-linked immunosorbent assay. Three decisions were compared: (1) the actual clinical decision, (2) algorithmic FCP or ITL decisions, and (3) expert panel decisions based on (a) clinical data, (b) clinical data plus FCP, and (c) clinical data plus FCP plus ITL. In a secondary analysis, receiver operating characteristic curves were used to assess the ability of FCP and ITL to predict clinical disease activity or remission. A total of 36 sets of blood and stool were available for analysis; median FCP 191.5 μg/g, median ITL 7.3 μg/mL. The actual clinical decision differed from the hypothetical decision in 47.2% (FCP algorithm); 69.4% (ITL algorithm); 25.0% (expert panel clinical decision); 44.4% (expert panel clinical plus FCP); and 58.3% (expert panel clinical plus FCP plus ITL) of cases. FCP predicted clinical relapse (area under the curve [AUC] = 0.417; 95% confidence interval [CI], 0.197-0.641) and subtherapeutic ITL (AUC = 0.774; 95% CI, 0.536-1.000). ITL predicted clinical remission (AUC = 0.498; 95% CI, 0.254-0.742) and objective remission (AUC = 0.773; 95% CI, 0.622-0.924). Using FCP and ITLs in addition to clinical data results in an increased number of decisions to optimize management in outpatients with IBD on stable maintenance infliximab therapy.
Using Decision Trees to Detect and Isolate Simulated Leaks in the J-2X Rocket Engine
NASA Technical Reports Server (NTRS)
Schwabacher, Mark A.; Aguilar, Robert; Figueroa, Fernando F.
2009-01-01
The goal of this work was to use data-driven methods to automatically detect and isolate faults in the J-2X rocket engine. It was decided to use decision trees, since they tend to be easier to interpret than other data-driven methods. The decision tree algorithm automatically "learns" a decision tree by performing a search through the space of possible decision trees to find one that fits the training data. The particular decision tree algorithm used is known as C4.5. Simulated J-2X data from a high-fidelity simulator developed at Pratt & Whitney Rocketdyne and known as the Detailed Real-Time Model (DRTM) was used to "train" and test the decision tree. Fifty-six DRTM simulations were performed for this purpose, with different leak sizes, different leak locations, and different times of leak onset. To make the simulations as realistic as possible, they included simulated sensor noise, and included a gradual degradation in both fuel and oxidizer turbine efficiency. A decision tree was trained using 11 of these simulations, and tested using the remaining 45 simulations. In the training phase, the C4.5 algorithm was provided with labeled examples of data from nominal operation and data including leaks in each leak location. From the data, it "learned" a decision tree that can classify unseen data as having no leak or having a leak in one of the five leak locations. In the test phase, the decision tree produced very low false alarm rates and low missed detection rates on the unseen data. It had very good fault isolation rates for three of the five simulated leak locations, but it tended to confuse the remaining two locations, perhaps because a large leak at one of these two locations can look very similar to a small leak at the other location.
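The training setup maps naturally onto an off-the-shelf decision tree learner. A sketch using scikit-learn's CART with the entropy criterion as a stand-in for C4.5; the sensor features, data values, and class labels are invented for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sensor snapshots: [fuel_pressure, ox_pressure, turbine_rpm].
X_train = np.array([[100, 200, 5000], [60, 200, 5000], [100, 150, 5000],
                    [100, 200, 4200], [58, 195, 4900], [99, 148, 5100]])
y_train = ["nominal", "fuel_leak", "ox_leak",
           "turbine_fault", "fuel_leak", "ox_leak"]

# Entropy-based splits approximate C4.5's information-gain criterion.
tree = DecisionTreeClassifier(criterion="entropy").fit(X_train, y_train)
print(export_text(tree, feature_names=["fuel_p", "ox_p", "rpm"]))
print(tree.predict([[59, 201, 5000]]))  # likely classified as "fuel_leak"
```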
Computed Tomography Evaluation of Esophagogastric Necrosis After Caustic Ingestion.
Chirica, Mircea; Resche-Rigon, Matthieu; Zagdanski, Anne Marie; Bruzzi, Matthieu; Bouda, Damien; Roland, Eric; Sabatier, François; Bouhidel, Fatiha; Bonnet, Francine; Munoz-Bongrand, Nicolas; Marc Gornet, Jean; Sarfati, Emile; Cattan, Pierre
2016-07-01
Endoscopy is the standard of care for emergency patient evaluation after caustic ingestion. However, the inaccuracy of endoscopy in determining the depth of intramural necrosis may lead to inappropriate decision-making with devastating consequences. Our aim was to evaluate the use of computed tomography (CT) for the emergency diagnostic workup of patients with caustic injuries. In a prospective study, we used a combined endoscopy-CT decision-making algorithm. The primary outcome was pathology-confirmed digestive necrosis. The respective utilities of CT and endoscopy in the decision-making process were compared. Transmural endoscopic necrosis was defined as grade 3b injuries; signs of transmural CT necrosis included absence of postcontrast gastric/esophageal-wall enhancement, esophageal-wall blurring, and periesophageal-fat blurring. We included 120 patients (59 men, median age 44 years). Emergency surgery was performed in 24 patients (20%) and digestive resection was completed in 16. Three patients (3%) died and 28 patients (23%) experienced complications. Pathology revealed transmural necrosis in 9/11 esophagectomy and 16/16 gastrectomy specimens. Severe oropharyngeal injuries (P = 0.015), increased levels of blood lactate (P = 0.007), alanine aminotransferase (P = 0.027), and bilirubin (P = 0.005), and low platelet counts (P < 0.0001) were predictive of digestive necrosis. Decision-making relying on CT alone or on a combined CT-endoscopy algorithm was similar and would have spared 19 unnecessary esophagectomies and 16 explorative laparotomies compared with an endoscopy-alone algorithm. Endoscopy never rectified an incorrect CT decision. Emergency decision-making after caustic injuries can rely on CT alone.
Lee, Saro; Park, Inhye
2013-09-30
Subsidence of ground caused by underground mines poses hazards to human life and property. This study analyzed ground-subsidence hazard using factors that can affect ground subsidence and a decision tree approach in a geographic information system (GIS). The study area was Taebaek, Gangwon-do, Korea, where many abandoned underground coal mines exist. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 50/50 for training and validation of the models. A data-mining classification technique was applied to the GSH mapping, and decision trees were constructed using the chi-squared automatic interaction detector (CHAID) and the quick, unbiased, and efficient statistical tree (QUEST) algorithms. The frequency ratio model was also applied to the GSH mapping for comparison with a probabilistic model. The resulting GSH maps were validated using area-under-the-curve (AUC) analysis with the subsidence area data that had not been used for training the model. The highest accuracy was achieved by the decision tree model using the CHAID algorithm (94.01%), compared with the QUEST algorithm (90.37%) and the frequency ratio model (86.70%). These accuracies are higher than previously reported results for decision tree models. Decision tree methods can therefore be used efficiently for GSH analysis and might be widely used for prediction of various spatial events. Copyright © 2013. Published by Elsevier Ltd.
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) algorithm has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM acting in a master-slave configuration where the master node and slave nodes are connected, meaning the results can be broadcasted. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, the over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, meaning it possesses the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
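The master-slave arrangement can be illustrated with consensus ADMM. A minimal sketch, using a ridge-regularised squared loss in place of the SVM hinge loss and omitting the paper's over-relaxation step; all data and parameters are illustrative:

```python
import numpy as np

def consensus_admm(blocks, lam=1.0, rho=1.0, iters=100):
    """Master-slave consensus ADMM: each slave i holds (A_i, b_i) and solves
    a local least-squares sub-problem; the master averages into z."""
    d = blocks[0][0].shape[1]
    x = [np.zeros(d) for _ in blocks]   # slave variables
    u = [np.zeros(d) for _ in blocks]   # scaled dual variables
    z = np.zeros(d)                     # master (consensus) variable
    for _ in range(iters):
        for i, (A, b) in enumerate(blocks):   # slave updates (parallelisable)
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(d),
                                   A.T @ b + rho * (z - u[i]))
        # Master update: the ridge penalty lam * ||z||^2 lives at the master.
        z = rho * sum(xi + ui for xi, ui in zip(x, u)) / (2 * lam + rho * len(blocks))
        for i in range(len(blocks)):          # dual ascent
            u[i] += x[i] - z
    return z

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
data = []
for _ in range(4):                            # four "slave" data blocks
    A = rng.normal(size=(50, 3))
    data.append((A, A @ w_true + 0.1 * rng.normal(size=50)))
print(consensus_admm(data).round(2))          # recovers approximately w_true
```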
A mixed-mode traffic assignment model with new time-flow impedance function
NASA Astrophysics Data System (ADS)
Lin, Gui-Hua; Hu, Yu; Zou, Yuan-Yang
2018-01-01
Recently, with the wide adoption of electric vehicles, transportation network has shown different characteristics and been further developed. In this paper, we present a new time-flow impedance function, which may be more realistic than the existing time-flow impedance functions. Based on this new impedance function, we present an optimization model for a mixed-mode traffic network in which battery electric vehicles (BEVs) and gasoline vehicles (GVs) are chosen. We suggest two approaches to handle the model: One is to use the interior point (IP) algorithm and the other is to employ the sequential quadratic programming (SQP) algorithm. Three numerical examples are presented to illustrate the efficiency of these approaches. In particular, our numerical results show that more travelers prefer to choosing BEVs when the distance limit of BEVs is long enough and the unit operating cost of GVs is higher than that of BEVs, and the SQP algorithm is faster than the IP algorithm.
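Both suggested solvers are available off the shelf. A toy sketch with SciPy's SLSQP on a two-link network, using the standard BPR form as a placeholder for the paper's new time-flow impedance function and collapsing the BEV/GV mode split into a single assigned flow; all coefficients are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

demand = 100.0                        # total flow to assign across two links
t0 = np.array([1.0, 1.5])             # free-flow travel times
cap = np.array([60.0, 80.0])          # link capacities

def total_time(f):
    # BPR-style impedance stand-in: t = t0 * (1 + 0.15 * (f / c)^4).
    t = t0 * (1 + 0.15 * (f / cap) ** 4)
    return float(np.dot(f, t))

res = minimize(total_time, x0=[50.0, 50.0], method="SLSQP",
               bounds=[(0, demand)] * 2,
               constraints=[{"type": "eq", "fun": lambda f: f.sum() - demand}])
print(res.x.round(1), round(res.fun, 2))
```

Swapping method="SLSQP" for an interior point solver reproduces the paper's other suggested approach on the same formulation.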
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Cicone, R. C.; Stinson, J. L.; Balon, R. J.
1977-01-01
The author has identified the following significant results. Two examples of haze correction algorithms were tested: CROP-A and XSTAR. CROP-A was tested in a unitemporal mode on data collected in 1973-74 over ten sample segments in Kansas. Because of the uniformly low level of haze present in these segments, no conclusion could be reached about CROP-A's ability to compensate for haze. It was noted, however, that in some cases CROP-A made serious errors which actually degraded classification performance. The haze correction algorithm XSTAR was tested in a multitemporal mode on 1975-76 LACIE sample segment data over 23 blind sites in Kansas and 18 sample segments in North Dakota, providing a wide range of haze levels and other conditions for algorithm evaluation. It was found that this algorithm substantially improved signature extension classification accuracy when a sum-of-likelihoods classifier was used with an alien rejection threshold.
NASA Astrophysics Data System (ADS)
Frassinetti, L.; Olofsson, K. E. J.; Brunsell, P. R.; Drake, J. R.
2011-06-01
The EXTRAP T2R feedback system (active coils, sensor coils and controller) is used to study and develop new tools for advanced control of MHD instabilities in fusion plasmas. New feedback algorithms developed on the EXTRAP T2R reversed-field pinch allow flexible and independent control of each magnetic harmonic. Methods developed in control theory and applied to EXTRAP T2R allow a closed-loop identification of the machine plant and of the resistive wall mode growth rates. The plant identification is the starting point for the development of output-tracking algorithms which enable the generation of external magnetic perturbations. These algorithms will then be used to study the effect of a resonant magnetic perturbation (RMP) on the tearing mode (TM) dynamics. It will be shown that a stationary RMP can induce oscillations in the amplitude and jumps in the phase of the rotating TM, and that the RMP strongly affects the magnetic island position.
Attitude identification for SCOLE using two infrared cameras
NASA Technical Reports Server (NTRS)
Shenhar, Joram
1991-01-01
An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
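Recovering six-degree-of-freedom attitude from three tracked LED positions is a classic absolute-orientation problem. A minimal sketch using the SVD-based Kabsch solution; the marker coordinates are invented, and the paper's on-line camera-identification step is not reproduced:

```python
import numpy as np

def rigid_transform(P, Q):
    """Find rotation R and translation t with R @ P + t ~= Q.
    P, Q are 3xN marker coordinates in body and camera frames."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Three LEDs on the platform (body frame), observed after a known motion.
P = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = Rz @ P + np.array([[0.1], [0.2], [0.0]])
R, t = rigid_transform(P, Q)
print(np.allclose(R, Rz), t.ravel().round(2))
```

With three non-collinear markers the pose is fully determined, which is why the SCOLE setup needs only three LEDs for continuous attitude tracking.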
Traffic sharing algorithms for hybrid mobile networks
NASA Technical Reports Server (NTRS)
Arcand, S.; Murthy, K. M. S.; Hafez, R.
1995-01-01
In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large satellite footprint (supercell) overlays a large number of smaller, contiguous terrestrial cells. We assume that the users have either a terrestrial-only single mode terminal (SMT) or a terrestrial/satellite dual mode terminal (DMT), and the ratio of DMTs to total terminals is defined as gamma. It is assumed that the call assignments to, and handovers between, terrestrial cells and satellite supercells take place dynamically when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms to manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma) given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well over a wide range of gamma.
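The blocking-probability constraint in such capacity evaluations is classically computed with the Erlang-B formula. A sketch of the standard recursion; the cell capacity, load step, and 2% target below are illustrative, not values from the paper:

```python
def erlang_b(traffic_erl, channels):
    """Blocking probability of an M/M/c/c cell via the stable recursion
    B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erl * b / (n + traffic_erl * b)
    return b

# Maximum carried load per cell at <= 2% blocking, for a 30-channel cell.
load = 0.0
while erlang_b(load + 0.1, 30) <= 0.02:
    load += 0.1
print(round(load, 1), "erl/cell")
```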
Renard, P; Van Breusegem, V; Nguyen, M T; Naveau, H; Nyns, E J
1991-10-20
An adaptive control algorithm has been implemented on a biomethanation process to maintain propionate concentration, a stable variable, at a given low value, by steering the dilution rate. It was thereby expected to ensure the stability of the process during the startup and during steady-state running with an acceptable performance. The methane pilot reactor was operated in the completely mixed, once-through mode and computer-controlled during 161 days. The results yielded the real-life validation of the adaptive control algorithm, and documented the stability and acceptable performance expected.
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
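For reference, the same family of optimiser ships with SciPy. A toy sketch applying differential evolution to a hypothetical two-motor alignment task, with a Gaussian peak standing in for the measured beam intensity:

```python
import numpy as np
from scipy.optimize import differential_evolution

def negative_flux(pos):
    """Stand-in beamline response: flux peaks at motor positions (1.2, -0.7)."""
    return -np.exp(-((pos[0] - 1.2) ** 2 + (pos[1] + 0.7) ** 2))

result = differential_evolution(negative_flux, bounds=[(-5, 5), (-5, 5)], seed=1)
print(result.x.round(3))  # converges near [1.2, -0.7]
```

On a real beamline the objective would be an on-line detector reading rather than an analytic function, which is what makes population-based, derivative-free methods like this attractive for such optimization.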
NASA Technical Reports Server (NTRS)
Dinar, N.
1978-01-01
Several aspects of multigrid methods are briefly described. The main subjects include the development of very efficient multigrid algorithms for systems of elliptic equations (Cauchy-Riemann, Stokes, Navier-Stokes), as well as the development of control and prediction tools (based on local mode Fourier analysis), used to analyze, check and improve these algorithms. Preliminary research on multigrid algorithms for time dependent parabolic equations is also described. Improvements in existing multigrid processes and algorithms for elliptic equations were studied.
Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Gray, Alexander
1999-01-01
We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.
Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N
2018-02-01
Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unanswered. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. Then the current parameter estimates are updated by the likelihood of observing the choice, and the next offers are provided from the indifference point, so that they will acquire the most informative data based on the current parameter estimates. The procedure continues for a certain number of trials in order to reach a stable estimation. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments, and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
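The trial-by-trial Bayesian update can be sketched on a parameter grid. A minimal version for the delay-discounting case, assuming hyperbolic discounting V = A/(1 + kD) and a softmax choice rule; the battery's actual likelihood, priors, and offer schedule are richer than this:

```python
import numpy as np

k_grid = np.logspace(-3, 1, 200)              # candidate discount rates
posterior = np.ones_like(k_grid) / k_grid.size

def p_delayed(a_now, a_del, delay, k, beta=5.0):
    """Softmax probability of choosing the delayed reward."""
    v_del = a_del / (1.0 + k * delay)          # hyperbolic discounted value
    return 1.0 / (1.0 + np.exp(-beta * (v_del - a_now)))

def update(a_now, a_del, delay, chose_delayed):
    global posterior
    like = p_delayed(a_now, a_del, delay, k_grid)
    posterior *= like if chose_delayed else (1.0 - like)
    posterior /= posterior.sum()

# One observed choice: took 20 now over 50 in 30 days -> higher k becomes likely.
update(a_now=20.0, a_del=50.0, delay=30.0, chose_delayed=False)
k_hat = float(np.sum(k_grid * posterior))      # posterior-mean estimate
print(round(k_hat, 3))
# The next offer is placed at the estimated indifference point,
# e.g. an immediate amount of a_del / (1 + k_hat * D).
```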
Multicriteria meta-heuristics for AGV dispatching control based on computational intelligence.
Naso, David; Turchiano, Biagio
2005-04-01
In many manufacturing environments, automated guided vehicles are used to move the processed materials between various pickup and delivery points. The assignment of vehicles to unit loads is a complex problem that is often solved in real time with simple dispatching rules. This paper proposes an automated guided vehicle dispatching approach based on computational intelligence. We adopt a fuzzy multicriteria decision strategy to simultaneously take into account multiple aspects in every dispatching decision. Since the typical short-term view of dispatching rules is one of the main limitations of such real-time assignment heuristics, we also incorporate in the multicriteria algorithm a specific heuristic rule that takes into account the empty-vehicle travel on a longer time horizon. Moreover, we also adopt a genetic algorithm to tune the weights associated with each decision criterion in the global decision algorithm. The proposed approach is validated by means of a comparison with other dispatching rules, and with other recently proposed multicriteria dispatching strategies also based on computational intelligence. The analysis of the results obtained by the proposed dispatching approach in both nominal and perturbed operating conditions (congestions, faults) confirms its effectiveness.
Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation
Zhao, Wei; Wang, Han
2016-01-01
Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It takes the age of each facial image instance as a label distribution with a series of age labels rather than the single chronological age label that is commonly used. However, this methodology is deficient in its simple decision-making criterion: the final predicted age is only selected at the one with maximum description degree. In many cases, different age labels may have very similar description degrees. Consequently, blindly deciding the estimated age by virtue of the highest description degree would miss or neglect other valuable age labels that may contribute a lot to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results from the most popular aging face database, FG-NET, show the superiority and validity of all the proposed strategic decision-making learning algorithms over the existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties of SDM-LDL are further explored with more advantages. PMID:27367691
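The decision-making deficiency described above is easy to see numerically. A toy sketch contrasting the maximum-description-degree rule with an expectation over the whole label distribution, one strategy of the kind SDM-LDL formalises; the distribution values are invented:

```python
import numpy as np

ages = np.arange(20, 26)
desc = np.array([0.05, 0.24, 0.25, 0.24, 0.15, 0.07])  # description degrees

arg_max_age = ages[np.argmax(desc)]        # 22: ignores the near-ties at 21 and 23
expected_age = float(np.dot(ages, desc))   # ~22.4: uses the whole distribution
print(arg_max_age, round(expected_age, 1))
```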
Learning accurate very fast decision trees from uncertain data streams
NASA Astrophysics Data System (ADS)
Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo
2015-12-01
Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
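The Hoeffding bound at the heart of VFDT-style split decisions is compact. A sketch of the standard test; the gain values, range, and delta below are illustrative:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """With probability 1 - delta, the true mean of a variable with range R
    differs from its n-sample mean by less than this epsilon."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

# Split when the observed gain gap between the two best attributes exceeds
# epsilon (information gain on binary classes has range 1 bit).
g_best, g_second, n = 0.42, 0.28, 500
eps = hoeffding_bound(value_range=1.0, delta=1e-6, n=n)
print(round(eps, 4), g_best - g_second > eps)
```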
Intelligent Diagnostic Assistant for Complicated Skin Diseases through C5's Algorithm.
Jeddi, Fatemeh Rangraz; Arabfard, Masoud; Kermany, Zahra Arab
2017-09-01
An intelligent diagnostic assistant can be used for the complicated diagnosis of skin diseases, which are among the most common causes of disability. The aim of this study was to design and implement a computerized intelligent diagnostic assistant for complicated skin diseases through C5's algorithm. An applied-developmental study was done in 2015. The knowledge base was developed based on interviews with dermatologists through questionnaires and checklists. Knowledge representation was obtained from the training data in the database using Microsoft Office Excel. Clementine software and C5's algorithm were applied to draw the decision tree. Analysis of test accuracy was performed based on rules extracted using inference chains. The rules extracted from the decision tree were entered into the CLIPS programming environment, and the intelligent diagnostic assistant was then designed. The rules were defined using a forward-chaining inference technique and were entered into the CLIPS programming environment as RULEs. The accuracy and error rates obtained in the training phase from the decision tree were 99.56% and 0.44%, respectively. The accuracy of the decision tree was 98% and the error was 2% in the test phase. The intelligent diagnostic assistant can be used as a reliable system with high accuracy, sensitivity, specificity, and agreement.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2008-01-01
The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.
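The abstract does not disclose the filtering method itself; as an assumption, the sketch below uses a Savitzky-Golay smoothing differentiator, a common choice for estimating the derivative of a noisy signal:

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 1, 200)
dt = t[1] - t[0]
noisy = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Local quadratic fits over 21-sample windows; deriv=1 returns dx/dt.
d_est = savgol_filter(noisy, window_length=21, polyorder=2, deriv=1, delta=dt)
err = np.max(np.abs(d_est - 2 * np.pi * np.cos(2 * np.pi * t)))
print(err)  # small relative to the derivative amplitude (~6.3) and far below
            # the error of naive finite differencing of the noisy samples
```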
Pollitz, F.F.
2002-01-01
I present a new algorithm for calculating seismic wave propagation through a three-dimensional heterogeneous medium using the framework of mode coupling theory, originally developed to perform very low frequency (f < ~0.01-0.05 Hz) seismic wavefield computation. It is a Green's function approach for multiple scattering within a defined volume and employs a truncated traveling wave basis set using the locked mode approximation. Interactions between incident and scattered wavefields are prescribed by mode coupling theory and account for the coupling among surface waves, body waves, and evanescent waves. The described algorithm is, in principle, applicable to global and regional wave propagation problems, but I focus on higher frequency (typically f ≳ 0.25 Hz) applications at regional and local distances where the locked mode approximation is best utilized and which involve wavefields strongly shaped by propagation through a highly heterogeneous crust. Synthetic examples are shown for P-SV-wave propagation through a semi-ellipsoidal basin and SH-wave propagation through a fault zone.
Evolutionary Algorithm Based Automated Reverse Engineering and Defect Discovery
2007-09-21
In a previous application of a GP as a data mining function to evolve fuzzy decision trees symbolically [3-5], the terminal set consisted of fuzzy... of input and output information is required. In the case of fuzzy decision trees, the database represented a collection of scenarios about which the... fuzzy decision tree to be evolved would make decisions. The database also had entries created by experts representing decisions about the scenarios.
SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, G
Purpose: CBCT is being increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for performing these CBCT scans with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam linear accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all ring suppression options (Disabled, Weak, Medium, & Strong). An open-source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU varied significantly between scan modes. Pelvis Obese had lower HU values than expected, while the Image Gently mode had higher HU values than expected. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of the scan mode image quality could improve setup efficiency and lead to better treatment outcomes.
Implementation science: a role for parallel dual processing models of reasoning?
Sladek, Ruth M; Phillips, Paddy A; Bond, Malcolm J
2006-01-01
Background A better theoretical base for understanding professional behaviour change is needed to support evidence-based changes in medical practice. Traditionally strategies to encourage changes in clinical practices have been guided empirically, without explicit consideration of underlying theoretical rationales for such strategies. This paper considers a theoretical framework for reasoning from within psychology for identifying individual differences in cognitive processing between doctors that could moderate the decision to incorporate new evidence into their clinical decision-making. Discussion Parallel dual processing models of reasoning posit two cognitive modes of information processing that are in constant operation as humans reason. One mode has been described as experiential, fast and heuristic; the other as rational, conscious and rule based. Within such models, the uptake of new research evidence can be represented by the latter mode; it is reflective, explicit and intentional. On the other hand, well practiced clinical judgments can be positioned in the experiential mode, being automatic, reflexive and swift. Research suggests that individual differences between people in both cognitive capacity (e.g., intelligence) and cognitive processing (e.g., thinking styles) influence how both reasoning modes interact. This being so, it is proposed that these same differences between doctors may moderate the uptake of new research evidence. Such dispositional characteristics have largely been ignored in research investigating effective strategies in implementing research evidence. Whilst medical decision-making occurs in a complex social environment with multiple influences and decision makers, it remains true that an individual doctor's judgment still retains a key position in terms of diagnostic and treatment decisions for individual patients. This paper argues therefore, that individual differences between doctors in terms of reasoning are important considerations in any discussion relating to changing clinical practice. Summary It is imperative that change strategies in healthcare consider relevant theoretical frameworks from other disciplines such as psychology. Generic dual processing models of reasoning are proposed as potentially useful in identifying factors within doctors that may moderate their individual uptake of evidence into clinical decision-making. Such factors can then inform strategies to change practice. PMID:16725023
Artificial Intelligence based technique for BTS placement
NASA Astrophysics Data System (ADS)
Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.
2013-12-01
The increase in base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners; this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique takes neighbour and regulatory considerations into account objectively while determining the cell site. Its application will lead to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results show 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of GA with the neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures, which often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision maker (DM) with a complete picture of the optimal solution space, enabling better and more appropriate choices. An outranking with PROMETHEE II then helps the decision maker to finalize the selection of the best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
Wibowo, Santoso; Deng, Hepu
2015-06-01
This paper presents a multi-criteria group decision-making approach for effectively evaluating the performance of e-waste recycling programs under uncertainty in an organization. Intuitionistic fuzzy numbers are used to adequately represent the subjective and imprecise assessments of the decision makers in evaluating the relative importance of the evaluation criteria and the performance of individual e-waste recycling programs with respect to individual criteria in a given situation. An interactive fuzzy multi-criteria decision-making algorithm is developed for facilitating consensus building in a group decision-making environment, ensuring that the interests of all individual decision makers are appropriately considered in evaluating alternative e-waste recycling programs with respect to their corporate sustainability performance. The developed algorithm is then incorporated into a multi-criteria decision support system to make the overall performance evaluation process effective and simple to use. Such a multi-criteria decision-making system provides organizations with a proactive mechanism for incorporating the concept of corporate sustainability into their regular planning decisions and business practices. An example is presented to demonstrate the applicability of the proposed approach in evaluating the performance of e-waste recycling programs in organizations. Copyright © 2015 Elsevier Ltd. All rights reserved.
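The abstract does not specify the aggregation operator, so as an assumed stand-in the sketch below uses the standard intuitionistic fuzzy weighted averaging (IFWA) operator from the literature to pool (membership, non-membership) assessments from several decision makers:

```python
import numpy as np

def ifwa(mus, nus, weights):
    """IFWA aggregation of intuitionistic fuzzy numbers (mu_i, nu_i) with
    normalized weights w_i: mu = 1 - prod((1 - mu_i)^w_i), nu = prod(nu_i^w_i)."""
    mus, nus, w = map(np.asarray, (mus, nus, weights))
    mu = 1.0 - np.prod((1.0 - mus) ** w)   # aggregated membership
    nu = np.prod(nus ** w)                 # aggregated non-membership
    return mu, nu, 1.0 - mu - nu           # third value: hesitation degree

# Toy case: three decision makers rate one recycling program on one criterion.
print(ifwa([0.6, 0.7, 0.5], [0.2, 0.1, 0.3], [0.4, 0.35, 0.25]))
```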
A Theoretical Analysis of Why Hybrid Ensembles Work
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains open. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm often used to create non-hybrid ensembles. Through this paper, we therefore provide a complement to the theoretical foundation of creating and using hybrid ensembles. PMID:28255296
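As a quick, hedged illustration of the kind of hybrid ensemble studied (a scikit-learn sketch on synthetic data, not the paper's experimental setup), a soft-voting combination of a decision tree and naïve Bayes can be compared against its two base learners:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

hybrid = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)), ("nb", GaussianNB())],
    voting="soft",  # average the predicted probabilities of the two base learners
)
for name, clf in [("tree", DecisionTreeClassifier(random_state=0)),
                  ("nb", GaussianNB()), ("hybrid", hybrid)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```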
Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin
2015-01-01
A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and an Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and one multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. Besides, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
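The abstract names sigma scaling without defining it; the sketch below implements the common textbook form (expected offspring count 1 + (f − f̄)/(2σ), floored so weak individuals remain selectable), which may differ in detail from the paper's variant:

```python
import numpy as np

def sigma_scaled_probs(fitness, floor=0.1):
    """Sigma scaling: expected value 1 + (f - mean) / (2 * sigma), floored,
    then normalized into selection probabilities."""
    f = np.asarray(fitness, dtype=float)
    sd = f.std()
    expected = np.ones_like(f) if sd == 0 else 1.0 + (f - f.mean()) / (2.0 * sd)
    expected = np.clip(expected, floor, None)  # keep weak individuals selectable
    return expected / expected.sum()

rng = np.random.default_rng(0)
pop_fitness = rng.random(10)
parents = rng.choice(10, size=10, p=sigma_scaled_probs(pop_fitness))
```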
A tunable algorithm for collective decision-making.
Pratt, Stephen C; Sumpter, David J T
2006-10-24
Complex biological systems are increasingly understood in terms of the algorithms that guide the behavior of system components and the information pathways that link them. Much attention has been given to robust algorithms, or those that allow a system to maintain its functions in the face of internal or external perturbations. At the same time, environmental variation imposes a complementary need for algorithm versatility, or the ability to alter system function adaptively as external circumstances change. An important goal of systems biology is thus the identification of biological algorithms that can meet multiple challenges rather than being narrowly specified to particular problems. Here we show that emigrating colonies of the ant Temnothorax curvispinosus tune the parameters of a single decision algorithm to respond adaptively to two distinct problems: rapid abandonment of their old nest in a crisis and deliberative selection of the best available new home when their old nest is still intact. The algorithm uses a stepwise commitment scheme and a quorum rule to integrate information gathered by numerous individual ants visiting several candidate homes. By varying the rates at which they search for and accept these candidates, the ants yield a colony-level response that adaptively emphasizes either speed or accuracy. We propose such general but tunable algorithms as a design feature of complex systems, each algorithm providing elegant solutions to a wide range of problems.
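As a toy model only (the parameters and functional forms below are invented, not taken from the paper), the tunable speed-accuracy trade-off can be caricatured by two knobs, an acceptance rate and a quorum size:

```python
import random

def emigrate(quorum, accept_rate, n_scouts=100, steps=2000, site_quality=(0.4, 0.9)):
    """Toy stepwise-commitment model: scouts independently accept a site with
    probability accept_rate * quality; once any site's population reaches the
    quorum, remaining scouts are recruited to it directly."""
    population = [0] * len(site_quality)
    committed = 0
    for t in range(steps):
        if committed >= n_scouts:
            return t, population
        if max(population) >= quorum:            # quorum met: rapid transport
            population[population.index(max(population))] += 1
        else:                                    # independent assessment phase
            site = random.randrange(len(site_quality))
            if random.random() < accept_rate * site_quality[site]:
                population[site] += 1
            else:
                continue
        committed += 1
    return steps, population

random.seed(1)
print(emigrate(quorum=10, accept_rate=0.1))  # deliberative: slower, favors best site
print(emigrate(quorum=2, accept_rate=0.9))   # crisis: fast but less accurate
```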
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
Radar Detection of Marine Mammals
2010-09-30
associative tracker using the Munkres algorithm was used. This was then expanded to include a track-before-detect algorithm, the Bayesian Field...small, slow-moving objects (i.e., whales). In order to address the third concern (M2 mode), we have tested using a track-before-detect tracker termed
Challenges of CAC in Heterogeneous Wireless Cognitive Networks
NASA Astrophysics Data System (ADS)
Wang, Jiazheng; Fu, Xiuhua
Call admission control (CAC) is known as an effective functionality for ensuring the QoS of wireless networks. The vision of next-generation wireless networks has led to the development of new CAC algorithms specifically designed for heterogeneous wireless cognitive networks. However, a number of challenges are created by the dynamic spectrum access and scheduling techniques associated with cognitive systems. In this paper, for the first time, we recommend that CAC policies should distinguish between primary users and secondary users. A classification of the different CAC policy methods in cognitive network contexts is proposed. Although there has been some research under the umbrella of joint CAC and cross-layer optimization for wireless networks, the advent of cognitive networks adds additional problems. We present conceptual models for joint CAC and for cross-layer optimization. Also, the benefit of cognition can only be fully realized if application requirements and traffic flow contexts are determined or inferred, so as to know which modes of operation and spectrum bands to use at each point in time. A process model of cognition involving per-flow-based CAC is presented. Because a number of parameters on different levels may affect a CAC decision, and the conditions for accepting or rejecting a call must be computed quickly and frequently, simplicity and practicability are particularly important when designing a feasible CAC algorithm. In short, a more thorough understanding of CAC in heterogeneous wireless cognitive networks may help one design better CAC algorithms.
Undertriage in older emergency department patients--tilting against windmills?
Grossmann, Florian F; Zumbrunn, Thomas; Ciprian, Sandro; Stephan, Frank-Peter; Woy, Natascha; Bingisser, Roland; Nickel, Christian H
2014-01-01
The aim of this study was to investigate the long-term effect of a teaching intervention designed to reduce undertriage rates in older ED patients; further, to test the hypothesis that non-adherence to the Emergency Severity Index (ESI) triage algorithm is associated with undertriage; and additionally, to detect patient-related risk factors for undertriage. Pre-post-test design. The study sample consisted of all patients aged 65 years or older presenting to the ED of an urban tertiary and primary care center in the study periods. A teaching intervention was designed to increase adherence to the triage algorithm. To assess whether the intervention increased factual knowledge, nurses took a test before and immediately after the teaching intervention. Undertriage rates were assessed one year after the intervention and compared to the pre-test period. In the pre-test group 519 patients were included, and 394 in the post-test group. Factual knowledge among triage nurses was already high before the teaching intervention. The prevalence of undertriaged patients before (22.5%) and one year after the intervention (24.2%) was not significantly different (χ2 = 0.248, df = 1, p = 0.619). Sex, age, mode of arrival, and type of complaint were not identified as independent risk factors for undertriage; however, undertriage rates increased with advancing age. Adherence to the ESI algorithm is associated with correct triage decisions. Undertriage of older ED patients remained unchanged over time. Reasons for undertriage seem to be more complex than anticipated; therefore, additional contributing factors should be addressed.
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, the PCA and Isomap algorithms are used to classify and visualize this parameter vector, separating damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, maximizing the visualization effect of separating and grouping parameter vectors in three-dimensional space. PMID:29143772
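The abstract does not list which statistics are extracted from each IMF; a plausible sketch of that feature-extraction step (the particular statistics below are assumptions) is:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def imf_features(imfs):
    """Stack simple damage-sensitive statistics of each IMF into one vector."""
    feats = []
    for imf in imfs:
        feats += [imf.std(),                    # energy-related spread
                  skew(imf), kurtosis(imf),     # shape of amplitude distribution
                  np.sqrt(np.mean(imf ** 2))]   # RMS
    return np.array(feats)

# Toy usage with random stand-ins for six IMFs of a vibration signal:
feats = imf_features(np.random.default_rng(0).standard_normal((6, 1024)))
```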
A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.
Leung, Chi-Sing; Wan, Wai Yan; Feng, Ruibin
2017-06-01
Many existing results on fault-tolerant algorithms focus on the single-fault-source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define the objective function for training fault-tolerant RBF networks. Based on the objective function, we then develop two learning algorithms, one in batch mode and one in online mode. In addition, the convergence conditions of our online algorithm are investigated. Finally, we develop a formula to estimate the test set error of faulty networks trained with our approach. This formula helps us to optimize some tuning parameters, such as the RBF width.
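The paper's exact objective is not given in the abstract; a common derivation for the multiplicative-noise part (under weights w_j(1 + δ_j) with E[δ]=0 and Var[δ]=σ², the expected MSE gains the term σ² Σ_j w_j² ||h_j||²) yields the closed-form batch sketch below, which should be read as an assumption-laden illustration rather than the authors' algorithm:

```python
import numpy as np

def train_rbf_fault_tolerant(X, y, centers, width, noise_var):
    """Batch training of RBF output weights with a data-dependent diagonal
    regularizer that accounts for multiplicative weight noise."""
    # Hidden-layer activations H[i, j] = exp(-||x_i - c_j||^2 / width^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / width ** 2)
    G = H.T @ H
    # E||y - H(w*(1+delta))||^2 = ||y - Hw||^2 + noise_var * w^T diag(G) w
    return np.linalg.solve(G + noise_var * np.diag(np.diag(G)), H.T @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
w = train_rbf_fault_tolerant(X, y, centers=np.linspace(-1, 1, 15)[:, None],
                             width=0.3, noise_var=0.01)
```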
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective of this article is to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely the Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree, and Representative Tree models, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm-optimisation-based approach selects the instances that compose the training set, and the optimised decision trees operate over this training set, producing classification rules with improved coverage, classification capability, and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) dataset, which contains information on traffic patterns during both normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
A probabilistic, distributed, recursive mechanism for decision-making in the brain
Gurney, Kevin N.
2018-01-01
Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. Here we characterize its essential composition, using as a framework a novel recursive Bayesian algorithm that makes decisions based on spike-trains with the statistics of those in sensory cortex (MT). Using it to simulate the random-dot-motion task, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of otherwise usable information from MT. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensorimotor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive, and parallel. Its success at capturing anatomy, behaviour, and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics. PMID:29614077
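The paper's single-equation, cortico-basal-ganglia-mapped algorithm is not reproduced in the abstract; as a generic illustration of the same computation class, the sketch below runs a recursive Bayesian update over Poisson spike counts from two opponent "MT-like" channels until a decision threshold is crossed (all rates and thresholds are invented):

```python
import numpy as np

def recursive_bayes_decision(rates, prior=0.5, threshold=0.99, dt=0.01,
                             true_dir=0, seed=0):
    """Sequentially update P(direction 0 | spikes); the two channels' Poisson
    rates swap depending on the true motion direction."""
    rng = np.random.default_rng(seed)
    r_pref, r_null = rates
    p = prior
    for step in range(10000):
        mean = (r_pref, r_null) if true_dir == 0 else (r_null, r_pref)
        n0, n1 = rng.poisson(np.array(mean) * dt)
        # Poisson likelihood ratio for this time slice (common factors cancel):
        lr = (r_pref / r_null) ** (n0 - n1)
        odds = lr * p / (1 - p)
        p = odds / (1 + odds)
        if p > threshold or p < 1 - threshold:
            return ("dir0" if p > threshold else "dir1"), step * dt
    return "undecided", step * dt

print(recursive_bayes_decision(rates=(40.0, 20.0)))
```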
Masías, Víctor H.; Krause, Mariane; Valdés, Nelson; Pérez, J. C.; Laengle, Sigifredo
2015-01-01
Methods are needed for creating models to characterize verbal communication between therapists and their patients that are suitable for teaching purposes without losing analytical potential. A technique meeting these twin requirements is proposed that uses decision trees to identify both change and stuck episodes in therapist-patient communication. Three decision tree algorithms (C4.5, NBTree, and REPTree) are applied to the problem of characterizing verbal responses into change and stuck episodes in the therapeutic process. The data for the problem is derived from a corpus of 8 successful individual therapy sessions with 1760 speaking turns in a psychodynamic context. The decision tree model that performed best was generated by the C4.5 algorithm. It delivered 15 rules characterizing the verbal communication in the two types of episodes. Decision trees are a promising technique for analyzing verbal communication during significant therapy events and have much potential for use in teaching practice on changes in therapeutic communication. The development of pedagogical methods using decision trees can support the transmission of academic knowledge to therapeutic practice. PMID:25914657
Aguirre-Junco, Angel-Ricardo; Colombet, Isabelle; Zunino, Sylvain; Jaulent, Marie-Christine; Leneveut, Laurence; Chatellier, Gilles
2004-01-01
The initial step in the computerization of guidelines is the specification of knowledge from the prose text of the guidelines. We describe a method of knowledge specification based on a structured and systematic analysis of text, allowing detailed specification of a decision tree. We use decision tables to validate the decision algorithm and decision trees to specify and represent this algorithm, along with elementary messages of recommendation. Editing tools are also necessary to facilitate the process of validation and the workflow between the expert physicians who validate the specified knowledge and the computer scientists who encode it in a guideline model. Applied to eleven different guidelines issued by an official agency, the method allows quick and valid computerization and integration into a larger decision support system called EsPeR (Personalized Estimate of Risks). The quality of the guideline texts themselves, however, still needs further development. The method used for computerization could help define a framework usable at the initial step of guideline development in order to produce guidelines ready for electronic implementation.
Interpretable Categorization of Heterogeneous Time Series Data
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Silbermann, Joshua
2017-01-01
We analyze data from simulated aircraft encounters to validate and inform the development of a prototype aircraft collision avoidance system. The high-dimensional and heterogeneous time series dataset is analyzed to discover properties of near mid-air collisions (NMACs) and categorize the NMAC encounters. Domain experts use these properties to better organize and understand NMAC occurrences. Existing solutions either are not capable of handling high-dimensional and heterogeneous time series datasets or do not provide explanations that are interpretable by a domain expert. The latter is critical to the acceptance and deployment of safety-critical systems. To address this gap, we propose grammar-based decision trees along with a learning algorithm. Our approach extends decision trees with a grammar framework for classifying heterogeneous time series data. A context-free grammar is used to derive decision expressions that are interpretable, application-specific, and support heterogeneous data types. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to a simulated aircraft encounter dataset and evaluate the performance of four variants of our learning algorithm. The best algorithm is used to analyze and categorize near mid-air collisions in the aircraft encounter dataset. We describe each discovered category in detail and discuss its relevance to aircraft collision avoidance.
NASA Astrophysics Data System (ADS)
Moliner, L.; Correcher, C.; González, A. J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.
2013-02-01
In this work we present an innovative algorithm for the reconstruction of PET images based on the list-mode (LM) technique, which improves their spatial resolution compared with results obtained with current MLEM algorithms. This study is part of a larger project aimed at improving diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and the best way to apply an effective treatment is early diagnosis. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs with the advantage of enabling the implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large-aperture (186 mm) breast PET system. Instead of using the common lines of response, the algorithm incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease image noise, thus increasing image quality.
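List-mode MLEM underlies both the baseline and the proposed method; the generic update is λ_j ← (λ_j / s_j) Σ_i a_ij / Σ_k a_ik λ_k, where the paper's tube-of-response novelty would enter through how the event weights a_ij are computed. A minimal sketch with a toy random system matrix (an assumption for illustration only):

```python
import numpy as np

def lm_mlem(A_events, sens, n_voxels, n_iters=20):
    """List-mode MLEM: A_events[i, j] is the system-matrix row (tube-of-response
    weights) for detected event i; sens[j] is the voxel sensitivity."""
    lam = np.ones(n_voxels)
    for _ in range(n_iters):
        proj = A_events @ lam                # expected intensity per event
        back = A_events.T @ (1.0 / proj)     # backproject the event ratios
        lam *= back / sens                   # multiplicative update
    return lam

rng = np.random.default_rng(0)
A = rng.random((500, 64)) ** 4               # toy, sparse-ish event weights
lam = lm_mlem(A, sens=A.sum(0), n_voxels=64)
```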
NASA Astrophysics Data System (ADS)
Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.
2018-03-01
With the continuous strengthening of power grids, the network structure is becoming more and more complicated. An open, regional data model is used to complete the calculation of protection settings based on the local region. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the various regions into an integrated fault computing platform that can conduct transient stability analysis covering the whole network with high accuracy and in multiple modes, handle the impact of non-single and interlocking faults, and build “the first line of defense” of the power grid. The boundary fusion algorithm in this paper is an automatic fusion algorithm based on accurate boundary coupling of the interconnected grid partitions, which takes the actual operation mode as its qualification. It completes the boundary coupling of the various weakly coupled partitions in open-loop mode, improving fusion efficiency, truly reflecting the transient stability level, and effectively solving the problems of excessive data volume, the difficulty of partition fusion, and the lack of effective fusion due to mutually exclusive conditions. In this paper, the basic principle of the fusion process is introduced first, and the method of boundary fusion customization is then introduced through a scene description. Finally, an example is given to illustrate how the algorithm effectively implements boundary fusion after grid partitioning and to verify its accuracy and efficiency.
Evolutionary Initial Poses of Reduced D.O.F’s Quadruped Robot
NASA Astrophysics Data System (ADS)
Iida, Ken-Ichi; Nakata, Yoshitaka; Hira, Toshio; Kamano, Takuya; Suzuki, Takayuki
In this paper, an application of a genetic algorithm for the generation of evolutionary initial poses of a quadruped robot with reduced degrees of freedom is described. To reduce the degrees of freedom, each leg of the robot has a slider-crank mechanism and is driven by a single actuator. Furthermore, we introduce a forward-movement mode and a rotating mode so that omnidirectional movement is possible. To generate suitable initial poses, the initial angles of the four legs are encoded in Gray code and tuned by an evaluation function in each mode with the genetic algorithm. As a result, cooperation among the legs is realized for movement in any direction. The experimental results demonstrate that the proposed scheme is effective for the generation of suitable initial poses and that the robot can walk smoothly with the generated patterns.
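Gray coding is attractive here because adjacent parameter values differ in a single bit, so small mutations tend to produce small changes in leg angle. A minimal sketch of standard binary-reflected Gray coding (not code from the paper):

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent integers differ in exactly one bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray encoding by cascading XORs of right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

assert all(from_gray(to_gray(i)) == i for i in range(256))
assert bin(to_gray(7) ^ to_gray(8)).count("1") == 1  # one-bit neighbour change
```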
Development of multi-class, multi-criteria bicycle traffic assignment models and solution algorithms
DOT National Transportation Integrated Search
2015-08-31
Cycling is gaining popularity both as a mode of travel in urban communities and as an alternative mode to private motorized vehicles due to its wide range of benefits (health, environmental, and economic). However, this change in modal share is not...
Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography
2017-05-01
contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter, and Teager-Kaiser...Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz...suggest that computer-automated determination using high-pass filtering is a potential objective alternative to visual determination in human
Peinemann, Frank; Kleijnen, Jos
2015-01-01
Objectives To develop an algorithm that aims to provide guidance and awareness for choosing multiple study designs in systematic reviews of healthcare interventions. Design Method study: (1) To summarise the literature base on the topic. (2) To apply the integration of various study types in systematic reviews. (3) To devise decision points and outline a pragmatic decision tree. (4) To check the plausibility of the algorithm by backtracking its pathways in four systematic reviews. Results (1) The results of our systematic review of the published literature have already been published. (2) We recaptured the experience from our four previously conducted systematic reviews that required the integration of various study types. (3) We chose length of follow-up (long, short), frequency of events (rare, frequent) and types of outcome as decision points (death, disease, discomfort, disability, dissatisfaction) and aligned the study design labels according to the Cochrane Handbook. We also considered practical or ethical concerns, and the problem of unavailable high-quality evidence. While applying the algorithm, disease-specific circumstances and the aims of interventions should be considered. (4) We confirmed the plausibility of the pathways of the algorithm. Conclusions We propose that the algorithm can assist in bringing the seminal features of a systematic review with multiple study designs to the attention of anyone who is planning to conduct a systematic review. It aims to increase awareness, and we think that it may reduce the time burden on review authors and may contribute to the production of a higher quality review. PMID:26289450
MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes
Williams, B.K.
1988-01-01
Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
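To make the value-improvement technique concrete, here is a minimal discounted value-iteration sketch in the standard textbook form (the toy transition and reward numbers are invented, not taken from the mallard application):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a, s, s'] transition probabilities, R[a, s] expected rewards.
    Returns optimal state values and a greedy policy for the discounted MDP."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * P @ V          # Q[a, s] = R[a, s] + gamma * sum_s' P V
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy 2-state, 2-action example (e.g. 'rest' vs 'harvest' a resource):
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.7, 0.3], [0.1, 0.9]]])   # action 1
R = np.array([[0.0, 1.0], [0.5, 2.0]])
print(value_iteration(P, R))
```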
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution puts world populations under threat of fatal illnesses (e.g., heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation, and wind speed), based on the Noise-Assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA), and the Time-Dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals and is performed to decompose multivariate signals into collections of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and to estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
Mohammadi-Abdar, Hassan; Ridgel, Angela L.; Discenzo, Fred M.; Loparo, Kenneth A.
2016-01-01
Recent studies in rehabilitation of Parkinson’s disease (PD) have shown that cycling on a tandem bike at a high pedaling rate can reduce the symptoms of the disease. In this research, a smart motorized bicycle has been designed and built for assisting Parkinson’s patients with exercise to improve motor function. The exercise bike can accurately control the rider’s experience at an accelerated pedaling rate while capturing real-time test data. Here, the design and development of the electronics and hardware as well as the software and control algorithms are presented. Two control algorithms have been developed for the bike; one that implements an inertia load (static mode) and one that implements a speed reference (dynamic mode). In static mode the bike operates as a regular exercise bike with programmable resistance (load) that captures and records the required signals such as heart rate, cadence and power. In dynamic mode the bike operates at a user-selected speed (cadence) with programmable variability in speed that has been shown to be essential to achieving the desired motor performance benefits for PD patients. In addition, the flexible and extensible design of the bike permits readily changing the control algorithm and incorporating additional I/O as needed to provide a wide range of riding experiences. Furthermore, the network-enabled controller provides remote access to bike data during a riding session. PMID:27298575
Flexible High Speed Codec (FHSC)
NASA Technical Reports Server (NTRS)
Segallis, G. P.; Wernlund, J. V.
1991-01-01
The ongoing NASA/Harris Flexible High Speed Codec (FHSC) program is described. The program objectives are to design and build an encoder decoder that allows operation in either burst or continuous modes at data rates of up to 300 megabits per second. The decoder handles both hard and soft decision decoding and can switch between modes on a burst by burst basis. Bandspreading is low since the code rate is greater than or equal to 7/8. The encoder and a hard decision decoder fit on a single application specific integrated circuit (ASIC) chip. A soft decision applique is implemented using 300 K emitter coupled logic (ECL) which can be easily translated to an ECL gate array.
Modal Identification of Tsing Ma Bridge by Using Improved Eigensystem Realization Algorithm
NASA Astrophysics Data System (ADS)
QIN, Q.; LI, H. B.; QIAN, L. Z.; LAU, C.-K.
2001-10-01
This paper presents the results of research work on modal identification of the Tsing Ma bridge from ambient testing data by using an improved eigensystem realization algorithm. The testing was carried out before the bridge was open to traffic and after the completion of surfacing. Without traffic load, ambient excitations were much less intensive, and the bridge responses to such ambient excitation were also less intensive. Consequently, the bridge responses were significantly influenced by the random movement of heavy construction vehicles on the deck. To remove noise from the testing data and make the ambient signals more stationary, a Chebyshev digital filter was used instead of a digital filter with a Hanning window. Random decrement (RD) functions were built to convert the ambient responses to free vibrations. An improved eigensystem realization algorithm was employed to improve the accuracy and efficiency of modal identification. It uses cross-correlation functions of the RD functions to form the Hankel matrix instead of the RD functions themselves, and uses eigenvalue decomposition instead of singular value decomposition. The response accelerations were acquired group by group because of the limited number of high-quality accelerometers and data logger channels available. The modes were identified group by group and then assembled, using response accelerations acquired at reference points, to form the modes of the complete bridge. Seventy-nine modes of the Tsing Ma bridge were identified, including five complex modes formed in accordance with the unevenly distributed damping in the bridge. The modes identified in the time domain were then compared with those identified in the frequency domain and with finite element analytical results.
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
Bridge health monitoring metrics : updating the bridge deficiency algorithm.
DOT National Transportation Integrated Search
2009-10-01
As part of its bridge management system, the Alabama Department of Transportation (ALDOT) must decide how best to spend its bridge replacement funds. In making these decisions, ALDOT managers currently use a deficiency algorithm to rank bridges that ...
A Two-Wheel Observing Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; ODonnell, James R., Jr.
2001-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE). Due to the MAP project's limited mass, power, and budget, a traditional reliability concept including fully redundant components was not feasible. The MAP design employs selective hardware redundancy, along with backup software modes and algorithms, to improve the odds of mission success. This paper describes the effort to develop a backup control mode, known as Observing II, that will allow the MAP science mission to continue in the event of a failure of one of its three reaction wheel assemblies. This backup science mode requires a change from MAP's nominal zero-momentum control system to a momentum-bias system. In this system, existing thruster-based control modes are used to establish a momentum bias about the sun line sufficient to spin the spacecraft up to the desired scan rate. The natural spacecraft dynamics exhibit spin and nutation similar to the nominal MAP science mode with different relative rotation rates, so the two reaction wheels are used to establish and maintain the desired nutation angle from the sun line. Detailed descriptions of the Observing II control algorithm and simulation results are presented, along with the operational considerations of performing the rest of MAP's necessary functions with only two wheels.
Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.
Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin
2017-08-16
The objective consensus methodology has recently been applied to consensus finding among clinical experts and guidelines in several studies on medical decision-making. The main advantage of this method is an automated analysis and comparison of the participating centers' treatment algorithms, which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for the successful collection of decision trees and summarizes important aspects at each point of the analysis.
Fusion of Heterogeneous Intrusion Detection Systems for Network Attack Detection
Kaliappan, Jayakumar; Thiagarajan, Revathi; Sundararajan, Karpagam
2015-01-01
An intrusion detection system (IDS) helps to identify different types of attacks in general, and the detection rate will be higher for some specific categories of attack. This paper is designed on the idea that each IDS is efficient in detecting a specific type of attack. In the proposed Multiple IDS Unit (MIU), there are five IDS units, and each IDS follows a unique algorithm to detect attacks. Feature selection is done with the help of a genetic algorithm. The selected features of the input traffic are passed on to the MIU for processing. The decision from each IDS is termed a local decision. The fusion unit inside the MIU processes all the local decisions with the help of the majority voting rule and makes the final decision. The proposed system shows a very good improvement in detection rate and reduces the false alarm rate. PMID:26295058
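The majority-voting fusion step is simple enough to sketch directly (a hedged toy illustration; the five 0/1 rows below stand in for the local decisions of the five IDS units):

```python
import numpy as np

def fuse_majority(local_decisions):
    """local_decisions: (n_ids_units, n_samples) array of 0/1 attack flags.
    The final decision is 'attack' when a strict majority of units agree."""
    ld = np.asarray(local_decisions)
    votes = ld.sum(axis=0)
    return (votes > ld.shape[0] / 2).astype(int)

local = np.array([[1, 0, 1, 0],
                  [1, 0, 0, 0],
                  [0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 0, 1, 0]])   # five IDS units, four traffic samples
print(fuse_majority(local))        # [1 0 1 0]
```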
Artifact removal from EEG data with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.
2017-03-01
In this paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on the analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering movement artifacts from experimental human EEG signals and show its high efficiency.
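A minimal sketch of the four steps using the PyEMD package (assumed available; in practice the artifact modes would be chosen by a selection criterion such as correlation with muscle activity, which is assumed away here):

```python
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 10, 5000)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)  # toy "EEG"

imfs = EMD().emd(eeg, t)                 # step 1: decompose into IMFs
artifact_modes = [len(imfs) - 1]         # step 2: flag modes judged artifactual
clean = [imf for k, imf in enumerate(imfs) if k not in artifact_modes]
eeg_clean = np.sum(clean, axis=0)        # steps 3-4: drop them and reconstruct
```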
Flywheel Charge/Discharge Control Developed
NASA Technical Reports Server (NTRS)
Beach, Raymond.F.; Kenny, Barbara H.
2001-01-01
A control algorithm developed at the NASA Glenn Research Center will allow a flywheel energy storage system to interface with the electrical bus of a space power system. The controller allows the flywheel to operate in both charge and discharge modes. Charge mode is used to store additional energy generated by the solar arrays on the spacecraft during insolation. During charge mode, the flywheel spins up to store the additional electrical energy as rotational mechanical energy. Discharge mode is used during eclipse when the flywheel provides the power to the spacecraft. During discharge mode, the flywheel spins down to release the stored rotational energy.
NASA Astrophysics Data System (ADS)
Hashimoto, Makiko; Nakajima, Teruyuki
2017-06-01
We developed a satellite remote sensing algorithm to retrieve the aerosol optical properties using satellite-received radiances for multiple wavelengths and pixels. Our algorithm utilizes spatial inhomogeneity of surface reflectance to retrieve aerosol properties, and the main target is urban aerosols. This algorithm can simultaneously retrieve aerosol optical thicknesses (AOT) for fine- and coarse-mode aerosols, soot volume fraction in fine-mode aerosols (SF), and surface reflectance over heterogeneous surfaces such as urban areas that are difficult to obtain by conventional pixel-by-pixel methods. We applied this algorithm to radiances measured by the Greenhouse Gases Observing Satellite/Thermal and Near Infrared Sensor for Carbon Observations-Cloud and Aerosol Image (GOSAT/TANSO-CAI) at four wavelengths and were able to retrieve the aerosol parameters in several urban regions and other surface types. A comparison of the retrieved AOTs with those from the Aerosol Robotic Network (AERONET) indicated retrieval accuracy within ±0.077 on average. It was also found that the column-averaged SF and the aerosol single scattering albedo (SSA) underwent seasonal changes as consistent with the ground surface measurements of SSA and black carbon at Beijing, China.
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job-shop scheduling with a makespan criterion presents a real case of customized flexible furniture production optimization; the genetic algorithm for job-shop scheduling optimization is presented. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is discussed as well. All three cases are discussed from the optimization, modeling, and learning points of view.
Analysis of methods of processing of expert information by optimization of administrative decisions
NASA Astrophysics Data System (ADS)
Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.
2018-03-01
This paper proposes a methodology for defining measures in the expert estimation of the quality and reliability of applied software products. Methods for aggregating expert estimates are described using the example of a collective choice of instrumental tools for projects developing special-purpose software for the needs of institutions. Results from the operation of a dialogue decision-support system are presented, together with an algorithm for solving the choice problem based on the analytic hierarchy process. The developed algorithm can be applied in the development of expert systems for solving a wide class of problems connected in one way or another with multicriteria choice.
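The analytic hierarchy process mentioned above reduces, at its core, to taking the principal eigenvector of a pairwise-comparison matrix as the priority weights; a minimal sketch (the example matrix is invented for illustration):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of the
    pairwise-comparison matrix (Saaty's analytic hierarchy process)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Toy comparison of three candidate tools on one criterion (Saaty's 1-9 scale):
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))   # roughly [0.65, 0.23, 0.12]
```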
Assessing an AI knowledge-base for asymptomatic liver diseases.
Babic, A; Mathiesen, U; Hedin, K; Bodemar, G; Wigertz, O
1998-01-01
Discovering previously unseen knowledge from clinical data is important in the field of asymptomatic liver diseases. Avoiding liver biopsy, which is used as the ultimate confirmation of diagnosis, by making the decision based on relevant laboratory findings alone would be considered an essential form of support. The system, based on Quinlan's ID3 algorithm, was simple and efficient in extracting the sought knowledge. The basic principles of applying such AI systems are therefore described and complemented with a medical evaluation. Some of the diagnostic rules were found to be useful as decision algorithms, i.e., they could be directly applied in clinical work and made part of the knowledge base of the Liver Guide, an automated decision support system.
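ID3 grows a tree by repeatedly splitting on the feature with the highest information gain; a minimal sketch of that splitting criterion on invented toy data:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """ID3 criterion: entropy reduction from splitting on one feature."""
    n = len(labels)
    split = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        split += len(subset) / n * entropy(subset)
    return entropy(labels) - split

# Toy example: does an elevated lab value separate 'disease' from 'healthy'?
lab = ["high", "high", "normal", "normal", "high"]
dx = ["disease", "disease", "healthy", "healthy", "healthy"]
print(information_gain(lab, dx))   # about 0.42 bits
```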
Mathematical and Statistical Software Index.
1986-08-01
[Index excerpt; only fragments survive extraction: (geometric) mean, HMEAN (harmonic mean), MEDIAN (median), MODE (mode), QUANT (quantiles), OGIVE (distribution curve), IQRNG (interpercentile range), RANGE (range); topic keywords include multiphase pivoting algorithm, cross-classification, multiple discriminant analysis, cross-tabulation, multiple-objective model, curve fitting; entries include RANGEX (Correct Correlations for Curtailment of Range) and RUMMAGE II (Analysis ...).]
Quasinormal modes of Reissner-Nordstrom black holes
NASA Technical Reports Server (NTRS)
Leaver, Edward W.
1990-01-01
A matrix-eigenvalue algorithm is presented for accurately computing the quasinormal frequencies and modes of charged static black holes. The method is then refined through the introduction of a continued-fraction step. The approach should generalize to a variety of nonseparable wave equations, including the Kerr-Newman case of charged rotating black holes.
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, the centroid of which is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on non-dominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
Malo, Sergio; Fateri, Sina; Livadas, Makis; Mares, Cristinel; Gan, Tat-Hean
2017-07-01
Ultrasonic guided wave testing is a technique successfully used in many industrial scenarios worldwide. For many complex applications, the dispersive nature and multimode behavior of guided waves still pose a challenge for correct defect detection. In order to improve the performance of the guided waves, a 2-D compressed pulse analysis is presented in this paper. This novel technique combines pulse compression and dispersion compensation in order to improve the signal-to-noise ratio (SNR) and the temporal-spatial resolution of the signals. The ability of the technique to discriminate different wave modes is also highlighted. In addition, an iterative algorithm is developed to identify the wave modes of interest using adaptive peak detection to enable automatic wave mode discrimination. The employed algorithm is developed in order to pave the way for further in situ applications. The performance of Barker-coded and chirp waveforms is studied in a multimodal scenario where longitudinal and flexural wave packets are superposed. The technique is tested in both synthetic and experimental conditions. The enhancements in SNR and temporal resolution are quantified, as well as the ability to accurately calculate the propagation distance for different wave modes.
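Pulse compression correlates the received signal against the transmitted coded waveform so that a long, low-amplitude excitation collapses into a sharp peak; the matched-filter sketch below uses an assumed linear chirp and invented echo positions, not the paper's experimental setup:

```python
import numpy as np
from scipy.signal import chirp

fs = 1e6
t = np.arange(0, 100e-6, 1 / fs)
tx = chirp(t, f0=50e3, t1=t[-1], f1=150e3)          # transmitted coded waveform

# Received signal: two attenuated echoes plus noise (positions are invented)
rx = np.zeros(4000)
rx[500:500 + len(tx)] += tx
rx[700:700 + len(tx)] += 0.6 * tx
rx += 0.2 * np.random.default_rng(0).standard_normal(len(rx))

compressed = np.correlate(rx, tx, mode="valid")     # matched filter
print(np.argsort(np.abs(compressed))[-2:])          # peaks at samples 500 and 700
```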
Callewaert, Francois; Butun, Serkan; Li, Zhongyang; Aydin, Koray
2016-01-01
The objective-first inverse-design algorithm is used to design an ultra-compact optical diode. Based on silicon and air only, this optical diode relies on asymmetric spatial mode conversion between the left and right ports. The first even mode incident from the left port is transmitted to the right port after being converted into an odd mode. On the other hand, the same mode incident from the right port is reflected back by the optical diode's dielectric structure. The convergence and performance of the algorithm are studied, along with a transform method that converts a continuous-permittivity medium into a binary material design. The optimal device is studied with full-wave electromagnetic simulations to compare its behavior under right and left incidence, in both 2D and 3D settings. A parametric study is designed to understand the impact of the design-space size and initial conditions on the optimized devices' performance. A broadband optical diode behavior is observed after optimization, with a large rejection ratio between the two transmission directions. This illustrates the potential of the objective-first inverse-design method to design ultra-compact broadband photonic devices. PMID:27586852
Computation of elementary modes: a unifying framework and the new binary approach
Gagneur, Julien; Klamt, Steffen
2004-01-01
Background Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. Recent years have seen a proliferation of algorithms dedicated to it, calling for a summarizing point of view and continued improvement of the current methods. Results We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software package that is free for academics. The binary approach decreases the memory demand by up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
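To restate the stated equivalence in the standard notation of the field (the notation, not the content, is assumed here): for a network with stoichiometric matrix S over q reactions and irreversible reaction set Irr, the flux cone is

```latex
C = \{\, v \in \mathbb{R}^{q} \;:\; S v = 0,\ v_i \ge 0 \ \text{for all } i \in \mathrm{Irr} \,\}
```

and the elementary modes correspond, after suitable treatment of the reversible reactions, to the extreme rays of C. The binary approach stores only the support pattern of each ray, i.e., which entries v_i are nonzero, and recovers the stoichiometric coefficients in post-processing.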