Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game-theoretic rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results testify to the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
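As a rough illustration of the bargaining idea, the sketch below computes a Nash Bargaining Solution in closed form under the simplifying assumption of linear, transferable utilities, where each macroblock has a disagreement point (minimum acceptable bits) and an optional perceptual weight. The paper's actual utility functions and HVS weighting are more elaborate; all names and numbers here are hypothetical.

```python
import numpy as np

def nbs_bit_allocation(frame_budget, disagreement_bits, weights=None):
    """Nash Bargaining Solution for bit allocation (sketch).

    Each macroblock i is a player with utility u_i(b_i) = b_i - d_i,
    where d_i is its disagreement point (minimum acceptable bits).
    Maximizing prod_i (b_i - d_i)^(w_i) subject to sum_i b_i = B gives
    each player its disagreement bits plus a (weighted) share of the
    surplus bits.
    """
    d = np.asarray(disagreement_bits, dtype=float)
    surplus = frame_budget - d.sum()
    if surplus < 0:
        raise ValueError("budget below the sum of disagreement points")
    if weights is None:                   # symmetric NBS: equal shares
        return d + surplus / d.size
    w = np.asarray(weights, dtype=float)  # generalized NBS: weighted shares
    return d + surplus * w / w.sum()

# Example: 4 macroblocks, 1000-bit frame budget, hypothetical perceptual weights
bits = nbs_bit_allocation(1000, [50, 80, 120, 60], weights=[1, 2, 4, 1])
print(bits, bits.sum())
```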
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics, so that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model, and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann
2013-06-01
Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary outputs of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA with a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture misdetected features and to optimize the bit allocation of underdiscretized features, and 2) a genuine interval concealment technique to alleviate the crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
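To make dynamic, discriminability-driven bit allocation concrete, here is a hedged sketch of the core optimization behind DROBA-style schemes: maximize the product of per-feature detection rates subject to a total bit budget, solved by dynamic programming. The detection-rate table and budget are invented for illustration; this is not the paper's two-stage refinement.

```python
import numpy as np

def dynamic_bit_allocation(det_rates, total_bits):
    """Allocate bits to features to maximize the product of per-feature
    detection rates, DROBA-style (sketch).

    det_rates[i][b] = detection probability of feature i when given b bits
    (b = 0 means the feature is unused and contributes probability 1).
    Solved by dynamic programming over features in the log domain.
    """
    n, bmax1 = det_rates.shape              # bmax1 = b_max + 1
    log_p = np.log(np.maximum(det_rates, 1e-12))
    NEG = -np.inf
    # dp[i][t] = best total log detection rate using features 0..i-1 and t bits
    dp = np.full((n + 1, total_bits + 1), NEG)
    choice = np.zeros((n + 1, total_bits + 1), dtype=int)
    dp[0, 0] = 0.0
    for i in range(n):
        for t in range(total_bits + 1):
            if dp[i, t] == NEG:
                continue
            for b in range(min(bmax1, total_bits - t + 1)):
                v = dp[i, t] + log_p[i, b]
                if v > dp[i + 1, t + b]:
                    dp[i + 1, t + b] = v
                    choice[i + 1, t + b] = b
    # Trace the allocation back from the full budget
    alloc, t = [], total_bits
    for i in range(n, 0, -1):
        b = choice[i, t]
        alloc.append(b)
        t -= b
    return alloc[::-1]

# Example: 3 features, up to 3 bits each, 5 bits total (hypothetical rates)
p = np.array([[1.0, 0.90, 0.70, 0.40],
              [1.0, 0.95, 0.85, 0.60],
              [1.0, 0.80, 0.50, 0.20]])
print(dynamic_bit_allocation(p, 5))
```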
A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality
NASA Astrophysics Data System (ADS)
Liu, Li; Zhuang, Xinhua
2009-01-01
It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newer H.264/AVC encoder because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
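A minimal sketch of what a constant-distortion frame bit target can look like, assuming the classic high-rate Gaussian model D = sigma^2 * 2^(-2R) rather than the paper's exact closed-form model; the variance, target MSE, and frame size below are hypothetical.

```python
import math

def frame_bit_target(residual_variance, target_mse, num_pixels):
    """Closed-form frame-level bit target for a constant-distortion goal.

    Assumes the classic high-rate Gaussian model D = sigma^2 * 2^(-2R),
    with R in bits per pixel, so hitting a target MSE D_t requires
    R = 0.5 * log2(sigma^2 / D_t) bits per pixel.
    """
    if target_mse >= residual_variance:
        return 0.0             # target already met without spending bits
    bpp = 0.5 * math.log2(residual_variance / target_mse)
    return bpp * num_pixels

# Example: stage-1 pass measured residual variance 400 per pixel,
# target MSE 40 for smooth quality, CIF frame (352*288 pixels)
print(frame_bit_target(400.0, 40.0, 352 * 288))
```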
Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.
Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward
2006-08-01
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to post-compression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
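The rate-distortion-optimal procedure can be sketched with the standard Lagrangian bisection used in post-compression R-D optimization: pick, for each slice, the operating point minimizing D + lambda*R, and bisect lambda until the budget is met. This is a generic sketch with invented R-D points, not the paper's exact algorithm.

```python
import numpy as np

def lagrangian_slice_allocation(rd_points, rate_budget,
                                lo=1e-6, hi=1e6, iters=60):
    """Pick one (rate, distortion) operating point per slice so that total
    rate fits the budget, by bisecting the Lagrangian slope lambda.

    rd_points: list of arrays of shape (k_i, 2) with columns (rate, mse),
    assumed to lie on each slice's convex R-D hull; the initial hi is
    assumed large enough that the cheapest points fit the budget.
    """
    def pick(lmbda):
        idx = [int(np.argmin(p[:, 1] + lmbda * p[:, 0])) for p in rd_points]
        rate = sum(p[i, 0] for p, i in zip(rd_points, idx))
        return idx, rate

    for _ in range(iters):     # larger lambda => cheaper, coarser points
        mid = 0.5 * (lo + hi)
        idx, rate = pick(mid)
        if rate > rate_budget:
            lo = mid           # over budget: penalize rate more
        else:
            hi = mid
    return pick(hi)[0]

# Example: two slices, three hypothetical operating points each (rate, mse)
slices = [np.array([[100, 50.0], [200, 20.0], [400, 8.0]]),
          np.array([[ 80, 70.0], [160, 30.0], [320, 12.0]])]
print(lagrangian_slice_allocation(slices, rate_budget=500))
```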
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model overcomes the legacy "chicken-and-egg" dilemma in video coding. Second, a cooperative bargaining game based on a mixed R-D model is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
Visual Perception Based Rate Control Algorithm for HEVC
NASA Astrophysics Data System (ADS)
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology for reconciling video quality with the limited encoding resources available during video communication. However, the HEVC rate control benchmark algorithm ignores subjective visual perception: for key focus regions, LCU-level bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bit rate accuracy, compared with HEVC (HM15.0). The proposed algorithm thus improves subjective video quality across various video applications.
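A toy sketch of weight-driven LCU bit allocation follows; the luminance and motion cues are invented stand-ins for the paper's perceptual model and only illustrate how normalized weights translate into per-LCU bit targets.

```python
import numpy as np

def perceptual_lcu_weights(luminance, motion, alpha=0.5):
    """Toy perceptual weighting of LCUs from luminance and motion cues.

    The cues below are hypothetical: mid-tone regions and high-motion
    regions are assumed more salient. The paper's exact weighting differs;
    this only illustrates weight-driven bit allocation.
    """
    lum = np.asarray(luminance, dtype=float)
    mot = np.asarray(motion, dtype=float)
    w_lum = 1.0 - np.abs(lum - lum.mean()) / 255.0   # hypothetical luminance cue
    w_mot = mot / (mot.max() + 1e-9)                 # normalized motion cue
    w = alpha * w_lum + (1 - alpha) * w_mot
    return w / w.sum()

def allocate_bits(frame_bits, weights):
    return frame_bits * weights          # bit target per LCU

w = perceptual_lcu_weights([120, 200, 60, 128], [2.0, 0.5, 8.0, 1.0])
print(allocate_bits(80000, w))
```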
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
Information efficiency in visual communication
NASA Astrophysics Data System (ADS)
Alter-Gartenberg, Rachel; Rahman, Zia-ur
1993-08-01
This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, of which the first can be solved with relaxation and the second can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which show up to 10 dB improvement over the method without rate and power optimization in medium and low signal-to-noise ratio cases.
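The first-stage optimization is a classic Lagrange/KKT bit allocation across sub-bands. Below is a sketch under the standard high-rate model (quantization distortion sigma_k^2 * 2^(-2 b_k) with equal-size bands); the clip-and-resolve loop mirrors the relaxation mentioned above, and the variances are hypothetical.

```python
import numpy as np

def waterfilling_bits(variances, avg_bits):
    """High-rate optimal bit allocation across sub-bands (Lagrange/KKT):
    minimize sum_k sigma_k^2 * 2^(-2 b_k) subject to mean(b_k) = avg_bits.

    Closed form: b_k = avg_bits + 0.5 * log2(sigma_k^2 / geometric_mean).
    Negative allocations are clipped to zero and the remaining budget is
    redistributed over the active bands, mirroring the relaxed solution.
    """
    var = np.asarray(variances, dtype=float)
    active = np.ones(len(var), dtype=bool)
    b = np.zeros(len(var))
    while True:
        g = np.exp(np.log(var[active]).mean())        # geometric mean
        b_act = (avg_bits * len(var) / active.sum()
                 + 0.5 * np.log2(var[active] / g))
        if (b_act >= 0).all():
            b[active] = b_act
            return b
        # drop the least energetic band to 0 bits and re-solve on the rest
        drop = np.where(active)[0][np.argmin(b_act)]
        active[drop] = False

# Example: four sub-bands with very different energies, 2 bits/sample average
print(waterfilling_bits([900.0, 100.0, 4.0, 0.25], avg_bits=2.0))
```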
A Simplified GCS-DCSK Modulation and Its Performance Optimization
NASA Astrophysics Data System (ADS)
Xu, Weikai; Wang, Lin; Chi, Chong-Yung
2016-12-01
In this paper, a simplified Generalized Code-Shifted Differential Chaos Shift Keying (GCS-DCSK) scheme, whose transmitter needs no delay circuits, is proposed. However, its performance deteriorates because the orthogonality between substreams cannot be guaranteed. To optimize its performance, the system model of the proposed GCS-DCSK with power allocation over substreams is presented. An approximate bit error rate (BER) expression for the proposed model, as a function of the substream powers, is derived using a Gaussian approximation. Based on the BER expression, an optimal power allocation strategy between the information substreams and the reference substream is obtained. Simulation results show that the BER performance of the proposed GCS-DCSK with optimal power allocation is significantly improved when the number of substreams M is large.
Joint-layer encoder optimization for HEVC scalable extensions
NASA Astrophysics Data System (ADS)
Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong
2014-09-01
Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard, based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and the in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate resources more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep the total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
Distortion outage minimization in Nakagami fading using limited feedback
NASA Astrophysics Data System (ADS)
Wang, Chih-Hong; Dey, Subhrakanti
2011-12-01
We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. This includes a description of a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
Real-time implementation of second generation of audio multilevel information coding
NASA Astrophysics Data System (ADS)
Ali, Murtaza; Tewfik, Ahmed H.; Viswanathan, V.
1994-03-01
This paper describes a real-time implementation of a novel wavelet-based audio compression method. The method is based on the discrete wavelet transform (DWT) representation of signals. A bit allocation procedure is used to allocate bits to the transform coefficients in an adaptive fashion. The bit allocation procedure has been designed to take advantage of the masking effect in human hearing; it minimizes the number of bits required to represent each frame of audio signals at a fixed distortion level. The real-time implementation provides almost transparent compression of monophonic CD-quality audio signals (sampled at 44.1 kHz and quantized using 16 bits/sample) at bit rates of 64-78 kbits/s. Our implementation uses two ASPI Elf boards, each of which is built around a TI TMS320C31 DSP chip. The time required for encoding a mono CD signal is about 92 percent of real time, and that for decoding about 61 percent.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method covers the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for error-free channels.
System, apparatus and methods to implement high-speed network analyzers
Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E
2015-11-10
Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such a dispatcher facility can also be used as a cache of policies: if a policy is found, the packet manipulations associated with it can be performed quickly. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
Huang, Min; Liu, Zhaoqing; Qiao, Liyan
2014-10-10
While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation
2014-01-01
Background: Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure, and because of the fast-growing rate of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and the aim of this work. Methods: This paper presents an algorithm for the data compression of S-EMG signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform for spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign shorter digital word lengths to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shape methods were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root, and rotated hyperbolic tangent. Results: The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank; objective performance evaluation metrics are presented, along with comparisons with other encoders proposed in the scientific literature. Conclusions: The decreasing bit allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, is an efficient procedure, and performance comparisons of the proposed S-EMG data compression algorithm with established techniques show promising results. PMID:24571620
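A hedged sketch of what decreasing spectral-shape word-length assignment can look like follows; the three shapes below are illustrative stand-ins for the four variants compared in the paper, with invented decay parameters.

```python
import numpy as np

def spectral_shape_bits(num_subbands, b_max, shape="exp", rate=0.5):
    """Dynamic word lengths following a decreasing spectral shape:
    low-frequency wavelet sub-bands get longer code words than
    high-frequency ones. Shapes and parameters are illustrative."""
    k = np.arange(num_subbands, dtype=float)
    if shape == "exp":                       # decreasing exponential
        w = np.exp(-rate * k)
    elif shape == "linear":                  # decreasing linear
        w = np.maximum(1.0 - rate * k / num_subbands, 0.0)
    elif shape == "sqrt":                    # decreasing square-root
        w = np.sqrt(np.maximum(1.0 - k / num_subbands, 0.0))
    else:
        raise ValueError(shape)
    return np.maximum(np.round(b_max * w), 1).astype(int)

# 8 sub-bands, up to 10 bits for the lowest-frequency band
for s in ("exp", "linear", "sqrt"):
    print(s, spectral_shape_bits(8, 10, shape=s))
```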
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
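A rough sketch of the three-way split using PyWavelets follows: the approximation band stands in for local means, large detail coefficients are treated as edges and quantized finely, and the remaining detail coefficients are treated as texture and quantized coarsely (plain coarse scalar quantization standing in for the paper's low-rate vector quantization). Thresholds and step sizes are hypothetical.

```python
import numpy as np
import pywt

def perceptual_quantize(image, edge_thresh=20.0, fine_step=2.0, coarse_step=16.0):
    """Sketch of a means/edges/texture split over a wavelet transform:
    keep the approximation (local means) at fine precision, quantize
    large detail coefficients (edges) finely and the rest (texture)
    coarsely. Scalar quantization stands in for the paper's VQ."""
    coeffs = pywt.wavedec2(image, 'db2', level=3)
    out = [np.round(coeffs[0] / fine_step) * fine_step]   # local means
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for c in (cH, cV, cD):
            edges = np.abs(c) >= edge_thresh              # magnitude-based split
            q = np.where(edges,
                         np.round(c / fine_step) * fine_step,
                         np.round(c / coarse_step) * coarse_step)
            bands.append(q)
        out.append(tuple(bands))
    return pywt.waverec2(out, 'db2')

img = np.random.default_rng(0).normal(128, 30, (64, 64))
rec = perceptual_quantize(img)
print(float(np.mean((img - rec[:64, :64]) ** 2)))  # reconstruction MSE
```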
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
NASA Astrophysics Data System (ADS)
Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng
2006-12-01
This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
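The Gray-code effect is easy to see in isolation: adjacent quantization indices differ in exactly one bit after the mapping g = b XOR (b >> 1), which keeps the bit planes of the source and its side information highly correlated. A minimal sketch:

```python
def binary_to_gray(x: int) -> int:
    """Standard reflected binary (Gray) code: adjacent integers differ
    in exactly one bit, so small quantization-index differences between
    the source and its side information flip few bit-plane symbols."""
    return x ^ (x >> 1)

def gray_to_binary(g: int) -> int:
    """Inverse mapping by cumulative XOR of the shifted code word."""
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

# Quantization indices 11 and 12 differ in 3 bits in natural binary
# (1011 vs 1100) but in only 1 bit after Gray mapping (1110 vs 1010),
# raising the bit-plane correlation the Slepian-Wolf coder exploits.
for q in (11, 12):
    print(q, format(q, "04b"), format(binary_to_gray(q), "04b"))
assert gray_to_binary(binary_to_gray(12)) == 12
```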
Power allocation strategies to minimize energy consumption in wireless body area networks.
Kailas, Aravind
2011-01-01
The wide scale deployment of wireless body area networks (WBANs) hinges on designing energy efficient communication protocols to support the reliable communication as well as to prolong the network lifetime. Cooperative communications, a relatively new idea in wireless communications, offers the benefits of multi-antenna systems, thereby improving the link reliability and boosting energy efficiency. In this short paper, the advantages of resorting to cooperative communications for WBANs in terms of minimized energy consumption are investigated. Adopting an energy model that encompasses energy consumptions in the transmitter and receiver circuits, and transmitting energy per bit, it is seen that cooperative transmission can improve energy efficiency of the wireless network. In particular, the problem of optimal power allocation is studied with the constraint of targeted outage probability. Two strategies of power allocation are considered: power allocation with and without posture state information. Using analysis and simulation-based results, two key points are demonstrated: (i) allocating power to the on-body sensors making use of the posture information can reduce the total energy consumption of the WBAN; and (ii) when the channel condition is good, it is better to recruit less relays for cooperation to enhance energy efficiency.
Branch target buffer design and optimization
NASA Technical Reports Server (NTRS)
Perleberg, Chris H.; Smith, Alan J.
1993-01-01
Consideration is given to two major issues in the design of branch target buffers (BTBs), with the goal of achieving maximum performance for a given number of bits allocated to the BTB design. The first issue is BTB management; the second is what information to keep in the BTB. A number of solutions to these problems are reviewed, and various optimizations in the design of BTBs are discussed. Design target miss ratios for BTBs are developed, making it possible to estimate the performance of BTBs for real workloads.
Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications
NASA Astrophysics Data System (ADS)
Choi, Jinseok; Evans, Brian L.; Gatherer, Alan
2017-12-01
In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
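The 1/3-power log law can be reproduced from a simple model (an assumption of this sketch, not the paper's full derivation): minimize per-chain quantization MSE proportional to SNR_i * 2^(-2 b_i) subject to total ADC power growing as 2^(b_i) per chain; the KKT conditions then give 2^(b_i) proportional to SNR_i^(1/3).

```python
import numpy as np

def adc_bit_allocation(snr, total_power, power_per_level=1.0):
    """Closed-form ADC bit allocation under a total ADC power budget.

    Model (an assumption of this sketch): quantization MSE of chain i
    falls as snr[i] * 2^(-2 b_i) while ADC power grows as 2^(b_i).
    Lagrangian optimality gives 2^(b_i) proportional to snr[i]^(1/3),
    i.e. b_i grows with (1/3) * log2(snr), matching the abstract.
    """
    s = np.asarray(snr, dtype=float)
    scale = total_power / (power_per_level * np.sum(s ** (1.0 / 3.0)))
    b = np.log2(scale * s ** (1.0 / 3.0))
    # No negative resolutions; a full solution would re-solve the KKT
    # conditions after clipping instead of simply truncating.
    return np.maximum(b, 0.0)

# Example: four RF chains, power budget equal to four 4-bit ADCs
snrs = [100.0, 10.0, 1.0, 0.1]
print(adc_bit_allocation(snrs, total_power=4 * 2 ** 4))
```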
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
NASA Astrophysics Data System (ADS)
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard that fits the available channel bandwidth for the client when transmitting video bit-streams via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find models of video bit-rate conversion. The transcoded video conforms to the target bit-rate by re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only preserves coding quality but also improves the efficiency of video transcoding at low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
Wang, Wei; Wang, Chunqiu; Zhao, Min
2014-03-01
To ease the burden on hospitalization capacity, an emerging swallowable-capsule technology has evolved to serve as a remote gastrointestinal (GI) disease examination technique with the aid of the wireless body sensor network (WBSN). Secure multimedia transmission in such a swallowable-capsule-based WBSN faces critical challenges, including energy efficiency and content quality guarantees. In this paper, we propose a joint resource allocation and stream authentication scheme to maintain the best possible video quality while ensuring security and energy efficiency in GI-WBSNs. The contribution of this research is twofold. First, we establish a unique signature-hash (S-H) diversity approach in the authentication domain to optimize video authentication robustness and the authentication bit rate overhead over a wireless channel. Based on the full exploration of S-H authentication diversity, we propose a new two-tier signature-hash (TTSH) stream authentication scheme to improve the video quality by reducing authentication dependence overhead while protecting its integrity. Second, we propose to combine this authentication scheme with a unique S-H oriented unequal resource allocation (URA) scheme to improve the energy-distortion-authentication performance of wireless video delivery in GI-WBSN. Our analysis and simulation results demonstrate that the proposed TTSH with URA scheme achieves considerable gain in both authenticated video quality and energy efficiency.
NASA Astrophysics Data System (ADS)
Andreotti, Riccardo; Del Fiorentino, Paolo; Giannetti, Filippo; Lottici, Vincenzo
2016-12-01
This work proposes a distributed resource allocation (RA) algorithm for packet bit-interleaved coded OFDM transmissions in the uplink of heterogeneous networks (HetNets), characterized by small cells deployed over a macrocell area and sharing the same band. Every user allocates its transmission resources, i.e., bits per active subcarrier, coding rate, and power per subcarrier, to minimize the power consumption while both guaranteeing a target quality of service (QoS) and accounting for the interference inflicted by other users transmitting over the same band. The QoS consists of the number of information bits delivered in error-free packets per unit of time, or goodput (GP), estimated at the transmitter by resorting to an efficient effective SNR mapping technique. First, the RA problem is solved in the point-to-point case, thus deriving an approximate yet accurate closed-form expression for the power allocation (PA). Then, the interference-limited HetNet case is examined, where the RA problem is described as a non-cooperative game, providing a solution in terms of generalized Nash equilibrium. Thanks to the closed-form of the PA, the solution analysis is based on the best response concept. Hence, sufficient conditions for existence and uniqueness of the solution are analytically derived, along with a distributed algorithm capable of reaching the game equilibrium.
A New Approach for Fingerprint Image Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach for the bit allocation that makes more sense where theory is concerned. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
Wireless visual sensor network resource allocation using cross-layer optimization
NASA Astrophysics Data System (ADS)
Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.
2009-01-01
In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
Fuel management optimization using genetic algorithms and expert knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1996-09-01
The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.
Hu, Sudeng; Wang, Hanli; Kwong, Sam
2012-04-01
In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (QP) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal QP clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed QP clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
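A toy version of an adaptive QP clip is sketched below; the window width and the complexity-triggered widening rule are assumptions of this sketch, not the paper's formula.

```python
def clip_qp(qp_rc, qp_prev, complexity_ratio, base_range=2):
    """Toy adaptive QP clip: the rate-control QP is clipped to a window
    around the previous frame's QP, and the window widens when frame
    complexity changes sharply (the widening rule here is hypothetical)."""
    widen = abs(complexity_ratio - 1.0) > 0.5
    delta = base_range + (2 if widen else 0)
    return max(qp_prev - delta, min(qp_prev + delta, qp_rc))

# Steady scene: RC wants QP 34 after QP 28 -> clipped to 30 for smooth quality
print(clip_qp(34, 28, complexity_ratio=1.05))
# Scene change (complexity doubles): window widens, QP 32 allowed
print(clip_qp(34, 28, complexity_ratio=2.0))
```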
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, cooperative sensors with good channel conditions to the sink node are utilized to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To address the multiple access interference (MAI) that arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA and adopting time-frequency coded cooperative transmission with the D-PSO algorithm. PMID:26343660
Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.
Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin
2005-03-01
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior, D(R) ~ c_0 2^(-c_1 R), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
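The prune half of the prune-and-join strategy can be shown compactly for degree-0 (piecewise-constant) models: split a segment only when the children's Lagrangian cost D + lambda*R beats the parent's. The sketch below uses a crude per-node bit count and omits the join step and higher-degree fits; it illustrates the R-D pruning logic, not the paper's coder.

```python
import numpy as np

def rd_tree(signal, lam, depth=4):
    """Binary-tree segmentation with Lagrangian (D + lambda*R) pruning.

    Each node fits a constant; children are kept only if their combined
    cost beats the parent's. The rate term is a crude per-node bit count.
    """
    d = float(np.sum((signal - signal.mean()) ** 2))   # distortion of the fit
    cost_leaf = d + lam * 8                            # ~8 bits per constant
    if depth == 0 or len(signal) < 2:
        return cost_leaf, [len(signal)]
    mid = len(signal) // 2
    cl, segl = rd_tree(signal[:mid], lam, depth - 1)
    cr, segr = rd_tree(signal[mid:], lam, depth - 1)
    cost_split = cl + cr + lam * 1                     # 1 bit to flag the split
    if cost_leaf <= cost_split:                        # prune: keep the parent
        return cost_leaf, [len(signal)]
    return cost_split, segl + segr

x = np.r_[np.zeros(32), np.ones(32) * 3.0]            # piecewise-constant signal
cost, segments = rd_tree(x, lam=0.5)
print("segment lengths:", segments)                   # expect a split at 32
```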
NASA Astrophysics Data System (ADS)
Huang, Zhiqiang; Xie, Dou; Xie, Bing; Zhang, Wenlin; Zhang, Fuxiao; He, Lei
2018-03-01
The undesired stick-slip vibration is the main source of PDC bit failure, such as tooth fracture and tooth loss. Therefore, the study of PDC bit failure based on stick-slip vibration analysis is crucial to prolonging the service life of PDC bits and improving ROP (rate of penetration). For this purpose, a piecewise-smooth torsional model with 4 DOF (degrees of freedom) of the drill string system plus PDC bit is proposed to simulate non-impact drilling. In this model, both the friction and cutting behaviors of the PDC bit are innovatively introduced. The results reveal that the PDC bit is more prone to failure than other drilling tools due to the more severe stick-slip vibration. Moreover, reducing WOB (weight on bit) and improving driving torque can effectively mitigate the stick-slip vibration of the PDC bit. Therefore, PDC bit failure can be alleviated by optimizing drilling parameters. In addition, a new 4-DOF torsional model is established to simulate torsional impact drilling, and the effect of torsional impact on the PDC bit's stick-slip vibration is analyzed by use of an engineering example. It can be concluded that torsional impact can mitigate stick-slip vibration, prolonging the service life of the PDC bit and improving drilling efficiency, which is consistent with the field experiment results.
1997-01-01
…create a dependency tree containing an optimum set of n-1 first-order dependencies. To do this, first, we select an arbitrary bit Xroot to place at the… the root to an arbitrary bit Xroot. - For all other bits Xi, set bestMatchingBitInTree[Xi] to Xroot. - While not all bits have been…
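The fragment appears to describe a Prim-style greedy construction of the dependency tree. A speculative completion in Python, using empirical pairwise mutual information between bit positions as the matching criterion; the name bestMatchingBitInTree comes from the fragment, while everything else (the MI criterion included) is an assumption about the elided text.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information between two binary columns."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

def build_dependency_tree(samples):
    """Greedy (Prim-style) maximum-MI spanning tree over bit positions."""
    n_bits = samples.shape[1]
    root = 0                                    # arbitrary root bit X_root
    in_tree = {root}
    parent = {root: None}
    # bestMatchingBitInTree[i]: tree bit with highest MI to out-of-tree bit i
    best = {i: root for i in range(n_bits) if i != root}
    while len(in_tree) < n_bits:
        i = max(best, key=lambda j: mutual_info(samples[:, j], samples[:, best[j]]))
        parent[i] = best.pop(i)
        in_tree.add(i)
        for j in best:                          # update the best matches
            if mutual_info(samples[:, j], samples[:, i]) > \
               mutual_info(samples[:, j], samples[:, best[j]]):
                best[j] = i
    return parent                               # parent[i] is X_i's dependency

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 6))
data[:, 1] = data[:, 0] ^ (rng.random(500) < 0.1)   # bit 1 depends on bit 0
print(build_dependency_tree(data))
```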
Effects of plastic bits on the condition and behaviour of captive-reared pheasants.
Butler, D A; Davis, C
2010-03-27
Between 2005 and 2007, data were collected from game farms across England and Wales to examine the effects of the use of bits on the physiological condition and behaviour of pheasants. On each site, two pheasant pens kept in the same conditions were randomly allocated to either use bits or not. The behaviour and physiological conditions of pheasants in each treatment pen were assessed on the day of bitting and weekly thereafter until release. Detailed records of feed usage, medications and mortality were also kept. Bits halved the number of acts of bird-on-bird pecking, but they doubled the incidence of headshaking and scratching. Bits caused nostril inflammation and bill deformities in some birds, particularly after seven weeks of age. In all weeks after bitting, feather condition was poorer in non-bitted pheasants than in those fitted with bits. Less than 3 per cent of bitted birds had damaged skin, but in the non-bitted pens this figure increased over time to 23 per cent four weeks later. Feed use and mortality did not differ between bitted and non-bitted birds.
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-01-01
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sum of the power and the reciprocal of the channel-to-noise ratio for each user is equal across its subchannels. Extensive simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
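The SPA principle quoted above (power plus the reciprocal of the channel-to-noise ratio held constant across a user's subchannels) is the classic water-filling condition. A minimal sketch for one user, assuming the ACO step has already assigned the subcarriers:

```python
import numpy as np

def waterfill(cnr, p_total):
    """Water-filling over one user's assigned subchannels.

    Solves p_i = mu - 1/cnr_i (clamped at zero) with sum(p) = p_total,
    i.e. the 'power plus reciprocal CNR is constant' rule, by iteratively
    dropping subchannels whose water level falls below 1/cnr_i.
    """
    cnr = np.asarray(cnr, dtype=float)
    active = np.ones(len(cnr), dtype=bool)
    while True:
        mu = (p_total + np.sum(1.0 / cnr[active])) / active.sum()
        p = np.where(active, mu - 1.0 / cnr, 0.0)
        if p.min() >= 0:
            return p
        active &= p > 0                  # drop negative-power subchannels

p = waterfill([5.0, 2.0, 0.3], p_total=1.0)
print(p.round(3), "check:", (p + 1 / np.array([5.0, 2.0, 0.3])).round(3))
```

The check line prints a constant value on the active subchannels, which is exactly the stated equality principle.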
Perceptually tuned low-bit-rate video codec for ATM networks
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien
1996-02-01
In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated with 2 × 64 kbps and the cells of the second layer are all lost.
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
Design of a 0.13-μm CMOS cascade expandable ΣΔ modulator for multi-standard RF telecom systems
NASA Astrophysics Data System (ADS)
Morgado, Alonso; del Río, Rocío; de la Rosa, José M.
2007-05-01
This paper reports a 130-nm CMOS programmable cascade ΣΔ modulator for multi-standard wireless terminals, capable of operating on three standards: GSM, Bluetooth and UMTS. The modulator is reconfigured at both architecture and circuit level in order to adapt its performance to the different standards specifications with optimized power consumption. The design of the building blocks is based upon a top-down CAD methodology that combines simulation and statistical optimization at different levels of the system hierarchy. Transistor-level simulations show correct operation for all standards, featuring 13-bit, 11.3-bit and 9-bit effective resolution within 200-kHz, 1-MHz and 4-MHz bandwidth, respectively.
Optimal resource allocation for defense of targets based on differing measures of attractiveness.
Bier, Vicki M; Haphuriwat, Naraphorn; Menoyo, Jaime; Zimmerman, Rae; Culpen, Alison M
2008-06-01
This article describes the results of applying a rigorous computational model to the problem of the optimal defensive resource allocation among potential terrorist targets. In particular, our study explores how the optimal budget allocation depends on the cost effectiveness of security investments, the defender's valuations of the various targets, and the extent of the defender's uncertainty about the attacker's target valuations. We use expected property damage, expected fatalities, and two metrics of critical infrastructure (airports and bridges) as our measures of target attractiveness. Our results show that the cost effectiveness of security investment has a large impact on the optimal budget allocation. Also, different measures of target attractiveness yield different optimal budget allocations, emphasizing the importance of developing more realistic terrorist objective functions for use in budget allocation decisions for homeland security.
Fabrication of Fe-Based Diamond Composites by Pressureless Infiltration
Li, Meng; Sun, Youhong; Meng, Qingnan; Wu, Haidong; Gao, Ke; Liu, Baochang
2016-01-01
A metal-based matrix is usually used for the fabrication of diamond bits in order to achieve favorable properties and easy processing. In the effort to reduce the cost and to attain the desired bit properties, researchers have brought more attention to diamond composites. In this paper, Fe-based impregnated diamond composites for drill bits were fabricated by using a pressureless infiltration sintering method at 970 °C for 5 min. In addition, boron was introduced into the Fe-based diamond composites. The influence of boron on the density, hardness, bending strength, grinding ratio, and microstructure was investigated. The Fe-based diamond composite with 1 wt % B has the best overall performance, with the grinding ratio in particular improving by 80%. Comparison with tungsten carbide (WC)-based diamond composites with and without 1 wt % B showed that the Fe-based diamond composite with 1 wt % B exhibits higher bending strength and wear resistance, satisfying drill bit requirements. PMID:28774124
Energy Efficiency Optimization in Relay-Assisted MIMO Systems With Perfect and Statistical CSI
NASA Astrophysics Data System (ADS)
Zappone, Alessio; Cao, Pan; Jorswieck, Eduard A.
2014-01-01
A framework for energy-efficient resource allocation in a single-user, amplify-and-forward relay-assisted MIMO system is devised in this paper. Previous results in this area have focused on rate maximization or sum power minimization problems, whereas fewer results are available when bits/Joule energy efficiency (EE) optimization is the goal. The performance metric to optimize is the ratio between the system's achievable rate and the total consumed power. The optimization is carried out with respect to the source and relay precoding matrices, subject to QoS and power constraints. Such a challenging non-convex problem is tackled by means of fractional programming and alternating maximization algorithms, for various CSI assumptions at the source and relay. In particular, the scenarios of perfect CSI and those of statistical CSI for either the source-relay or the relay-destination channel are addressed. Moreover, sufficient conditions for beamforming optimality are derived, which is useful in simplifying the system design. Numerical results are provided to corroborate the validity of the theoretical findings.
Optimality versus stability in water resource allocation.
Read, Laura; Madani, Kaveh; Inanloo, Bahareh
2014-01-15
Water allocation is a growing concern in a developing world where limited resources like fresh water are in greater demand by more parties. Negotiations over allocations often involve multiple groups with disparate social, economic, and political status and needs, who are seeking a management solution for a wide range of demands. Optimization techniques for identifying the Pareto-optimal (social planner) solution to multi-criteria, multi-participant problems are commonly implemented, although reaching agreement on this solution is often difficult. In negotiations with multiple decision-makers, parties who base decisions on individual rationality may find the social planner solution to be unfair, thus creating a need to evaluate the willingness to cooperate and the practicality of a cooperative allocation solution, i.e., the solution's stability. This paper suggests seeking solutions for multi-participant resource allocation problems through an economics-based power index allocation method. This method can inform allocation schemes by quantifying a party's willingness to participate in a negotiation rather than opt for no agreement. Through comparison of the suggested method with a range of distance-based multi-criteria decision making rules, namely, least squares, MAXIMIN, MINIMAX, and compromise programming, this paper shows that optimality and stability can produce different allocation solutions. The mismatch between the socially optimal alternative and the most stable alternative can potentially result in parties leaving the negotiation, as they may be too dissatisfied with their resource share. This finding has important policy implications as it justifies why stakeholders may not accept the socially optimal solution in practice, and underlies the necessity of considering stability, where it may be more appropriate to give up an unstable Pareto-optimal solution for an inferior stable one. The authors suggest assessing the stability of an allocation solution as an additional component of an analysis that seeks to distribute water in a negotiated process. Copyright © 2013 Elsevier Ltd. All rights reserved.
Planning Framework for Mesolevel Optimization of Urban Runoff Control Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Qianqian; Blohm, Andrew; Liu, Bo
A planning framework is developed to optimize runoff control schemes at scales relevant for regional planning at an early stage. The framework employs less sophisticated modeling approaches to allow a practical application in developing regions with limited data sources and computing capability. The methodology contains three interrelated modules: (1) the geographic information system (GIS)-based hydrological module, which aims at assessing local hydrological constraints and potential for runoff control according to regional land-use descriptions; (2) the grading module, which is built upon the method of fuzzy comprehensive evaluation and is used to establish a priority ranking system to assist the allocation of runoff control targets at the subdivision level; and (3) the genetic algorithm-based optimization module, which is included to derive Pareto-based optimal solutions for mesolevel allocation with multiple competing objectives. The optimization approach describes the trade-off between different allocation plans and simultaneously ensures that all allocation schemes satisfy the minimum requirement on runoff control. Our results highlight the importance of considering the mesolevel allocation strategy in addition to measures at macrolevels and microlevels in urban runoff management. (C) 2016 American Society of Civil Engineers.
NASA Astrophysics Data System (ADS)
Zinke, Stephan
2017-02-01
Memory-sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user-defined floating point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of the Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32-bit floating point numbers to user-defined floating point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8-bit trailing significand, thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user-defined floating point numbers are derived or determined automatically based on the data present in a product.
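The sizing arithmetic behind such a factor is straightforward: the packed width is sign + exponent + trailing significand bits, versus 32 bits for IEEE 754 single precision. A sketch that picks the smallest exponent field covering the data's binary-exponent range, in the spirit of the automatic determination mentioned above; the numbers are illustrative, not the product's configuration.

```python
import math

def exponent_bits(max_abs, min_abs):
    """Smallest exponent field covering the data's binary-exponent range."""
    e_hi = math.frexp(max_abs)[1]      # exponent of the largest magnitude
    e_lo = math.frexp(min_abs)[1]      # exponent of the smallest magnitude
    span = e_hi - e_lo + 1
    return max(1, math.ceil(math.log2(span + 1)))

def compression_factor(max_abs, min_abs, significand_bits=8):
    packed = 1 + exponent_bits(max_abs, min_abs) + significand_bits
    return 32.0 / packed               # versus IEEE 754 single precision

# e.g. a DNB-like radiance range spanning many decades
print(round(compression_factor(1e-2, 1e-10), 2))   # roughly 2x before overhead
```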
NASA Technical Reports Server (NTRS)
Larman, B. T.
1981-01-01
The Project Galileo Orbiter, with 18 microcomputers and the equivalent of 360K 8-bit bytes of memory contained within two major engineering subsystems and eight science instruments, requires that the key onboard computer system resources be managed in a very rigorous manner. Attention is given to the rationale behind the project policy, the development stage, the preliminary design stage, the design/implementation stage, and the optimization or 'scrubbing' stage. The implementation of the policy is discussed, taking into account the development of the Attitude and Articulation Control Subsystem (AACS) and the Command and Data Subsystem (CDS), the reporting of margin status, and the response to allocation oversubscription.
Optimal allocation model of construction land based on two-level system optimization theory
NASA Astrophysics Data System (ADS)
Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong
2007-06-01
The allocation of construction land is an important task in land-use planning. Whether the implementation of planning decisions succeeds usually depends on a reasonable and scientific distribution method. Considering the structure of China's land-use planning system and planning process, construction land allocation is in essence a multi-level, multi-objective decision problem. In particular, planning quantity decomposition is a two-level system optimization problem: an optimal resource allocation decision problem between a decision-maker at the upper level and a number of parallel decision-makers at the lower level. According to the characteristics of the decision-making process of a two-level decision-making system, this paper develops an optimal allocation model of construction land based on two-level linear programming. In order to verify the rationality and validity of our model, Baoan district of Shenzhen City has been taken as a test case. Under the assistance of the allocation model, construction land is allocated to ten townships of Baoan district. The result obtained from our model is compared to that of the traditional method, and the results show that our model is reasonable and usable. In the end, the paper points out the shortcomings of the model and directions for further research.
Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
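The minimum-makespan core of the first scheme has a textbook greedy approximation. The sketch below implements plain list scheduling (assign each job to the least-loaded machine, a factor-2 approximation; sorting jobs in decreasing order gives LPT and a 4/3 bound). Machine loads merely stand in for the paper's relative-entropy-based cost, which is an assumption for illustration.

```python
import heapq

def list_schedule(job_costs, n_machines):
    """Greedy list scheduling: each job goes to the least-loaded machine.

    With jobs sorted in decreasing cost this is LPT; the returned makespan
    is within a constant factor of the optimum.
    """
    heap = [(0.0, m) for m in range(n_machines)]     # (load, machine)
    heapq.heapify(heap)
    assignment = {m: [] for m in range(n_machines)}
    for j, cost in sorted(enumerate(job_costs), key=lambda t: -t[1]):
        load, m = heapq.heappop(heap)
        assignment[m].append(j)
        heapq.heappush(heap, (load + cost, m))
    return assignment, max(load for load, _ in heap)

jobs = [7, 5, 4, 3, 3, 2, 2]          # hypothetical per-antenna costs
alloc, makespan = list_schedule(jobs, n_machines=3)
print(alloc, makespan)
```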
Attending Globally or Locally: Incidental Learning of Optimal Visual Attention Allocation
ERIC Educational Resources Information Center
Beck, Melissa R.; Goldstein, Rebecca R.; van Lamsweerde, Amanda E.; Ericson, Justin M.
2018-01-01
Attention allocation determines the information that is encoded into memory. Can participants learn to optimally allocate attention based on what types of information are most likely to change? The current study examined whether participants could incidentally learn that changes to either high spatial frequency (HSF) or low spatial frequency (LSF)…
Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method
NASA Astrophysics Data System (ADS)
Chen, Xiaomin; Wang, Gang
2017-05-01
The seamless switching process of the micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model, and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectory of the inverters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnis Judzis
2006-03-01
Operators continue to look for ways to improve hard rock drilling performance through emerging technologies. A consortium of Department of Energy, operator and industry participants put together an effort to test and optimize mud driven fluid hammers as one emerging technology that has shown promise to increase penetration rates in hard rock. The thrust of this program has been to test and record the performance of fluid hammers in full scale test conditions including hard formations at simulated depth, high density/high solids drilling muds, and realistic fluid power levels. This paper details the testing and results of two 7 3/4 inch diameter mud hammers with 8 1/2 inch hammer bits. A Novatek MHN5 and an SDS Digger FH185 mud hammer were tested with several bit types, with performance being compared to a conventional (IADC Code 537) tricone bit. These tools functioned in all of the simulated downhole environments. The performance was in the range of the baseline tricone or better at lower borehole pressures, but at higher borehole pressures the performance was in the lower range or below that of the baseline tricone bit. A new drilling mode was observed while operating the MHN5 mud hammer. This mode was noticed as the weight on bit (WOB) was in transition from low to high applied load. During this new 'transition drilling mode', performance was substantially improved and in some cases outperformed the tricone bit. Improvements were noted for the SDS tool while drilling with a more aggressive bit design. Future work includes the optimization of these or the next generation tools for operating in higher density and higher borehole pressure conditions and improving bit design and technology based on the knowledge gained from this test program.
NASA Astrophysics Data System (ADS)
Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.
2017-09-01
In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods, but, in many cases, their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates generating points of the Voronoi diagram to appropriate positions. Then, we construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to another Voronoi diagram-based method, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi-diagram-based method, it is still suitable for interactive design.
Power Allocation and Outage Probability Analysis for SDN-based Radio Access Networks
NASA Astrophysics Data System (ADS)
Zhao, Yongxu; Chen, Yueyun; Mai, Zhiyuan
2018-01-01
In this paper, the performance of an SDN (Software Defined Network)-based radio access network architecture is analyzed with respect to the power allocation issue. A power allocation scheme based on the PSO-PA (particle swarm optimization power allocation) algorithm is proposed, subject to constant total power with the objective of minimizing the system outage probability. The entire access network resource configuration is controlled by the SDN controller, which then sends the optimized power distribution factors to the base station source node (SN) and the relay node (RN). Simulation results show that the proposed scheme reduces the system outage probability at low complexity.
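A bare-bones PSO loop over a scalar power-split factor conveys the flavor of PSO-PA. The outage model below is a toy two-hop expression invented for illustration, not the paper's formulation; the inertia and acceleration coefficients are standard textbook values.

```python
import numpy as np

rng = np.random.default_rng(2)

def outage(alpha, gain_sn, gain_rn, snr_th=1.0, p_total=2.0):
    """Toy outage model for a source/relay power split (illustrative only):
    outage falls as each hop's received SNR rises."""
    snr_s = alpha * p_total * gain_sn
    snr_r = (1 - alpha) * p_total * gain_rn
    return 1 - (1 - np.exp(-snr_s / snr_th)) * (1 - np.exp(-snr_r / snr_th))

# Standard PSO over the scalar split factor alpha in (0, 1).
n, iters = 20, 60
x = rng.uniform(0.05, 0.95, n)
v = np.zeros(n)
pbest = x.copy()
pbest_f = np.array([outage(a, 3.0, 5.0) for a in x])
for _ in range(iters):
    g = pbest[pbest_f.argmin()]                      # global best position
    v = 0.7 * v + 1.5 * rng.random(n) * (pbest - x) \
        + 1.5 * rng.random(n) * (g - x)
    x = np.clip(x + v, 0.01, 0.99)
    f = np.array([outage(a, 3.0, 5.0) for a in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]

print("best power split alpha:", round(pbest[pbest_f.argmin()], 3))
```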
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
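The estimation-error point can be reproduced in a few lines: when assets are statistically identical, equal weighting is the true minimum-variance portfolio, yet weights computed from a short-window sample covariance drift away from it and carry more true risk. A sketch with all parameters invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_obs = 8, 60                       # short estimation window
true_cov = 0.04 * (0.3 * np.ones((n_assets, n_assets))
                   + 0.7 * np.eye(n_assets))  # identical assets, equal correlation

# With identical assets the true optimum IS equal weighting; noise in the
# sample covariance drags the "optimized" portfolio away from it.
returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)
s = np.cov(returns, rowvar=False)

w_mv = np.linalg.solve(s, np.ones(n_assets))  # sample min-variance direction
w_mv /= w_mv.sum()
w_eq = np.full(n_assets, 1.0 / n_assets)      # simple heuristic: equal weights

for name, w in [("sample min-var", w_mv), ("equal weights", w_eq)]:
    print(name, "true variance:", float(w @ true_cov @ w))
```

Running it shows the "optimal" sample-based portfolio with a higher true variance than the equal-allocation heuristic, which is the out-of-sample effect the abstract describes.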
Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei
2017-11-07
Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance as a result of the information lost after quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme, and that increasing the data length affects our scheme only slightly. Its tracking performance improves by only 4.4% from 2- to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection
Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi
2011-01-01
The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High throughput, low latency and reliable communication have always been hot topics for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
New PDC bit optimizes drilling performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besson, A.; Gudulec, P. le; Delwiche, R.
1996-05-01
The lithology in northwest Argentina contains a major section where polycrystalline diamond compact (PDC) bits have not succeeded in the past. The section consists of dense shales and cemented sandstone stringers with limestone laminations. Conventional PDC bits experienced premature failures in the section. A new generation PDC bit tripled rate of penetration (ROP) and increased by five times the potential footage per bit. Recent improvements in PDC bit technology that enabled the improved performance include: the ability to control the PDC cutter quality; use of an advanced cutter layout defined by 3D software; using cutter face design code for optimized cleaning and cooling; and mastering vibration reduction features, including spiraled blades.
Pan, Huapu; Assefa, Solomon; Green, William M J; Kuchta, Daniel M; Schow, Clint L; Rylyakov, Alexander V; Lee, Benjamin G; Baks, Christian W; Shank, Steven M; Vlasov, Yurii A
2012-07-30
The performance of a receiver based on a CMOS amplifier circuit designed with 90nm ground rules wire-bonded to a waveguide germanium photodetector is characterized at data rates up to 40Gbps. Both chips were fabricated through the IBM Silicon CMOS Integrated Nanophotonics process on specialty photonics-enabled SOI wafers. At the data rate of 28Gbps which is relevant to the new generation of optical interconnects, a sensitivity of -7.3dBm average optical power is demonstrated with 3.4pJ/bit power-efficiency and 0.6UI horizontal eye opening at a bit-error-rate of 10^-12. The receiver operates error-free (bit-error-rate < 10^-12) up to 40Gbps with optimized power supply settings demonstrating an energy efficiency of 1.4pJ/bit and 4pJ/bit at data rates of 32Gbps and 40Gbps, respectively, with an average optical power of -0.8dBm.
Xu, Qun; Wang, Xianchao; Xu, Chao
2017-06-01
Multiplication on traditional electronic computers suffers from low calculating accuracy and long computation delays. To overcome these problems, the modified signed digit (MSD) multiplication routine is established based on the MSD system and the carry-free adder. Also, its parallel algorithm and optimization techniques are studied in detail. With the help of a ternary optical computer's characteristics, the structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication input, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and fewer calculating delays.
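For concreteness, the sketch below shows the MSD representation (digits in {-1, 0, 1} weighted by powers of two) and multiplication by digit-wise shifted partial products. Python integers stand in for the summation that the ternary optical processor performs with carry-free MSD adders (the M transformations mentioned above), so the carry-free adder itself is not reproduced.

```python
def msd_to_int(digits):
    """digits[i] in {-1, 0, 1} is the coefficient of 2**i."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

def int_to_msd(n):
    """Non-adjacent form: one canonical MSD encoding with digits {-1, 0, 1}."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)        # 1 if n % 4 == 1, -1 if n % 4 == 3
            digits.append(d)
            n -= d
        else:
            digits.append(0)
        n //= 2
    return digits or [0]

def msd_multiply(a_digits, b_digits):
    """Digit-wise partial products: each b-digit selects +A, 0 or -A, shifted.

    On the optical processor the partial products would be summed with
    carry-free MSD adders; here plain integer addition stands in for that.
    """
    a = msd_to_int(a_digits)
    return sum(d * (a << i) for i, d in enumerate(b_digits))

a, b = int_to_msd(1234), int_to_msd(5678)
assert msd_multiply(a, b) == 1234 * 5678
print(msd_multiply(a, b))
```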
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating the hydrological model, water balance model, Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at a watershed scale. The framework was advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting Genetic Algorithms-II (NSGA-II), after the parameter uncertainties of the hydrological model have been quantified into the probability distribution of runoff as the inputs of CCP model, and the chance constraints were converted to the corresponding deterministic versions. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The optimal Pareto-front results reflected the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e. q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints and a worse drought intensity scenario corresponds to less available water resources, both of which would lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework could help obtain the Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
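The equity side of the objective reduces to the standard Gini computation over per-district allocations. A minimal sketch using the mean-absolute-difference form (the paper's exact weighting of users and demands is not reproduced):

```python
import numpy as np

def gini(x):
    """Gini coefficient via the mean absolute difference formulation:
    G = sum_i sum_j |x_i - x_j| / (2 * n**2 * mean(x))."""
    x = np.asarray(x, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    return diffs / (2 * len(x) ** 2 * x.mean())

equitable = [25, 25, 25, 25]      # water shares across four districts
skewed = [70, 15, 10, 5]
print(gini(equitable), round(gini(skewed), 3))   # 0.0 vs 0.5
```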
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
New scene change control scheme based on pseudoskipped picture
NASA Astrophysics Data System (ADS)
Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.
1997-01-01
A new scene change control scheme which improves the video coding performance for sequences that have many scene-changed pictures is proposed in this paper. Scene-changed pictures other than intra-coded pictures usually need more bits than normal pictures in order to maintain constant picture quality. The major idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode the B picture located immediately before a scene-changed picture like a skipped picture; we call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, and they are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB of PSNR compared to the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with the MPEG-2 video syntax and the picture repetition is not noticeable.
Incentives for Optimal Multi-level Allocation of HIV Prevention Resources
Malvankar, Monali M.; Zaric, Gregory S.
2013-01-01
HIV/AIDS prevention funds are often allocated at multiple levels of decision-making. Optimal allocation of HIV prevention funds maximizes the number of HIV infections averted. However, decision makers often allocate using simple heuristics such as proportional allocation. We evaluate the impact of using incentives to encourage optimal allocation in a two-level decision-making process. We model an incentive-based decision-making process consisting of an upper-level decision maker allocating funds to a single lower-level decision maker who then distributes funds to local programs. We assume that the lower-level utility function is linear in the amount of the budget received from the upper level, the fraction of funds reserved for proportional allocation, and the number of infections averted. We assume that the upper-level objective is to maximize the number of infections averted. We illustrate with an example using data from California, U.S. PMID:23766551
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criteria. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general and can be adopted for any video or image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
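Minimizing the distortion variance drives all sequences toward a common distortion. Under a textbook exponential R-D model D_i(R_i) = a_i 2^(-2 R_i) (a stand-in for the paper's rho-domain model), the equal-distortion allocation has a closed form, sketched below; it assumes the resulting rates come out nonnegative.

```python
import numpy as np

def equal_distortion_rates(a, r_total):
    """Rates making D_i = a_i * 2**(-2 R_i) equal across all sequences,
    subject to sum(R_i) = r_total (i.e. zero distortion variance)."""
    a = np.asarray(a, dtype=float)
    log2_d = (np.sum(np.log2(a)) - 2.0 * r_total) / len(a)
    rates = 0.5 * (np.log2(a) - log2_d)     # assumes these are nonnegative
    return rates, 2.0 ** log2_d

a = [4.0, 1.0, 0.25]                        # per-sequence complexity parameters
rates, d = equal_distortion_rates(a, r_total=6.0)
print(rates.round(3), "common distortion:", round(d, 5))
```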
Adaptive bit plane quadtree-based block truncation coding for image compression
NASA Astrophysics Data System (ADS)
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
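AMBTC, which the adaptive bit-plane step builds on, is compact enough to show in full. A minimal sketch for one grayscale block:

```python
import numpy as np

def ambtc_block(block):
    """Absolute moment BTC for one block: a bit plane plus two levels.

    Pixels at or above the block mean map to 1 and are reconstructed with
    the high mean; the rest map to 0 and get the low mean, preserving the
    block's absolute central moment.
    """
    mean = block.mean()
    plane = block >= mean
    hi = block[plane].mean() if plane.any() else mean
    lo = block[~plane].mean() if (~plane).any() else mean
    return plane, lo, hi

def ambtc_decode(plane, lo, hi):
    return np.where(plane, hi, lo)

block = np.array([[12, 200, 198, 10],
                  [14, 210, 205, 11],
                  [13, 199, 201, 12],
                  [12, 205, 207, 13]], dtype=float)
plane, lo, hi = ambtc_block(block)
print(ambtc_decode(plane, lo, hi))
```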
Experimental research of UWB over fiber system employing 128-QAM and ISFA-optimized scheme
NASA Astrophysics Data System (ADS)
He, Jing; Xiang, Changqing; Long, Fengting; Chen, Zuo
2018-05-01
In this paper, an optimized intra-symbol frequency-domain averaging (ISFA) scheme is proposed and experimentally demonstrated in an intensity-modulation and direct-detection (IMDD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system. According to the channel responses of three MB-OFDM UWB sub-bands, the optimal ISFA window size for each sub-band is investigated. After 60-km standard single mode fiber (SSMF) transmission, the experimental results show that, at the bit error rate (BER) of 3.8 × 10^-3, the receiver sensitivity of 128-quadrature amplitude modulation (QAM) can be improved by 1.9 dB using the proposed enhanced ISFA combined with training sequence (TS)-based channel estimation scheme, compared with the conventional TS-based channel estimation. Moreover, the spectral efficiency (SE) is up to 5.39 bit/s/Hz.
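ISFA itself is a short operation: the least-squares channel estimates of neighboring subcarriers within one OFDM symbol are averaged to suppress noise, with the window size chosen per sub-band as investigated above. A sketch with a simple sliding window and synthetic data:

```python
import numpy as np

def isfa(h_ls, window):
    """Average each subcarrier's LS channel estimate with its neighbors.

    h_ls: complex least-squares estimates for one OFDM symbol.
    window: total span (odd); e.g. 5 averages two neighbors on each side.
    Edges shrink the window rather than wrapping around.
    """
    half = window // 2
    out = np.empty_like(h_ls)
    for k in range(len(h_ls)):
        lo, hi = max(0, k - half), min(len(h_ls), k + half + 1)
        out[k] = h_ls[lo:hi].mean()
    return out

rng = np.random.default_rng(0)
h_true = np.exp(2j * np.pi * np.arange(64) / 256)    # slowly varying channel
h_ls = h_true + 0.2 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
for w in (1, 3, 7):
    err = np.mean(np.abs(isfa(h_ls, w) - h_true) ** 2)
    print(f"window {w}: MSE {err:.4f}")
```

Running it shows the window-size tradeoff the paper tunes: wider windows average away more noise until the channel's frequency variation starts to bias the estimate.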
1990-01-01
…the six fields will have two million cell locations. The table below shows the total allocation of 392 chips across fields and banks. To allow for…future growth, we allocate 16 wires for addressing both the rows and columns. [Table residue: per-chip counts of 4-Mbit locations, bytes and bits, in millions; columns not recoverable.] …sources apt to appear in most problems. If material parameters change during a run, then time must be allocated to read these constants into their…
Restoration of Wavelet-Compressed Images and Motion Imagery
2004-01-01
…images is that they are global translates of each other, where the global motion parameters are known. In a very simple sense, these five images form…
QCDOC: A 10-teraflops scale computer for lattice QCD
NASA Astrophysics Data System (ADS)
Chen, D.; Christ, N. H.; Cristian, C.; Dong, Z.; Gara, A.; Garg, K.; Joo, B.; Kim, C.; Levkova, L.; Liao, X.; Mawhinney, R. D.; Ohta, S.; Wettig, T.
2001-03-01
The architecture of a new class of computers, optimized for lattice QCD calculations, is described. An individual node is based on a single integrated circuit containing a PowerPC 32-bit integer processor with a 1 Gflops 64-bit IEEE floating point unit, 4 Mbyte of memory, 8 Gbit/sec nearest-neighbor communications and additional control and diagnostic circuitry. The machine's name, QCDOC, derives from "QCD On a Chip".
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality, including perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group), 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bitrates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bitrates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
Optimizing 4DCBCT projection allocation to respiratory bins.
O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J
2014-10-07
4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling as a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate if sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we will call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than conventional phase based binning and 59%-76% smaller than conventional displacement binning indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%-90% smaller when using optimized projection allocation than using conventional phase based binning suggesting more uniform marker segmentation and less motion blur. Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement binned 4DCBCT images in clinical applications.
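A minimal sketch of the two quantities this optimization manipulates: the spread of angular gaps within one respiratory bin, and a sharing step that borrows projections from neighbouring bins. The greedy rule below is an illustrative stand-in for the paper's heuristic, not its actual algorithm.

```python
import numpy as np

def angular_gap_std(angles_deg):
    """Spread of the angular gaps between the projections in one respiratory
    bin; smaller values mean more uniform coverage and fewer streaks."""
    a = np.sort(np.asarray(angles_deg) % 360.0)
    gaps = np.diff(np.concatenate([a, a[:1] + 360.0]))   # wrap-around gap too
    return gaps.std()

def share_from_neighbours(bin_angles, neighbour_angles, n_extra):
    """Greedily add the neighbour projections that most reduce the gap spread."""
    current, pool = list(bin_angles), list(neighbour_angles)
    for _ in range(n_extra):
        best = min(pool, key=lambda a: angular_gap_std(current + [a]))
        current.append(best)
        pool.remove(best)
    return current

print(angular_gap_std([0, 10, 200, 210]))                 # very uneven bin
print(angular_gap_std(share_from_neighbours([0, 10, 200, 210],
                                            [90, 100, 290, 300], 4)))
```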
Analog Processor To Solve Optimization Problems
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.
1993-01-01
Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.
NASA Astrophysics Data System (ADS)
Perez, Santiago; Karakus, Murat; Pellet, Frederic
2017-05-01
The great success and widespread use of impregnated diamond (ID) bits are due to their self-sharpening mechanism, which consists of a constant renewal of diamonds acting at the cutting face as the bit wears out. It is therefore important to keep this mechanism acting throughout the lifespan of the bit. Nonetheless, such a mechanism can be altered by the blunting of the bit, which ultimately leads to less than optimal drilling performance. For this reason, this paper investigates the applicability of artificial intelligence-based techniques to monitor the tool condition of ID bits, i.e. sharp or blunt, under laboratory conditions. Accordingly, topologically invariant tests are carried out with sharp and blunt bit conditions while recording acoustic emissions (AE) and measuring-while-drilling variables. The combined output of the acoustic emission root-mean-square value (AErms), depth of cut (d), torque (tob) and weight-on-bit (wob) is then utilized to create two approaches to predict the wear state of the bits. One approach is based on the combination of the aforementioned variables and another on the specific energy of drilling. The two approaches are assessed for classification performance with various pattern recognition algorithms, such as simple trees, support vector machines, k-nearest neighbour, boosted trees and artificial neural networks. In general, acceptable pattern recognition rates were obtained, although the subset composed of AErms and tob excels due to its high classification performance and fewer input variables.
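A hedged sketch of this kind of classification setup on synthetic stand-in data; the feature distributions, class separation, and the scikit-learn models below are our assumptions, chosen only to mirror the AErms/tob subset the abstract highlights.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for the measured features [AErms, tob], assuming blunt
# bits show higher torque for similar acoustic-emission energy.
sharp = np.column_stack([rng.normal(1.0, 0.2, 200), rng.normal(5.0, 1.0, 200)])
blunt = np.column_stack([rng.normal(1.1, 0.2, 200), rng.normal(8.0, 1.0, 200)])
X = np.vstack([sharp, blunt])
y = np.array([0] * 200 + [1] * 200)          # 0 = sharp, 1 = blunt

for clf in (SVC(kernel="rbf"), KNeighborsClassifier(5)):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"accuracy ~ {score:.2f}")
```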
Efficient and robust quantum random number generation by photon number detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, M. J.; Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE; Thomas, O.
2015-08-17
We present an efficient and robust quantum random number generator based upon high-rate room temperature photon number detection. We employ an electric field-modulated silicon avalanche photodiode, a type of device particularly suited to high-rate photon number detection with excellent photon number resolution, to detect, without an applied dead-time, up to 4 photons from the optical pulses emitted by a laser. By both measuring and modeling the response of the detector to the incident photons, we are able to determine the illumination conditions that achieve an optimal bit rate, which we show is robust against variation in the photon flux. We extract random bits from the detected photon numbers with an efficiency of 99%, corresponding to 1.97 bits per detected photon number and yielding a bit rate of 143 Mbit/s, and verify that the extracted bits pass stringent statistical tests for randomness. Our scheme is highly scalable and has the potential of multi-Gbit/s bit rates.
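For intuition, a toy calculation of the bits available per detected pulse under an assumed truncated-Poisson photon-number model; the mean photon number, the 72.6 MHz pulse rate, and the Poisson assumption are ours, not the paper's measured values.

```python
import math

def photon_number_probs(mean, nmax=4):
    """Assumed truncated-Poisson photon statistics: 0..nmax-1 photons, with
    all higher counts lumped into the top bin (detector resolves up to 4)."""
    p = [math.exp(-mean) * mean**k / math.factorial(k) for k in range(nmax)]
    p.append(1.0 - sum(p))
    return p

def bits_per_detection(mean, extraction_efficiency=0.99):
    probs = photon_number_probs(mean)
    entropy = -sum(q * math.log2(q) for q in probs if q > 0)  # Shannon entropy
    return extraction_efficiency * entropy

bpd = bits_per_detection(mean=2.0)            # illustrative mean photon number
print(f"{bpd:.2f} bits per detected pulse (toy model)")
print(f"{bpd * 72.6e6 / 1e6:.0f} Mbit/s at an assumed 72.6 MHz pulse rate")
```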
NASA Astrophysics Data System (ADS)
Eyono Obono, S. D.; Basak, Sujit Kumar
2011-12-01
The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by existing literature in different domains such as distributed databases, distributed systems, transportation, packet radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It presents the description of a workload allocation scheme and its algorithm for the allocation of an equitable number of tasks in academic departments where long leaves are necessary.
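The classical assignment problem this paper generalizes can be sketched in a few lines with SciPy's Hungarian-style solver; the cost matrix and the large-cost encoding of leave constraints below are illustrative assumptions, and the paper's equitable multi-task scheme is more elaborate.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows = staff, columns = academic tasks; a large
# cost marks a task a staff member cannot take (e.g. while on long leave).
ON_LEAVE = 1e6
cost = np.array([
    [2.0, 4.0, 3.0, ON_LEAVE],
    [3.0, 1.0, ON_LEAVE, 2.0],
    [ON_LEAVE, 2.0, 2.0, 3.0],
    [1.0, 3.0, 4.0, 2.0],
])
rows, cols = linear_sum_assignment(cost)    # optimal one-to-one matching
for r, c in zip(rows, cols):
    print(f"staff {r} -> task {c} (cost {cost[r, c]})")
```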
A market-based optimization approach to sensor and resource management
NASA Astrophysics Data System (ADS)
Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.
2006-05-01
Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
Detecting Hardware-assisted Hypervisor Rootkits within Nested Virtualized Environments
2012-06-14
[Search-result excerpts from the setup appendix: when prompted for memory, allocate at least the minimum required for the guest OS and click "Next"; for 64-bit Windows 7 the minimum required is 2048 MB (Figures 66, 79). Within the virtual disk creation wizard, select VDI for the file type (Figure 81), then select "Dynamically ..."]
Hash Bit Selection for Nearest Neighbor Search.
Xianglong Liu; Junfeng He; Shih-Fu Chang
2017-11-01
To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits is selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt bit reliability and bit complementarity as the selection criteria that can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework can achieve superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant relative accuracy gains ranging from 10% to 50%.
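A hedged sketch of the selection idea using a greedy surrogate: score bits by a reliability proxy and penalize redundancy with already-chosen bits. The variance/correlation proxies below are our simplifications; the paper formulates this as quadratic programming equivalent to a normalized dominant set problem.

```python
import numpy as np

def select_bits(codes, k, alpha=0.5):
    """Greedy surrogate for hash bit selection: reliability is approximated
    by bit variance (balanced bits score higher) and complementarity by low
    absolute correlation with the bits already chosen."""
    b = codes.astype(float)
    reliability = b.var(axis=0)
    corr = np.abs(np.corrcoef(b.T))
    chosen = [int(np.argmax(reliability))]
    while len(chosen) < k:
        penalty = corr[:, chosen].mean(axis=1)    # redundancy with chosen set
        score = reliability - alpha * penalty
        score[chosen] = -np.inf                   # never re-pick a bit
        chosen.append(int(np.argmax(score)))
    return chosen

codes = (np.random.default_rng(1).random((500, 32)) > 0.5).astype(int)
print(select_bits(codes, k=8))
```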
NASA Astrophysics Data System (ADS)
Divakar, L.; Babel, M. S.; Perret, S. R.; Gupta, A. Das
2011-04-01
The study develops a model for optimal bulk allocations of limited available water based on an economic criterion to competing use sectors such as agriculture, domestic, industry and hydropower. The model comprises a reservoir operation module (ROM) and a water allocation module (WAM). ROM determines the amount of water available for allocation, which is used as an input to WAM with an objective function to maximize the net economic benefits of bulk allocations to different use sectors. The total net benefit functions for agriculture and hydropower sectors and the marginal net benefit from domestic and industrial sectors are established and are categorically taken as fixed in the present study. The developed model is applied to the Chao Phraya basin in Thailand. The case study results indicate that the WAM can improve net economic returns compared to the current water allocation practices.
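A minimal sketch of the WAM step under assumed concave net-benefit curves; the logarithmic benefit functions, their coefficients, and the available volume are illustrative, not the study's estimated functions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical concave net-benefit curves b_i(x) = a_i * ln(1 + x_i) for bulk
# allocations x_i to agriculture, domestic, industry and hydropower.
a = np.array([40.0, 90.0, 60.0, 25.0])
available = 100.0                          # water made available by the ROM

res = minimize(lambda x: -np.sum(a * np.log1p(x)),       # maximize net benefit
               x0=np.full(4, available / 4),
               bounds=[(0.0, available)] * 4,
               constraints=[{"type": "ineq", "fun": lambda x: available - x.sum()}])
print("allocations:", np.round(res.x, 1), "net benefit:", round(-res.fun, 1))
```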
NASA Astrophysics Data System (ADS)
Yu, Sen; Lu, Hongwei
2018-04-01
Under the effects of global change, water crisis ranks as the top global risk of the coming decade, and water conflict in transboundary river basins, as well as the geostrategic competition it leads to, is of most concern. This study presents an innovative integrated PPMGWO model for water resources optimization allocation in a transboundary river basin, which integrates the projection pursuit model (PPM) with the Grey wolf optimization (GWO) method. This study uses the Songhua River basin and its 25 control units as an example, adopting the proposed PPMGWO model to allocate the water quantity. Using water consumption in all control units of the Songhua River basin in 2015 as a reference for comparison with the optimization allocation results of the firefly algorithm (FA), particle swarm optimization (PSO), and the PPMGWO model, the average differences between the corresponding allocation results and the reference values are 0.195 billion m3, 0.151 billion m3, and 0.085 billion m3, respectively. The average difference of the PPMGWO model is clearly the lowest, and its optimization allocation result is closer to reality, which further confirms the reasonability, feasibility, and accuracy of the PPMGWO model. The PPMGWO model is then adopted to simulate the allocation of available water quantity in the Songhua River basin in 2018, 2020, and 2030. The simulation results show that the water quantity that could be allocated to all control units demonstrates an overall increasing trend, with reasonable and equitable exploitation and utilization of water resources in the Songhua River basin in the future. In addition, this study offers a useful reference for comprehensive management and water resources allocation in other transboundary river basins.
Genetics algorithm optimization of DWT-DCT based image Watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in an image content is mandatory for establishing ownership of the image. Two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D-DWT transforms the selected layer, yielding four subbands, of which only one is selected. A block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag scanning and range-based coefficient selection. A delta parameter replacing coefficients in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". Several parameters are optimized by a Genetic Algorithm (GA): the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. Simulation results show that the GA is able to determine parameters that achieve optimum imperceptibility and robustness, whether or not the watermarked image is attacked. The DWT step in DCT-based image watermarking, optimized by the GA, has improved the performance of image watermarking. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery (BER = 0), and the watermarked image quality is also about 5 dB higher in PSNR than that of the previous method.
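A small sketch of the +/-delta embedding rule on one 8x8 block; the zigzag position, delta value, and single-coefficient simplification are our assumptions (the scheme embeds over a range of AC coefficients inside a selected DWT subband, with parameters chosen by the GA).

```python
import numpy as np
from scipy.fftpack import dct, idct

def zigzag_indices(n=8):
    """Standard zigzag scan order for an n x n block."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda t: (t[0] + t[1],
                                 t[0] if (t[0] + t[1]) % 2 else t[1]))

def embed_bit(block, bit, pos=10, delta=8.0):
    """Write one watermark bit into the AC coefficient at a zigzag position:
    +delta encodes "1", -delta encodes "0"."""
    c = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
    u, v = zigzag_indices()[pos]
    c[u, v] = delta if bit else -delta
    return idct(idct(c, axis=0, norm="ortho"), axis=1, norm="ortho")

block = np.random.default_rng(2).random((8, 8)) * 255
marked = embed_bit(block, bit=1)
u, v = zigzag_indices()[10]
c = dct(dct(marked, axis=0, norm="ortho"), axis=1, norm="ortho")
print("decoded bit:", int(c[u, v] > 0))        # recovers the embedded "1"
```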
Advances in liver transplantation allocation systems.
Schilsky, Michael L; Moini, Maryam
2016-03-14
With the growing number of patients in need of liver transplantation, there is a need for adopting new and modifying existing allocation policies that prioritize patients for liver transplantation. Policy should ensure fair allocation that is reproducible and strongly predictive of best pre- and post-transplant outcomes while taking into account the natural history of the potential recipient's liver disease and its complications. There is wide acceptance of allocation policies based on urgency, in which the sickest patients on the waiting list with the highest risk of mortality receive priority. The Model for End-stage Liver Disease and the Child-Turcotte-Pugh scoring system, the two most universally applicable systems, are used in urgency-based prioritization. However, other factors must be considered to achieve optimal allocation. Factors affecting pre-transplant patient survival and the quality of the donor organ also affect outcome. The optimal system should have allocation prioritization that accounts for both urgency and transplant outcome. We reviewed past and current liver allocation systems with the aim of generating further discussion about improvement of current policies.
Toward Large-Graph Comparison Measures to Understand Internet Topology Dynamics
2013-09-01
[Search-result excerpts: probes are sent continuously from randomly selected vantage points in these monitors to destination IP addresses; from each IPv4 /24 prefix on the Internet, a destination is ... expected to be more similar. This was verified when the esd and vsd measures applied to this dataset gave a low reading. (Footnotes: an IPv4 address is a 32-bit integer value; /24 denotes an IPv4 network with 24 bits allocated for the network prefix.)]
Heat-assisted magnetic recording of bit-patterned media beyond 10 Tb/in2
NASA Astrophysics Data System (ADS)
Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk
2016-03-01
The limits of areal storage density that are achievable with heat-assisted magnetic recording are unknown. We addressed this central question and investigated the areal density of bit-patterned media. We analyzed the detailed switching behavior of a recording bit under various external conditions, allowing us to compute the bit error rate of a write process (shingled and conventional) for various grain spacings, write head positions, and write temperatures. Hence, we were able to optimize the areal density, yielding values beyond 10 Tb/in2. Our model is based on the Landau-Lifshitz-Bloch equation and uses hard magnetic recording grains with a 5-nm diameter and 10-nm height. It assumes a realistic distribution of the Curie temperature of the underlying material, grain size, as well as grain and head position.
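The headline density can be sanity-checked with simple geometry, assuming one bit per pitch-by-pitch cell; the pitch values below are illustrative.

```python
NM_PER_INCH = 25.4e6

def areal_density_tb_per_in2(pitch_nm):
    """Bit-patterned-media density if each bit occupies a pitch x pitch cell."""
    bits_per_in2 = (NM_PER_INCH / pitch_nm) ** 2
    return bits_per_in2 / 1e12

# With 5 nm grains, the spacing between grain centers sets the density:
for pitch in (6.0, 7.0, 8.0):
    print(f"{pitch:.0f} nm pitch -> {areal_density_tb_per_in2(pitch):.1f} Tb/in2")
```

An 8 nm center-to-center pitch already corresponds to about 10 Tb/in2, consistent with the regime the abstract reports.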
Optimal investment in a portfolio of HIV prevention programs.
Zaric, G S; Brandeau, M L
2001-01-01
In this article, the authors determine the optimal allocation of HIV prevention funds and investigate the impact of different allocation methods on health outcomes. The authors present a resource allocation model that can be used to determine the allocation of HIV prevention funds that maximizes quality-adjusted life years (or life years) gained or HIV infections averted in a population over a specified time horizon. They apply the model to determine the allocation of a limited budget among 3 types of HIV prevention programs in a population of injection drug users and nonusers: needle exchange programs, methadone maintenance treatment, and condom availability programs. For each prevention program, the authors estimate a production function that relates the amount invested to the associated change in risky behavior. The authors determine the optimal allocation of funds for both objective functions for a high-prevalence population and a low-prevalence population. They also consider the allocation of funds under several common rules of thumb that are used to allocate HIV prevention resources. It is shown that simpler allocation methods (e.g., allocation based on HIV incidence or notions of equity among population groups) may lead to allocations that do not yield the maximum health benefit. The optimal allocation of HIV prevention funds in a population depends on HIV prevalence and incidence, the objective function, the production functions for the prevention programs, and other factors. Consideration of cost, equity, and social and political norms may be important when allocating HIV prevention funds. The model presented in this article can help decision makers determine the health consequences of different allocations of funds.
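A hedged sketch of the model's core: maximize total infections averted over program budgets under saturating production functions. The functional form and coefficients are illustrative, not the article's estimates.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical production functions: infections averted by each program as a
# saturating function of the dollars invested (in millions).
scale = np.array([120.0, 90.0, 60.0])   # needle exchange, methadone, condoms
half = np.array([2.0, 3.0, 1.5])        # investment at half the maximum effect

def infections_averted(x):
    return float(np.sum(scale * x / (half + x)))

budget = 5.0                             # total prevention budget, $ millions
res = minimize(lambda x: -infections_averted(x),
               x0=np.full(3, budget / 3),
               bounds=[(0.0, budget)] * 3,
               constraints=[{"type": "ineq", "fun": lambda x: budget - x.sum()}])
print(np.round(res.x, 2), "->", round(-res.fun, 1), "infections averted")
```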
Methodology and method and apparatus for signaling with capacity optimized constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2011-01-01
A communication system includes a transmitter having a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d.sub.min.
Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bajaj, Ruchika; Bedi, Punam; Pal, S. K.
Steganography is an art of hiding information in such a way that prevents the detection of hidden messages. Besides security of data, the quantity of data that can be hidden in a single cover medium, is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique gives the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into k least significant bits of the image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and results compared with simple LSB substitution, uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity and maintains imperceptibility and minimizes the distortion between the cover image and the obtained stego image.
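A minimal sketch of the variable-k LSB substitution step described here; sequential pixel positions are used for brevity, whereas the scheme selects the best positions via PSO.

```python
import numpy as np

def embed_lsb(pixels, message_bits, k):
    """Substitute k message bits into the k least significant bits of each
    listed pixel (k = 1..4, chosen by message length in the scheme)."""
    out = pixels.copy()
    mask = 0xFF ^ ((1 << k) - 1)                  # keeps the upper 8-k bits
    for i in range(0, len(message_bits), k):
        value = int("".join(map(str, message_bits[i:i + k])), 2)
        j = i // k
        out[j] = (out[j] & mask) | value          # clear k LSBs, write chunk
    return out

pixels = np.array([200, 135, 90, 47], dtype=np.uint8)
stego = embed_lsb(pixels, [1, 0, 1, 1, 0, 0, 1, 1], k=2)
print(stego)          # each pixel now carries 2 message bits in its LSBs
```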
Distributed Channel Allocation and Time Slot Optimization for Green Internet of Things.
Ding, Kaiqi; Zhao, Haitao; Hu, Xiping; Wei, Jibo
2017-10-28
In sustainable smart cities, power saving is a severe challenge in the energy-constrained Internet of Things (IoT). Efficient utilization of limited multiple non-overlap channels and time resources is a promising solution to reduce the network interference and save energy consumption. In this paper, we propose a joint channel allocation and time slot optimization solution for IoT. First, we propose a channel ranking algorithm which enables each node to rank its available channels based on the channel properties. Then, we propose a distributed channel allocation algorithm so that each node can choose a proper channel based on the channel ranking and its own residual energy. Finally, the sleeping duration and spectrum sensing duration are jointly optimized to maximize the normalized throughput and satisfy energy consumption constraints simultaneously. Different from the former approaches, our proposed solution requires no central coordination or any global information that each node can operate based on its own local information in a total distributed manner. Also, theoretical analysis and extensive simulations have validated that when applying our solution in the network of IoT: (i) each node can be allocated to a proper channel based on the residual energy to balance the lifetime; (ii) the network can rapidly converge to a collision-free transmission through each node's learning ability in the process of the distributed channel allocation; and (iii) the network throughput is further improved via the dynamic time slot optimization.
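An illustrative sketch of the two distributed steps (rank channels by their properties, then pick one according to residual energy); the scoring weights and the energy-to-rank mapping below are our assumptions, not the paper's metrics.

```python
import random

def rank_channels(channels):
    """Order channels by a weighted score of their measured properties;
    the weights are illustrative stand-ins for the paper's ranking metric."""
    return sorted(channels,
                  key=lambda c: 0.6 * c["idle_prob"] - 0.4 * c["interference"],
                  reverse=True)

def choose_channel(ranked, residual_energy, full_energy):
    """Energy-aware pick: well-charged nodes take top channels, depleted
    nodes back off to lower-ranked ones to balance network lifetime."""
    index = int((1.0 - residual_energy / full_energy) * (len(ranked) - 1))
    return ranked[index]

channels = [{"id": i, "idle_prob": random.random(),
             "interference": random.random()} for i in range(5)]
ranked = rank_channels(channels)
print(choose_channel(ranked, residual_energy=2.0, full_energy=10.0)["id"])
```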
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
Optimizations of missile allocation based on linearized exchange equations produce accurate allocations, but the limits of validity of the linearization are not known. These limits are explored in the context of the upload of weapons by one side to initially small, equal forces of vulnerable and survivable weapons. The analysis compares analytic and numerical optimizations and stability indices based on aggregated interactions of the two missile forces, the first and second strikes they could deliver, and the resulting costs. This note discusses the costs and stability indices induced by unilateral uploading of weapons to an initially symmetrical low force configuration. These limits are quantified for forces with a few hundred missiles by comparing analytic and numerical optimizations of first strike costs. For forces of 100 vulnerable and 100 survivable missiles on each side, the analytic optimization agrees closely with the numerical solution. For 200 vulnerable and 200 survivable missiles on each side, the analytic optimization agrees with the indices to within about 10%, but disagrees with the allocation of the side with more weapons by about 50%. The disagreement comes from the interaction of the possession of more weapons with the shift of allocation from missiles to value that they induce.
Bayer image parallel decoding based on GPU
NASA Astrophysics Data System (ADS)
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In photoelectrical tracking systems, Bayer images have traditionally been decoded on the CPU. However, this is too slow when the images become large, for example, 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallelism part is optimized with OpenMP techniques. The data-parallelism part improves its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallelism part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speedup compared to the serial CPU method.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, an optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
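For background, a few lines showing the SVD view of the spatial multiplexing channel that the ideal precoded scheme builds on: transmitting along the right singular vectors turns the MIMO channel into parallel eigen-subchannels. The equal-power assumption and SNR value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
nt = nr = 4
# Rayleigh-fading channel matrix (illustrative).
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)                # precode with V, receive with U^H
snr = 10 ** (10.0 / 10)                    # 10 dB total SNR
per_stream_snr = snr / nt * s**2           # equal power across the streams
capacity = np.sum(np.log2(1 + per_stream_snr))
print("singular values:", np.round(s, 2), f"capacity ~ {capacity:.1f} bit/s/Hz")
```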
New PDC bit design reduces vibrational problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mensa-Wilmot, G.; Alexander, W.L.
1995-05-22
A new polycrystalline diamond compact (PDC) bit design combines cutter layout, load balancing, unsymmetrical blades and gauge pads, and spiraled blades to reduce problematic vibrations without limiting drilling efficiency. Stabilization improves drilling efficiency and also improves dull characteristics for PDC bits. Some PDC bit designs mitigate one vibrational mode (such as bit whirl) through drilling parameter manipulation yet cause or excite another vibrational mode (such as slip-stick). An alternative vibration-reducing concept which places no limitations on the operational environment of a PDC bit has been developed to ensure optimization of the bit's available mechanical energy. The paper discusses bit stabilization, vibration reduction, vibration prevention, cutter arrangement, load balancing, blade layout, spiraled blades, and bit design.
NASA Astrophysics Data System (ADS)
Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang
2015-05-01
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing the deep learning neural network into hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, supervised iterative quantization is conducted via two steps on the server: applying k-means based adaptive quantization to the learned network weights, and retraining the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though in the new network the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
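A hedged sketch of the k-means weight-quantization step on its own; the quantile initialization and 16-level (4-bit) setting are our choices, and the retraining alternation the paper describes is omitted.

```python
import numpy as np

def kmeans_quantize(weights, n_levels=16, iters=20):
    """Cluster the weights with 1-D k-means and snap each weight to its
    cluster centroid (16 levels correspond to 4-bit weight storage)."""
    w = weights.ravel()
    centroids = np.quantile(w, np.linspace(0, 1, n_levels))   # spread-out init
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for c in range(n_levels):
            if np.any(assign == c):
                centroids[c] = w[assign == c].mean()
    return centroids[assign].reshape(weights.shape)

w = np.random.default_rng(4).normal(0, 0.1, size=(64, 64))
wq = kmeans_quantize(w)
print("max abs quantization error:", float(np.abs(w - wq).max()))
```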
NASA Astrophysics Data System (ADS)
Kota, Sriharsha; Patel, Jigesh; Ghillino, Enrico; Richards, Dwight
2011-01-01
In this paper, we demonstrate a computer model for simulating a dual-rate burst-mode receiver that can readily distinguish bit rates of 1.25 Gbit/s and 10.3 Gbit/s and demodulate data bursts with large power variations of above 5 dB. To our knowledge, this is the first such model to demodulate data bursts of different bit rates without using any external control signal such as a reset signal or a bit-rate select signal. The model is based on a burst-mode bit rate discrimination circuit (B-BDC) and makes use of a unique preamble sequence attached to each burst to separate out the data bursts with different bit rates. Here, the model is implemented using a combination of the optical system simulation suite OptSimTM and the electrical simulation engine SPICE. The reaction time of the burst-mode receiver model is about 7 ns, which corresponds to less than 8 preamble bits at the bit rate of 1.25 Gbps. We believe that an accurate and robust simulation model for high-speed burst-mode transmission in GE-PON systems is indispensable and tremendously speeds up ongoing research in the area, saving much of the time and effort involved in carrying out laboratory experiments, while providing flexibility in the optimization of various system parameters for better performance of the receiver as a whole. Furthermore, we also study the effects of burst specifications, such as the length of the preamble sequence, and other receiver design parameters on the reaction time of the receiver.
An adaptive P300-based online brain-computer interface.
Lenhardt, Alexander; Kaper, Matthias; Ritter, Helge J
2008-04-01
The P300 component of an event-related potential is widely used in conjunction with brain-computer interfaces (BCIs) to translate the subject's intent, by mere thoughts, into commands to control artificial devices. A well-known application is the spelling of words, where selection of the letters is carried out by focusing attention on the target letter. In this paper, we present a P300-based online BCI which reaches very competitive performance in terms of information transfer rates. In addition, we propose an online method that optimizes information transfer rates and/or accuracies. This is achieved by an algorithm which dynamically limits the number of subtrial presentations according to the subject's current online performance in real-time. We present results of two studies based on 19 different healthy subjects in total who participated in our experiments (seven subjects in the first and 12 subjects in the second one). In the first study, peak information transfer rates up to 92 bits/min with an accuracy of 100% were achieved by one subject, with a mean of 32 bits/min at about 80% accuracy. The second experiment employed a dynamic classifier which enables the user to optimize bit rates and/or accuracies by limiting the number of subtrial presentations according to the current online performance of the subject. At the fastest setting, mean information transfer rates could be improved to 50.61 bits/min (i.e., 13.13 symbols/min). The most accurate results, with 87.5% accuracy, showed a transfer rate of 29.35 bits/min.
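Bit rates like these are conventionally computed with the Wolpaw information-transfer-rate formula; the sketch below reproduces a figure in the same range under an assumed symbol rate (the 7.5 symbols/min is our illustrative value, not taken from the paper).

```python
import math

def wolpaw_bits_per_symbol(n, p):
    """Wolpaw ITR for an n-class speller with classification accuracy p."""
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# A 36-symbol speller at 87.5% accuracy and an assumed 7.5 symbols/min:
bits = wolpaw_bits_per_symbol(36, 0.875)
print(f"{bits:.2f} bits/symbol -> {bits * 7.5:.1f} bits/min")   # ~29.9 bits/min
```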
Nanoscale molecular communication networks: a game-theoretic perspective
NASA Astrophysics Data System (ADS)
Jiang, Chunxiao; Chen, Yan; Ray Liu, K. J.
2015-12-01
Currently, communication between nanomachines is an important topic for the development of novel devices. To implement a nanocommunication system, diffusion-based molecular communication is considered as a promising bio-inspired approach. Various technical issues about molecular communications, including channel capacity, noise and interference, and modulation and coding, have been studied in the literature, while the resource allocation problem among multiple nanomachines has not been well investigated, which is a very important issue since all the nanomachines share the same propagation medium. Considering the limited computation capability of nanomachines and the expensive information exchange cost among them, in this paper, we propose a game-theoretic framework for distributed resource allocation in nanoscale molecular communication systems. We first analyze the inter-symbol and inter-user interference, as well as bit error rate performance, in the molecular communication system. Based on the interference analysis, we formulate the resource allocation problem as a non-cooperative molecule emission control game, where the Nash equilibrium is found and proved to be unique. In order to improve the system efficiency while guaranteeing fairness, we further model the resource allocation problem using a cooperative game based on the Nash bargaining solution, which is proved to be proportionally fair. Simulation results show that the Nash bargaining solution can effectively ensure fairness among multiple nanomachines while achieving comparable social welfare performance with the centralized scheme.
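A toy Nash-bargaining computation in the spirit of the cooperative formulation described here; the utility model, interference coupling, emission cost, and disagreement point are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Two nanomachines: utility rises with own molecule emission rate but falls
# with the other's rate (inter-user interference) and with an emission cost.
def utility(own, other):
    sinr = own / (1.0 + 0.5 * other)       # illustrative interference model
    return np.log1p(sinr) - 0.2 * own      # payoff minus emission cost

d = np.array([0.05, 0.05])                 # disagreement (non-cooperative) utilities

def neg_nash_product(x):
    u = np.array([utility(x[0], x[1]), utility(x[1], x[0])])
    return -np.prod(np.maximum(u - d, 1e-9))

res = minimize(neg_nash_product, x0=[1.0, 1.0],
               bounds=[(0.1, 5.0)] * 2)    # emission-rate limits
print("NBS emission rates:", np.round(res.x, 2))
```

Maximizing the product of utility gains over the disagreement point is what yields the proportional-fairness property the abstract cites.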
Optimality Based Dynamic Plant Allocation Model: Predicting Acclimation Response to Climate Change
NASA Astrophysics Data System (ADS)
Srinivasan, V.; Drewry, D.; Kumar, P.; Sivapalan, M.
2009-12-01
Allocation of assimilated carbon to different plant parts determines future plant status and is important for predicting long-term (months to years) vegetated land surface fluxes. Plants have the ability to modify their allometry and exhibit plasticity by varying the relative proportions of the structural biomass contained in each of their tissues. The ability of plants to be plastic provides them with the potential to acclimate to changing environmental conditions and enhance their probability of survival. Allometry-based allocation models and other empirical allocation models do not account for plant plasticity caused by acclimation to environmental changes. In the absence of a detailed understanding of the various biophysical processes involved in plant growth and development, an optimality approach is adopted here to predict carbon allocation in plants. Existing optimality-based models of plant growth are either static or involve considerable empiricism. In this work, we adopt an optimality-based approach (coupled with limitations on plant plasticity) to predict the dynamic allocation of assimilated carbon to different plant parts. We explore the applicability of this approach using several optimization variables such as net primary productivity, net transpiration, realized growth rate, and total end-of-growing-season reproductive biomass. We use this approach to predict the dynamic nature of plant acclimation in its allocation of carbon to different plant parts under current and future climate scenarios. The approach is designed as a growth sub-model in the multi-layer canopy plant model (MLCPM) and is used to obtain land surface fluxes and plant properties over the growing season. The framework of this model is such that it retains generality and can be applied to different types of ecosystems. We test this approach using data from free-air carbon dioxide enrichment (FACE) experiments with a soybean crop at the Soy-FACE research site. Our results show that there are significant changes in the allocation patterns of vegetation when subjected to elevated CO2, indicating that our model is able to account for plant plasticity arising from acclimation. Soybeans, when grown under elevated CO2, increased their allocation to structural components such as leaves and decreased their allocation to reproductive biomass. This demonstrates that plant acclimation causes lower than expected crop yields under elevated CO2. Our findings can have serious implications for estimating future crop yields under climate change scenarios, where it is widely expected that rising CO2 will fully offset losses due to climate change.
NASA Astrophysics Data System (ADS)
Xu, Ding; Li, Qun
2017-01-01
This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid automatic repeat request (HARQ) with chase combining (CC) in Nakagami-m slow fading channels. We assume that, instead of the perfect instantaneous channel state information (CSI), only the statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.
Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor
Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo
2014-01-01
Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, which gives a new way to measure self-interested robots' individual cooperative willingness in the problem of multirobot task allocation. Emotional cooperation factor is introduced into self-interested robot; it is updated based on emotional attenuation and external stimuli. Then a multirobot pursuit task allocation algorithm is proposed, which is based on emotional cooperation factor. Combined with the two-step auction algorithm recruiting team leaders and team collaborators, set up pursuit teams, and finally use certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, some comparing experiments have been done with the instantaneous greedy optimal auction algorithm; the results of experiments show that the total pursuit time and total team revenue can be optimized by using this algorithm. PMID:25152925
Huang, Hsin-Chan; Singh, Bismark; Morton, David P; Johnson, Gregory P; Clements, Bruce; Meyers, Lauren Ancel
2017-01-01
Vaccines are arguably the most important means of pandemic influenza mitigation. However, as during the 2009 H1N1 pandemic, mass immunization with an effective vaccine may not begin until a pandemic is well underway. In the U.S., state-level public health agencies are responsible for quickly and fairly allocating vaccines as they become available to populations prioritized to receive vaccines. Allocation decisions can be ethically and logistically complex, given several vaccine types in limited and uncertain supply and given competing priority groups with distinct risk profiles and vaccine acceptabilities. We introduce a model for optimizing statewide allocation of multiple vaccine types to multiple priority groups, maximizing equal access. We assume a large fraction of available vaccines are distributed to healthcare providers based on their requests, and then optimize county-level allocation of the remaining doses to achieve equity. We have applied the model to the state of Texas, and incorporated it in a Web-based decision-support tool for the Texas Department of State Health Services (DSHS). Based on vaccine quantities delivered to registered healthcare providers in response to their requests during the 2009 H1N1 pandemic, we find that a relatively small cache of discretionary doses (DSHS reserved 6.8% in 2009) suffices to achieve equity across all counties in Texas.
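A greedy stand-in for the equity objective, under the assumption that "equal access" can be read as equalizing per-capita coverage across counties; the paper's actual model is an optimization over multiple vaccine types and priority groups.

```python
import numpy as np

def equalize_coverage(population, already_shipped, discretionary):
    """Spread the reserved doses so county per-capita coverage is as equal
    as possible: repeatedly give the next dose to the worst-covered county."""
    doses = already_shipped.astype(float).copy()
    for _ in range(int(discretionary)):
        coverage = doses / population
        doses[np.argmin(coverage)] += 1
    return doses

pop = np.array([100_000, 50_000, 25_000])
shipped = np.array([8_000, 2_000, 3_000])     # provider-requested deliveries
print(equalize_coverage(pop, shipped, discretionary=1_000))
```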
NASA Astrophysics Data System (ADS)
Grafton, R. Quentin; Chu, Hoang Long; Stewardson, Michael; Kompas, Tom
2011-12-01
A key challenge in managing semiarid basins, such as in the Murray-Darling in Australia, is to balance the trade-offs between the net benefits of allocating water for irrigated agriculture, and other uses, versus the costs of reduced surface flows for the environment. Typically, water planners do not have the tools to optimally and dynamically allocate water among competing uses. We address this problem by developing a general stochastic, dynamic programming model with four state variables (the drought status, the current weather, weather correlation, and current storage) and two controls (environmental release and irrigation allocation) to optimally allocate water between extractions and in situ uses. The model is calibrated to Australia's Murray River that generates: (1) a robust qualitative result that "pulse" or artificial flood events are an optimal way to deliver environmental flows over and above conveyance of base flows; (2) from 2001 to 2009 a water reallocation that would have given less to irrigated agriculture and more to environmental flows would have generated between half a billion and over 3 billion U.S. dollars in overall economic benefits; and (3) water markets increase optimal environmental releases by reducing the losses associated with reduced water diversions.
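A compact value-iteration sketch of a stochastic dynamic program with one weather state, one storage state, and two release controls; the state space, transition probabilities, and concave payoffs are illustrative, far smaller than the calibrated four-state model.

```python
import numpy as np

# Toy stochastic dynamic program: integer storage levels, wet/dry weather
# states with Markov transitions, and two controls (irrigation release and
# environmental release). All numbers are illustrative.
storages = np.arange(0, 11)
inflow = {"wet": 4, "dry": 1}
P = {"wet": {"wet": 0.7, "dry": 0.3}, "dry": {"wet": 0.3, "dry": 0.7}}

def benefit(irrig, env):
    return 2.0 * np.sqrt(irrig) + 1.0 * np.sqrt(env)     # concave payoffs

V = {w: np.zeros(len(storages)) for w in P}
for _ in range(200):                     # value iteration to (near) convergence
    newV = {w: np.zeros(len(storages)) for w in P}
    for w in P:
        for i, s in enumerate(storages):
            avail = int(min(s + inflow[w], storages[-1]))
            best = -np.inf
            for irrig in range(avail + 1):
                for env in range(avail - irrig + 1):
                    carry = avail - irrig - env          # next period's storage
                    cont = sum(P[w][w2] * V[w2][carry] for w2 in P)
                    best = max(best, benefit(irrig, env) + 0.95 * cont)
            newV[w][i] = best
    V = newV
print(np.round(V["dry"], 1))             # value of stored water in a dry year
```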
Artificial intelligent techniques for optimizing water allocation in a reservoir watershed
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung
2014-05-01
This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm (GA) and an adaptive-network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data to provide a clear simulation perspective of all possible drought conditions tagged with their corresponding water shortages; then search the optimal reservoir operating histogram using the GA, based on given demands and hydrological conditions, which can be recognized as the optimal base of input-output training patterns for modelling; and finally build a suitable water allocation scheme by constructing an ANFIS model that learns the mechanism between designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme enables water managers to reliably determine a suitable discount rate on water supply for both irrigation and public sectors, and thus can reduce the drought risk and the compensation amount induced by restricting agricultural water use.
Packet-Based Protocol Efficiency for Aeronautical and Satellite Communications
NASA Technical Reports Server (NTRS)
Carek, David A.
2005-01-01
This paper examines the relation between bit error ratios and the effective link efficiency when transporting data with a packet-based protocol. Relations are developed to quantify the impact of a protocol's packet size and header size relative to the bit error ratio of the underlying link. These relations are examined in the context of radio transmissions that exhibit variable error conditions, such as those used in satellite, aeronautical, and other wireless networks. A comparison of two packet sizing methodologies is presented. From these relations, the true ability of a link to deliver user data, or information, is determined. Relations are developed to calculate the optimal protocol packet size for given link error characteristics. These relations could be useful in future research for developing an adaptive protocol layer. They can also be used for sizing protocols in the design of static links, where bit error ratios have small variability.
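The core trade-off can be sketched directly: larger packets amortize the header but are more likely to be corrupted. The retransmission-on-any-error assumption and the 48-byte header below are our illustrative choices, not the paper's parameterization.

```python
import math

def efficiency(payload_bytes, header_bytes, ber):
    """Fraction of link capacity delivering user data, assuming any bit
    error forces a packet retransmission."""
    total_bits = 8 * (payload_bytes + header_bytes)
    p_ok = (1.0 - ber) ** total_bits
    return payload_bytes / (payload_bytes + header_bytes) * p_ok

header = 48                                   # assumed header size in bytes
for ber in (1e-7, 1e-6, 1e-5):
    best = max(range(16, 20001), key=lambda n: efficiency(n, header, ber))
    print(f"BER {ber:.0e}: optimal payload ~ {best} bytes, "
          f"efficiency {efficiency(best, header, ber):.2f}")
```

The sweep shows the expected behaviour: as the bit error ratio worsens, the optimal packet shrinks and the peak efficiency drops.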
Buehler, James W; Holtgrave, David R
2007-03-29
Controversy and debate can arise whenever public health agencies determine how program funds should be allocated among constituent jurisdictions. Two common strategies for making such allocations are expert review of competitive applications and the use of funding formulas. Despite widespread use of funding formulas by public health agencies in the United States, formula allocation strategies in public health have been subject to relatively little formal scrutiny, with the notable exception of the attention focused on formula funding of HIV care programs. To inform debates and deliberations in the selection of a formula-based approach, we summarize key challenges to formula-based funding, based on prior reviews of federal programs in the United States. The primary challenge lies in identifying data sources and formula calculation methods that both reflect and serve program objectives, with or without adjustments for variations in the cost of delivering services, the availability of local resources, capacity, or performance. Simplicity and transparency are major advantages of formula-based allocations, but these advantages can be offset if formula-based allocations are perceived to under- or over-fund some jurisdictions, which may result from how guaranteed minimum funding levels are set or from "hold-harmless" provisions intended to blunt the effects of changes in formula design or random variations in source data. While fairness is considered an advantage of formula-based allocations, the design of a formula may implicitly reflect unquestioned values concerning equity versus equivalence in setting funding policies. Whether or how past or projected trends are taken into account can also have substantial impacts on allocations. Insufficient attention has been focused on how the approach to designing funding formulas in public health should differ for treatment or service versus prevention programs. Further evaluations of formula-based versus competitive allocation methods are needed to promote the optimal use of public health funds. In the meantime, those who use formula-based strategies to allocate funds should be familiar with the nuances of this approach.
NASA Technical Reports Server (NTRS)
Baumert, L. D.; Mceliece, R. J.; Rodemich, E. R.; Rumsey, H., Jr.
1978-01-01
The design of an optimal merged keycode data base information retrieval system is detailed. A probability distribution of n-bit binary words that minimized false drops was developed for the case where the set of desired records was a subset of tagged records.
A game-theoretical pricing mechanism for multiuser rate allocation for video over WiMAX
NASA Astrophysics Data System (ADS)
Chen, Chao-An; Lo, Chi-Wen; Lin, Chia-Wen; Chen, Yung-Chang
2010-07-01
In multiuser rate allocation in a wireless network, strategic users can bias the rate allocation by misrepresenting their bandwidth demands to a base station, leading to an unfair allocation. Game-theoretical approaches have been proposed to address the unfair allocation problems caused by strategic users. However, existing approaches rely on a time-consuming iterative negotiation process. Besides, they cannot completely prevent unfair allocations caused by inconsistent strategic behaviors. To address these problems, we propose a search-based pricing mechanism to reduce the communication time and to capture a user's strategic behavior. Our simulation results show that the proposed method significantly reduces the communication time and converges stably to an optimal allocation.
A Parametric Study for the Design of an Optimized Ultrasonic Percussive Planetary Drill Tool.
Li, Xuan; Harkness, Patrick; Worrall, Kevin; Timoney, Ryan; Lucas, Margaret
2017-03-01
Traditional rotary drilling for planetary rock sampling, in situ analysis, and sample return is challenging because the axial force and holding torque requirements are not necessarily compatible with lightweight spacecraft architectures in low-gravity environments. This paper seeks to optimize an ultrasonic percussive drill tool to achieve rock penetration with lower reacted force requirements, with a strategic view toward building an ultrasonic planetary core drill (UPCD) device. The UPCD is a descendant of the ultrasonic/sonic driller/corer technique. In these concepts, a transducer and horn (typically resonant at around 20 kHz) are used to excite a toroidal free mass that oscillates chaotically between the horn tip and drill base at lower frequencies (generally between 10 Hz and 1 kHz). This creates a series of stress pulses that are transferred through the drill bit to the rock surface; when the stress at the drill-bit tip/rock interface exceeds the compressive strength of the rock, it causes fractures that result in fragmentation of the rock. This facilitates augering and downward progress. In order to ensure that the drill-bit tip delivers the greatest effective impulse (the time integral of the portion of the drill-bit tip/rock pressure curve exceeding the strength of the rock), parameters such as the spring rates and the masses of the free mass, the drill bit and the transducer have been varied and compared in both computer simulation and practical experiment. The most interesting findings, and those of particular relevance to deep drilling, indicate that increasing the mass of the drill bit has a limited (or even positive) influence on the rate of effective impulse delivered.
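The effective-impulse definition in parentheses translates directly into code; the pulse shape and rock strength below are illustrative numbers only.

```python
import numpy as np

def effective_impulse(t, pressure, rock_strength):
    """Time integral of the tip/rock pressure in excess of the rock's
    compressive strength: the part of each stress pulse that does work."""
    excess = np.maximum(pressure - rock_strength, 0.0)
    return np.trapz(excess, t)

t = np.linspace(0.0, 1e-3, 1000)                   # one 1 ms pulse window
pulse = 250e6 * np.exp(-((t - 5e-4) / 1e-4) ** 2)  # toy Gaussian stress pulse, Pa
print(f"{effective_impulse(t, pulse, rock_strength=150e6):.0f} Pa*s")
```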
Land use allocation model considering climate change impact
NASA Astrophysics Data System (ADS)
Lee, D. K.; Yoon, E. J.; Song, Y. I.
2017-12-01
In Korea, climate change adaptation plans are being developed for each administrative district based on impact assessments constructed in various fields. These climate change impact assessments are superimposed on the actual space, which causes problems in land use allocation because the spatial distributions of individual impacts may differ from each other. This implies that trade-offs between climate change impacts can occur depending on the composition of land use. Moreover, the actual space is complexly intertwined with various factors such as required area, legal regulations, and socioeconomic values, so land use allocation in consideration of climate change can be a very difficult problem to solve (Liu et al. 2012; Porta et al. 2013). Optimization techniques can generate sufficiently good alternatives for land use allocation at the strategic level if the fitness function relating impact to land use composition is derived. It has also been noted that a land use optimization model is more effective than a scenario-based prediction model in achieving problem-solving objectives (Zhang et al. 2014). Therefore, in this study, we developed a quantitative tool, MOGA (Multi-Objective Genetic Algorithm), which can generate comprehensive land use allocations considering various climate change impacts, and applied it to Gangwon-do in Korea. Genetic algorithms (GAs) are the most popular optimization technique for addressing multiple objectives in land use allocation. A GA also allows for immediate feedback to stakeholders because it can run a number of experiments with different parameter values. We expect that land use decision makers and planners can formulate a detailed spatial plan or perform additional analysis based on the results of the optimization model. Acknowledgments: This work was supported by the Korea Ministry of Environment (MOE) as "Climate Change Correspondence Program (Project number: 2014001310006)"
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
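The tier-1 pass structure described here can be made concrete in a few lines; the pass names are the standard JPEG 2000 ones, and the function simply enumerates the 3M - 2 passes for M bit planes.

```python
def coding_passes(num_bit_planes):
    """Tier-1 pass list for one code-block: the MSB plane takes a single
    clean-up pass; every other plane takes significance-propagation,
    magnitude-refinement, and clean-up passes, giving 3M - 2 in total."""
    passes = ["cleanup(MSB)"]
    for p in range(1, num_bit_planes):
        passes += [f"sig-prop(p{p})", f"mag-ref(p{p})", f"cleanup(p{p})"]
    return passes

p = coding_passes(5)
print(len(p), "passes:", p[:4], "...")    # 3*5 - 2 = 13 coding passes
```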
Optimal allocation in annual plants and its implications for drought response
NASA Astrophysics Data System (ADS)
Caldararu, Silvia; Smith, Matthew; Purves, Drew
2015-04-01
The concept of plant optimality refers to the plastic behaviour of plants that maximises lifetime and offspring fitness. Optimality concepts have been used in vegetation models for a variety of processes, including stomatal conductance, leaf phenology and biomass allocation. Including optimality in vegetation models has the advantage of creating process-based models with relatively low complexity in terms of parameter numbers that are nevertheless capable of reproducing complex plant behaviour. We present a general model of plant growth for annual plants based on the hypothesis that plants allocate biomass to aboveground and belowground vegetative organs in order to maintain an optimal C:N ratio. The model also represents reproductive growth through a second optimality criterion, which states that plants flower when they reach peak nitrogen uptake. We apply this model to wheat and maize crops at 15 locations corresponding to FLUXNET cropland sites. The model parameters are constrained using a Bayesian fitting algorithm applied to eddy covariance data, satellite-derived vegetation indices (specifically the MODIS fAPAR product) and field-level crop yield data. We use the model to simulate the plant drought response under the assumption of plant optimality and show that the plants maintain unstressed total biomass levels under drought for a reduction in precipitation of up to 40%. Beyond that level plant response stops being plastic and growth decreases sharply. This behaviour results simply from the optimal allocation criteria, as the model includes no explicit drought sensitivity component. Models that use plant optimality concepts are a useful tool for simulating plant response to stress without the addition of artificial thresholds and parameters.
An Efficient, Lossless Database for Storing and Transmitting Medical Images
NASA Technical Reports Server (NTRS)
Fenstermacher, Marc J.
1998-01-01
This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC compression methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
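The set-redundancy principle can be sketched as follows: compute a pixelwise reference (here a median) over the set of similar images and losslessly compress each image's residual against it. This is only a schematic of the shared-information idea; it does not reproduce the specific MARS, MAZE, or MaxGBA transforms.

    import numpy as np
    import zlib

    def src_encode(images):
        # images: equal-shaped uint8 arrays of similar images.
        stack = np.stack(images).astype(np.int16)
        ref = np.median(stack, axis=0).astype(np.int16)  # shared set information
        residuals = stack - ref                          # small if images are similar
        blobs = [zlib.compress(r.tobytes()) for r in residuals]
        return ref, blobs

    def src_decode(ref, blob, shape):
        r = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
        return (ref + r).astype(np.uint8)                # exact, lossless round trip

    imgs = [np.clip(np.random.randint(100, 150, (64, 64)) + i, 0, 255).astype(np.uint8)
            for i in range(4)]
    ref, blobs = src_encode(imgs)
    assert np.array_equal(src_decode(ref, blobs[0], (64, 64)), imgs[0])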
Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks
Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng
2017-01-01
In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in a SWIPT-aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) processes, where the harvested energy is utilized to guarantee that the iterative Multi-User Detection (MUD) of IDMA works with a sufficient number of iterations. Our objective is to minimize the total transmit power of the Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate this as a joint power allocation and splitting problem, where the iteration number of MUD is also taken into consideration as the key parameter affecting both the EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636
Numerical simulation study on the optimization design of the crown shape of PDC drill bit.
Ju, Pei; Wang, Zhenquan; Zhai, Yinghu; Su, Dongyu; Zhang, Yunchi; Cao, Zhaohui
The design of the bit crown is an important part of polycrystalline diamond compact (PDC) bit design. Although previous researchers have done a great deal of work on the design principles of the PDC bit crown, the study of how rock-breaking energy consumption varies with bit crown shape has not been systematic, and the mathematical models used in design are over-simplified. In order to analyze the relation between rock-breaking energy consumption and bit crown shape quantitatively, this paper puts forward the idea of taking "per revolution-specific rock-breaking work" as the objective function, and analyzes the relationships among rock properties, inner cone angle, outer cone arc radius, and per revolution-specific rock-breaking work by means of the explicit dynamic finite element method. Results show that the variation of per revolution-specific rock-breaking work with the radius of gyration is similar for rocks with different properties; decreasing the inner cone angle or the outer cone arc radius is beneficial for reducing rock-breaking energy consumption. Of course, hydraulic structure and processing technology should also be considered in the optimization design of the PDC bit crown.
An Optimization Model for the Allocation of University Based Merit Aid
ERIC Educational Resources Information Center
Sugrue, Paul K.
2010-01-01
The allocation of merit-based financial aid during the college admissions process presents postsecondary institutions with complex and financially expensive decisions. This article describes the application of linear programming as a decision tool in merit-based financial aid decisions at a medium-sized private university. The objective defined for…
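A toy instance of such a linear program, with invented yield rates, award costs, and budget figures, might look as follows; it sketches the modeling style only (and relaxes integrality of award counts), not the article's actual model.

    from scipy.optimize import linprog

    # Decision variables: number of awards at three hypothetical aid levels.
    yield_rate = [0.25, 0.40, 0.55]      # expected enrollees per award (invented)
    award_cost = [5000, 10000, 15000]    # dollars per award (invented)

    # Maximize expected enrollment, i.e. minimize its negative.
    c = [-y for y in yield_rate]
    A_ub = [award_cost,                  # total aid budget constraint
            [1, 1, 1]]                   # cap on number of awards offered
    b_ub = [1_000_000, 150]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x, -res.fun)               # awards per level, expected enrollees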
Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian
2016-10-24
Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a single geometric-phase-based structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarizations can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.
Improved Speech Coding Based on Open-Loop Parameter Estimation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.
2000-01-01
A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
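For reference, the open-loop predictor fit that this builds on is conventionally computed with the autocorrelation method and the Levinson-Durbin recursion; a standard sketch follows (plain LPC analysis, not the paper's joint quantization-aware optimization).

    import numpy as np

    def lpc(signal, order):
        # Open-loop LPC analysis (autocorrelation method + Levinson-Durbin).
        # Returns the prediction-error filter a (with a[0] = 1) and the residual
        # energy; the sample prediction is x_hat[n] = -sum_{k>=1} a[k] * x[n-k].
        x = np.asarray(signal, dtype=float)
        r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
        a = np.zeros(order + 1)
        a[0], err = 1.0, r[0]
        for i in range(1, order + 1):
            k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coeff.
            a[1:i] = a[1:i] + k * a[i - 1:0:-1]
            a[i] = k
            err *= 1.0 - k * k
        return a, err

    t = np.arange(240)
    seg = np.sin(0.3 * t) + 0.1 * np.random.default_rng(0).standard_normal(240)
    a, err = lpc(seg, order=10)
    print(a, err)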
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, our simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
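The design criterion can be illustrated by estimating the mutual information between a transmitted bit and its quantized message under a symmetric Gaussian LLR model (an assumed stand-in for the decoder's message densities); an optimizer would then search threshold placements to maximize this quantity.

    import numpy as np

    def quantized_mi(thresholds, mu=2.0, sigma=2.0, n=200000, seed=0):
        # Monte Carlo estimate of I(X; Q) for an equiprobable bit X and
        # Q = quantize(LLR), assuming LLR ~ N(+mu, sigma^2) when X = 0 and
        # N(-mu, sigma^2) when X = 1 (symmetric Gaussian message model).
        rng = np.random.default_rng(seed)
        x = rng.integers(0, 2, n)
        llr = rng.normal(np.where(x == 0, mu, -mu), sigma)
        q = np.digitize(llr, thresholds)          # quantized message index
        joint = np.zeros((2, len(thresholds) + 1))
        np.add.at(joint, (x, q), 1.0)
        joint /= n
        marg = joint.sum(1, keepdims=True) @ joint.sum(0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / marg[nz])).sum())

    # Compare two 3-bit (7-threshold) quantizers with different spans.
    print(quantized_mi(np.linspace(-3, 3, 7)), quantized_mi(np.linspace(-8, 8, 7)))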
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alan Black; Arnis Judzis
2005-09-30
This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit-fluid system technologies. Overall, the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of the report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Levy-flight swarm intelligence, referred to as artificial bee colony Levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design, thereby improving the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
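The Levy-flight steps used in such swarm variants are commonly generated with Mantegna's algorithm; a sketch follows, with the coupling into the ABC position update shown only as a hypothetical comment.

    import math
    import random

    def levy_step(beta=1.5):
        # One Levy-distributed step length via Mantegna's algorithm.
        num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma_u = (num / den) ** (1 / beta)
        u = random.gauss(0, sigma_u)
        v = random.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    # Hypothetical use inside an ABC position update (illustrative only):
    # new_pos = pos + levy_step() * (pos - best_pos)
    print([round(levy_step(), 3) for _ in range(5)])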
NASA Astrophysics Data System (ADS)
Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki
2016-12-01
In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both the inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput, whereas fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a non-convex mixed-integer non-linear programming problem, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on formulating exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation performance analysis has been carried out that validates the efficiency of the proposed solution.
Site Selection and Resource Allocation of Oil Spill Emergency Base for Offshore Oil Facilities
NASA Astrophysics Data System (ADS)
Li, Yunbin; Liu, Jingxian; Wei, Lei; Wu, Weihuang
2018-02-01
Based on the analysis of historical data on oil spill accidents in the Bohai Sea, this paper discretizes the oil spill sources into a limited number of spill points. According to the probability of oil spill risk, the demand for salvage forces at each oil spill point is evaluated. For the candidate rescue base locations around the Bohai Sea, a cost-benefit analysis is conducted to determine the total disaster cost for each rescue base. Based on the relationship between the oil spill points and the rescue sites, a multi-objective optimization location model for oil spill rescue bases in the Bohai Sea region is established. A genetic algorithm is used to solve the optimization problem and to determine the optimal configuration of emergency rescue bases and the allocation ratio of emergency resources.
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
She, Ji; Wang, Fei; Zhou, Jianjiang
2016-01-01
Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. It is found that this algorithm can minimize the total transmitted power of a radar network on the basis of a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate target parameters, and it can be calculated predictively with the estimation of target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex and it can be solved by separating power allocation problem from sensor selection problem. To be specific, the optimization problem of power allocation can be solved by using the bisection method for each sensor selection scheme. Also, the optimization problem of sensor selection can be solved by a lower complexity algorithm based on the allocated powers. According to the simulation results, it can be found that the proposed algorithm can effectively reduce the total transmitted power of a radar network, which can be conducive to improving LPI performance. PMID:28009819
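The bisection step can be sketched as follows: for a fixed sensor-selection scheme, the achieved MI grows monotonically with transmit power, so the minimum power meeting the MI threshold is found by interval halving. The MI model below is an invented stand-in, not the paper's expression.

    import math

    def min_power_bisection(mi_required, mi_of_power, lo=0.0, hi=1e3, tol=1e-6):
        # Smallest p with mi_of_power(p) >= mi_required, assuming monotonicity.
        if mi_of_power(hi) < mi_required:
            raise ValueError("threshold unreachable within power limit")
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if mi_of_power(mid) >= mi_required:
                hi = mid            # feasible: shrink from above
            else:
                lo = mid            # infeasible: raise the floor
        return hi

    # Stand-in MI model for one sensor: log2(1 + gain * p / noise) (assumption).
    mi = lambda p: math.log2(1 + 0.8 * p / 2.0)
    print(min_power_bisection(mi_required=3.0, mi_of_power=mi))  # -> 17.5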
Analysis and Research on the Optimal Allocation of Regional Water Resources
NASA Astrophysics Data System (ADS)
rui-chao, Xi; yu-jie, Gu
2018-06-01
Starting from the basic concept of optimal allocation of water resources and taking the allocation of water resources in Tianjin as an example, the present situation of water resources in Tianjin is analyzed, and a multi-objective optimal allocation model is used to optimize the allocation of water resources. We use LINGO to solve the model, obtain an optimal allocation plan that meets economic and social benefit objectives, and put forward relevant policies and regulations, so as to provide a theoretical basis for alleviating and solving the problem of water shortage.
Ramsey waits: allocating public health service resources when there is rationing by waiting.
Gravelle, Hugh; Siciliani, Luigi
2008-09-01
The optimal allocation of a public health care budget across treatments must take account of the way in which care is rationed within treatments since this will affect their marginal value. We investigate the optimal allocation rules for public health care systems where user charges are fixed and care is rationed by waiting. The optimal waiting time is higher for treatments with demands more elastic to waiting time, higher costs, lower charges, smaller marginal welfare loss from waiting by treated patients, and smaller marginal welfare losses from under-consumption of care. The results hold for a wide range of welfarist and non-welfarist objective functions and for systems in which there is also a private health care sector. They imply that allocation rules based purely on cost effectiveness ratios are suboptimal because they assume that there is no rationing within treatments.
A Ku band 5 bit MEMS phase shifter for active electronically steerable phased array applications
NASA Astrophysics Data System (ADS)
Sharma, Anesh K.; Gautam, Ashu K.; Farinelli, Paola; Dutta, Asudeb; Singh, S. G.
2015-03-01
The design, fabrication and measurement of a 5 bit Ku band MEMS phase shifter in different configurations, i.e. a coplanar waveguide and microstrip, are presented in this work. The development architecture is based on the hybrid approach of switched and loaded line topologies. All the switches are monolithically manufactured on a 200 µm high resistivity silicon substrate using 4 inch diameter wafers. The first three bits (180°, 90° and 45°) are realized using switched microstrip lines and series ohmic MEMS switches whereas the fourth and fifth bits (22.5° and 11.25°) consist of microstrip line sections loaded by shunt ohmic MEMS devices. Individual bits are fabricated and evaluated for performance and the monolithic device is a 5 bit Ku band (16-18 GHz) phase shifter with very low average insertion loss of the order of 3.3 dB and a return loss better than 15 dB over the 32 states with a chip area of 44 mm2. A total phase shift of 348.75° with phase accuracy within 3° is achieved over all of the states. The performance of individual bits has been optimized in order to achieve an integrated performance so that they can be implemented into active electronically steerable antennas for phased array applications.
Dodson, Zan M.; Agadjanian, Victor; Driessen, Julia
2016-01-01
Proper allocation of limited healthcare resources is a challenging task for policymakers in developing countries. Allocation of and access to these resources typically varies based on how need is defined, thus determining how individuals access and acquire healthcare. Using the introduction of antiretroviral therapy in southern Mozambique as an example, we examine alternative definitions of need for rural populations and how they might impact the allocation of this vital health service. Our results show that how need is defined matters when allocating limited healthcare resources and the use of need-based metrics can help ensure more optimal distribution of services. PMID:28596630
Deterministic switching of a magnetoelastic single-domain nano-ellipse using bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Cheng-Yen; Sepulveda, Abdon; Keller, Scott
2016-03-21
In this paper, a fully coupled analytical model between elastodynamics and micromagnetics is used to study the switching energies using voltage-induced mechanical bending of a magnetoelastic bit. The bit consists of a single-domain magnetoelastic nano-ellipse deposited on a piezoelectric thin film (500 nm) attached to a thick substrate (0.5 mm), with patterned electrodes underneath the nano-dot. A voltage applied to the electrodes produces out-of-plane deformation, with bending moments induced in the magnetoelastic bit modifying the magnetic anisotropy. To minimize the energy, two design stages are used. In the first stage, the geometry and bias field (H_b) of the bit are optimized to minimize the strain energy required to rotate between two stable states. In the second stage, the bit's geometry is fixed, and the electrode position and control mechanism are optimized. The electrical energy input is about 200 aJ, which is approximately two orders of magnitude lower than spin transfer torque approaches.
Design of replica bit line control circuit to optimize power for SRAM
NASA Astrophysics Data System (ADS)
Pengjun, Wang; Keji, Zhou; Huihong, Zhang; Daohui, Gong
2016-12-01
A design of a replica bit line control circuit to optimize power for SRAM is proposed. The proposed design overcomes the limitations of the traditional replica bit line control circuit, which cannot shut off the word line in time. In the novel design, the delays of the word line enable and disable paths are balanced, so the word line can be opened and shut off in time. Moreover, the chip select signal is decomposed, which prevents feedback oscillations caused by the replica bit line and the replica word line. As a result, the switching power caused by unnecessary discharging of the bit line is reduced. A 2-kb SRAM is fully custom designed in an SMIC 65-nm CMOS process. The traditional replica bit line control circuit and the new replica bit line control circuit are used in the designed SRAM, and their performances are compared with each other. The experimental results show that at a supply voltage of 1.2 V, the switching power consumption of the memory array can be reduced by 53.7%. Project supported by the Zhejiang Provincial Natural Science Foundation of China (No. LQ14F040001), the National Natural Science Foundation of China (Nos. 61274132, 61234002, 61474068), and the K. C. Wong Magna Fund in Ningbo University.
Cellular trade-offs and optimal resource allocation during cyanobacterial diurnal growth
Knoop, Henning; Bockmayr, Alexander; Steuer, Ralf
2017-01-01
Cyanobacteria are an integral part of Earth’s biogeochemical cycles and a promising resource for the synthesis of renewable bioproducts from atmospheric CO2. Growth and metabolism of cyanobacteria are inherently tied to the diurnal rhythm of light availability. As yet, however, insight into the stoichiometric and energetic constraints of cyanobacterial diurnal growth is limited. Here, we develop a computational framework to investigate the optimal allocation of cellular resources during diurnal phototrophic growth using a genome-scale metabolic reconstruction of the cyanobacterium Synechococcus elongatus PCC 7942. We formulate phototrophic growth as an autocatalytic process and solve the resulting time-dependent resource allocation problem using constraint-based analysis. Based on a narrow and well-defined set of parameters, our approach results in an ab initio prediction of growth properties over a full diurnal cycle. The computational model allows us to study the optimality of metabolite partitioning during diurnal growth. The cyclic pattern of glycogen accumulation, an emergent property of the model, has timing characteristics that are in qualitative agreement with experimental findings. The approach presented here provides insight into the time-dependent resource allocation problem of phototrophic diurnal growth and may serve as a general framework to assess the optimality of metabolic strategies that evolved in phototrophic organisms under diurnal conditions. PMID:28720699
NASA Astrophysics Data System (ADS)
Zhang, Yu; Jin, Lei; Jiang, Dandan; Zou, Xingqi; Zhao, Zhiguo; Gao, Jing; Zeng, Ming; Zhou, Wenbin; Tang, Zhaoyun; Huo, Zongliang
2018-03-01
In order to optimize program disturbance characteristics effectively, a characterization approach that measures top select gate (TSG) transistor leakage from the bit-line is proposed to quantify TSG leakage under the program-inhibit condition in 3D NAND flash memory. Based on this approach, the effect of Vth modulation of the two-cell TSG on leakage is evaluated. By checking the dependence of the leakage and the corresponding program disturbance on the upper and lower TSG Vth, the approach is validated. An optimal Vth pattern with a high upper TSG Vth and a low lower TSG Vth is suggested for low leakage current and high boosted channel potential. It is found that the upper TSG plays the dominant role in preventing drain-induced barrier lowering (DIBL) leakage from the boosted channel to the bit-line, while the lower TSG assists in further suppressing TSG leakage by providing a smooth potential drop from the dummy WL to the edge of the TSG, consequently suppressing trap-assisted band-to-band tunneling (BTBT) current between the dummy WL and the TSG.
FPGA-based LDPC-coded APSK for optical communication systems.
Zou, Ding; Lin, Changyu; Djordjevic, Ivan B
2017-02-20
In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. The field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block interleaver-based and bit interleaver-based systems; the results verify a significant improvement in hardware-efficient bit interleaver-based systems. In bit interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
NASA Astrophysics Data System (ADS)
Bao, Xiurong; Zhao, Qingchun; Yin, Hongxi; Qin, Jie
2018-05-01
In this paper, an all-optical parallel reservoir computing (RC) system with two channels for optical packet header recognition is proposed and simulated, based on a semiconductor ring laser (SRL) with the characteristic of bidirectional light paths. The parallel optical loops are built through cross-feedback of the bidirectional light paths, where each optical loop can independently recognize an injected optical packet header. Two input signals are mapped and recognized simultaneously by training the all-optical parallel reservoir, which is attributed to the nonlinear states in the laser. The recognition of optical packet headers from 4 bits to 32 bits for the two channels is implemented in simulation by optimizing the system parameters, and the optimal recognition error ratio is 0. Since this structure can be combined with wavelength division multiplexing (WDM) optical packet switching networks, the wavelengths of the optical packet headers in each channel can be different, and a better recognition result can be obtained.
NASA Astrophysics Data System (ADS)
Canright, David; Osvik, Dag Arne
We explore ways to reduce the number of bit operations required to implement AES. One way involves optimizing the composite field approach for entire rounds of AES. Another way is integrating the Galois multiplications of MixColumns with the linear transformations of the S-box. Combined with careful optimizations, these reduce the number of bit operations to encrypt one block by 9.0%, compared to earlier work that used the composite field only in the S-box. For decryption, the improvement is 13.5%. This work may be useful both as a starting point for a bit-sliced software implementation, where reducing operations increases speed, and also for hardware with limited resources.
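For context, the Galois multiplications that MixColumns contributes are multiplications in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1; a plain, table-free version is sketched below to show what the merged linear transformation has to compute (a reference sketch, not the bit-sliced formulation).

    def xtime(a):
        # Multiply by x in GF(2^8) modulo the AES polynomial 0x11B.
        a <<= 1
        return (a ^ 0x1B) & 0xFF if a & 0x100 else a

    def gmul(a, b):
        # General GF(2^8) multiply (shift-and-add).
        p = 0
        while b:
            if b & 1:
                p ^= a
            a = xtime(a)
            b >>= 1
        return p

    # MixColumns only ever needs multiplications by 1, 2 and 3:
    assert gmul(0x57, 0x02) == 0xAE and gmul(0x57, 0x03) == 0xF9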
An intelligent allocation algorithm for parallel processing
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.
1988-01-01
The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimates of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication durations varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structure on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph. Hence, the focus of allocation has been on varying token-communication durations rather than varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon granularity and the characteristics of the interprocessor communication network.
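The critical-path component of such an allocator can be sketched as a longest-path computation over the program graph, with node execution durations and per-edge token-communication durations; all names below are illustrative.

    from collections import defaultdict

    def critical_path(exec_time, edges, comm_time):
        # Longest path through a DAG of program-graph nodes.
        # exec_time: {node: duration}; edges: list of (u, v);
        # comm_time: {(u, v): token-communication duration}.
        succ, indeg = defaultdict(list), defaultdict(int)
        for u, v in edges:
            succ[u].append(v)
            indeg[v] += 1
        order = [n for n in exec_time if indeg[n] == 0]
        finish = {n: exec_time[n] for n in order}
        i = 0
        while i < len(order):              # Kahn topological traversal
            u = order[i]; i += 1
            for v in succ[u]:
                cand = finish[u] + comm_time.get((u, v), 0) + exec_time[v]
                finish[v] = max(finish.get(v, 0), cand)
                indeg[v] -= 1
                if indeg[v] == 0:
                    order.append(v)
        return max(finish.values())        # length of the critical path

    et = {"a": 2, "b": 3, "c": 4, "d": 1}
    print(critical_path(et, [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")],
                        {("a", "c"): 5}))  # -> 12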
NASA Astrophysics Data System (ADS)
Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao
2017-12-01
Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas in Heping irrigation area, Qing'an County, northeast China shows the potential and applicability of the developed model. Results show that the crop yield increase target especially in tillering and elongation stages is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.
Image compression software for the SOHO LASCO and EIT experiments
NASA Technical Reports Server (NTRS)
Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis
1994-01-01
This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronograph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits they have been allocated.
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes which spatially adapts the wavelet basis employed in the wavelet transformation of the mesh. The spatially adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
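The per-region predictor choice reduces to minimizing a Lagrangian cost J = D + lambda * R over the candidate wavelet predictors; a schematic version with stand-in distortion and rate estimates is shown below.

    def pick_predictor(region_coeffs_by_pred, rate_by_pred, lam):
        # Choose, for one region, the predictor minimizing D + lambda * R.
        # region_coeffs_by_pred: {name: list of wavelet coefficients};
        # rate_by_pred: {name: bits to code the coefficients plus signaling}.
        best, best_cost = None, float("inf")
        for name, coeffs in region_coeffs_by_pred.items():
            distortion = sum(c * c for c in coeffs)   # coefficient energy as D proxy
            cost = distortion + lam * rate_by_pred[name]
            if cost < best_cost:
                best, best_cost = name, cost
        return best

    coeffs = {"butterfly": [0.9, -1.2, 0.4], "flat": [0.2, -0.1, 0.05]}
    rates = {"butterfly": 30, "flat": 38}             # includes per-region overhead
    print(pick_predictor(coeffs, rates, lam=0.05))    # -> "flat"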
Algorithms for synthesizing management solutions based on OLAP-technologies
NASA Astrophysics Data System (ADS)
Pishchukhin, A. M.; Akhmedyanova, G. F.
2018-05-01
OLAP technologies are a convenient means of analyzing large amounts of information. In this work, an attempt was made to improve the synthesis of optimal management decisions. The developed algorithms allow forecasting of needs and support management decisions on the main types of enterprise resources. Their advantage is efficiency, based on the simplicity of quadratic functions and first-order differential equations. At the same time, the optimal redistribution of resources between different types of products from the assortment of the enterprise is carried out, together with the optimal scheduling of the allocated resources over time. The proposed solutions can be placed on additional, specially introduced coordinates of the hypercube representing the data warehouse.
Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan
2013-01-01
Location-allocation is a combinatorial optimization problem and is NP-hard. Therefore, the solution of such a problem should be shifted from exact to heuristic or metaheuristic methods due to the complexity of the problem. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, so developing a proper method will reduce the time of the relief operation and consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the abilities of Geographic Information Systems (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm is used to optimize the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results of the proposed method showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation when injured people have to be taken to medical centers in a reasonable time.
A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling.
Li, Bin-Bin; Wang, Ling
2007-06-01
This paper proposes a hybrid quantum-inspired genetic algorithm (HQGA) for the multiobjective flow shop scheduling problem (FSSP), which is a typical NP-hard combinatorial optimization problem with a strong engineering background. On the one hand, a quantum-inspired GA (QGA) based on Q-bit representation is applied for exploration in the discrete 0-1 hyperspace, using the updating operator of the quantum gate and genetic operators on Q-bits. Moreover, random-key representation is used to convert the Q-bit representation to a job permutation for evaluating the objective values of the schedule solution. On the other hand, a permutation-based GA (PGA) is applied both for exploration in the permutation-based scheduling space and to stress exploitation of good schedule solutions. To evaluate solutions in a multiobjective sense, a randomly weighted linear-sum function is used in the QGA, and a nondominated sorting technique, including classification of Pareto fronts and fitness assignment, is applied in the PGA with regard to both proximity and diversity of solutions. To maintain the diversity of the population, two population-trimming techniques are proposed. The proposed HQGA is tested on several multiobjective FSSPs. Simulation results and comparisons based on several performance metrics demonstrate the effectiveness of the proposed HQGA.
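The random-key decoding step referenced above maps a real-valued vector to a job permutation by sorting, which keeps any real-coded representation feasible; a minimal sketch:

    def random_key_to_permutation(keys):
        # Decode a random-key vector into a job permutation:
        # jobs are sequenced in increasing order of their key values.
        return sorted(range(len(keys)), key=lambda j: keys[j])

    # e.g. keys measured from Q-bit observations (values here are illustrative)
    print(random_key_to_permutation([0.42, 0.07, 0.88, 0.31]))  # -> [1, 3, 0, 2]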
Schweigkofler, U; Reimertz, C; Auhuber, T C; Jung, H G; Gottschalk, R; Hoffmann, R
2011-10-01
The outcome of injured patients depends on infrastructural circumstances as well as on the time until clinical treatment begins. Rapid patient allocation can only be achieved if information about the care capacity status of the medical centers is available. Considering this, an improvement at the prehospital/clinical care interface seems possible. In 2010 in Frankfurt am Main, the announcement of free capacity (positive proof) was converted to a web-based negative proof of interdisciplinary care capacities. So-called closings are indicated in a web portal, recorded centrally, and registered with the local health authority and the management of the participating hospitals. Analyses of the allocations to hospitals of all professional disciplines from the years 2009 and 2010 showed an optimized use of resources. With the introduction of the clear care capacity proof system, allocations by official order declined from 261 to 0. The health authorities as the regulating body rarely had to intervene (a decline from 400 to 7 cases). Surgical care in Frankfurt was guaranteed at any time by one of the large medical centers. The web-based care capacity proof system introduced in 2010 meets the demand for optimal online resource use. Integration of this allocation system into the developing trauma networks can optimize the process for quick and high-quality care of severely injured patients. It opens new approaches to improving the allocation of high numbers of casualties in disaster medicine.
Efficient Simulation Budget Allocation for Selecting an Optimal Subset
NASA Technical Reports Server (NTRS)
Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay
2008-01-01
We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
Optimizing Irrigation Water Allocation under Multiple Sources of Uncertainty in an Arid River Basin
NASA Astrophysics Data System (ADS)
Wei, Y.; Tang, D.; Gao, H.; Ding, Y.
2015-12-01
Population growth and climate change add pressures affecting water resources management strategies for meeting demands from different economic sectors. This is especially challenging in arid regions where fresh water is limited. For instance, in the Tailanhe River Basin (Xinjiang, China), a compromise must be made between water suppliers and users during drought years. This study presents a multi-objective irrigation water allocation model to cope with water scarcity in arid river basins. To deal with the uncertainties from multiple sources in the water allocation system (e.g., variations of available water amount, crop yield, crop prices, and water price), the model employs an interval linear programming approach. The multi-objective optimization model developed in this study is characterized by integrating ecosystem service theory into water-saving measures. For evaluation purposes, the model is used to construct an optimal allocation system for irrigation areas fed by the Tailan River (Xinjiang Province, China). The objective functions to be optimized are formulated based on these irrigation areas' economic, social, and ecological benefits. The optimal irrigation water allocation plans are made under different hydroclimate conditions (wet year, normal year, and dry year), with multiple sources of uncertainty represented. The modeling tool and results are valuable for advising decision making by the local water authority and the agricultural community, especially on measures for coping with water scarcity (by incorporating uncertain factors associated with crop production planning).
Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-11-25
In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.
Security of two-state and four-state practical quantum bit-commitment protocols
NASA Astrophysics Data System (ADS)
Loura, Ricardo; Arsenović, Dušan; Paunković, Nikola; Popović, Duška B.; Prvanović, Slobodan
2016-12-01
We study cheating strategies against a practical four-state quantum bit-commitment protocol [A. Danan and L. Vaidman, Quantum Inf. Process. 11, 769 (2012), doi:10.1007/s11128-011-0284-4] and its two-state variant [R. Loura et al., Phys. Rev. A 89, 052336 (2014), doi:10.1103/PhysRevA.89.052336] when the underlying quantum channels are noisy and the cheating party is constrained to using single-qubit measurements only. We show that simply inferring the transmitted photons' states by using the Breidbart basis, optimal for ambiguous (minimum-error) state discrimination, does not directly produce an optimal cheating strategy for this bit-commitment protocol. We introduce a strategy based on certain postmeasurement processes and show it to have better chances at cheating than the direct approach. We also study to what extent sending forged geographical coordinates helps a dishonest party in breaking the binding security requirement. Finally, we investigate the impact of imperfect single-photon sources in the protocols. Our study shows that, in terms of the resources used, the four-state protocol is advantageous over the two-state version. The analysis performed can be straightforwardly generalized to any finite-qubit measurement, with the same qualitative results.
Bandwidth reduction for video-on-demand broadcasting using secondary content insertion
NASA Astrophysics Data System (ADS)
Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy
2005-01-01
An optimal broadcasting scheme in the presence of secondary content (i.e., advertisements) is proposed. The proposed scheme works for movies encoded in either a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
Optimal allocation of resources for suppressing epidemic spreading on networks
NASA Astrophysics Data System (ADS)
Chen, Hanshuang; Li, Guofeng; Zhang, Haifeng; Hou, Zhonghuai
2017-07-01
Efficient allocation of limited medical resources is crucial for controlling epidemic spreading on networks. Based on the susceptible-infected-susceptible model, we solve the optimization problem of how best to allocate the limited resources so as to minimize prevalence, provided that the curing rate of each node is positively correlated to its medical resource. By quenched mean-field theory and heterogeneous mean-field (HMF) theory, we prove that an epidemic outbreak will be suppressed to the greatest extent if the curing rate of each node is directly proportional to its degree, under which the effective infection rate λ has a maximal threshold λ_c^opt = 1/
Reactive Power Pricing Model Considering the Randomness of Wind Power Output
NASA Astrophysics Data System (ADS)
Dai, Zhong; Wu, Zhou
2018-01-01
With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is becoming more prominent. Meanwhile, power market reform puts forward higher requirements for reasonable pricing of reactive power services. On this basis, this article combines an optimal power flow model that considers wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of present cost allocation methods and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with minimal integrated cost under wind power integration, under the premise of guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and stochastic simulation of wind power outputs, the article compares the results of the model pricing and marginal cost pricing, which proves that the model is accurate and effective.
Fast packet switching algorithms for dynamic resource control over ATM networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsang, R.P.; Keattihananant, P.; Chang, T.
1996-12-01
Real-time continuous media traffic, such as digital video and audio, is expected to comprise a large percentage of the network load on future high speed packet switch networks such as ATM. A major feature which distinguishes high speed networks from traditional slower speed networks is the large amount of data the network must process very quickly. For efficient network usage, traffic control mechanisms are essential. Currently, most mechanisms for traffic control (such as flow control) have centered on the support of Available Bit Rate (ABR), i.e., non-real-time, traffic. With regard to ATM, for ABR traffic, the two major types of schemes which have been proposed are rate-control and credit-control schemes. Neither of these schemes is directly applicable to real-time Variable Bit Rate (VBR) traffic such as continuous media traffic. Traffic control for continuous media traffic is an inherently difficult problem due to the time-sensitive nature of the traffic and its unpredictable burstiness. In this study, we present a scheme which controls traffic by dynamically allocating/de-allocating resources among competing VCs based upon their real-time requirements. This scheme incorporates a form of rate control, real-time burst-level scheduling and link-to-link flow control. We show analytically the potential performance improvements of our rate-control scheme and present a scheme for buffer dimensioning. We also present simulation results of our schemes and discuss the tradeoffs inherent in maintaining high network utilization while statistically guaranteeing many users' Quality of Service.
A hybrid Jaya algorithm for reliability-redundancy allocation problems
NASA Astrophysics Data System (ADS)
Ghavidel, Sahand; Azizivahed, Ali; Li, Li
2018-04-01
This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence of optimization performance better than that of the original Jaya algorithm and competitive with the best reported results.
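For orientation, the core of such an approach is the Jaya update, which pulls each candidate toward the current best solution and away from the worst. The sketch below implements basic Jaya on a sphere test function with an illustrative linearly time-varying coefficient schedule standing in for the paper's TVAC mechanism; the TLBO learning phase is omitted, so this is a simplification, not the LJaya-TVAC algorithm itself.

import numpy as np

def jaya_tvac(f, dim=30, pop=20, iters=500, lo=-100.0, hi=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        best, worst = X[fit.argmin()], X[fit.argmax()]
        # illustrative TVAC-style schedule (assumption): attraction to the
        # best grows over time, repulsion from the worst decays
        c1 = 0.5 + 2.0 * t / iters
        c2 = 2.5 - 2.0 * t / iters
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        Xn = X + c1 * r1 * (best - np.abs(X)) - c2 * r2 * (worst - np.abs(X))
        Xn = np.clip(Xn, lo, hi)
        fn = np.apply_along_axis(f, 1, Xn)
        improved = fn < fit
        X[improved] = Xn[improved]       # greedy selection, as in basic Jaya
    fit = np.apply_along_axis(f, 1, X)
    return X[fit.argmin()], fit.min()

x, fx = jaya_tvac(lambda v: np.sum(v * v))  # sphere test function
print("best f:", fx)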
A compact presentation of DSN array telemetry performance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1982-01-01
The telemetry performance of an arrayed receiver system, including radio losses, is often given by a family of curves giving bit error rate vs bit SNR, with tracking loop SNR at one receiver held constant along each curve. This study shows how to process this information into a more compact, useful format in which the minimal total signal power and optimal carrier suppression, for a given fixed bit error rate, are plotted vs data rate. Examples for baseband-only combining are given. When appropriate dimensionless variables are used for plotting, receiver arrays with different numbers of antennas and different threshold tracking loop bandwidths look much alike, and a universal curve for optimal carrier suppression emerges.
Methods to ensure optimal off-bottom and drill bit distance under pellet impact drilling
NASA Astrophysics Data System (ADS)
Kovalyov, A. V.; Isaev, Ye D.; Vagapov, A. R.; Urnish, V. V.; Ulyanova, O. S.
2016-09-01
The paper describes pellet impact drilling, which can be used to increase the drilling speed and rate of penetration when drilling hard rock for various purposes. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the formation being drilled. The pellets are circulated in the bottom hole by a high-velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The paper presents a survey of methods for ensuring an optimal distance between the drill bit and the hole bottom (the off-bottom distance). The analysis of these methods shows that the issue is topical and requires further research.
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
Optimal Resource Allocation under Fair QoS in Multi-tier Server Systems
NASA Astrophysics Data System (ADS)
Akai, Hirokazu; Ushio, Toshimitsu; Hayashi, Naoki
Recent developments in network technology have enabled multi-tier server systems, in which several tiers perform functionally different processing requested by clients. It is an important issue to allocate the systems' resources to clients dynamically based on their current requests. Q-RAM has been proposed for resource allocation in real-time systems. In server systems, it is important that the execution results of all applications requested by clients attain the same QoS (quality of service) level. In this paper, we extend Q-RAM to multi-tier server systems and propose a method for optimal resource allocation with fairness among the QoS levels of clients' requests. We also consider the problem of assigning physical machines to be put to sleep in each tier so that energy consumption is minimized.
Flexible operation strategy for environment control system in abnormal supply power condition
NASA Astrophysics Data System (ADS)
Liping, Pang; Guoxiang, Li; Hongquan, Qu; Yufeng, Fang
2017-04-01
This paper establishes an optimization method that can be applied to the flexible operation of an environment control system under an abnormal supply power condition. A proposed concept of lifespan is used to evaluate the depletion time of each non-regenerative substance. The optimization objective is to maximize the lifespans, and the optimization variables are the powers allocated to the subsystems. The improved Non-dominated Sorting Genetic Algorithm is adopted to obtain the Pareto optimization frontier under constraints on the cabin environmental parameters and the adjustable operating parameters of the subsystems. Treating the objective functions as equally important, the preferred power allocation among subsystems can be selected, and the corresponding running parameters of the subsystems can then be determined to ensure maximal lifespans. A long-duration space station with three astronauts is used to demonstrate the proposed optimization method, with three different CO2 partial pressure levels taken into consideration. The optimization results show that the proposed method obtains the preferred power allocation for the subsystems when the supply power is at a less-than-nominal value. The method can be applied to autonomous control for the emergency response of the environment control system.
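A core step in such multi-objective tuning is extracting the non-dominated (Pareto) set of candidate allocations. The following is a minimal sketch of that dominance filter, assuming all objectives (here, toy lifespan values) are to be maximized; the data are placeholders, not values from the study.

import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives maximized).
    A point is dominated if another point is >= in every objective
    and strictly > in at least one."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominated = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# toy lifespans (oxygen, water, CO2 removal) for candidate power allocations
F = np.array([[30, 25, 40], [28, 30, 38], [26, 24, 35], [31, 22, 36]])
print("Pareto-optimal allocations:", pareto_front(F))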
Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel.
Sakin, Sayef Azad; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Alamri, Atif; Tran, Nguyen H; Fortino, Giancarlo
2017-12-07
Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is NP-hard. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer premises equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of the networks are treated as players in a game, in which they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game that increases the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium is presented. Simulation studies show performance improvements in data rate, with a degree of fairness, compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach.
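The game-theoretic core, each base station best-responding to the others' transmit powers until a Nash point is reached, can be sketched as follows. The two-network setup, channel gains, and the rate-minus-power-cost utility are assumptions for illustration, not the paper's exact formulation.

import numpy as np

g = np.array([[1.0, 0.08],     # g[i][j]: gain from BS j to the users of BS i
              [0.10, 0.9]])    # illustrative values (assumption)
N0, c = 0.05, 0.8              # noise power, price per unit transmit power
levels = np.linspace(0.01, 1.0, 100)

def utility(i, pi, p):
    interf = sum(g[i, j] * p[j] for j in range(len(p)) if j != i)
    sinr = g[i, i] * pi / (N0 + interf)
    return np.log2(1 + sinr) - c * pi   # rate minus power cost

p = np.array([1.0, 1.0])
for _ in range(50):                     # iterate best responses to a fixed point
    p_old = p.copy()
    for i in range(2):
        p[i] = levels[np.argmax([utility(i, x, p) for x in levels])]
    if np.allclose(p, p_old):
        break                           # no player wants to move: Nash point
print("equilibrium powers:", p)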
NASA Astrophysics Data System (ADS)
Habibi Davijani, M.; Banihabib, M. E.; Nadjafzadeh Anvar, A.; Hashemi, S. R.
2016-02-01
Work force is often cited as the most important factor of production: it can compensate to a large extent for the physical and material limitations of other factors and thereby raise the production level. Employment is also an influential factor in social issues. The goal of the present research is to allocate water resources so as to maximize the number of jobs created in the industry and agriculture sectors. An objective that has attracted the attention of policy makers involved in water supply and distribution is maximizing the benefits to stakeholders and consumers under the adopted policies. The present model applies the particle swarm optimization (PSO) algorithm to determine the optimal amount of water allocated to each water-demanding sector, the area under cultivation, agricultural production, employment in the agriculture sector, and industrial production and employment. Based on the results, optimally allocating water resources in the central desert region of Iran can create 1096 jobs in the industry and agriculture sectors, an improvement of about 13% over the previous situation of non-optimal water utilization. Moreover, optimizing employment as a social parameter also influences other areas such as the economy: the resulting economic benefits (incomes) improve from 73 billion Rials under baseline employment to 112 billion Rials under the optimized employment condition. It is therefore necessary to change the inter-sector and intra-sector water allocation models in this region, because this change not only creates more jobs but also improves the region's economic conditions.
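A minimal sketch of the PSO machinery on a toy version of this allocation problem: concave job-creation returns per sector under a total water budget, enforced with a quadratic penalty. All coefficients are hypothetical placeholders, not data from the study.

import numpy as np

def jobs(x):
    a = np.array([40.0, 25.0, 18.0])   # per-sector returns (hypothetical)
    return np.sum(a * np.sqrt(x))      # diminishing returns in each sector

W = 10.0                               # total water available (toy units)
def penalized(x):
    return -jobs(x) + 1e3 * max(0.0, x.sum() - W) ** 2  # minimize

rng = np.random.default_rng(0)
n, dim, iters = 30, 3, 300
X = rng.uniform(0, W, (n, dim)); V = np.zeros((n, dim))
P = X.copy(); Pf = np.apply_along_axis(penalized, 1, X)
G = P[Pf.argmin()]
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (G - X)  # inertia + pulls
    X = np.clip(X + V, 0, W)
    f = np.apply_along_axis(penalized, 1, X)
    better = f < Pf
    P[better], Pf[better] = X[better], f[better]           # personal bests
    G = P[Pf.argmin()]                                     # global best
print("allocation:", G, "jobs:", jobs(G))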
Design of high-speed burst mode clock and data recovery IC for passive optical network
NASA Astrophysics Data System (ADS)
Yan, Minhui; Hong, Xiaobin; Huang, Wei-Ping; Hong, Jin
2005-09-01
The design of a high-bit-rate burst-mode clock and data recovery (BMCDR) circuit for gigabit passive optical networks (GPON) is described. A top-down design flow is established, and some key issues in behavioural-level modeling are addressed in view of the complexity of the BMCDR integrated circuit (IC). A precise Simulink behavioural model accounting for the saturation of the frequency control voltage is developed for the BMCDR, so that the parameters of the circuit blocks can be readily adjusted and optimized. The newly designed BMCDR uses a 0.18 µm standard CMOS technology and is shown in simulation to operate at a bit rate of 2.5 Gbps with a recovery time of one bit period. The behavioural model is verified by comparison with detailed circuit simulation.
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The multi-robot task allocation problem is to allocate relatively many tasks among relatively few robots so as to minimize the processing time of these tasks. To obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the conventional artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed, and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
NASA Astrophysics Data System (ADS)
Xiang, Yu; Tao, Cheng
2018-05-01
During the operation of a personal rapid transit (PRT) system, empty-vehicle resources become unevenly distributed because of differing passenger demand. In order to maintain the balance between supply and demand and to meet passengers' travel needs, a PRT empty-vehicle resource allocation model is constructed in this paper, based on future demand forecast from historical demand. An improved genetic algorithm is applied to the distribution of empty vehicles, which reduces customer waiting time and improves the operating efficiency of the PRT system so that all passengers can board PRT vehicles in the shortest time. Experimental results show that the improved genetic algorithm allocates empty vehicles optimally at the system level and distributes empty-vehicle resources reasonably across the system.
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Salous, Sana; Wang, Fei; Zhou, Jianjiang
2017-08-01
Distributed radar network systems have been shown to have many unique features. Due to their advantage of signal and spatial diversities, radar networks are attractive for target detection. In practice, the netted radars in radar networks are supposed to maximize their transmit power to achieve better detection performance, which may be in contradiction with low probability of intercept (LPI). Therefore, this paper investigates the problem of adaptive power allocation for radar networks in a cooperative game-theoretic framework such that the LPI performance can be improved. Taking into consideration both the transmit power constraints and the minimum signal to interference plus noise ratio (SINR) requirement of each radar, a cooperative Nash bargaining power allocation game based on LPI is formulated, whose objective is to minimize the total transmit power by optimizing the power allocation in radar networks. First, a novel SINR-based network utility function is defined and utilized as a metric to evaluate power allocation. Then, with the well-designed network utility function, the existence and uniqueness of the Nash bargaining solution are proved analytically. Finally, an iterative Nash bargaining algorithm is developed that converges quickly to a Pareto optimal equilibrium for the cooperative game. Numerical simulations and theoretic analysis are provided to evaluate the effectiveness of the proposed algorithm.
Bit selection using field drilling data and mathematical investigation
NASA Astrophysics Data System (ADS)
Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.
2018-03-01
A drilling process cannot be completed without a drill bit, so bit selection is an important task in the drilling optimization process and in planning and designing a well, simply because drill bits account for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed, trained on drill-bit records from several offset wells. In this project, two ANN models are developed: one to predict the IADC bit code and one to predict the ROP. In Stage 1, the IADC bit code is found using all the given field data, with the targeted IADC bit code as output. In Stage 2, predicted ROP values are found using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code as output. In the end, two models are available that give predicted ROP values and predicted IADC bit codes.
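The two-stage idea, a supervised network mapping drilling parameters to an IADC bit code, can be sketched with an off-the-shelf MLP. Features, labels, and data below are synthetic placeholders (the label is tied loosely to depth only so the example runs end to end), not field records.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# placeholder features: [weight on bit, RPM, depth, mud weight] (hypothetical)
X = rng.uniform([5, 60, 500, 8], [35, 220, 4000, 16], size=(400, 4))
# placeholder labels: three IADC-code classes, tied loosely to depth
y = np.digitize(X[:, 2], [1500, 3000])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
clf.fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))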
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous-variable Quantum Key Distribution (QKD) at low Signal-to-Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian random variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, given that the Gaussian variable is de facto zero mean, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We consider two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the resulting bit strings. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximal, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond two, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significance level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by considering the bits jointly that we achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding the MSB and LSB jointly can we hope to approach this 75.8% limit; non-binary codes are therefore essential to achieve acceptable performance.
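The threshold search described here can be reproduced in outline by Monte Carlo: draw correlated Gaussians at -3 dB SNR, form two-bit labels (sign bit plus a magnitude bit), and estimate the mutual information between Alice's and Bob's labels from the joint histogram while sweeping the magnitude threshold. A sketch under these assumptions, using the same threshold for both parties for simplicity:

import numpy as np

rng = np.random.default_rng(1)
n, snr_db = 2_000_000, -3.0
sigma = 10 ** (-snr_db / 20)            # noise std for a unit-power signal
x = rng.standard_normal(n)              # Alice's Gaussian samples
y = x + sigma * rng.standard_normal(n)  # Bob's noisy observations

def labels(v, tau):
    # 2-bit label: MSB = sign, LSB = |v| above/below the threshold tau
    return 2 * (v > 0).astype(int) + (np.abs(v) > tau).astype(int)

def mutual_info(a, b):
    joint = np.histogram2d(a, b, bins=(4, 4))[0] / len(a)
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz]))

for tau in np.linspace(0.2, 1.4, 7):    # sweep the magnitude threshold
    print(f"tau={tau:.2f}  I(A;B)={mutual_info(labels(x, tau), labels(y, tau)):.4f} bits")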
Optimized planning methodologies of ASON implementation
NASA Astrophysics Data System (ADS)
Zhou, Michael M.; Tamil, Lakshman S.
2005-02-01
Advanced network planning concerns effective network-resource allocation for a dynamic and open business environment. Planning methodologies for ASON implementation based on qualitative analysis and mathematical modeling are presented in this paper. The methodology includes rationalizing technology and architecture, building network and nodal models, and developing dynamic programming for multi-period deployment. The multi-layered nodal architecture proposed here can accommodate various nodal configurations for a multi-plane optical network, and the network modeling presented here computes the network elements required to optimize resource allocation.
Compression performance of HEVC and its format range and screen content coding extensions
NASA Astrophysics Data System (ADS)
Li, Bin; Xu, Jizheng; Sullivan, Gary J.
2015-09-01
This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.
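Bit rate savings at equal PSNR of this kind are conventionally computed as a Bjontegaard delta (BD-rate): fit log-rate as a cubic in PSNR for each codec and average the gap over the common PSNR range. A sketch with made-up rate-distortion points follows; the paper's exact test conditions are not reproduced.

import numpy as np

def bd_rate(r_anchor, psnr_anchor, r_test, psnr_test):
    """Average bit rate difference (%) at equal PSNR, via cubic fits of
    log-rate as a function of PSNR (Bjontegaard-style)."""
    la, lt = np.log10(r_anchor), np.log10(r_test)
    pa = np.polyfit(psnr_anchor, la, 3)
    pt = np.polyfit(psnr_test, lt, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(pa), np.polyint(pt)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    return (10 ** (avg_t - avg_a) - 1) * 100

# made-up RD points (kbps, dB) for an anchor codec and a test codec
r1, p1 = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
r2, p2 = [700, 1400, 2800, 5600], [34.2, 36.8, 39.2, 41.6]
print(f"BD-rate: {bd_rate(r1, p1, r2, p2):.1f}%")  # negative = bit rate savings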
The use of an integrated variable fuzzy sets in water resources management
NASA Astrophysics Data System (ADS)
Qiu, Qingtai; Liu, Jia; Li, Chuanzhe; Yu, Xinzhe; Wang, Yang
2018-06-01
Based on an evaluation of the present state of water resources and the development of water conservancy projects and the social economy, the optimal allocation of regional water resources is an increasing need in water resources management, and it is also the most effective way to promote a harmonious relationship between humans and water. In view of the limitations of traditional evaluations, which always choose a single-index model for the optimal allocation of regional water resources, an integrated variable fuzzy sets model (IVFS), built on the theory of variable fuzzy sets (VFS) and system dynamics (SD), is proposed in this paper to address dynamically complex problems in regional water resources management. The model is applied to evaluate the level of optimal allocation of regional water resources in Zoucheng, China. Results show that the levels of the water resources allocation schemes range from 2.5 to 3.5, generally trending toward the lower level. For optimal regional management of water resources, the model conveys a measure of the state of water resources management and prominently improves the authenticity of its assessment by using the eigenvector of level H.
Performance analysis of optimal power allocation in wireless cooperative communication systems
NASA Astrophysics Data System (ADS)
Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li
2013-03-01
Cooperative communication has recently been proposed in wireless communication systems to exploit the inherent spatial diversity of relay channels. Amplify-and-Forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated, even though they have low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits information to a destination with the help of multiple relay nodes using AF protocols, and we investigate the optimal allocation of power between the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. First, we derive a closed-form SER formulation for MPSK signals using the moment generating function and statistical approximations at high signal-to-noise ratio (SNR) for the system under study. We then find a tight lower bound that converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique based on mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.
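The OPA-versus-EPA comparison can be probed empirically for a single relay: simulate BPSK over an amplify-and-forward link with maximum-ratio combining and sweep the source/relay power split. The Rayleigh channel model, powers, and noise level below are assumptions for illustration, not the paper's closed-form SER analysis.

import numpy as np

rng = np.random.default_rng(2)
n, P, N0 = 200_000, 2.0, 0.1            # symbols, total power, noise power

def ser(alpha):
    """Empirical SER of BPSK over a one-relay AF link with MRC;
    source power alpha*P, relay power (1-alpha)*P, Rayleigh fading."""
    P1, P2 = alpha * P, (1 - alpha) * P
    s = rng.choice([-1.0, 1.0], n)
    h_sd, h_sr, h_rd = (np.sqrt(0.5) * (rng.standard_normal((3, n))
                        + 1j * rng.standard_normal((3, n))))
    nsd = np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    nsr = np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    nrd = np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y_sd = np.sqrt(P1) * h_sd * s + nsd
    y_sr = np.sqrt(P1) * h_sr * s + nsr
    G = np.sqrt(P2 / (P1 * np.abs(h_sr) ** 2 + N0))   # AF relay gain
    y_rd = G * h_rd * y_sr + nrd
    # MRC: weight each branch by conj(effective channel) / branch noise variance
    w_rd = (np.conj(np.sqrt(P1) * G * h_rd * h_sr)
            / (N0 * (G ** 2 * np.abs(h_rd) ** 2 + 1)))
    z = np.conj(np.sqrt(P1) * h_sd) * y_sd / N0 + w_rd * y_rd
    return np.mean(np.sign(z.real) != s)

for alpha in [0.3, 0.5, 0.7, 0.9]:
    print(f"source share {alpha:.1f}: SER = {ser(alpha):.4f}")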
Device Centric Throughput and QoS Optimization for IoTs in a Smart Building Using CRN-Techniques
Aslam, Saleem; Hasan, Najam Ul; Shahid, Adnan; Jang, Ju Wook; Lee, Kyung-Geun
2016-01-01
The Internet of Things (IoT) has gained an incredible importance in the communication and networking industry due to its innovative solutions and advantages in diverse domains. The IoT network is a network of smart physical objects: devices, vehicles, buildings, etc. The IoT has a number of applications ranging from smart homes and smart surveillance to smart healthcare systems. Since the IoT consists of various heterogeneous devices that exhibit different traffic patterns and expect different quality of service (QoS) in terms of data rate, bit error rate, and channel stability index, in this paper we formulate an optimization problem to assign channels to heterogeneous IoT devices within a smart building for the provisioning of their desired QoS. To solve this problem, a novel particle swarm optimization-based algorithm is proposed. Exhaustive simulations are then carried out to evaluate the performance of the proposed algorithm. Simulation results demonstrate the supremacy of our proposed algorithm over existing ones in terms of throughput, bit error rate, and channel stability index. PMID:27782057
NASA Astrophysics Data System (ADS)
Nguyen, Danh-Tuyen; Hoang, Tien-Dat; Lee, An-Chen
2017-10-01
A micro drill structure was optimized to give minimum lateral displacement at the drill tip, which plays an extremely important role in the quality of drilled holes. The drilling system, comprising a spindle, chuck, and micro drill bit, is modeled with rotating Timoshenko beam elements, accounting for axial drilling force, torque, gyroscopic moments, eccentricity, and bearing reaction forces. Based on our previous work, the lateral vibration at the drill tip is evaluated and treated as the objective function in the optimization problem. The design variables are the diameters and lengths of the cylindrical and conical parts of the micro drill, with nonlinear constraints on its mass and mass center location. Results showed that the lateral vibration was reduced by 15.83% at a cutting speed of 70,000 rpm compared with a commercial UNION drill. Among the design variables, the length of the conical part connecting to the drill shank was found to be the most important factor for lateral vibration during the cutting process.
Design and simulation of a 800 Mbit/s data link for magnetic resonance imaging wearables.
Vogt, Christian; Buthe, Lars; Petti, Luisa; Cantarella, Giuseppe; Munzenrieder, Niko; Daus, Alwin; Troster, Gerhard
2015-08-01
This paper presents the optimization of electronic circuitry for operation in the harsh electromagnetic (EM) environment of a magnetic resonance imaging (MRI) scan. As a demonstrator, a device small enough to be worn during the scan is optimized. Based on finite element method (FEM) simulations, the current densities induced by magnetic field changes of 200 T/s were reduced from 1 × 10^10 A/m^2 by one order of magnitude, predicting error-free operation of the 1.8 V logic employed. The simulations were validated with a bit error rate test, which showed no bit errors during an MRI scan sequence. Thus neither the logic nor the 800 Mbit/s low-voltage differential signaling (LVDS) data link of the optimized wearable device was significantly influenced by the EM interference. Next, the influence of ferromagnetic components on the static magnetic field, and consequently on image quality, was simulated, showing MRI image loss within a radius of approximately 2 cm around a commercial integrated circuit of 1 × 1 cm^2. This was subsequently validated by a conventional MRI scan.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
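The DIB iteration described above admits a compact implementation: score each cluster t for input x by log q(t) - beta * KL(p(y|x) || q(y|t)), assign x deterministically to the best cluster, then re-estimate q(t) and q(y|t). A sketch on a toy joint distribution, following the abstract's description:

import numpy as np

def dib(pxy, n_clusters, beta, iters=100, seed=0):
    """Deterministic information bottleneck: hard assignment f(x) maximizing
    log q(t) - beta * KL(p(y|x) || q(y|t)), then re-estimate q(t), q(y|t)."""
    rng = np.random.default_rng(seed)
    n_x, n_y = pxy.shape
    px = pxy.sum(1)
    py_x = pxy / px[:, None]                 # conditional p(y|x)
    f = rng.integers(0, n_clusters, n_x)     # random initial hard assignment
    eps = 1e-12
    for _ in range(iters):
        qt = np.array([px[f == t].sum() for t in range(n_clusters)])
        qy_t = np.array([pxy[f == t].sum(0) / qt[t] if qt[t] > 0
                         else np.full(n_y, 1.0 / n_y) for t in range(n_clusters)])
        kl = np.array([[np.sum(py_x[x] * np.log((py_x[x] + eps) / (qy_t[t] + eps)))
                        for t in range(n_clusters)] for x in range(n_x)])
        f_new = np.argmax(np.log(qt + eps)[None, :] - beta * kl, axis=1)
        if np.array_equal(f_new, f):
            break                            # fixed point reached
        f = f_new
    return f

# toy joint p(x, y): two groups of x with similar conditionals
pxy = np.array([[0.20, 0.02], [0.18, 0.04], [0.03, 0.22], [0.05, 0.26]])
pxy /= pxy.sum()
print("cluster of each x:", dib(pxy, n_clusters=2, beta=5.0))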
Optimal resource allocation for novelty detection in a human auditory memory.
Sinkkonen, J; Kaski, S; Huotilainen, M; Ilmoniemi, R J; Näätänen, R; Kaila, K
1996-11-04
A theory of resource allocation for neuronal low-level filtering is presented, based on an analysis of optimal resource allocation in simple environments. A quantitative prediction of the theory was verified in measurements of the magnetic mismatch response (MMR), an auditory event-related magnetic response of the human brain. The amplitude of the MMR was found to be directly proportional to the information conveyed by the stimulus. To the extent that the amplitude of the MMR can be used to measure resource usage by the auditory cortex, this finding supports our theory that, at least for early auditory processing, energy resources are used in proportion to the information content of incoming stimulus flow.
Two-phase simulation-based location-allocation optimization of biomass storage distribution
USDA-ARS?s Scientific Manuscript database
This study presents a two-phase simulation-based framework for finding the optimal locations of biomass storage facilities, a critical link in the biomass supply chain that can help address biorefinery concerns (e.g. steady supply, uniform feedstock properties, stable feedstock costs,...
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
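For Onemax the result has a particularly simple exact form: the offspring fitness is k - D + U, with D ~ Bin(k, p) ones lost and U ~ Bin(n - k, p) ones gained, so the distribution is a convolution of two binomials and each probability is indeed a polynomial in p. A sketch computing it directly:

from math import comb

def onemax_mutation_pmf(n, k, p):
    """Exact fitness distribution of an n-bit Onemax parent with fitness k
    after independent bit-flip mutation with probability p per bit:
    fitness' = k - D + U, D ~ Bin(k, p), U ~ Bin(n - k, p)."""
    pmf = [0.0] * (n + 1)
    for d in range(k + 1):
        pd = comb(k, d) * p**d * (1 - p)**(k - d)
        for u in range(n - k + 1):
            pu = comb(n - k, u) * p**u * (1 - p)**(n - k - u)
            pmf[k - d + u] += pd * pu
    return pmf

pmf = onemax_mutation_pmf(n=20, k=15, p=0.05)  # each entry is a polynomial in p
for j, q in enumerate(pmf):
    if q > 1e-4:
        print(f"P(fitness = {j}) = {q:.4f}")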
NASA Astrophysics Data System (ADS)
Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.
2012-08-01
In this study, a geospatial model for land use allocation was developed, simulating biological autonomous adaptability to the environment together with infrastructural preference. The model was developed based on a multi-agent genetic algorithm and customized to the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve practical multi-objective spatial optimization problems of land use allocation in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability of the land. The second task was to determine the fitness function for the genetic algorithm. The third was to optimize the land use map with respect to economic benefits. The results indicate that the proposed model performs much better on complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision making.
Kulkarni, Shruti R; Rajendran, Bipin
2018-07-01
We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike triggered Normalized Approximate Descent (NormAD) algorithm. Our network that employs neurons operating at sparse biological spike rates below 300Hz achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters compared to the state-of-the-art. We present several insights from extensive numerical experiments regarding optimization of learning parameters and network configuration to improve its accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory and energy constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN does not degrade beyond 1% as compared to the floating-point baseline. Further, the proposed SNN, which is trained based on the precise spike timing information outperforms an equivalent non-spiking artificial neural network (ANN) trained using back propagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike based information encoding and learning for real-world applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aswad, Z.A.R.; Al-Hadad, S.M.S.
1983-03-01
The powerful Rosenbrock search technique, which optimizes both the search directions, using the Gram-Schmidt procedure, and the step size, using the Fibonacci line search method, has been used to optimize the drilling program of an oil well drilled in the Bai-Hassan oil field in Kirkuk, Iraq, using the two-dimensional drilling model of Galle and Woods. This model captures the effect of the two major controllable variables, weight on bit and rotary speed, on the drilling rate, while considering other controllable variables such as mud properties, hydrostatic pressure, hydraulic design, and bit selection. The effect of tooth dullness on the drilling rate is also considered. Increasing the weight on the drill bit, with a small increase or decrease in rotary speed, resulted in a significant decrease in drilling cost for most bit runs. It was found that a 48% reduction in this cost and a 97-hour saving in total drilling time were possible under certain conditions.
NASA Technical Reports Server (NTRS)
Craun, Robert W.; Acosta, Diana M.; Beard, Steven D.; Leonard, Michael W.; Hardy, Gordon H.; Weinstein, Michael; Yildiz, Yildiray
2013-01-01
This paper describes the maturation of a control allocation technique designed to assist pilots in recovering from pilot-induced oscillations (PIOs). The Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) is designed to enable next-generation high-efficiency aircraft designs. Energy-efficient next-generation aircraft require feedback control strategies that allow lowering the actuator rate-limit requirements for optimal airframe design. A common issue when flying with actuator rate limits is PIO caused by the phase lag between pilot inputs and control surface response. CAPIO uses real-time optimization for control allocation to eliminate the phase lag caused by control surface rate limiting. System impacts of the control allocator were assessed through a piloted simulation evaluation of a nonlinear aircraft simulation in the NASA Ames Vertical Motion Simulator. Results indicate that CAPIO helps reduce oscillatory behavior, including the severity and duration of PIOs, introduced by control surface rate limiting.
Electronic Photography at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack; Judge, Nancianne
1995-01-01
An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device-dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image-processing-related information loss; investigation of the use of small-kernel optimal filters for image restoration; and characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8 bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8 bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Lookup tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow full utilization of each output device's gamut.
NASA Astrophysics Data System (ADS)
Kaune, Alexander; López, Patricia; Werner, Micha; de Fraiture, Charlotte
2017-04-01
Hydrological information on water availability and demand is vital for sound water allocation decisions in irrigation districts, particularly in times of water scarcity. However, sub-optimal water allocation decisions are often taken with incomplete hydrological information, which may lead to agricultural production loss. In this study we evaluate the benefit of additional hydrological information from earth observations and reanalysis data in supporting decisions in irrigation districts. Current water allocation decisions were emulated through heuristic operational rules for water scarce and water abundant conditions in the selected irrigation districts. The Dynamic Water Balance Model based on the Budyko framework was forced with precipitation datasets from interpolated ground measurements, remote sensing and reanalysis data, to determine the water availability for irrigation. Irrigation demands were estimated based on estimates of potential evapotranspiration and coefficient for crops grown, adjusted with the interpolated precipitation data. Decisions made using both current and additional hydrological information were evaluated through the rate at which sub-optimal decisions were made. The decisions made using an amended set of decision rules that benefit from additional information on demand in the districts were also evaluated. Results show that sub-optimal decisions can be reduced in the planning phase through improved estimates of water availability. Where there are reliable observations of water availability through gauging stations, the benefit of the improved precipitation data is found in the improved estimates of demand, equally leading to a reduction of sub-optimal decisions.
NASA Technical Reports Server (NTRS)
Gern, Frank; Vicroy, Dan D.; Mulani, Sameer B.; Chhabra, Rupanshi; Kapania, Rakesh K.; Schetz, Joseph A.; Brown, Derrell; Princen, Norman H.
2014-01-01
Traditional methods of control allocation optimization have shown difficulties in exploiting the full potential of controlling large arrays of control devices on innovative air vehicles. Artificial neural networks are inspired by biological nervous systems, and neurocomputing has successfully been applied to a variety of complex optimization problems. This project investigates the potential of applying neurocomputing to the control allocation optimization problem of Hybrid Wing Body (HWB) aircraft concepts to minimize control power, hinge moments, and actuator forces, while keeping system weights within acceptable limits. The main objective of this project is to develop a proof-of-concept process suitable to demonstrate the potential of using neurocomputing for optimizing actuation power for aircraft featuring multiple independently actuated control surfaces. A Nastran aeroservoelastic finite element model is used to generate a learning database of hinge moment and actuation power characteristics for an array of flight conditions and control surface deflections. An artificial neural network incorporating a genetic algorithm then uses this training data to perform control allocation optimization for the investigated aircraft configuration. The phase I project showed that optimization results for the sum of required hinge moments are improved by more than 12% over the best Nastran solution by using the neural network optimization process.
Investigation of 16 × 10 Gbps DWDM System Based on Optimized Semiconductor Optical Amplifier
NASA Astrophysics Data System (ADS)
Rani, Aruna; Dewra, Sanjeev
2017-08-01
This paper investigates the performance of an optical system based on an optimized semiconductor optical amplifier (SOA) at 160 Gbps (16 × 10 Gbps) with 0.8 nm channel spacing. Transmission distances of up to 280 km at -30 dBm input signal power and up to 247 km at -32 dBm input signal power are examined, with acceptable bit error rate (BER) and Q-factor. It is further shown that a transmission distance of up to 292 km can be covered at -28 dBm input signal power using dispersion-shifted (DS)-normal fiber without any power compensation methods.
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces several challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping, and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes for optical network mapping, core allocation, and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes for virtual node mapping, virtual link mapping, and core allocation. Simulation experiments are conducted on three widely used networks, and the results show the effectiveness of the proposed model and algorithm.
Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing
2018-06-01
The resource allocation problem is studied and reformulated by a distributed interior point method via a logarithmic barrier. Facilitated by the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid the high singularity of the logarithmic barrier at the boundary, an adaptive parameter-switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is verified by a numerical example and by IEEE six-bus test system simulations.
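Dropping the barrier and the inequality constraints for brevity, the Laplacian-coupled core of such algorithms can be sketched for min over x of sum_i f_i(x_i) subject to sum_i x_i = D: the center-free dynamics xdot = -L grad f(x) leave the resource sum invariant (the rows of L sum to zero) and converge to equal marginal costs, which is the optimality condition. The quadratic costs and ring communication graph below are illustrative assumptions.

import numpy as np

# quadratic dispatch costs f_i(x) = 0.5 * a_i * (x - b_i)^2 (six toy agents)
a = np.array([1.0, 2.0, 1.5, 3.0, 2.5, 1.2])
b = np.array([4.0, 3.0, 5.0, 2.0, 3.5, 4.5])
D = 18.0                                   # total demand: sum_i x_i = D

# Laplacian of a ring communication graph over the six agents
L = 2 * np.eye(6) - np.roll(np.eye(6), 1, 0) - np.roll(np.eye(6), -1, 0)

x = np.full(6, D / 6)                      # feasible start: sum already equals D
for _ in range(5000):                      # Euler steps of xdot = -L grad f(x)
    grad = a * (x - b)
    x -= 0.01 * (L @ grad)                 # 1^T L = 0 keeps sum(x) invariant

print("allocation:", np.round(x, 3), " sum =", round(x.sum(), 6))
print("marginal costs (should agree):", np.round(a * (x - b), 4))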
Duan, Litian; Wang, Zizhong John; Duan, Fu
2016-11-16
In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags at different time slots or frequency channels to reduce signal interference. On this basis, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using an Artificial Immune System (GD-MRSOA-AIS) is proposed to schedule the readers fairly and optimally from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation; an artificial immune system (including immune clone, immune mutation, and immune suppression) then quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme and a larger effective interrogation range.
Adjacency Matrix-Based Transmit Power Allocation Strategies in Wireless Sensor Networks
Consolini, Luca; Medagliani, Paolo; Ferrari, Gianluigi
2009-01-01
In this paper, we present an innovative transmit power control scheme, based on optimization theory, for wireless sensor networks (WSNs) that use carrier sense multiple access (CSMA) with collision avoidance (CA) as the medium access control (MAC) protocol. In particular, we focus on schemes where several remote nodes send data directly to a common access point (AP). Under the assumption of finite overall network transmit power and low traffic load, we derive the optimal transmit power allocation strategy that minimizes the packet error rate (PER) at the AP. This approach is based on modeling the CSMA/CA MAC protocol as a finite state machine and takes into account the network adjacency matrix, which depends on the transmit power distribution and determines network connectivity. It is then shown that the transmit power allocation problem reduces to a convex constrained minimization problem. Our results show that, under the assumption of low traffic load, the power allocation strategy that guarantees minimal delay requires maximizing network connectivity, which can equivalently be interpreted as maximizing the number of non-zero entries of the adjacency matrix. The theoretical results are confirmed by simulations of unslotted ZigBee WSNs. PMID:22346705
NASA Astrophysics Data System (ADS)
Khannan, M. S. A.; Nafisah, L.; Palupi, D. L.
2018-03-01
Sari Warna Co. Ltd, a company in the textile industry, is experiencing problems in the allocation and placement of goods in its warehouse. So far the company has not matched allocation and placement to each product's flow type, resulting in a high total material handling cost. This study therefore aimed to determine the allocation and placement of goods in the warehouse corresponding to product flow type with minimal total material handling cost. This quantitative study is grounded in storage and warehousing theory and uses the mathematical optimization model of Heragu (2005), aided by the LINGO 11.0 software for the computations. The resulting proportions of the functional areas are 0.0734 for the cross-docking area, 0.1894 for the reserve area, and 0.7372 for the forward area. The allocation is 5 products of flow type 1, 9 products of flow type 2, 2 products of flow type 3, and 6 products of flow type 4. The optimal total material handling cost obtained with this mathematical model is Rp 43.079.510, versus Rp 49.869.728 with the company's existing method, a saving of Rp 6.790.218. Thus, all products can be allocated in accordance with their flow type at minimal total material handling cost.
Drilling systems for extraterrestrial subsurface exploration.
Zacny, K; Bar-Cohen, Y; Brennan, M; Briggs, G; Cooper, G; Davis, K; Dolgin, B; Glaser, D; Glass, B; Gorevan, S; Guerrero, J; McKay, C; Paulsen, G; Stanley, S; Stoker, C
2008-06-01
Drilling consists of 2 processes: breaking the formation with a bit and removing the drilled cuttings. In rotary drilling, rotational speed and weight on bit are used to control drilling, and the optimization of these parameters can markedly improve drilling performance. Although fluids are used for cuttings removal in terrestrial drilling, most planetary drilling systems conduct dry drilling with an auger. Chip removal via water-ice sublimation (when excavating water-ice-bound formations at pressure below the triple point of water) and pneumatic systems are also possible. Pneumatic systems use the gas or vaporization products of a high-density liquid brought from Earth, gas provided by an in situ compressor, or combustion products of a monopropellant. Drill bits can be divided into coring bits, which excavate an annular shaped hole, and full-faced bits. While cylindrical cores are generally superior as scientific samples, and coring drills have better performance characteristics, full-faced bits are simpler systems because the handling of a core requires a very complex robotic mechanism. The greatest constraints to extraterrestrial drilling are (1) the extreme environmental conditions, such as temperature, dust, and pressure; (2) the light-time communications delay, which necessitates highly autonomous systems; and (3) the mission and science constraints, such as mass and power budgets and the types of drilled samples needed for scientific analysis. A classification scheme based on drilling depth is proposed. Each of the 4 depth categories (surface drills, 1-meter class drills, 10-meter class drills, and deep drills) has distinct technological profiles and scientific ramifications.
Congestion Pricing for Aircraft Pushback Slot Allocation.
Liu, Lihua; Zhang, Yaping; Liu, Lan; Xing, Zhiwei
2017-01-01
In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the "external cost of surface congestion" is proposed, and a quantitative study of the external cost is performed. Then, an aircraft pushback slot allocation model minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed for Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for actual aircraft pushback management during rush hour. Further, it is also observed that they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm.
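Since the details of the paper's improved discrete differential evolution are not given here, the sketch below shows plain DE/rand/1/bin on a toy continuous surrogate of the slot-cost minimization, just to make the mutation, crossover, and greedy selection loop concrete; the cost function and parameters are assumptions.

import numpy as np

def de_rand_1_bin(cost, dim, pop=20, iters=200, F=0.8, CR=0.9,
                  lo=0.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.apply_along_axis(cost, 1, X)
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3,
                                 replace=False)
            v = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)   # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                 # force one gene over
            u = np.where(cross, v, X[i])                    # binomial crossover
            fu = cost(u)
            if fu < fit[i]:                                 # greedy selection
                X[i], fit[i] = u, fu
    return X[fit.argmin()], fit.min()

# toy surrogate: total surface cost over eight normalized pushback slots
cost = lambda s: np.sum((s - np.linspace(0.2, 0.8, 8)) ** 2) + 0.1 * np.sum(s)
sol, val = de_rand_1_bin(cost, dim=8)
print("best cost:", round(val, 4))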
Congestion Pricing for Aircraft Pushback Slot Allocation
Zhang, Yaping
2017-01-01
In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the “external cost of surface congestion” is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use for actual aircraft pushback management during rush hour. Further, it is also observed they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm. PMID:28114429
NASA Astrophysics Data System (ADS)
Salido, Miguel A.; Rodriguez-Molins, Mario; Barber, Federico
The Container Stacking Problem and the Berth Allocation Problem are two important and clearly related problems in maritime container terminal management. Terminal operators normally demand that all containers to be loaded onto an incoming vessel be ready and easily accessible in the terminal before the vessel's arrival. Similarly, customers (i.e., vessel owners) expect prompt berthing of their vessels upon arrival. In this paper, we present an artificial intelligence-based integrated system to relate these problems. Firstly, we develop a metaheuristic algorithm for berth allocation which generates an optimized order of vessels to be served according to existing berth constraints. Secondly, we develop a domain-oriented heuristic planner for calculating the number of reshuffles needed to allocate containers in the appropriate place for a given berth ordering of vessels. By combining these optimized solutions, terminal operators can be assisted in deciding the most appropriate solution in each particular case.
Decision-theoretic methodology for reliability and risk allocation in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.
1985-01-01
This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided, and several outstanding issues such as generic allocation and preference assessment are discussed.
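The noninferiority concept can be made concrete with a short sketch: filter candidate allocations down to the noninferior (Pareto-optimal) set when two attributes, here (risk, cost), are both minimized. The candidate pairs are invented for illustration.

```python
import numpy as np

def noninferior(points):
    """Return the noninferior (Pareto-optimal) subset of candidate
    allocations, where each row is (risk, cost) and both are minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p) for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Illustrative candidates: (core-damage risk, reliability-improvement cost)
cands = [(3.0, 10.0), (2.0, 14.0), (2.5, 9.0), (4.0, 8.0), (2.5, 15.0)]
print(noninferior(cands))   # the decision maker then chooses among these
```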
Xie, Xiu-Fang; Hu, Yu-Kun; Pan, Xu; Liu, Feng-Hong; Song, Yao-Bin; Dong, Ming
2016-01-01
Resource allocation to different functions is central in life-history theory. Plasticity of functional traits allows clonal plants to regulate their resource allocation to meet changing environments. In this study, biomass allocation traits of clonal plants were categorized into absolute biomass for vegetative growth vs. for reproduction, and their relative ratios, based on a data set covering 115 species drawn from 139 published studies. We examined the general pattern of biomass allocation of clonal plants in response to resource availability (e.g., light, nutrients, and water) using phylogenetic meta-analysis. We also tested whether the pattern differed among clonal organ types (stolon vs. rhizome). Overall, we found that stoloniferous plants were more sensitive to light intensity than rhizomatous plants, preferentially allocating biomass to vegetative growth, the aboveground part, and clonal reproduction under shaded conditions. Under nutrient- and water-poor conditions, rhizomatous plants were constrained more by ontogeny than by resource availability, preferentially allocating biomass to the belowground part. Biomass allocation between belowground and aboveground parts of clonal plants generally supported the optimal allocation theory. No general pattern of trade-off was found between growth and reproduction, nor between sexual and clonal reproduction. Using phylogenetic meta-analysis can avoid possible confounding effects of phylogeny on the results. Our results show that optimal allocation theory explains a general trend: clonal plants are able to plastically regulate their biomass allocation to cope with changing resource availability, at least in stoloniferous and rhizomatous plants. PMID:27200071
Chauvenet, Aliénor L M; Baxter, Peter W J; McDonald-Madden, Eve; Possingham, Hugh P
2010-04-01
Money is often a limiting factor in conservation, and attempting to conserve endangered species can be costly. Consequently, a framework for optimizing fiscally constrained conservation decisions for a single species is needed. In this paper we find the optimal budget allocation among isolated subpopulations of a threatened species to minimize local extinction probability. We solve the problem using stochastic dynamic programming, derive a useful and simple alternative guideline for allocating funds, and test its performance using forward simulation. The model considers subpopulations that persist in habitat patches of differing quality, which in our model is reflected in different relationships between money invested and extinction risk. We discover that, in most cases, subpopulations that are less efficient to manage should receive more money than those that are more efficient to manage, due to higher investment needed to reduce extinction risk. Our simple investment guideline performs almost as well as the exact optimal strategy. We illustrate our approach with a case study of the management of the Sumatran tiger, Panthera tigris sumatrae, in Kerinci Seblat National Park (KSNP), Indonesia. We find that different budgets should be allocated to the separate tiger subpopulations in KSNP. The subpopulation that is not at risk of extinction does not require any management investment. Based on the combination of risks of extinction and habitat quality, the optimal allocation for these particular tiger subpopulations is an unusual case: subpopulations that occur in higher-quality habitat (more efficient to manage) should receive more funds than the remaining subpopulation that is in lower-quality habitat. Because the yearly budget allocated to the KSNP for tiger conservation is small, to guarantee the persistence of all the subpopulations that are currently under threat we need to prioritize those that are easier to save. When allocating resources among subpopulations of a threatened species, the combined effects of differences in habitat quality, cost of action, and current subpopulation probability of extinction need to be integrated. We provide a useful guideline for allocating resources among isolated subpopulations of any threatened species.
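The paper's stochastic dynamic programming is not reproduced in the abstract; the following deterministic dynamic-programming sketch captures the budget-splitting logic under an assumed exponential risk-investment curve. The baseline risks p0, the efficiency scales, and the functional form are illustrative, not the paper's fitted values.

```python
import numpy as np

# Illustrative subpopulations: baseline extinction risk p0 and an efficiency
# scale e: risk is assumed to decay as p0 * exp(-x / e) with investment x.
p0  = np.array([0.6, 0.4, 0.2])
eff = np.array([5.0, 3.0, 4.0])    # larger e means less efficient to manage
BUDGET = 10                        # discrete budget units

def persist(i, x):
    """Persistence probability of subpopulation i given investment x."""
    return 1.0 - p0[i] * np.exp(-x / eff[i])

# Dynamic program over subpopulations: maximize the product of persistence
# probabilities, i.e. minimize the chance that any subpopulation is lost.
best = {0: (1.0, [])}              # budget spent -> (value, allocation)
for i in range(len(p0)):
    nxt = {}
    for spent, (val, alloc) in best.items():
        for x in range(BUDGET - spent + 1):
            s, v = spent + x, val * persist(i, x)
            if v > nxt.get(s, (-1.0, None))[0]:
                nxt[s] = (v, alloc + [x])
    best = nxt
val, alloc = max(best.values(), key=lambda t: t[0])
print("allocation:", alloc, "joint persistence: %.3f" % val)
```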
NASA Astrophysics Data System (ADS)
Stefan Devlin, Benjamin; Nakura, Toru; Ikeda, Makoto; Asada, Kunihiro
We detail a self-synchronous field-programmable gate array (SSFPGA) with a dual-pipeline (DP) architecture to conceal pre-charge time for dynamic logic, and its throughput optimization using pipeline alignment implemented on benchmark circuits. A self-synchronous LUT (SSLUT) consists of a three-input tree-type structure with 8 bits of SRAM for programming. A self-synchronous switch box (SSSB) consists of both pass transistors and buffers to route signals, with 12 bits of SRAM. One common block with one SSLUT and one SSSB occupies 2.2 Mλ² of area with 35 bits of SRAM, and a prototype SSFPGA with 34 × 30 (1020) blocks is designed and fabricated using 65 nm CMOS. Measured results show 430 MHz and 647 MHz operation at 1.2 V for a 3-bit ripple carry adder, without and with throughput optimization, respectively. Using the proposed pipeline alignment techniques we can reach the maximum throughput of 647 MHz in various benchmarks on the SSFPGA, and we demonstrate up to 56.1 times throughput improvement. The pipeline alignment is carried out within the number of logic elements in the array and pipeline buffers in the switching matrix.
Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.
Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei
2017-09-01
Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.
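The stated optimality criterion can be written compactly. A hedged formalization, where σ(s) is juvenile survival from a seed of size s, c_o the cost of producing one ovule, and φ the ovule fertilization rate (our symbols, not necessarily the paper's notation):

```latex
s^{*} \;=\; \arg\max_{s}\ \frac{\sigma(s)}{c_{o}/\phi + s},
\qquad \text{with first-order condition} \qquad
\sigma'(s^{*})\left(\frac{c_{o}}{\phi} + s^{*}\right) = \sigma(s^{*}).
```

Lowering φ (stronger pollen limitation) raises the effective per-seed ovule cost c_o/φ, which pushes the optimum s* upward, consistent with the abstract's density-independent result.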
Location-allocation models and new solution methodologies in telecommunication networks
NASA Astrophysics Data System (ADS)
Dinu, S.; Ciucur, V.
2016-08-01
When designing a telecommunications network topology, three types of interdependent decisions are combined: location, allocation and routing, which are expressed by the following design considerations: how many interconnection devices (consolidation points/concentrators) should be used and where should they be located; how to allocate terminal nodes to concentrators; and how should the voice, video or data traffic be routed and what transmission links (capacitated or not) should be built into the network. Including these three components of the decision in a single model generates a problem whose complexity makes it difficult to solve. A first method to address the overall problem is the sequential one, whereby the first step deals with the location-allocation problem and, based on this solution, the subsequent sub-problem (routing the network traffic) is solved. The issue of location and allocation in a telecommunications network, called the capacitated concentrator location-allocation (CCLA) problem, is based on one of the general location models on a network in which clients/demand nodes are the terminals and facilities are the concentrators. As in a location model, each client node has a demand traffic which must be served, and the facilities can serve these demands within their capacity limit. In this study, the CCLA problem is modeled as a single-source capacitated location-allocation model whose optimization objective is to determine the minimum network cost, consisting of fixed costs for establishing the locations of concentrators, costs for operating concentrators and costs for allocating terminals to concentrators. The problem is known as a difficult combinatorial optimization problem for which powerful algorithms are required. Our approach proposes a fuzzy genetic algorithm combined with a local search procedure to calculate the optimal values of the location and allocation variables. To confirm the efficiency of the proposed algorithm with respect to the quality of solutions, significantly sized test problems were considered: up to 100 terminal nodes and 50 concentrators on a 100 × 100 square grid. The performance of this hybrid intelligent algorithm was evaluated by measuring the quality of its solutions with respect to the following statistics: the standard deviation and the ratio of the best solution obtained.
Markov Processes in Image Processing
NASA Astrophysics Data System (ADS)
Petrov, E. P.; Kharina, N. L.
2018-05-01
Digital images are used as an information carrier in different sciences and technologies. There is a trend toward increasing the number of bits per image pixel in order to obtain more information. In this paper, some methods of compression and contour detection based on two-dimensional Markov chains are offered. Increasing the number of bits per pixel allows finer object details to be resolved, but it significantly complicates image processing. The proposed methods do not concede efficiency to well-known analogues, but surpass them in processing speed. An image is separated into binary images that are processed in parallel, so the processing time does not increase as the number of bits per pixel grows. One more advantage of the methods is their low consumption of energy resources: only logical procedures are used and there are no arithmetic operations. The methods can be useful for processing images of any class and purpose in processing systems with limited time and energy resources.
Control mechanism of double-rotator-structure ternary optical computer
NASA Astrophysics Data System (ADS)
Kai, SONG; Liping, YAN
2017-03-01
A double-rotator-structure ternary optical processor (DRSTOP) has two key characteristics, namely giant data-bit parallel computing and a reconfigurable processor: it can handle thousands of data bits in parallel and can run much faster than electronic computers and other optical computing systems to date. In order to put DRSTOP into practical application, this paper establishes a series of methods, namely a task classification method, a data-bits allocation method, a control information generation method, a control information formatting and sending method, and a decoded-results obtaining method. These methods form the control mechanism of DRSTOP, which makes it an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the contradiction between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designed a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible and efficient.
Optimal allocation of leaf epidermal area for gas exchange.
de Boer, Hugo J; Price, Charles A; Wagner-Cremer, Friederike; Dekker, Stefan C; Franks, Peter J; Veneklaas, Erik J
2016-06-01
A long-standing research focus in phytology has been to understand how plants allocate leaf epidermal space to stomata in order to achieve an economic balance between the plant's carbon needs and water use. Here, we present a quantitative theoretical framework to predict allometric relationships between morphological stomatal traits in relation to leaf gas exchange and the required allocation of epidermal area to stomata. Our theoretical framework was derived from first principles of diffusion and geometry based on the hypothesis that selection for higher anatomical maximum stomatal conductance (gsmax ) involves a trade-off to minimize the fraction of the epidermis that is allocated to stomata. Predicted allometric relationships between stomatal traits were tested with a comprehensive compilation of published and unpublished data on 1057 species from all major clades. In support of our theoretical framework, stomatal traits of this phylogenetically diverse sample reflect spatially optimal allometry that minimizes investment in the allocation of epidermal area when plants evolve towards higher gsmax . Our results specifically highlight that the stomatal morphology of angiosperms evolved along spatially optimal allometric relationships. We propose that the resulting wide range of viable stomatal trait combinations equips angiosperms with developmental and evolutionary flexibility in leaf gas exchange unrivalled by gymnosperms and pteridophytes. © 2016 The Authors New Phytologist © 2016 New Phytologist Trust.
Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Wang, Peng
2018-04-13
Aiming to minimize the damage caused by river chemical spills, efficient emergency material allocation is critical for quick-response emergency rescue decision-making. In this study, an emergency material allocation framework based on time-varying supply-demand constraints is developed to allocate emergency material, minimize the emergency response time, and satisfy the dynamic emergency material requirements in the post-accident phases of river chemical spills. The theoretically critical emergency response time is first obtained for the emergency material allocation system, to select a series of appropriate emergency material warehouses as potential support centers. Then, an enumeration method is applied to identify the practically critical emergency response time and the optimum emergency material allocation and replenishment scheme. Finally, the developed framework is applied to a computational experiment based on the south-to-north water transfer project in China. The results illustrate that the proposed methodology is a simple and flexible tool for appropriately allocating emergency material to satisfy time-dynamic demands during emergency decision-making. Decision-makers can thereby identify an appropriate emergency material allocation scheme that balances time-effective and cost-effective objectives under different emergency pollution conditions.
Locking classical correlations in quantum states.
DiVincenzo, David P; Horodecki, Michał; Leung, Debbie W; Smolin, John A; Terhal, Barbara M
2004-02-13
We show that there exist bipartite quantum states which contain a large locked classical correlation that is unlocked by a disproportionately small amount of classical communication. In particular, there are (2n+1)-qubit states for which a one-bit message doubles the optimal classical mutual information between measurement results on the subsystems, from n/2 bits to n bits. This phenomenon is impossible classically. However, states exhibiting this behavior need not be entangled. We study the range of states exhibiting this phenomenon and bound its magnitude.
Quantum Associative Neural Network with Nonlinear Search Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Rigui; Wang, Huian; Wu, Qian; Shi, Yang
2012-03-01
Based on an analysis of the properties of quantum linear superposition, and to overcome the complexity of the existing quantum associative memory proposed by Ventura, a new storage method for multiple patterns is proposed in this paper by constructing the quantum array with binary decision diagrams. Also, the adoption of the nonlinear search algorithm increases the pattern recall speed of this multiple-pattern model to $O(\log_{2} 2^{n-t}) = O(n-t)$ time complexity, where n is the number of quantum bits and t is the quantum information carried by the t quantum bits. A case analysis shows that the associative neural network model proposed in this paper based on quantum learning is better and more optimized than other researchers' counterparts, both in avoiding additional qubits or extraordinary initial operators and in storing patterns and improving recall speed.
An Optimization Framework for Dynamic, Distributed Real-Time Systems
NASA Technical Reports Server (NTRS)
Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara
2003-01-01
This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model to produce feasible, optimal resource allocations.
Research on Evaluation of resource allocation efficiency of transportation system based on DEA
NASA Astrophysics Data System (ADS)
Zhang, Zhehui; Du, Linan
2017-06-01
In this paper, we select time series data for the years 1985-2015 and take land (shoreline) resources, capital, and labor as inputs; the output indices are freight volume and passenger volume. Using quantitative analysis based on the DEA method, we evaluate the resource allocation efficiency of railway, highway, water transport, and civil aviation in China. The research shows that the resource allocation efficiency of the various modes of transport differs markedly, and that the impact of scale efficiency is more significant. The two most important ways to optimize the allocation of resources and improve the combined efficiency of the various modes of transport are promoting the coordination of the various modes and constructing an integrated transportation system.
Topology-changing shape optimization with the genetic algorithm
NASA Astrophysics Data System (ADS)
Lamberson, Steven E., Jr.
The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem, while improving the direct physical relevance of the results. It involves changing the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem proves the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which is something traditional methods cannot do. The proposed approach has a significantly higher computational burden, on the order of 100 times larger than SIMP, although it is able to offset this with parallel computing.
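A minimal sketch of a chromosome with masking bits: the first bits switch members of the structure on or off (changing the topology), and the rest encode each member's sizing variable. The toy objective stands in for the truss analysis; the gene widths, penalty, and GA settings are assumptions for illustration only.

```python
import random

N_MEMBERS, AREA_BITS = 10, 4       # ten-bar truss, 4-bit area genes (assumed)

def decode(chrom):
    """Split the chromosome into masking bits (member present/absent) and
    per-member area genes; a masked-off member leaves the topology entirely."""
    masks = chrom[:N_MEMBERS]
    areas = []
    for i in range(N_MEMBERS):
        gene = chrom[N_MEMBERS + i*AREA_BITS : N_MEMBERS + (i+1)*AREA_BITS]
        val = int("".join(map(str, gene)), 2)
        areas.append(0.1 * (1 + val) if masks[i] else 0.0)
    return areas

def fitness(chrom):
    """Toy stand-in for the truss problem: minimize material volume with a
    penalty when fewer than six members remain (mimicking a constraint)."""
    areas = decode(chrom)
    active = sum(a > 0 for a in areas)
    return sum(areas) + 1e3 * max(0, 6 - active)

random.seed(1)
L = N_MEMBERS * (1 + AREA_BITS)
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(40)]
for _ in range(300):   # steady-state GA: tournaments, crossover, bit mutation
    p1 = min(random.sample(pop, 3), key=fitness)
    p2 = min(random.sample(pop, 3), key=fitness)
    cut = random.randrange(1, L)
    child = [g if random.random() > 0.02 else 1 - g
             for g in p1[:cut] + p2[cut:]]
    pop[max(range(len(pop)), key=lambda i: fitness(pop[i]))] = child
print("best fitness:", min(map(fitness, pop)))
```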
Micropulsed Plasma Thrusters for Attitude Control of a Low-Earth-Orbiting CubeSat
NASA Technical Reports Server (NTRS)
Gatsonis, Nikolaos A.; Lu, Ye; Blandino, John; Demetriou, Michael A.; Paschalidis, Nicholas
2016-01-01
This study presents a 3-Unit CubeSat design with commercial-off-the-shelf hardware, Teflon-fueled micropulsed plasma thrusters, and an attitude determination and control approach. The micropulsed plasma thruster is sized by the impulse bit and pulse frequency required for continuous compensation of expected maximum disturbance torques at altitudes between 400 and 1000 km, as well as to perform stabilization of up to 20 deg/s and slew maneuvers of up to 180 deg. The study involves realistic power constraints anticipated on the 3-Unit CubeSat. Attitude estimation is implemented using the q method for static attitude determination of the quaternion using pairs of the spacecraft-sun and magnetic-field vectors. The quaternion estimate and the gyroscope measurements are used with an extended Kalman filter to obtain the attitude estimates. Proportional-derivative control algorithms use the static attitude estimates in order to calculate the torque required to compensate for the disturbance torques and to achieve specified stabilization and slewing maneuvers or combinations. The controller includes a thruster-allocation method, which determines the optimal utilization of the available thrusters and introduces redundancy in case of failure. Simulation results are presented for a 3-Unit CubeSat under detumbling, pointing, and pointing-and-spinning scenarios, as well as comparisons between the thruster-allocation and the paired-firing methods under thruster failure.
Programmable synaptic devices for electronic neural nets
NASA Technical Reports Server (NTRS)
Moopenn, A.; Thakoor, A. P.
1990-01-01
The architecture, design, and operational characteristics of custom VLSI and thin film synaptic devices are described. The devices include CMOS-based synaptic chips containing 1024 reprogrammable synapses with a 6-bit dynamic range, and nonvolatile, write-once, binary synaptic arrays based on memory switching in hydrogenated amorphous silicon films. Their suitability for embodiment of fully parallel and analog neural hardware is discussed. Specifically, a neural network solution to an assignment problem of combinatorial global optimization, implemented in fully parallel hardware using the synaptic chips, is described. The network's ability to provide optimal and near optimal solutions over a time scale of few neuron time constants has been demonstrated and suggests a speedup improvement of several orders of magnitude over conventional search methods.
Allocation of R&D Equipment Expenditure Based on Organisation Discipline Profiles
ERIC Educational Resources Information Center
Wells, Xanthe E.; Foster, Nigel; Finch, Adam; Elsum, Ian
2017-01-01
Sufficient and state-of-the-art research equipment is one component required to maintain the research competitiveness of an R&D organisation. This paper describes an approach to inform more optimal allocation of equipment expenditure levels in a large and diverse R&D organisation, such as CSIRO. CSIRO is Australia's national science agency,…
Self-optimization and auto-stabilization of receiver in DPSK transmission system.
Jang, Y S
2008-03-17
We propose a self-optimization and auto-stabilization method for a 1-bit DMZI (delay Mach-Zehnder interferometer) in DPSK transmission. Using the characteristics of the eye patterns, the optical frequency transmittance of the 1-bit DMZI is thermally controlled to maximize the power difference between the constructive and destructive output ports. Unlike other techniques, this control method can be realized without additional components, making it simple and cost-effective. Experimental results show that error-free performance is maintained when the carrier optical frequency variation is approximately 10% of the data rate.
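The control loop can be pictured as a simple dither search on the heater setting: nudge the thermal phase in whichever direction increases the measured power difference. The cosine plant model below is an assumed stand-in for the real interferometer response, not the paper's measured transfer function.

```python
import numpy as np

def port_power_difference(phase):
    """Assumed plant model: constructive minus destructive port power of a
    delay interferometer versus thermally tuned phase (arbitrary units)."""
    return np.cos(phase)   # maximum at phase = 0

def dither_lock(phase0, step=0.05, iters=200):
    """Gradient-free dither control: move the heater phase in the direction
    that increases the measured power difference (minimal sketch)."""
    phase = phase0
    for _ in range(iters):
        if port_power_difference(phase + step) > port_power_difference(phase - step):
            phase += step
        else:
            phase -= step
    return phase

print(dither_lock(phase0=1.3))   # settles near the transmittance peak at 0
```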
Peden, Al; Baker, Judith J
2002-01-01
Using the optimizing properties of econometric analysis, this study analyzes how physician overhead costs (OC) can be allocated to multiple activities to maximize precision in reimbursing the costs of services. Drawing on work by Leibenstein and Friedman, the analysis also shows that allocating OC to multiple activities unbiased by revenue requires controlling for revenue when making the estimates. Further econometric analysis shows that it is possible to save about 10 percent of OC by paying only for those that are necessary.
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
Performance Evaluation Model for Application Layer Firewalls.
Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan
2016-01-01
Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
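For orientation, a sketch of the standard Erlang-C formulas that such a per-layer service-desk model rests on, applied to compare possible splits of six desks between two layers. The arrival and service rates, and the two-layer additive-delay simplification, are our assumptions, not the paper's parameterization.

```python
import math

def mmc_metrics(c, lam, mu):
    """Erlang-C for an M/M/c queue: returns (P(wait), mean sojourn time).
    Used here as a per-layer service-desk model (a standard-formula sketch)."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c
    assert rho < 1, "unstable: allocate more service desks"
    tail = a**c / (math.factorial(c) * (1 - rho))
    p_wait = tail / (sum(a**k / math.factorial(k) for k in range(c)) + tail)
    return p_wait, p_wait / (c * mu - lam) + 1.0 / mu

# Compare ways to split 6 service desks between two layers (illustrative
# arrival rate 40/s; per-desk service rates 30/s and 25/s are assumptions).
for c_net, c_app in [(2, 4), (3, 3), (4, 2)]:
    delay = mmc_metrics(c_net, 40, 30)[1] + mmc_metrics(c_app, 40, 25)[1]
    print(f"net={c_net} app={c_app}  mean delay {delay * 1e3:.1f} ms")
```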
Supply chain carbon footprinting and responsibility allocation under emission regulations.
Chen, Jin-Xiao; Chen, Jian
2017-03-01
Reduction of greenhouse gas emissions has become an enormous challenge for any single enterprise and its supply chain because of the increasing concern on global warming. This paper investigates carbon footprinting and responsibility allocation for supply chains involved in joint production. Our study is conducted from the perspective of a social planner who aims to achieve social value optimization. The carbon footprinting model is based on operational activities rather than on firms because joint production blurs the organizational boundaries of footprints. A general model is proposed for responsibility allocation among firms who seek to maximize individual profits. This study looks into ways for the decentralized supply chain to achieve centralized optimality of social value under two emission regulations. Given a balanced allocation for the entire supply chain, we examine the necessity of over-allocation to certain firms under specific situations and find opportunities for the firms to avoid over-allocation. The comparison of the two regulations reveals that setting an emission standard per unit of product will motivate firms to follow the standard and improve their emission efficiencies. Hence, a more efficient and promising policy is needed in contrast to existing regulations on total production. Copyright © 2016 Elsevier Ltd. All rights reserved.
Motion-related resource allocation in dynamic wireless visual sensor network environments.
Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E
2014-01-01
This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff of quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and re-allocate the resources whenever it is dictated by the changes in the amount of motion in the scenery, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.
Areal density optimizations for heat-assisted magnetic recording of high-density media
NASA Astrophysics Data System (ADS)
Vogler, Christoph; Abert, Claas; Bruckner, Florian; Suess, Dieter; Praetorius, Dirk
2016-06-01
Heat-assisted magnetic recording (HAMR) is hoped to be the future recording technique for high-density storage devices. Nevertheless, there exist several realization strategies. With a coarse-grained Landau-Lifshitz-Bloch model, we investigate in detail the benefits and disadvantages of a continuous and pulsed laser spot recording of shingled and conventional bit-patterned media. Additionally, we compare single-phase grains and bits having a bilayer structure with graded Curie temperature, consisting of a hard magnetic layer with high TC and a soft magnetic one with low TC, respectively. To describe the whole write process as realistically as possible, a distribution of the grain sizes and Curie temperatures, a displacement jitter of the head, and the bit positions are considered. For all these cases, we calculate bit error rates of various grain patterns, temperatures, and write head positions to optimize the achievable areal storage density. Within our analysis, shingled HAMR with a continuous laser pulse moving over the medium reaches the best results and thus has the highest potential to become the next-generation storage device.
Gorahava, Kaushik K; Rosenberger, Jay M; Mubayi, Anuj
2015-07-01
Visceral leishmaniasis (VL) is the most deadly form of the leishmaniasis family of diseases, which affects numerous developing countries. The Indian state of Bihar has the highest prevalence and mortality rate of VL in the world. Insecticide spraying is believed to be an effective vector control program for controlling the spread of VL in Bihar; however, it is expensive and less effective if not implemented systematically. This study develops and analyzes a novel optimization model for VL control in Bihar that identifies an optimal (best possible) allocation of chosen insecticide (dichlorodiphenyltrichloroethane [DDT] or deltamethrin) based on the sizes of human and cattle populations in the region. The model maximizes the insecticide-induced sandfly death rate in human and cattle dwellings while staying within the current state budget for VL vector control efforts. The model results suggest that deltamethrin might not be a good replacement for DDT because the insecticide-induced sandfly deaths are 3.72 times more in case of DDT even after 90 days post spray. Different insecticide allocation strategies between the two types of sites (houses and cattle sheds) are suggested based on the state VL-control budget and have a direct implication on VL elimination efforts in a resource-limited region. © The American Society of Tropical Medicine and Hygiene.
2013-08-31
The preamble uses direct-sequence spread spectrum (DSSS) to reduce the negative impact of fading. Reference symbols are transmitted in each allocated frequency hop, and the information symbols are recovered based on the phase difference between the constellation points demodulated from each sub-band.
Optimal water resource allocation modelling in the Lowveld of Zimbabwe
NASA Astrophysics Data System (ADS)
Mhiribidi, Delight; Nobert, Joel; Gumindoga, Webster; Rwasoka, Donald T.
2018-05-01
The management and allocation of water from multi-reservoir systems is complex and thus requires dynamic modelling systems to achieve optimality. A multi-reservoir system in the Southern Lowveld of Zimbabwe is used for irrigation of sugarcane estates that produce sugar for both local consumption and export. The system is burdened with water allocation problems, made worse by the decommissioning of dams. Thus the aim of this research was to develop an operating policy model for the Lowveld multi-reservoir system. The Mann-Kendall trend and Wilcoxon signed-rank tests were used to assess the variability of historic monthly rainfall and dam inflows for the period 1899-2015. The WEAP model was set up to evaluate the water allocation system of the catchment and come up with a reference scenario for the 2015/2016 hydrologic year. A stochastic dynamic programming approach was used for optimization of the multi-reservoir releases. Results showed no significant trend in the rainfall but a significantly decreasing trend in inflows (p < 0.05). The water allocation model (WEAP) showed significant deficits (~40%) in irrigation water allocation in the reference scenario. The optimal rule curves for all twelve months for each reservoir were obtained and are considered a proper guideline for solving multi-reservoir management problems within the catchment. The rule curves are effective tools in guiding decision-makers in the release of water without emptying the reservoirs, while at the same time satisfying the demands based on the inflow, initial storage, and end-of-month storage.
NASA Astrophysics Data System (ADS)
Caldararu, S.; Kern, M.; Engel, J.; Zaehle, S.
2016-12-01
Despite recent advances in global vegetation models, we still lack the capacity to predict observed vegetation responses to experimental environmental changes such as elevated CO2, increased temperature or nutrient additions. In particular for elevated CO2 (FACE) experiments, studies have shown that this is related in part to the models' inability to represent plastic changes in nutrient use and biomass allocation. We present a newly developed vegetation model which aims to overcome these problems by including optimality processes to describe nitrogen (N) and carbon allocation within the plant. We represent nitrogen allocation to the canopy, and within the canopy between photosynthetic components, as an optimal process which aims to maximize the net primary production (NPP) of the plant. We also represent biomass investment into aboveground and belowground components (root nitrogen uptake, biological N fixation) as an optimal process that maximizes plant growth by considering plant carbon and nutrient demands as well as acquisition costs. The model can now represent plastic changes in canopy N content and chlorophyll and Rubisco concentrations, as well as in belowground allocation, on both seasonal and inter-annual time scales. Specifically, we show that under elevated CO2 conditions the model predicts a lower optimal leaf N concentration, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry would predict a quick onset of N limitation. In general, our model aims to include physiologically based plant processes and avoid arbitrarily imposed parameters and thresholds in order to improve our predictive capability of vegetation responses under changing environmental conditions.
Cascaded VLSI Chips Help Neural Network To Learn
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher; Thakoor, Anilkumar P.
1993-01-01
Cascading provides 12-bit resolution needed for learning. Using conventional silicon chip fabrication technology of VLSI, fully connected architecture consisting of 32 wide-range, variable gain, sigmoidal neurons along one diagonal and 7-bit resolution, electrically programmable, synaptic 32 x 31 weight matrix implemented on neuron-synapse chip. To increase weight nominally from 7 to 13 bits, synapses on chip individually cascaded with respective synapses on another 32 x 32 matrix chip with 7-bit resolution synapses only (without neurons). Cascade correlation algorithm varies number of layers effectively connected into network; adds hidden layers one at a time during learning process in such way as to optimize overall number of neurons and complexity and configuration of network.
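One plausible reading of the cascade arithmetic (our illustration; the chips' actual weighting scheme may differ): pairing a coarse 7-bit synapse, scaled by 2^6, with a fine 7-bit synapse gives

```latex
w \;=\; 2^{6}\, w_{\mathrm{c}} + w_{\mathrm{f}},
\qquad w_{\mathrm{c}}, w_{\mathrm{f}} \in \{0,\dots,2^{7}-1\}
\;\Rightarrow\; w \in \{0,\dots,2^{13}-1\},
```

i.e., 2^13 distinct weight levels, consistent with the nominal increase from 7 to 13 bits and leaving margin over the quoted 12-bit learning requirement.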
NASA Astrophysics Data System (ADS)
Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong
2018-03-01
We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.
Variable-rate optical communication through the turbulent atmosphere. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Levitt, B. K.
1971-01-01
It was demonstrated that the data transmitter can extract real-time channel state information by processing the field received when a pilot tone is sent from the data receiver to the data transmitter. Based on these channel measurements, optimal variable-rate techniques were derived and significant improvements in system performance were obtained, particularly at low bit error rates.
Malleable architecture generator for FPGA computing
NASA Astrophysics Data System (ADS)
Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang
1996-10-01
The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alan Black; Arnis Judzis
2003-10-01
This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 and ending September 2003. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; the industry team was assembled; a kick-off meeting was held at DOE Morgantown. 1Q 2003--An engineering meeting was held at Hughes Christensen, The Woodlands, Texas, to prepare preliminary plans for development and testing and review equipment needs; operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the industry team, as the DEA 148 objectives paralleled the DOE project. 2Q 2003--Engineering and planning for high-pressure drilling at TerraTek commenced. 3Q 2003--Continuation of engineering and design work for high-pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commenced planning for Phase 1 testing, including recommendations for bits and fluids.
Mehrotra, Sanjay; Kim, Kibaek
2011-12-01
We consider the problem of outcomes-based budget allocations to chronic disease prevention programs across the United States (US) to achieve greater geographical healthcare equity. We use the Diabetes Prevention and Control Programs (DPCP) of the Centers for Disease Control and Prevention (CDC) as an example. We present a multi-criteria robust weighted-sum model for such multi-criteria decision making in a group decision setting. Principal component analysis and an inverse linear programming technique are presented and used to study the actual 2009 budget allocation by the CDC. Our results show that the CDC budget allocation process for the DPCPs is likely not model-based. In our empirical study, the relative weights for different prevalence and comorbidity factors and the corresponding budgets obtained under different weight regions are discussed. Parametric analysis suggests that money should be allocated to states to promote diabetes education and to increase patient-healthcare provider interactions to reduce disparity across the US.
Stochastic Optimization For Water Resources Allocation
NASA Astrophysics Data System (ADS)
Yamout, G.; Hatfield, K.
2003-12-01
For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods can lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated approach. This paper presents an integrated, dynamic, multi-stage, feedback-controlled, linear, stochastic, and distributed-parameter optimization approach to the problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) the environmental degradation produced by resource consumption, and (3) the uncertainty and risk generated by the inherently random nature of the state and decision parameters involved in such a problem. A theoretical system is defined through its different elements. These elements, consisting mainly of water resource components and end-users, are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced to constitute an integrated water allocation optimization framework. This effort is a novel approach to confronting the water allocation optimization problem while accounting for the uncertainties associated with all its elements, thus resulting in a solution that correctly reflects the physical problem at hand.
Optimal resource allocation strategy for two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu
2018-02-01
We study the traffic dynamics on two-layer complex networks, and focus on its delivery capacity allocation strategy to enhance traffic capacity measured by the critical value Rc. With the limited packet-delivering capacity, we propose a delivery capacity allocation strategy which can balance the capacities of non-hub nodes and hub nodes to optimize the data flow. With the optimal value of parameter αc, the maximal network capacity is reached because most of the nodes have shared the appropriate delivery capacity by the proposed delivery capacity allocation strategy. Our work will be beneficial to network service providers to design optimal networked traffic dynamics.
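A common degree-based form of such a strategy in the traffic-dynamics literature (assumed here purely for orientation; the paper's exact rule may differ): a node i of degree k_i receives

```latex
C_i \;=\; C_{\mathrm{tot}}\, \frac{k_i^{\alpha}}{\sum_{j} k_j^{\alpha}},
```

so that α = 0 spreads the total delivery capacity C_tot uniformly over non-hub and hub nodes alike, α = 1 allocates in proportion to degree, and an intermediate value α_c balances the two groups to maximize the traffic capacity R_c.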
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
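For a single variable under stratified sampling with equal per-unit costs, minimum-variance allocation reduces to the classical Neyman form, quoted here for orientation (the IST-specific variance expressions in the paper add terms for the randomization device, so its optima differ in detail):

```latex
n_h \;=\; n\, \frac{N_h S_h}{\sum_{l} N_l S_l},
```

where n is the total sample size and N_h and S_h are the size and standard deviation of stratum h.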
Constellation labeling optimization for bit-interleaved coded APSK
NASA Astrophysics Data System (ADS)
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
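The binary switching idea can be sketched as a pairwise label-swap descent. The surrogate cost below (Hamming distance weighted by pairwise Euclidean proximity) and the toy 8-PSK constellation are assumptions standing in for the paper's combined Euclidean-distance/mutual-information cost on 32-APSK.

```python
import numpy as np
from itertools import combinations

def mapping_cost(labels, pts, snr=10.0):
    """Surrogate BICM cost (assumed form, not the paper's exact metric):
    Hamming distance of each symbol pair weighted by exp(-d^2 snr / 4), so
    close constellation points with many differing bits dominate."""
    cost = 0.0
    for i, j in combinations(range(len(pts)), 2):
        d2 = abs(pts[i] - pts[j]) ** 2
        cost += bin(labels[i] ^ labels[j]).count("1") * np.exp(-d2 * snr / 4)
    return cost

def binary_switching(labels, pts):
    """Pairwise-swap descent: keep applying any label swap that lowers the
    cost until no swap improves, yielding a locally optimal labeling."""
    labels, improved = list(labels), True
    while improved:
        improved, base = False, mapping_cost(labels, pts)
        for i, j in combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            trial = mapping_cost(labels, pts)
            if trial < base - 1e-12:
                improved, base = True, trial
            else:
                labels[i], labels[j] = labels[j], labels[i]
    return labels

# Toy 8-PSK ring (instead of 32-APSK) with a random initial labeling.
rng = np.random.default_rng(2)
pts = np.exp(2j * np.pi * np.arange(8) / 8)
print(binary_switching(list(rng.permutation(8)), pts))
```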
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
Fog computing job scheduling optimization based on bees swarm
NASA Astrophysics Data System (ADS)
Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid
2018-04-01
Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.
Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi
2011-12-01
Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density, air pollution level, urban heat island effect, and urban land use pattern), an optimized location selection for urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks, with the aim of evaluating the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, the air pollution factor was the most important, and, compared with any single objective factor, the weighted analysis of multiple objective factors provided a better optimized spatial location selection for new urban green spaces. The combination of GIS technology with the LA model offers a new approach for the spatial optimization of urban green spaces.
NASA Astrophysics Data System (ADS)
Shaat, Musbah; Bader, Faouzi
2010-12-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in the PU bands as well as the active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm achieves near-optimal performance with low computational complexity and proves the efficiency of using FBMC in the CR context.
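Such an allocation can be pictured as cap-limited water-filling: each subcarrier's power is clipped by a cap standing in for the PU interference limit, and the water level is found by bisection. A minimal sketch with invented gains and caps, not the paper's scenario:

    import numpy as np

    g = np.array([2.0, 1.0, 0.5, 0.25])   # channel-to-noise gains
    cap = np.array([0.8, 1.5, 1.5, 0.3])  # per-subcarrier power cap (PU limit)
    P_total = 2.0                          # total power budget

    def alloc(mu):
        # Water-filling allocation clipped to the interference caps.
        return np.clip(mu - 1.0 / g, 0.0, cap)

    lo, hi = 0.0, 1.0 / g.min() + P_total + cap.sum()
    for _ in range(60):                    # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if alloc(mu).sum() < P_total else (lo, mu)

    p = alloc(0.5 * (lo + hi))
    print(p.round(3), np.log2(1 + p * g).sum().round(3))  # powers, capacity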
Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.
1982-04-01
This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.
Twelve fundamental life histories evolving through allocation-dependent fecundity and survival.
Johansson, Jacob; Brännström, Åke; Metz, Johan A J; Dieckmann, Ulf
2018-03-01
An organism's life history is closely interlinked with its allocation of energy between growth and reproduction at different life stages. Theoretical models have established that diminishing returns from reproductive investment promote strategies with simultaneous investment into growth and reproduction (indeterminate growth) over strategies with distinct phases of growth and reproduction (determinate growth). We extend this traditional, binary classification by showing that allocation-dependent fecundity and mortality rates allow for a large diversity of optimal allocation schedules. By analyzing a model of organisms that allocate energy between growth and reproduction, we find twelve types of optimal allocation schedules, differing qualitatively in how reproductive allocation increases with body mass. These twelve optimal allocation schedules include types with different combinations of continuous and discontinuous increase in reproduction allocation, in which phases of continuous increase can be decelerating or accelerating. We furthermore investigate how this variation influences growth curves and the expected maximum life span and body size. Our study thus reveals new links between eco-physiological constraints and life-history evolution and underscores how allocation-dependent fitness components may underlie biological diversity.
Next generation PET data acquisition architectures
NASA Astrophysics Data System (ADS)
Jones, W. F.; Reed, J. H.; Everman, J. L.; Young, J. W.; Seese, R. D.
1997-06-01
New architectures for higher performance data acquisition in PET are proposed. Improvements are demanded primarily by three areas of advancing PET state of the art. First, larger detector arrays such as the Hammersmith ECAT® EXACT HR++ exceed the addressing capacity of 32-bit coincidence event words. Second, better scintillators (LSO) make depth-of-interaction (DOI) and time-of-flight (TOF) operation more practical. Third, fully optimized single photon attenuation correction requires higher rates of data collection. New technologies which enable the proposed third-generation Real Time Sorter (RTS III) include: (1) 80 Mbyte/sec Fibre Channel RAID disk systems, (2) PowerPC on both VMEbus and PCI Local bus, and (3) quadruple interleaved DRAM controller designs. Data acquisition flexibility is enhanced through a wider 64-bit coincidence event word. PET methodology support includes DOI (6 bits), TOF (6 bits), multiple energy windows (6 bits), 512×512 sinogram indexes (18 bits), and 256 crystal rings (16 bits). Throughput of 10 M events/sec is expected for list-mode data collection as well as both on-line and replay histogramming. Fully efficient list-mode storage for each PET application is provided by real-time bit packing of only the active event word bits. Real-time circuits provide DOI rebinning.
Benedikt, Clemens; Kelly, Sherrie L; Wilson, David; Wilson, David P
2016-12-01
Estimated global new HIV infections among people who inject drugs (PWID) remained stable over the 2010-2015 period, and the target of a 50% reduction over this period was missed. To achieve the 2020 UNAIDS target of reducing adult HIV infections by 75% compared to 2010, accelerated action in scaling up HIV programs for PWID is required. In a context of diminishing external support for HIV programs in countries where most HIV-affected PWID live, it is essential that available resources are allocated and used as efficiently as possible. Allocative and implementation efficiency analysis methods were applied. Optima, a dynamic, population-based HIV model with an integrated program and economic analysis framework, was applied in eight countries in Eastern Europe and Central Asia (EECA). Mathematical analyses established optimized allocations of resources. An implementation efficiency analysis focused on examining technical efficiency, unit costs, and the heterogeneity of service delivery models and practices. Findings from the latest reported data revealed that countries allocated between 4% (Bulgaria) and 40% (Georgia) of total HIV resources to programs targeting PWID, with a median of 13% for the eight countries. When distributing the same amount of HIV funding optimally, between 9% and 25% of available HIV resources would be allocated to PWID programs, with a median allocation of 16%, and, in addition, antiretroviral therapy would be scaled up, including for PWID. As a result of optimized allocations, new HIV infections are projected to decline by 3-28% and AIDS-related deaths by 7-53% in the eight countries. The implementation efficiencies identified involve potential reductions in drug procurement costs, as well as service delivery models, practices, and scales of service delivery that influence cost and outcome. A high level of implementation efficiency was associated with high volumes of PWID clients accessing a drug harm reduction facility. A combination of optimized allocation of resources, improved implementation efficiency, and increased investment of non-HIV resources is required to enhance coverage and improve outcomes of programs for PWID. Increasing the efficiency of HIV programs for PWID is a key step towards avoiding implicit rationing and ensuring transparent allocation of resources where and how they would have the largest impact on the health of PWID, thereby ensuring that funding spent on PWID becomes a global best buy in public health.
Optimization-based Approach to Cross-layer Resource Management in Wireless Networked Control Systems
2013-05-01
Wireless networked control systems have received interest from both academia and industry [37], finding applications in unmanned robotic vehicles, automated highways and factories, and smart homes. The proposed allocation is stable when the scaler varies slowly, and the algorithm is further extended to utilize the slack resource in the network. Recovered contents of this report include an optimal sampling rate allocation formulation and a price-based algorithm.
ERIC Educational Resources Information Center
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
Programming scheme based optimization of hybrid 4T-2R OxRAM NVSRAM
NASA Astrophysics Data System (ADS)
Majumdar, Swatilekha; Kingra, Sandeep Kaur; Suri, Manan
2017-09-01
In this paper, we present a novel single-cycle programming scheme for 4T-2R NVSRAM, exploiting pulse-engineered input signals. OxRAM devices based on a 3 nm thick bi-layer active switching oxide and a 90 nm CMOS technology node were used for all simulations. The cell design is implemented for real-time non-volatility rather than last-bit or power-down non-volatility. A detailed analysis of the proposed single-cycle, parallel RRAM device programming scheme is presented in comparison to the two-cycle sequential RRAM programming used for similar 4T-2R NVSRAM bit-cells. The proposed single-cycle programming scheme coupled with the 4T-2R architecture leads to several benefits, such as the possibility of unconventional transistor sizing, 50% lower latency, 20% improvement in SNM, and ∼20× reduced energy requirements, when compared against the two-cycle programming approach.
Digital Signal Processing For Low Bit Rate TV Image Codecs
NASA Astrophysics Data System (ADS)
Rao, K. R.
1987-06-01
In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real-time full-motion color video are under various stages of development. Some companies have already brought such codecs to the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay, etc. To transmit the original color video on a 56 KBPS network requires a bit rate reduction on the order of 1400:1. Such a large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation, and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.
Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin
2007-02-01
This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fractions of each web site that are successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found, even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
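As a rough illustration of a controlled random walk on a discretized solution space, the sketch below moves unit shares on a 1/R grid toward the material whose noisy marginal reward currently looks best. The concave reward functions and noise level are invented, and the paper's actual LA team update rules are not reproduced here.

    import random, math

    random.seed(0)
    R = 100                                  # discretization resolution
    weights = [1.0, 1.5, 0.8]                # concave rewards: w_i * sqrt(share)
    x = [R // 3, R // 3, R - 2 * (R // 3)]   # integer shares summing to R

    def noisy_marginal(i):
        # Marginal reward of one more grid unit, observed with noise.
        gain = weights[i] * (math.sqrt((x[i] + 1) / R) - math.sqrt(x[i] / R))
        return gain + random.gauss(0, 0.002)

    for _ in range(20000):
        i, j = random.sample(range(3), 2)
        if x[j] > 0 and noisy_marginal(i) > noisy_marginal(j):
            x[i] += 1; x[j] -= 1             # one grid step along the simplex

    print([s / R for s in x])                # near w_i^2-proportional shares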
An improved robust buffer allocation method for the project scheduling problem
NASA Astrophysics Data System (ADS)
Ghoddousi, Parviz; Ansari, Ramin; Makui, Ahmad
2017-04-01
Unpredictable uncertainties cause delays and additional costs for projects. Often, when using traditional approaches, the optimizing procedure of the baseline project plan fails and leads to delays. In this study, a two-stage multi-objective buffer allocation approach is applied for robust project scheduling. In the first stage, some decisions are made on buffer sizes and allocation to the project activities. A set of Pareto-optimal robust schedules is designed using the meta-heuristic non-dominated sorting genetic algorithm (NSGA-II) based on the decisions made in the buffer allocation step. In the second stage, the Pareto solutions are evaluated in terms of the deviation from the initial start time and due dates. The proposed approach was implemented on a real dam construction project. The outcomes indicated that the obtained buffered schedule reduces the cost of disruptions by 17.7% compared with the baseline plan, with an increase of about 0.3% in the project completion time.
Power Allocation Based on Data Classification in Wireless Sensor Networks
Wang, Houlian; Zhou, Gongbo
2017-01-01
Limited node energy in wireless sensor networks is a crucial factor which affects the monitoring of equipment operation and working conditions in coal mines. In addition, due to heterogeneous nodes and different data acquisition rates, the numbers of arriving packets in a queue network can differ, which may lead to some queue lengths reaching the maximum value earlier than others. In order to tackle these two problems, an optimal power allocation strategy based on classified data is proposed in this paper. Arriving data is classified into dissimilar classes depending on the number of arriving packets. The problem is formulated as a Lyapunov drift optimization with the objective of minimizing the weighted sum of average power consumption and average data class. As a result, a suboptimal distributed algorithm without any knowledge of system statistics is presented. The simulations, conducted in the perfect channel state information (CSI) case and the imperfect CSI case, reveal that the utility can be pushed arbitrarily close to optimal by increasing the parameter V, but with a corresponding growth in the average delay, and that the other tunable parameter W and the classification method inside the utility function can trade power optimality for increased average data class. The above results show that data in a high class has priority over data in a low class, and energy consumption can be minimized under this resource allocation strategy. PMID:28498346
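Lyapunov drift-plus-penalty schemes of this kind admit a compact single-queue caricature: each slot, the transmitter picks the power minimizing V*power - Q*service, so a larger V favors power savings at the price of backlog. The toy model below (arrival process, rate function, power grid) is invented for illustration and is far simpler than the paper's classified-data network.

    import random, math

    random.seed(2)
    V, SLOTS = 20.0, 50000            # V trades power against backlog
    Q, power_sum = 0.0, 0.0

    for _ in range(SLOTS):
        arrivals = random.choice([0, 1])              # mean rate 0.5 pkt/slot
        gain = random.expovariate(1.0)                # fading channel state
        # Pick the power level minimizing V*p - Q*rate(p) on a small grid.
        p = min((V * p - Q * math.log1p(p * gain), p)
                for p in [0.0, 0.5, 1.0, 1.5, 2.0])[1]
        Q = max(Q + arrivals - math.log1p(p * gain), 0.0)
        power_sum += p

    print(round(power_sum / SLOTS, 3), round(Q, 1))   # avg power, final backlog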
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
Attack allocation optimizations produce stability indices for unsymmetrical forces that indicate significant regions of both stability and instability and that have their minimum values roughly when the two sides have equal forces. This note derives combined stability indices for unsymmetrical offensive force configurations. The indices are based on optimal allocations of offensive missiles between vulnerable missiles and value, based on the minimization of first strike cost, which is done analytically. Exchanges are modeled probabilistically and their results are converted into first and second strike costs through approximations to the damage to the value target sets held at risk. The stability index is the product of the ratios of first to second strike costs seen by the two sides. Optimal allocations scale directly on the opponent's vulnerable missiles, inversely on one's own total weapons, and only logarithmically on the attacker's damage preference, kill probability, and relative target set. The defender's allocation scales in a similar manner on the attacker's parameters. First and second strike magnitudes increase roughly linearly for the side with greater forces and decrease linearly for the side with fewer. Conversely, the first and second strike costs decrease for the side with greater forces and increase for the side with fewer. These trends are derived and discussed analytically. The resulting stability indices exhibit a minimum where the two sides have roughly equal forces. If one side has much larger forces than the other, his costs drop to levels low enough that he is relatively insensitive to whether he strikes first or second. These calculations are performed with the analytic attack allocation appropriate for moderate forces, so some differences could be expected for the largest of the forces considered.
Low-power low-noise mixed-mode VLSI ASIC for infinite dynamic range imaging applications
NASA Astrophysics Data System (ADS)
Turchetta, Renato; Hu, Y.; Zinzius, Y.; Colledani, C.; Loge, A.
1998-11-01
Solid state solutions for imaging are mainly represented by CCDs and, more recently, by CMOS imagers. Both devices are based on the integration of the total charge generated by the impinging radiation, with no processing of the single-photon information. The dynamic range of these devices is intrinsically limited by the finite value of noise. Here we present the design of an architecture which allows efficient, in-pixel noise reduction to a practically zero level, thus allowing infinite dynamic range imaging. A detailed calculation of the dynamic range is worked out, showing that noise is efficiently suppressed. This architecture is based on the concept of single-photon counting. In each pixel, we integrate both the front-end, low-noise, low-power analog part and the digital part. The former consists of a charge preamplifier, an active filter for optimal noise bandwidth reduction, a buffer and a threshold comparator, and the latter is simply a counter, which can be programmed to act as a normal shift register for the readout of the counters' contents. Two different ASICs based on this concept have been designed for different applications. The first one has been optimized for silicon edge-on microstrip detectors, used in a digital mammography R&D project. It is a 32-channel circuit with a 16-bit binary static counter. It has been optimized for a relatively large detector capacitance of 5 pF. Noise has been measured to be equal to 100 + 7*Cd (pF) electrons rms with the digital part, showing no degradation of the noise performance with respect to the design values. The power consumption is 3.8 mW/channel for a peaking time of about 1 μs. The second circuit is a prototype for pixel imaging. The total active area is about (250 μm)². The main differences of the electronic architecture with respect to the first prototype are: i) a different optimization of the analog front-end part for low-capacitance detectors, ii) in-pixel 4-bit comparator-offset compensation, and iii) a 15-bit pseudo-random counter. The power consumption is 255 μW/channel for a peaking time of 300 ns and an equivalent noise charge of 185 + 97*Cd electrons rms. Simulation and experimental results as well as imaging results are presented.
VHF command system study. [spectral analysis of GSFC VHF-PSK and VHF-FSK Command Systems
NASA Technical Reports Server (NTRS)
Gee, T. H.; Geist, J. M.
1973-01-01
Solutions are provided to specific problems arising in the GSFC VHF-PSK and VHF-FSK Command Systems in support of establishment and maintenance of Data Systems Standards. Signal structures which incorporate transmission on the uplink of a clock along with the PSK or FSK data are considered. Strategies are developed for allocating power between the clock and data, and spectral analyses are performed. Bit error probability and other probabilities pertinent to correct transmission of command messages are calculated. Biphase PCM/PM and PCM/FM are considered as candidate modulation techniques on the telemetry downlink, with application to command verification. Comparative performance of PCM/PM and PSK systems is given special attention, including implementation considerations. Gain in bit error performance due to coding is also considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
The optimal allocation of space-based interceptors (SBIs) between fixed, heavy missiles and mobile singlets can be derived from approximate expressions for the boost-phase penetration of each. Singlets can cluster before launch and have shorter burn times, which reduces their availability to SBIs by an order of magnitude. Singlet penetration decreases slowly with the number of SBIs allocated to them; heavy missile penetration falls rapidly. The allocation to the heavy missiles falls linearly with their number. The penetration of heavy and singlet missiles is proportional to their numbers and inversely proportional to their availability. 8 refs., 2 figs.
NASA Astrophysics Data System (ADS)
Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang
2017-01-01
This paper investigates revenue-neutral tradable credit charge and reward schemes without initial credit allocations that can reassign network traffic flow patterns to optimize congestion and emissions. First, we prove the existence of the proposed schemes and further decentralize the minimum-emission flow pattern to user equilibrium; we also design the solution method of the proposed credit scheme for the minimum-emission problem. Second, we investigate the revenue-neutral tradable credit charge and reward scheme without initial credit allocations for the bi-objective case, to obtain the Pareto system-optimum flow patterns for congestion and emissions, and show that the corresponding solutions are located in the polyhedron constituted by a system of inequalities and equalities. Last, a numerical example based on a simple traffic network is adopted to obtain the proposed credit schemes and verify that they are revenue-neutral.
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
Reliability allocation may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design, and it often begins at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers. Applying reliability allocation techniques without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
Large-Scale Multiantenna Multisine Wireless Power Transfer
NASA Astrophysics Data System (ADS)
Huang, Yang; Clerckx, Bruno
2017-11-01
Wireless Power Transfer (WPT) is expected to be a technology reshaping the landscape of low-power applications such as the Internet of Things, Radio Frequency identification (RFID) networks, etc. Although there has been some progress towards multi-antenna multi-sine WPT design, the large-scale design of WPT, reminiscent of massive MIMO in communications, remains an open challenge. In this paper, we derive efficient multiuser algorithms based on a generalizable optimization framework, in order to design transmit sinewaves that maximize the weighted-sum/minimum rectenna output DC voltage. The study highlights the significant effect of the nonlinearity introduced by the rectification process on the design of waveforms in multiuser systems. Interestingly, in the single-user case, the optimal spatial domain beamforming, obtained prior to the frequency domain power allocation optimization, turns out to be Maximum Ratio Transmission (MRT). In contrast, in the general weighted sum criterion maximization problem, the spatial domain beamforming optimization and the frequency domain power allocation optimization are coupled. Assuming channel hardening, low-complexity algorithms are proposed based on asymptotic analysis, to maximize the two criteria. The structure of the asymptotically optimal spatial domain precoder can be found prior to the optimization. The performance of the proposed algorithms is evaluated. Numerical results confirm the inefficiency of the linear model-based design for the single and multi-user scenarios. It is also shown that as nonlinear model-based designs, the proposed algorithms can benefit from an increasing number of sinewaves.
Brownian motion properties of optoelectronic random bit generators based on laser chaos.
Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge
2016-07-11
The nondeterministic properties of an optoelectronic random bit generator (RBG) based on laser chaos are experimentally analyzed from the two aspects of the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical-feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, and they also pass the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together, these results give mathematically provable evidence that an ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.
Optimal allocation of HIV prevention funds for state health departments.
Yaylali, Emine; Farnham, Paul G; Cohen, Stacy; Purcell, David W; Hauck, Heather; Sansom, Stephanie L
2018-01-01
To estimate the optimal allocation of Centers for Disease Control and Prevention (CDC) HIV prevention funds for health departments in 52 jurisdictions, incorporating Health Resources and Services Administration (HRSA) Ryan White HIV/AIDS Program funds, to improve outcomes along the HIV care continuum and prevent infections. Using surveillance data from 2010 to 2012 and budgetary data from 2012, we divided the 52 health departments into 5 groups varying by number of persons living with diagnosed HIV (PLWDH), median annual CDC HIV prevention budget, and median annual HRSA expenditures supporting linkage to care, retention in care, and adherence to antiretroviral therapy. Using an optimization and a Bernoulli process model, we solved for the optimal CDC prevention budget allocation for each health department group. The optimal allocation distributed the funds across prevention interventions and populations at risk for HIV to prevent the greatest number of new HIV cases annually. Both the HIV prevention interventions funded by the optimal allocation of CDC HIV prevention funds and the proportions of the budget allocated were similar across health department groups, particularly those representing the large majority of PLWDH. Consistently funded interventions included testing, partner services and linkage to care and interventions for men who have sex with men (MSM). Sensitivity analyses showed that the optimal allocation shifted when there were differences in transmission category proportions and progress along the HIV care continuum. The robustness of the results suggests that most health departments can use these analyses to guide the investment of CDC HIV prevention funds into strategies to prevent the most new cases of HIV.
How stimulation speed affects Event-Related Potentials and BCI performance.
Höhne, Johannes; Tangermann, Michael
2012-01-01
In most paradigms for Brain-Computer Interfaces (BCIs) that are based on Event-Related Potentials (ERPs), stimuli are presented at a pre-defined and constant speed. In order to boost BCI performance by optimizing the parameters of stimulation, this offline study investigates the impact of the stimulus onset asynchrony (SOA) on ERPs and the resulting classification accuracy. The SOA is defined as the time between the onsets of two consecutive stimuli, which represents a measure of stimulation speed. A simple auditory oddball paradigm was tested in 14 SOA conditions with SOAs between 50 ms and 1000 ms. Based on an offline ERP analysis, the BCI performance (quantified by the Information Transfer Rate, ITR, in bits/min) was simulated. Great variability in the simulated BCI performance was observed within subjects (N=11). This indicates a potential increase in BCI performance (≥ 1.6 bits/min) for ERP-based paradigms if the stimulation speed is specified for each user individually.
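Bits/min figures of this kind are conventionally computed with Wolpaw's ITR formula; a small helper, where N is the number of classes, P the classification accuracy, and T the (assumed) seconds per selection:

    import math

    def itr_bits_per_min(N, P, T):
        # Wolpaw ITR: bits per selection, scaled to selections per minute.
        if P <= 1.0 / N:
            return 0.0
        bits = math.log2(N) + P * math.log2(P)
        if P < 1.0:
            bits += (1 - P) * math.log2((1 - P) / (N - 1))
        return bits * 60.0 / T

    # e.g. a 6-class oddball at 80% accuracy, one selection every 5 s:
    print(round(itr_bits_per_min(6, 0.80, 5.0), 2))   # ~16.78 bits/min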
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the site. The physical resolution (e.g. the grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: the time associated with computational costs, the statistical convergence of the model predictions, and the physical errors corresponding to the numerical grid resolution. In this research, we optimally allocate computational resources by developing a model for the overall error based on a joint statistical and numerical analysis and optimizing this error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified by applying it to several computationally extensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error under a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with the optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
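The trade-off can be miniaturized as follows: if one realization on grid size h costs about (1/h)^d cell-updates, a fixed budget dictates how many Monte Carlo runs N each candidate h affords, and the combined error model is then minimized over h. All constants below are invented placeholders, not the paper's calibrated error model.

    import numpy as np

    C1, C2 = 5.0, 2.0      # discretization / statistical error constants
    p, d = 2.0, 2.0        # scheme order, spatial dimension
    BUDGET = 1e7           # total cell-updates we can afford

    best = None
    for h in np.logspace(-3, -1, 200):       # candidate grid sizes
        N = int(BUDGET * h**d)               # runs affordable at this h
        if N < 2:
            continue
        err = C1 * h**p + C2 / np.sqrt(N)    # joint error model
        if best is None or err < best[0]:
            best = (err, h, N)

    err, h, N = best
    print(f"h={h:.4f}, N={N}, total error={err:.4g}")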
Community-aware task allocation for social networked multiagent systems.
Wang, Wanyuan; Jiang, Yichuan
2014-09-01
In this paper, we propose a novel community-aware task allocation model for social networked multiagent systems (SN-MASs), where each agent's cooperation domain is constrained to its community and each agent can negotiate only with its intracommunity member agents. Under such community-aware scenarios, we prove that it remains NP-hard to maximize the system's overall profit. To solve this problem effectively, we present a heuristic algorithm that is composed of three phases: 1) task selection: select the desirable task to be allocated preferentially; 2) allocation to community: allocate the selected task to communities based on a significant-task-first heuristic; and 3) allocation to agent: negotiate resources for the selected task based on a nonoverlap agent-first and breadth-first resource negotiation mechanism. Through theoretical analyses and experiments, the advantages of the presented heuristic algorithm and community-aware task allocation model are validated. 1) The presented heuristic algorithm performs very closely to the benchmark exponential brute-force optimal algorithm and the network flow-based greedy algorithm in terms of system overall profit in small-scale applications. Moreover, in large-scale applications, the presented heuristic algorithm achieves approximately the same overall system profit but significantly reduces the computational load compared with the greedy algorithm. 2) The presented community-aware task allocation model reduces the system communication cost compared with the previous global-aware task allocation model and greatly improves the system overall profit compared with the previous local neighbor-aware task allocation model.
Multimedia transmission in MC-CDMA using adaptive subcarrier power allocation and CFO compensation
NASA Astrophysics Data System (ADS)
Chitra, S.; Kumaratharan, N.
2018-02-01
Multicarrier code division multiple access (MC-CDMA) is one of the most effective techniques in fourth-generation (4G) wireless technology, due to its high data rate, high spectral efficiency and resistance to multipath fading. However, MC-CDMA systems are greatly deteriorated by carrier frequency offset (CFO), which arises from Doppler shift and oscillator instabilities; it leads to a loss of orthogonality among the subcarriers and causes intercarrier interference (ICI). The water-filling algorithm (WFA) is an efficient resource allocation algorithm for solving power utilization problems among subcarriers in time-dispersive channels, but the conventional WFA fails to consider the effect of CFO. To perform subcarrier power allocation with reduced CFO and to improve the capacity of the MC-CDMA system, a residual-CFO-compensated adaptive subcarrier power allocation algorithm is proposed in this paper. The proposed technique allocates power only to subcarriers with a high channel-to-noise power ratio. The performance of the proposed method is evaluated using random binary data and an image as source inputs. Simulation results show that the bit error rate performance and ICI reduction capability of the proposed modified WFA offer superior performance in both power allocation and image compression for high-quality multimedia transmission in the presence of CFO and imperfect channel state information.
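Stripped of the CFO-compensation machinery, the core allocation rule is water-filling restricted to subcarriers whose channel-to-noise ratio clears a cutoff. A minimal sketch with invented numbers; it also assumes the resulting water level keeps every retained subcarrier active:

    import numpy as np

    cnr = np.array([8.0, 0.2, 3.0, 0.1, 5.0])    # channel-to-noise ratios
    P_total, cutoff = 4.0, 1.0

    idx = cnr >= cutoff                          # only strong subcarriers
    k = idx.sum()
    mu = (P_total + (1.0 / cnr[idx]).sum()) / k  # common water level
    p = np.zeros_like(cnr)
    p[idx] = np.maximum(mu - 1.0 / cnr[idx], 0.0)
    print(p.round(3), np.log2(1 + p * cnr).sum().round(3))  # powers, capacity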
Adaptive power allocation schemes based on IAFS algorithm for OFDM-based cognitive radio systems
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Zhao, Xiaohui; Liang, Cong; Ding, Xu
2017-01-01
In cognitive radio (CR) systems, reasonable power allocation can increase the transmission rate of CR users or secondary users (SUs) as much as possible while ensuring normal communication among primary users (PUs). This study proposes an optimal power allocation scheme for an OFDM-based CR system with one SU subject to multiple PU interference constraints. The scheme is based on an improved artificial fish swarm (IAFS) algorithm, which combines the advantages of the conventional artificial fish swarm (AFS) algorithm and particle swarm optimization (PSO). Performance comparisons of the IAFS algorithm with other intelligent algorithms in simulations illustrate its superiority; this superiority results in better performance of the proposed scheme than that of the power allocation algorithms proposed in previous studies for the same scenario. Furthermore, the proposed scheme obtains a higher transmission data rate under the multiple PU interference constraints and the total power constraint of the SU than the other mentioned works.
Optimal manpower allocation in aircraft line maintenance (Case in GMF AeroAsia)
NASA Astrophysics Data System (ADS)
Puteri, V. E.; Yuniaristanto, Hisjam, M.
2017-11-01
This paper presents a mathematical model to find the optimal manpower allocation in aircraft line maintenance. This research focuses on assigning the number and type of manpower allocated to each service. The study considers licenced workers holding an Aircraft Maintenance Engineer Licence (AMEL) and non-licenced workers, i.e. Aircraft Maintenance Technicians (AMT). We also consider the relationship of each station in terms of the possibility of transferring manpower among them. The optimization model considers the amount of manpower needed for each service and the requirement for AMEL workers. The objective function of the model is to minimize employee expenses. The model was solved using the ILOG CPLEX software. The results show that the manpower allocation can meet the manpower need and that all the load can be served.
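A hedged re-creation of such an assignment model is sketched below in PuLP (the paper solved its model with ILOG CPLEX); the services, staffing requirements, AMEL minimums, and wages are invented placeholders, and the station-transfer constraints are omitted.

    import pulp

    services = ["transit", "daily-check", "weekly-check"]
    types = ["AMEL", "AMT"]
    need = {"transit": 3, "daily-check": 5, "weekly-check": 2}   # heads per service
    need_amel = {"transit": 1, "daily-check": 2, "weekly-check": 1}
    wage = {"AMEL": 30, "AMT": 18}                               # cost per shift

    x = pulp.LpVariable.dicts("x", (types, services), lowBound=0, cat="Integer")
    prob = pulp.LpProblem("manpower", pulp.LpMinimize)
    prob += pulp.lpSum(wage[t] * x[t][s] for t in types for s in services)
    for s in services:
        prob += x["AMEL"][s] + x["AMT"][s] >= need[s]   # total coverage
        prob += x["AMEL"][s] >= need_amel[s]            # licenced minimum
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for s in services:
        print(s, int(x["AMEL"][s].value()), int(x["AMT"][s].value()))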
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm with commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase differences for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic software designs. The proposed method provides a smart tool to realize various functional devices and systems automatically.
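For a 1-bit coding sequence, the resulting beam directions can be previewed from the array factor alone. The sketch below evaluates a periodic 00110011... sequence at half-wavelength pitch, which should steer twin beams near ±30 degrees; the geometry is an assumption for illustration, not the paper's fabricated design.

    import numpy as np

    coding = np.array([0, 0, 1, 1] * 8)      # 1-bit sequence (0 -> 0, 1 -> pi)
    d_over_lambda = 0.5                       # element pitch in wavelengths
    theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)

    n = np.arange(coding.size)
    phase = np.pi * coding                    # 1-bit reflection phase states
    af = np.abs(np.exp(1j * (2 * np.pi * d_over_lambda
                             * np.outer(np.sin(theta), n) + phase)).sum(axis=1))
    i = af.argmax()
    # Pattern is symmetric in sin(theta) for 0/pi phases: twin beams appear.
    print(f"strongest beams near ±{abs(np.degrees(theta[i])):.1f} deg")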
NASA Astrophysics Data System (ADS)
Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao
2018-01-01
During the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability to raise spectral efficiency dramatically, reduce the effects of the fiber link or wireless channel, and improve communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading on the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping. This algorithm achieves optimal system performance by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to preset thresholds and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm, much smaller than many classic adaptive modulation algorithms such as the Hughes-Hartogs and Chow algorithms, and in line with the trend toward green, high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than without adaptive modulation, the BER of the former being 10 to 100 times lower than that of the latter as the SNR increases. This low-complexity adaptive modulation algorithm is thus extremely useful for the OFDM-ROF system.
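The per-group loading rule reduces to mapping each group's average SNR to a modulation order through preset thresholds. A minimal sketch, with invented threshold values and a random SNR profile standing in for the measured combined SNR:

    import numpy as np

    rng = np.random.default_rng(7)
    snr_db = rng.normal(18, 6, size=256)            # per-subcarrier SNR (dB)
    groups = snr_db.reshape(32, 8)                  # M = 8 subcarriers per group

    THRESHOLDS = [(24, "64QAM", 6), (18, "16QAM", 4), (12, "QPSK", 2),
                  (6, "BPSK", 1), (-np.inf, "off", 0)]

    def load(avg_snr):
        # Map a group's average SNR to the first threshold it clears.
        return next((name, bits) for t, name, bits in THRESHOLDS if avg_snr >= t)

    total_bits = 0
    for g in groups:
        name, bits = load(g.mean())
        total_bits += bits * g.size                 # same format for the whole group
    print(total_bits, "bits per OFDM symbol")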
NASA Astrophysics Data System (ADS)
Le Nir, Vincent; Moonen, Marc; Verlinden, Jan; Guenach, Mamoun
2009-02-01
Recently, the duality between Multiple Input Multiple Output (MIMO) Multiple Access Channels (MAC) and MIMO Broadcast Channels (BC) has been established under a total power constraint. The same set of rates for the MAC can be achieved in the BC exploiting the MAC-BC duality formulas while preserving the total power constraint. In this paper, we describe the BC optimal power allocation applying this duality in a downstream x-Digital Subscriber Lines (xDSL) context under a total power constraint for all modems over all tones. Then, a new algorithm called BC-Optimal Spectrum Balancing (BC-OSB) is devised for a more realistic power allocation under per-modem total power constraints. The capacity region of the primal BC problem under per-modem total power constraints is found by the dual optimization problem for the BC under per-modem total power constraints, which can be rewritten as a dual optimization problem in the MAC by means of a precoder matrix based on the Lagrange multipliers. We show that the duality gap between the two problems is zero. The multi-user power allocation problem has been solved for interference channels and the MAC using the OSB algorithm. In this paper we solve the problem of multi-user power allocation for the BC case using the OSB algorithm as well, and we derive a computationally efficient algorithm that will be referred to as BC-OSB. Simulation results are provided for two VDSL2 scenarios: the first one with Differential-Mode (DM) transmission only and the second one with both DM and Phantom-Mode (PM) transmissions.
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin
2013-04-01
Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows storing, displaying and exporting all the information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such functionality is not typically included in other water DSS. Based on the resulting water resources allocation, the model calculates operating and water scarcity costs caused by supply deficits, based on economic demand functions for each demand node. The optimization model allocates the available resource over time based on economic criteria (net benefits from demand curves and cost functions), minimizing the total water scarcity and operating cost of water use. This approach provides solutions that optimize economic efficiency (as total net benefit) in water resources management over the optimization period. Both models should be used together in water resource planning and management. The optimization model provides an initial insight into economically efficient solutions, from which different operating rules can be further developed and tested using the simulation model. The hydro-economic simulation model allows assessing the economic impacts of alternative policies or operating criteria, avoiding the perfect-foresight issues associated with optimization. The tools have been applied to the Jucar river basin (Spain) in order to assess the economic results corresponding to the current modus operandi of the system and compare them with the solution from the optimization that maximizes economic efficiency. Acknowledgments: The study has been partially supported by the European Community 7th Framework Project (GENESIS project, n. 226536) and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
Fully digital programmable optical frequency comb generation and application.
Yan, Xianglei; Zou, Xihua; Pan, Wei; Yan, Lianshan; Azaña, José
2018-01-15
We propose a fully digital programmable optical frequency comb (OFC) generation scheme based on binary phase-sampling modulation, wherein an optimized bit sequence is applied to phase modulate a narrow-linewidth light wave. Programming the bit sequence enables us to tune both the comb spacing and comb-line number (i.e., number of comb lines). The programmable OFCs are also characterized by ultra-flat spectral envelope, uniform temporal envelope, and stable bias-free setup. Target OFCs are digitally programmed to have 19, 39, 61, 81, 101, or 201 comb lines and to have a 100, 50, 20, 10, 5, or 1 MHz comb spacing. As a demonstration, a scanning-free temperature sensing system using a proposed OFC with 1001 comb lines was also implemented with a sensitivity of 0.89°C/MHz.
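The comb-spacing rule can be sanity-checked numerically: phase-modulating a carrier with a repeating binary (0/pi) sequence of length L at bit rate f_b produces spectral lines spaced f_b/L. The bit pattern below is an arbitrary stand-in, not the paper's optimized sequence:

    import numpy as np

    bit_rate = 1e9                        # phase-modulation bit rate (1 Gb/s)
    seq = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 1])   # one 10-bit period
    samples_per_bit, periods = 16, 64

    phase = np.pi * np.repeat(np.tile(seq, periods), samples_per_bit)
    field = np.exp(1j * phase)            # phase-modulated carrier envelope
    spec = np.abs(np.fft.fft(field)) ** 2
    freqs = np.fft.fftfreq(field.size, d=1 / (bit_rate * samples_per_bit))

    lines = sorted(set(abs(freqs[spec > 0.01 * spec.max()])))
    print(lines[:3])                      # multiples of bit_rate/len(seq) = 100 MHz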
Control of Finite-State, Finite Memory Stochastic Systems
NASA Technical Reports Server (NTRS)
Sandell, Nils R.
1974-01-01
A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.
Jeankumar, Variam Ullas; Reshma, Rudraraju Srilakshmi; Vats, Rahul; Janupally, Renuka; Saxena, Shalini; Yogeeswari, Perumal; Sriram, Dharmarajan
2016-10-21
A structure-based medium-throughput virtual screening campaign of the BITS-Pilani in-house chemical library to identify novel binders of the Mycobacterium tuberculosis gyrase ATPase domain led to the discovery of a quinoline scaffold. Further medicinal chemistry explorations on the right-hand core of the early hit engendered a potent lead demonstrating superior efficacy in both the enzyme and whole-cell screening assays. The binding affinity shown at the enzyme level was further corroborated by biophysical characterization techniques. Early pharmacokinetic evaluation of the optimized analogue was encouraging and provides interesting potential for further optimization.
A 16-bit Coherent Ising Machine for One-Dimensional Ring and Cubic Graph Problems
NASA Astrophysics Data System (ADS)
Takata, Kenta; Marandi, Alireza; Hamerly, Ryan; Haribara, Yoshitaka; Maruo, Daiki; Tamate, Shuhei; Sakaguchi, Hiromasa; Utsunomiya, Shoko; Yamamoto, Yoshihisa
2016-09-01
Many tasks in our modern life, such as planning an efficient travel, image processing and optimizing integrated circuit design, are modeled as complex combinatorial optimization problems with binary variables. Such problems can be mapped to finding a ground state of the Ising Hamiltonian, thus various physical systems have been studied to emulate and solve this Ising problem. Recently, networks of mutually injected optical oscillators, called coherent Ising machines, have been developed as promising solvers for the problem, benefiting from programmability, scalability and room temperature operation. Here, we report a 16-bit coherent Ising machine based on a network of time-division-multiplexed femtosecond degenerate optical parametric oscillators. The system experimentally gives more than 99.6% of success rates for one-dimensional Ising ring and nondeterministic polynomial-time (NP) hard instances. The experimental and numerical results indicate that gradual pumping of the network combined with multiple spectral and temporal modes of the femtosecond pulses can improve the computational performance of the Ising machine, offering a new path for tackling larger and more complex instances.
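For the 16-spin one-dimensional ring, the ground state the machine must find can still be brute-forced on a laptop. A small check, assuming uniform antiferromagnetic couplings J = -1 (the alternating pattern should win, with energy -16):

    N, J = 16, -1.0

    def energy(state):
        # Ising ring energy H = -sum_i J * s_i * s_{i+1}, spins in {-1, +1}.
        s = [1 if (state >> i) & 1 else -1 for i in range(N)]
        return -sum(J * s[i] * s[(i + 1) % N] for i in range(N))

    best = min(range(1 << N), key=energy)   # exhaustive search over 2^16 states
    print(format(best, f"0{N}b"), energy(best))   # e.g. 0101...01, E = -16.0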
Design and testing of coring bits on drilling lunar rock simulant
NASA Astrophysics Data System (ADS)
Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo; Ma, Chao; Zhang, Hui; Qin, Hongwei; Deng, Zongquan
2017-02-01
Coring bits are widely utilized in the sampling of celestial bodies, and their drilling behavior directly affects the sampling results and drilling security. This paper introduces a lunar regolith coring bit (LRCB), a key component of the sampling tools for lunar rock breaking during the lunar soil sampling process. We establish the interaction model between the drill bit and rock at a small cutting depth, and determine the two parameters of the LRCB (forward and outward rake angles) that most influence drilling loads. We perform the parameter screening of the LRCB with the aim of minimizing the weight on bit (WOB). We verify the drilling load performance of the LRCB after optimization and find that the higher the penetration per revolution (PPR), the larger the drilling loads. Besides, we perform lunar soil drilling simulations to estimate the chip conveying and sample coring efficiency of the LRCB. The simulation and test results are basically consistent on coring efficiency, while the chip removal efficiency of the LRCB is slightly lower than that of the HIT-H bit in simulation. This work proposes a method for the design of coring bits in subsequent extraterrestrial explorations.
NASA Astrophysics Data System (ADS)
Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz
2015-02-01
In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA) and a binary tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as the evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem shows that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
Pricing Resources in LTE Networks through Multiobjective Optimization
Lai, Yung-Liang; Jiang, Jehn-Ruey
2014-01-01
The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889
NASA Astrophysics Data System (ADS)
Dikmese, Sener; Srinivasan, Sudharsan; Shaat, Musbah; Bader, Faouzi; Renfors, Markku
2014-12-01
Multicarrier waveforms have been commonly recognized as strong candidates for cognitive radio. In this paper, we study the dynamics of spectrum sensing and spectrum allocation functions in the cognitive radio context using practical signal models for the primary users (PUs), including the effects of power amplifier nonlinearities. We start by sensing the spectrum with an energy-detection-based wideband multichannel spectrum sensing algorithm and continue by investigating optimal resource allocation methods. Along the way, we examine the effects of spectral regrowth due to the inevitable power amplifier nonlinearities of the PU transmitters. The signal model includes frequency-selective block-fading channel models for both secondary and primary transmissions. Filter bank-based wideband spectrum sensing techniques are applied for detecting spectral holes, and filter bank-based multicarrier (FBMC) modulation is selected for transmission as an alternative multicarrier waveform that avoids the limited spectral containment of orthogonal frequency-division multiplexing (OFDM)-based multicarrier systems. The optimization technique used for the resource allocation approach considered in this study utilizes the information obtained through spectrum sensing and knowledge of the spectrum leakage effects of the underlying waveforms, including a practical power amplifier model for the PU transmitter. This study utilizes a computationally efficient algorithm to maximize the SU link capacity under power and interference constraints. It is seen that the SU transmission capacity depends critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11g WLAN scenario.
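Capacity maximization under a total power constraint is classically solved by water-filling; the sketch below assumes known per-subcarrier gains and omits the PU interference constraints and spectral-leakage terms the paper adds.

    import numpy as np

    def water_filling(gains, p_total, noise=1.0):
        """Split p_total across subcarriers to maximize sum log(1 + g*p/noise),
        by bisection on the water level mu."""
        floor = noise / np.asarray(gains, dtype=float)
        lo, hi = floor.min(), floor.max() + p_total
        for _ in range(100):
            mu = 0.5 * (lo + hi)
            if np.maximum(mu - floor, 0.0).sum() > p_total:
                hi = mu          # water level too high
            else:
                lo = mu
        return np.maximum(lo - floor, 0.0)

    gains = np.array([0.3, 1.2, 0.8, 2.0])
    p = water_filling(gains, p_total=4.0)
    print(p, p.sum())            # stronger subcarriers get more power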
NASA Astrophysics Data System (ADS)
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated by performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful, since the uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, a maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained, together with the corresponding land use allocations in the three planning periods. The resulting soil erosion was found to be decreased and controlled at a tolerable level over the watershed. The results confirm that the developed model is a useful tool for implementing land use management, as it not only allows local decision makers to optimize land use allocation, but can also help answer how to accomplish land use changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng
2007-08-02
An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. The most commonly used indexes are variants of B-trees, such as the B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations at the cost of slower updates after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with well-known compression methods such as LZ77 and the Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than other compression schemes. Theoretical analyses showed that WAH-compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes, such as the B+-tree and B*-tree, share this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
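The query side is easy to illustrate without compression: one bitmap per distinct value, combined with bitwise logical operations. The sketch below uses Python integers as uncompressed bitmaps; FastBit itself stores WAH-compressed words and operates on them directly.

    from collections import defaultdict

    def build_bitmap_index(column):
        index = defaultdict(int)          # one bitmap (as an int) per value
        for row, value in enumerate(column):
            index[value] |= 1 << row
        return index

    def range_query(index, lo, hi):
        """OR together the bitmaps of all values in [lo, hi]."""
        result = 0
        for value, bitmap in index.items():
            if lo <= value <= hi:
                result |= bitmap
        return result

    column = [3, 1, 4, 1, 5, 9, 2, 6]
    hits = range_query(build_bitmap_index(column), 2, 5)
    print([row for row in range(len(column)) if hits >> row & 1])  # [0, 2, 4, 6]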
NASA Astrophysics Data System (ADS)
Liu, Yan; Fan, Xi; Chen, Houpeng; Wang, Yueqing; Liu, Bo; Song, Zhitang; Feng, Songlin
2017-08-01
Multilevel data storage for phase-change memory (PCM) has attracted increasing attention in the memory market as a way to implement high-capacity memory systems and reduce cost per bit. In this work, we present a universal programming method using SET staircase current pulses in PCM cells, which can exploit the optimum programming scheme to achieve 2-bit/4-state resistance levels with equal logarithmic intervals. The SET staircase waveform can be optimized by real-time TCAD simulation to realize multilevel data storage efficiently in an arbitrary phase-change material. Experimental results from a 1 k-bit PCM test chip have validated the proposed multilevel programming scheme, which improves information storage density, the robustness of the resistance levels, and energy efficiency, while avoiding additional process complexity.
Security bound of cheat sensitive quantum bit commitment.
He, Guang Ping
2015-03-23
Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. Here we analyze the features common to all existing CSQBC protocols and show that, in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability of at least 50%. The sender's cheating is also studied. The optimal CSQBC protocols that minimize the sum of the cheating probabilities of both parties are found to be trivial, and hence practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.
A Goal Programming Optimization Model for The Allocation of Liquid Steel Production
NASA Astrophysics Data System (ADS)
Hapsari, S. N.; Rosyidi, C. N.
2018-03-01
This research was conducted in one of the largest steel companies in Indonesia, which has several production units and produces a wide range of steel products. One of the company's important products is billet steel. The company has four Electric Arc Furnaces (EAFs), which produce liquid steel that must be processed further into billet steel. The billet steel plant needs to make its production process more efficient to increase productivity. The management has several goals to achieve, and hence an optimal allocation of liquid steel production is needed to meet them. In this paper, a goal programming optimization model is developed to determine the optimal allocation of liquid steel production in each EAF, satisfying demand over three periods and the company's goals, namely maximizing production volume, minimizing raw material costs, minimizing maintenance costs, maximizing sales revenues, and maximizing production capacity. From the optimization results, only the production-capacity goal fails to reach its target. However, the model developed in this paper can allocate liquid steel optimally, so that the production allocation does not exceed the maximum machine working hours and the maximum production capacity.
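A goal programming model of this kind minimizes weighted deviations from goal targets. The sketch below uses scipy.optimize.linprog and models only two of the goals (production volume and raw-material budget); the EAF capacities, costs, targets, and weights are invented numbers, not the company's data.

    import numpy as np
    from scipy.optimize import linprog

    # Variables: x1..x4 (tons of liquid steel per EAF), then deviations
    # d1m/d1p (under/over production) and d2m/d2p (under/over budget).
    cost = np.array([4.0, 4.5, 5.0, 4.2])    # raw-material cost per ton
    cap = [300, 300, 250, 250]               # per-EAF capacity
    target, budget = 1000.0, 4400.0

    # Penalize the unwanted deviations: under-production and over-budget.
    c = np.concatenate([np.zeros(4), [5.0, 0.0, 0.0, 1.0]])
    A_eq = np.array([
        np.concatenate([np.ones(4), [1, -1, 0, 0]]),  # sum x + d1m - d1p = target
        np.concatenate([cost,       [0, 0, 1, -1]]),  # cost.x + d2m - d2p = budget
    ])
    bounds = [(0, k) for k in cap] + [(0, None)] * 4
    res = linprog(c, A_eq=A_eq, b_eq=[target, budget], bounds=bounds)
    print(res.x[:4], "penalty:", res.fun)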
NASA Astrophysics Data System (ADS)
Maity, H.; Biswas, A.; Bhattacharjee, A. K.; Pal, A.
In this paper, we propose a quantum cost (QC) optimized design of a 4-bit reversible universal shift register (RUSR) using a reduced number of reversible logic gates. The proposed design is very useful in quantum computing due to its low QC, small number of reversible logic gates, and low delay. The QC, number of gates, and garbage outputs (GOs) are 64, 8, and 16, respectively, for the proposed work. Compared with the latest reported results, the QC is improved by 5.88% to 70.9% and the gate count by 60% to 83.33%.
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
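A minimal sketch of the search layer follows: stochastic hill-climbing over role allocations, with a noisy toy score standing in for the Lanchester-type attrition simulation. The role names come from the abstract; the scoring function and move rule are invented.

    import random

    ROLES = ["air_defense", "counter_air", "close_air_support", "suppression"]

    def simulate_outcome(alloc, trials=30):
        """Toy stand-in for the attrition simulation: noisy score that
        mildly rewards balanced allocations."""
        score = sum((a + 1) ** 0.5 for a in alloc.values())
        return sum(score + random.gauss(0, 0.1) for _ in range(trials)) / trials

    def hill_climb(airframes=24, steps=300):
        alloc = {r: airframes // len(ROLES) for r in ROLES}
        best = simulate_outcome(alloc)
        for _ in range(steps):
            src, dst = random.sample(ROLES, 2)
            if alloc[src] == 0:
                continue
            alloc[src] -= 1; alloc[dst] += 1        # tentative move
            val = simulate_outcome(alloc)
            if val >= best:
                best = val                          # accept
            else:
                alloc[src] += 1; alloc[dst] -= 1    # revert
        return alloc, best

    print(hill_climb())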
Si photonics technology for future optical interconnection
NASA Astrophysics Data System (ADS)
Zheng, Xuezhe; Krishnamoorthy, Ashok V.
2011-12-01
Scaling of computing systems requires ultra-efficient interconnects with large bandwidth density. Silicon photonics offers a disruptive solution with advantages in reach, energy efficiency, and bandwidth density. We review our progress in developing building blocks for ultra-efficient WDM silicon photonic links. Employing microsolder-based hybrid integration with low parasitics and high density, we optimize photonic devices on SOI platforms and VLSI circuits on more advanced bulk CMOS technology nodes independently. We successively demonstrated single-channel hybrid silicon photonic transceivers at 5 Gbps and 10 Gbps, and an 80 Gbps arrayed WDM silicon photonic transceiver using reverse-biased depletion ring modulators and Ge waveguide photodetectors. Record-high energy efficiencies of less than 100 fJ/bit and 385 fJ/bit were achieved for the hybrid integrated transmitter and receiver, respectively. Waveguide-grating-based optical proximity couplers were developed with low loss and large optical bandwidth to enable multi-layer intra/inter-chip optical interconnects. By thermally engineering the WDM devices through selective substrate removal, together with a WDM link design using a synthetic wavelength comb, we significantly improved the device tuning efficiency and reduced the required tuning range. Using these techniques, a two-orders-of-magnitude reduction in tuning power was achieved, and a tuning cost of only a few tens of fJ/bit is expected for high-data-rate WDM silicon photonic links.
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc.) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
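Since the abstract casts the problem as a multidimensional knapsack, a greedy value-density heuristic conveys the flavor (the authors use Lagrangian relaxation and belief revision instead); all gains, costs, and budgets below are invented.

    def select_sensors(sensors, budgets):
        """Pick sensors by diagnosability gain per unit of normalized
        resource cost until some budget would be exceeded."""
        chosen, used = [], {k: 0.0 for k in budgets}
        def density(s):
            return s["gain"] / sum(s[k] / budgets[k] for k in budgets)
        for s in sorted(sensors, key=density, reverse=True):
            if all(used[k] + s[k] <= budgets[k] for k in budgets):
                chosen.append(s["name"])
                for k in budgets:
                    used[k] += s[k]
        return chosen

    sensors = [
        {"name": "temp",  "gain": 5, "weight": 1.0, "power": 0.5, "cost": 100},
        {"name": "vib",   "gain": 8, "weight": 2.0, "power": 1.5, "cost": 400},
        {"name": "press", "gain": 4, "weight": 0.5, "power": 0.3, "cost": 150},
        {"name": "acous", "gain": 6, "weight": 1.5, "power": 2.0, "cost": 250},
    ]
    print(select_sensors(sensors, {"weight": 3.0, "power": 2.5, "cost": 600}))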
NASA Astrophysics Data System (ADS)
Wei, J.; Wang, G.; Liu, R.
2008-12-01
The Tarim River is the longest inland river in China. Due to water scarcity, ecological fragility is becoming a significant constraint on sustainable development in this region. To effectively manage the limited water resources for ecological purposes as well as conventional water uses, a real-time water resources allocation decision support system (DSS) has been developed. Based on the workflows of water resources regulation and a comprehensive analysis of the efficiency and feasibility of water management strategies, the DSS includes information systems that perform data acquisition, management, and visualization, and model systems that perform hydrological forecasting, water demand prediction, flow routing simulation, and water resources optimization of the hydrological and water utilization processes. An optimization and process control strategy is employed to dynamically allocate water resources among the different stakeholders. Competing targets and constraints are taken into account by multi-objective optimization with different priorities. The DSS has been successfully used to support the water resources management of the Tarim River Basin since 2005.
SECURITY MODELING FOR MARITIME PORT DEFENSE RESOURCE ALLOCATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.; Dunn, D.
2010-09-07
Redeployment of existing law enforcement resources and optimal use of geographic terrain are examined for countering the threat of a maritime-based small-vessel radiological or nuclear attack. The evaluation was based on modeling conducted by the Savannah River National Laboratory that involved the development of options for defensive resource allocation that can reduce the risk of a maritime-based radiological or nuclear threat. A diverse range of potential attack scenarios has been assessed. As a result of identifying vulnerable pathways, effective countermeasures can be deployed using current resources. The modeling involved the use of the Automated Vulnerability Evaluation for Risks of Terrorism (AVERT®) software to conduct computer-based simulation modeling. The models provided estimates of the probability of encountering an adversary based on allocated resources, including response boats, patrol boats, and helicopters, over various environmental conditions including day, night, rough seas, and various traffic flow rates.
Method for compression of binary data
Berlin, Gary J.
1996-01-01
The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
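A sketch of the decoding side of the idea: because the flag bits sit in their own buffer appended after the data, the payload can be read with plain byte and word operations. The 12-bit offset / 4-bit length token format and the minimum match length of 3 are illustrative assumptions, not the patent's exact layout.

    def decompress(payload: bytes, flags: bytes, n_tokens: int) -> bytes:
        out = bytearray()
        pos = 0
        for i in range(n_tokens):
            if flags[i // 8] >> (7 - i % 8) & 1:     # flag set: literal byte
                out.append(payload[pos]); pos += 1
            else:                                    # flag clear: (offset, length)
                word = payload[pos] << 8 | payload[pos + 1]; pos += 2
                offset, length = word >> 4, (word & 0xF) + 3
                start = len(out) - offset
                for k in range(length):              # byte-wise copy handles overlap
                    out.append(out[start + k])
        return bytes(out)

    # Tokens: literal 'a', literal 'b', pointer (offset=2, length=3) -> b'ababa'
    print(decompress(b"ab\x00\x20", bytes([0b11000000]), n_tokens=3))

Keeping the flags out of the payload means the inner loop never shifts bits out of the data stream, which is where the claimed decompression speedup comes from.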
Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamrick, Todd
2011-01-01
Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, its interdependent relationships with Torque and Penetration per Revolution were used to determine optimum values of those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means of determining the optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
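For reference, the conventional MSE relation (Teale's equation, in oilfield units) and a brute-force minimization over WOB; the torque and ROP couplings below are invented placeholders for the interdependencies the dissertation derives.

    import math

    def mse(wob, torque, rpm, rop, bit_area):
        """Teale: MSE = WOB/A + 120*pi*RPM*T / (A*ROP), with WOB in lbf,
        A in in^2, T in ft-lbf, and ROP in ft/hr."""
        return wob / bit_area + 120.0 * math.pi * rpm * torque / (bit_area * rop)

    # Hypothetical couplings of torque and ROP to WOB (made-up coefficients).
    torque_of_wob = lambda w: 0.002 * w            # ft-lbf
    rop_of_wob = lambda w: 0.02 * w ** 0.9         # ft/hr

    area = math.pi * (8.5 / 2) ** 2                # 8.5-in bit
    best = min(range(5_000, 50_000, 500),
               key=lambda w: mse(w, torque_of_wob(w), 120, rop_of_wob(w), area))
    print("WOB minimizing MSE:", best, "lbf")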
A trust-based sensor allocation algorithm in cooperative space search problems
NASA Astrophysics Data System (ADS)
Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik
2011-06-01
Sensor allocation is an important and challenging problem within the field of multi-agent systems. The sensor allocation problem involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the used sensor resource, and the POS is the target tracking performance. Usually, POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search and tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. Then we model the sensor allocation optimization problem with trust-in-loop negotiation game and solve it using a sub-game perfect equilibrium. Numerical simulations are performed to demonstrate the trust-based sensor allocation algorithm in cooperative space situation awareness (SSA) search problems.
Converse, Sarah J.; Shelley, Kevin J.; Morey, Steve; Chan, Jeffrey; LaTier, Andrea; Scafidi, Carolyn; Crouse, Deborah T.; Runge, Michael C.
2011-01-01
The resources available to support conservation work, whether time or money, are limited. Decision makers need methods to help them identify the optimal allocation of limited resources to meet conservation goals, and decision analysis is uniquely suited to assist with the development of such methods. In recent years, a number of case studies have been described that examine optimal conservation decisions under fiscal constraints; here we develop methods to look at other types of constraints, including limited staff and regulatory deadlines. In the US, Section Seven consultation, an important component of protection under the federal Endangered Species Act, requires that federal agencies overseeing projects consult with federal biologists to avoid jeopardizing species. A benefit of consultation is negotiation of project modifications that lessen impacts on species, so staff time allocated to consultation supports conservation. However, some offices have experienced declining staff, potentially reducing the efficacy of consultation. This is true of the US Fish and Wildlife Service's Washington Fish and Wildlife Office (WFWO) and its consultation work on federally-threatened bull trout (Salvelinus confluentus). To improve effectiveness, WFWO managers needed a tool to help allocate this work to maximize conservation benefits. We used a decision-analytic approach to score projects based on the value of staff time investment, and then identified an optimal decision rule for how scored projects would be allocated across bins, where projects in different bins received different time investments. We found that, given current staff, the optimal decision rule placed 80% of informal consultations (those where expected effects are beneficial, insignificant, or discountable) in a short bin where they would be completed without negotiating changes. The remaining 20% would be placed in a long bin, warranting an investment of seven days, including time for negotiation. For formal consultations (those where expected effects are significant), 82% of projects would be placed in a long bin, with an average time investment of 15 days. The WFWO is using this decision-support tool to help allocate staff time. Because workload allocation decisions are iterative, we describe a monitoring plan designed to increase the tool's efficacy over time. This work has general application beyond Section Seven consultation, in that it provides a framework for efficient investment of staff time in conservation when such time is limited and when regulatory deadlines prevent an unconstrained approach.
Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon
2015-01-01
This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm applicable to various tasks, including environmental sensing. Unlike previous formation structures, such as virtual-leader and actual-leader structures with position allocation by rigid assignment or optimization, a formation employing the proposed MLC structure and CPA algorithm is robust against the failure (or disappearance) of member robots and reduces the overall cost. In the MLC structure, the leader of the entire system is chosen among leader candidate robots. The CPA algorithm is a decentralized position allocation algorithm that assigns robots to the vertices of the formation via competition among adjacent robots. Numerical simulations and experimental results are included to show the feasibility and performance of a multiple-robot system employing the proposed MLC structure and CPA algorithm. PMID:25954956
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, sampling-based schemes generate more distortion, and the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects the image coding method, between CSI-based modified JPEG and standard JPEG, for a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm performs better at low bit rates while maintaining the same performance at high bit rates.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
NASA Astrophysics Data System (ADS)
Ferrandiz, Ana; Scallan, Gavin
1995-10-01
The available bit rate (ABR) service allows connections to exceed their negotiated data rates during the life of the connections when excess capacity is available in the network. These connections are subject to flow control from the network in the event of network congestion. The ability to dynamically adjust the data rate of a connection can provide improved utilization of the network and a valuable service to end users. An ABR-type service is therefore appropriate for the transmission of bursty LAN traffic over a wide area network in a manner that is more efficient and cost-effective than allocating bandwidth at the peak cell rate. This paper describes the ABR service and discusses whether it is realistic to operate a LAN-like service over a wide area using ABR.
Experimental demonstration of spectrum-sliced elastic optical path network (SLICE).
Kozicki, Bartłomiej; Takara, Hidehiko; Tsukishima, Yukio; Yoshimatsu, Toshihide; Yonenaga, Kazushige; Jinno, Masahiko
2010-10-11
We describe experimental demonstration of spectrum-sliced elastic optical path network (SLICE) architecture. We employ optical orthogonal frequency-division multiplexing (OFDM) modulation format and bandwidth-variable optical cross-connects (OXC) to generate, transmit and receive optical paths with bandwidths of up to 1 Tb/s. We experimentally demonstrate elastic optical path setup and spectrally-efficient transmission of multiple channels with bit rates ranging from 40 to 140 Gb/s between six nodes of a mesh network. We show dynamic bandwidth scalability for optical paths with bit rates of 40 to 440 Gb/s. Moreover, we demonstrate multihop transmission of a 1 Tb/s optical path over 400 km of standard single-mode fiber (SMF). Finally, we investigate the filtering properties and the required guard band width for spectrally-efficient allocation of optical paths in SLICE.
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel, so it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using a reflected Gray code to determine the embedded bits from the secret information. Following this transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences reach almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms previous schemes in the literature in terms of embedding capacity.
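A classical-domain sketch of the embedding rule for one pixel and one 4-bit segment, following the outline above (first bit into the second LSB of blue, the remaining three bits mapped through a reflected Gray code onto the RGB LSBs); the precise mapping rule and the quantum-circuit construction are simplified assumptions.

    def gray(n: int) -> int:
        return n ^ (n >> 1)          # reflected Gray code

    def embed_pixel(r, g, b, secret4):
        b = (b & ~2) | ((secret4 >> 3 & 1) << 1)   # bit 1 -> 2nd LSB of blue
        code = gray(secret4 & 0b111)               # bits 2-4 -> 3-bit Gray code
        r = (r & ~1) | (code >> 2 & 1)
        g = (g & ~1) | (code >> 1 & 1)
        b = (b & ~1) | (code & 1)
        return r, g, b

    print(embed_pixel(120, 200, 64, 0b1011))       # -> (120, 201, 66)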
Equitable fund allocation, an economical approach for sustainable waste load allocation.
Ashtiani, Elham Feizi; Niksokhan, Mohammad Hossein; Jamshidi, Shervin
2015-08-01
This research aims to study a novel approach for waste load allocation (WLA) to meet environmental, economical, and equity objectives, simultaneously. For this purpose, based on a simulation-optimization model developed for Haraz River in north of Iran, the waste loads are allocated according to discharge permit market. The non-dominated solutions are initially achieved through multiobjective particle swarm optimization (MOPSO). Here, the violation of environmental standards based on dissolved oxygen (DO) versus biochemical oxidation demand (BOD) removal costs is minimized to find economical total maximum daily loads (TMDLs). This can save 41% in total abatement costs in comparison with the conventional command and control policy. The BOD discharge permit market then increases the revenues to 45%. This framework ensures that the environmental limits are fulfilled but the inequity index is rather high (about 4.65). For instance, the discharge permit buyer may not be satisfied about the equity of WLA. Consequently, it is recommended that a third party or institution should be in charge of reallocating the funds. It means that the polluters which gain benefits by unfair discharges should pay taxes (or funds) to compensate the losses of other polluters. This intends to reduce the costs below the required values of the lowest inequity index condition. These compensations of equitable fund allocation (EFA) may help to reduce the dissatisfactions and develop WLA policies. It is concluded that EFA in integration with water quality trading (WQT) is a promising approach to meet the objectives.
Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng
2016-01-01
With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation within capacity constrained backhaul in uplink cooperative wireless networks, where two base stations (BSs) equipped with single antennae serve multiple single-antennae users via multi-carrier transmission mode. In this work, we propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among different pairs (subcarriers for users). A novel robust and efficient centralized algorithm based on alternating optimization strategy and perfect mapping is proposed. Simulations show that our novel method can improve the system capacity significantly under the constraint of the backhaul resource compared with the blind alternatives. PMID:27077865
Solving the optimal attention allocation problem in manual control
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1976-01-01
Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL flight tested by NASA.
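The gradient algorithm amounts to projected gradient descent over the attention simplex (fractions that are nonnegative and sum to one). The sketch below uses an invented cost gradient, observation noise growing as attention shrinks, rather than the optimal-control-model expressions from the paper.

    import numpy as np

    def project_to_simplex(v):
        """Euclidean projection onto {f : f >= 0, sum f = 1}."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
        f = np.maximum(v - css[rho] / (rho + 1.0), 0.0)
        f = np.maximum(f, 1e-4)            # keep a little attention everywhere
        return f / f.sum()

    def optimize_attention(grad_cost, n_displays, lr=0.005, iters=500):
        f = np.full(n_displays, 1.0 / n_displays)
        for _ in range(iters):
            f = project_to_simplex(f - lr * grad_cost(f))
        return f

    # Invented cost J = sum(w_i / f_i): more attention, less observation noise.
    weights = np.array([3.0, 1.0, 0.5])
    grad = lambda f: -weights / f ** 2
    print(optimize_attention(grad, 3))     # roughly proportional to sqrt(w)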
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
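The core mechanism, stochastic (sub)gradient updates of a Lagrange multiplier that doubles as a scaled queue length, fits in a few lines; the quadratic cost and uniform demand below are toy assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem: choose service rate x_t at cost 0.5*x^2 subject to the
    # long-term constraint E[demand - x] <= 0.
    def allocate(lam):
        return np.clip(lam, 0.0, 10.0)     # x minimizing 0.5*x^2 - lam*x

    lam, step = 0.0, 0.05
    for t in range(5000):
        d = rng.uniform(0.0, 2.0)              # random demand arrival
        x = allocate(lam)
        lam = max(lam + step * (d - x), 0.0)   # dual / queue update
    print("multiplier:", round(lam, 3), "service rate:", allocate(lam))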
NASA Astrophysics Data System (ADS)
Pournazeri, S.
2011-12-01
A comprehensive optimization model named the Cooperative Water Allocation Model (CWAM) is developed for equitable and efficient water allocation and valuation of the Zab river basin, in order to address the drought problems of Orumieh Lake in northwest Iran. The model's methodology consists of three phases. The first represents an initial allocation of water rights among competing users. The second comprises the water reallocation process for complete usage by consumers. The third phase allocates the net benefit among the stakeholders participating in a coalition by applying cooperative game theory. Environmental constraints are accounted for in the water allocation model by entering probable environmental damage in a target function and inputting the minimum water requirement of users. The potential of groundwater usage is evaluated in order to compensate for variations in the amount of surface water. This is conducted by applying an integrated economic-hydrologic river basin model. A node-link river basin network is utilized in CWAM, consisting of two major blocks: the first handles the internal allocation of water rights, and the second is associated with water and net benefit reallocation. System control, losses in links by evaporation or seepage, modification of inflow into the nodes, losses in nodes, and losses in outflow are considered in this model. Water valuation is calculated for environmental, industrial, municipal, and agricultural usage via a net benefit function. It can be seen that the water rights are allocated efficiently and incomes are distributed appropriately based on quality and quantity limitations.
Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module
NASA Astrophysics Data System (ADS)
Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan
2017-11-01
We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
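Diver's core scheme is classic rand/1/bin differential evolution; a self-contained sketch on a quadratic test function follows (this is the textbook algorithm, not ScannerBit's interface or Diver's self-adaptive variants).

    import numpy as np

    def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200):
        rng = np.random.default_rng(1)
        lo, hi = np.array(bounds, dtype=float).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
        fit = np.array([f(x) for x in pop])
        for _ in range(gens):
            for i in range(pop_size):
                idx = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(idx, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
                cross = rng.random(len(bounds)) < CR        # binomial crossover
                cross[rng.integers(len(bounds))] = True
                trial = np.where(cross, mutant, pop[i])
                if (ft := f(trial)) < fit[i]:               # greedy selection
                    pop[i], fit[i] = trial, ft
        return pop[fit.argmin()], fit.min()

    best_x, best_f = differential_evolution(lambda x: np.sum(x ** 2),
                                            bounds=[(-5, 5)] * 4)
    print(best_x, best_f)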
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alan Black; Arnis Judzis
2004-10-01
The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through team development of aggressive diamond product drill bit-fluid system technologies. The overall objectives are as follows: Phase 1--benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--field trial smart bit-fluid concepts, modify as necessary, and commercialize products. As of the report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility; thus the program was delayed further to accommodate the full testing program.
NASA Astrophysics Data System (ADS)
Prada, Jose Fernando
Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.
NASA Astrophysics Data System (ADS)
Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun
2018-03-01
In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. First, the preprocessing method is used to eliminate the correlation between adjacent pixels. Second, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.
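Setting the laser physics aside, the confusion and diffusion stages reduce to a keyed permutation plus an XOR stream. In the sketch below, a seeded NumPy generator stands in for the verified physical random bits from the CCSRL system, and the preprocessing stage is omitted.

    import numpy as np

    def encrypt(image, key_bits):
        flat = image.flatten()
        n = flat.size
        bits = key_bits.astype(np.uint8)
        # Confusion: permutation derived from the first 8n key bits
        # (the "control matrix" analogue).
        ranks = np.packbits(bits[:8 * n])
        confused = flat[np.argsort(ranks, kind="stable")]
        # Diffusion: XOR each pixel with a key byte from the next 8n bits.
        key_bytes = np.packbits(bits[8 * n:16 * n])
        return (confused ^ key_bytes).reshape(image.shape)

    rng = np.random.default_rng(7)                 # stand-in entropy source
    img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
    bits = rng.integers(0, 2, size=16 * img.size)
    print(encrypt(img, bits))

Both stages are invertible given the same bit stream, so a receiver with a synchronized bit source can apply the inverse XOR and permutation to recover the image.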
Designing Nanoscale Counter Using Reversible Gate Based on Quantum-Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Moharrami, Elham; Navimipour, Nima Jafari
2018-04-01
New technologies such as Quantum-dot Cellular Automata (QCA) have been suggested to overcome the physical limits of Complementary Metal-Oxide-Semiconductor (CMOS) technology. QCA, as one of the novel nanoscale technologies, has potential applications in future computers. It offers advantages such as minimal size, high speed, low latency, and low power consumption, and as a result it can be used for creating all varieties of memory. Counter circuits, among the important circuits in digital systems, are composed of latches connected in series, which count input pulses in the circuit. Reversible computation, on the other hand, is important because of its ability to reduce energy dissipation in nanometer-scale circuits. Improved energy efficiency, increased speed of nanometer circuits, greater system portability, smaller circuit components at the nuclear scale, and reduced power consumption are among the benefits of reversible logic. Therefore, this paper aims to design a two-bit reversible counter, optimized on the basis of QCA, using an improved reversible gate. The proposed reversible 2-bit counter structure can be extended to 3 bits, 4 bits, and more. The advantages of the proposed design are shown using QCADesigner in terms of delay, in comparison with previous circuits.
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. An alternative stochastic programming model with simulated paths, called a hybrid model, was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294]; however, transaction costs were not considered in that work. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing expected utility; (2) typical market imperfections such as short-sale constraints and proportional transaction costs are considered simultaneously; (3) applying a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
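Scenario-based CVaR, the risk measure the model introduces, is straightforward to compute from simulated paths; the asset moments and weights below are invented for illustration.

    import numpy as np

    def cvar(losses, alpha=0.95):
        """Conditional value-at-risk: mean loss in the worst (1-alpha) tail."""
        var = np.quantile(losses, alpha)
        return losses[losses >= var].mean()

    rng = np.random.default_rng(42)
    returns = rng.multivariate_normal(mean=[0.05, 0.03, 0.01],
                                      cov=np.diag([0.04, 0.01, 0.0025]),
                                      size=10_000)      # simulated paths
    weights = np.array([0.3, 0.4, 0.3])
    losses = -(returns @ weights)                       # loss per scenario
    print("expected return:", (returns @ weights).mean())
    print("CVaR(95%):", cvar(losses))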
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed … Cache Miss Minimization Technology: to efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended our … it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing …
Optimizing the Remotely Piloted Aircraft Pilot Career Field
2011-10-01
Katana light aircraft trainers, receiving 30 to 38 hours of introductory, night, cross-country and solo …
Optimization of a PCRAM Chip for high-speed read and highly reliable reset operations
NASA Astrophysics Data System (ADS)
Li, Xiaoyun; Chen, Houpeng; Li, Xi; Wang, Qian; Fan, Xi; Hu, Jiajun; Lei, Yu; Zhang, Qi; Tian, Zhen; Song, Zhitang
2016-10-01
The widely used traditional Flash memory suffers from performance limits such as serious crosstalk problems and the increasing complexity of floating-gate scaling. Phase-change random access memory (PCRAM) has become one of the most promising nonvolatile memories among the new memory techniques. In this paper, a 1M-bit PCRAM chip is designed based on the SMIC 40 nm CMOS technology. Focusing on read and write performance, two new circuits are proposed: a high-speed read circuit and a highly reliable reset circuit. The high-speed read circuit effectively reduces the read time from 74 ns to 40 ns. The double-mode reset circuit improves the chip yield. The 1M-bit PCRAM chip has been simulated in Cadence. After layout design is completed, the chip will be taped out for testing.
NASA Astrophysics Data System (ADS)
Song, Y.; Yao, Q.; Wang, G.; Yang, X.; Mayes, M. A.
2017-12-01
Increasing evidence indicates that soil organic matter (SOM) decomposition and stabilization is a continuum process controlled both by microbial functions and by their interaction with minerals (known as the microbial efficiency-matrix stabilization (MEMS) theory). Our metagenomics analysis of soil samples from both P-deficit and P-fertilization sites in Panama has demonstrated that community-level enzyme functions can adapt to maximize the acquisition of limiting nutrients and minimize the energy demand for foraging (known as optimal foraging theory). This optimization scheme can mitigate the imbalance in the C/P ratio between soil substrate and the microbial community and relieve the P limitation on microbial carbon use efficiency over time. Dynamic allocation of multiple enzyme groups and their interaction with microbial/substrate stoichiometry has rarely been considered in biogeochemical models, due to the difficulties in identifying microbial functional groups and quantifying changes in enzyme expression in response to soil nutrient availability. This study aims to represent the omics-informed optimal foraging theory in the Continuum Microbial ENzyme Decomposition model (CoMEND), which was developed to represent the continuum SOM decomposition process following the MEMS theory. The SOM pools in the model are classified based on soil chemical composition (i.e., carbohydrates, lignin, N-rich SOM, and P-rich SOM) and the degree of SOM depolymerization. The enzyme functional groups for decomposition of each SOM pool and for N/P mineralization are identified by the relative composition of gene copy numbers. The responses of microbial activities and SOM decomposition to nutrient availability are simulated by optimizing the allocation of enzyme functional groups following optimal foraging theory. The modeled dynamic enzyme allocation in response to P availability is evaluated against the metagenomics data measured from P-addition and P-deficit soil samples at the Panama sites. The implementation of dynamic enzyme allocation in response to nutrient availability in the CoMEND model enables us to capture the varying microbial C/P ratio and soil carbon dynamics in response to shifting nutrient constraints over time in tropical soils.
Steganography based on pixel intensity value decomposition
NASA Astrophysics Data System (ADS)
Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.
2014-05-01
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e., 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st least significant bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., the 2nd LSB to the 8th most significant bit (MSB), the proposed scheme has more capacity than Natural-number-based embedding. Moreover, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect on pixel values, and hence on quality, than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
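The paper's 16-plane representation is not reproduced in this abstract, so as a concrete illustration here is a minimal sketch of bit-plane embedding under the Fibonacci (Zeckendorf) decomposition, one of the schemes the paper evaluates; the function names and the skip-on-invalid policy are assumptions, not the paper's method:

    # Sketch: bit-plane embedding under a Fibonacci (Zeckendorf) pixel
    # decomposition; plane 0 is the least significant plane.
    FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # weights up to 255

    def decompose(value, weights=FIB):
        # Greedy Zeckendorf-style decomposition into 0/1 plane bits.
        bits = [0] * len(weights)
        for i in range(len(weights) - 1, -1, -1):
            if weights[i] <= value:
                bits[i] = 1
                value -= weights[i]
        return bits

    def embed_bit(pixel, secret_bit, plane=0, weights=FIB):
        # Force the chosen plane to carry secret_bit; keep the original
        # pixel when the modified value cannot represent the bit validly.
        bits = decompose(pixel, weights)
        bits[plane] = secret_bit
        new_val = sum(b * w for b, w in zip(bits, weights))
        if 0 <= new_val <= 255 and decompose(new_val, weights)[plane] == secret_bit:
            return new_val
        return pixel  # skip un-embeddable pixels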
Optimizing Utilization of Detectors
2016-03-01
provide a quantifiable process to determine how much time should be allocated to each task sharing the same asset . This optimized expected time... allocation is calculated by numerical analysis and Monte Carlo simulation. Numerical analysis determines the expectation by involving an integral and...determines the optimum time allocation of the asset by repeatedly running experiments to approximate the expectation of the random variables. This
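The report's detection model is elided in this snippet; the sketch below shows the general shape of the Monte Carlo step it describes, assuming (purely for illustration) exponential time-to-detection for two tasks sharing one asset:

    import random

    def expected_detections(t1, total=10.0, rate1=0.5, rate2=0.2, n=100_000):
        # Monte Carlo estimate of expected detections when the asset spends
        # t1 hours on task 1 and the remainder on task 2 (assumed model).
        t2 = total - t1
        hits = sum((random.expovariate(rate1) <= t1) + (random.expovariate(rate2) <= t2)
                   for _ in range(n))
        return hits / n

    # sweep candidate splits and keep the allocation with the best expectation
    best_value, best_t1 = max((expected_detections(i * 0.5), i * 0.5) for i in range(21))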
A Hybrid Multiuser Detector Based on MMSE and AFSA for TDRS System Forward Link
Yin, Zhendong; Liu, Xiaohui
2014-01-01
This study mainly focuses on multiuser detection in the tracking and data relay satellite (TDRS) system forward link. Minimum mean square error (MMSE) detection is a low-complexity multiuser detection method, but the MMSE detector cannot achieve a satisfactory bit error ratio and near-far resistance, whereas the artificial fish swarm algorithm (AFSA) excels at optimization and can realize global convergence efficiently. Therefore, a hybrid multiuser detector based on MMSE and AFSA (MMSE-AFSA) is proposed in this paper. The MMSE result and its modified forms are used as the initial values of the artificial fishes to accelerate global convergence and reduce the number of iterations for AFSA. The simulation results show that the bit error ratio and near-far resistance performances of the proposed detector are much better than those of MF, DEC, and MMSE, and are quite close to OMD. Furthermore, the proposed MMSE-AFSA detector also has a large system capacity. PMID:24883418
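As a sketch of the MMSE stage whose hard decisions seed the artificial fish, assuming a real-valued BPSK model y = Hb + n (the paper's TDRS forward-link signal model carries more detail):

    import numpy as np

    def mmse_detect(y, H, noise_var):
        # Linear MMSE multiuser detector: filter, then hard decisions that
        # can serve as initial fish positions for the AFSA refinement.
        K = H.shape[1]
        W = np.linalg.solve(H.T @ H + noise_var * np.eye(K), H.T)
        return np.sign(W @ y)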
Abouleish, Amr E; Dexter, Franklin; Epstein, Richard H; Lubarsky, David A; Whitten, Charles W; Prough, Donald S
2003-04-01
Determination of operating room (OR) block allocation and case scheduling is often not based on maximizing OR efficiency, but rather on tradition and surgeon convenience. As a result, anesthesiology groups often incur additional labor costs. When negotiating financial support, heads of anesthesiology departments are often challenged to justify the subsidy necessary to offset these additional labor costs. In this study, we describe a method for calculating a statistically sound estimate of the excess labor costs incurred by an anesthesiology group because of inefficient OR allocation and case scheduling. OR information system and anesthesia staffing data for 1 yr were obtained from two university hospitals. Optimal OR allocation for each surgical service was determined by maximizing the efficiency of use of the OR staff. Hourly costs were converted to dollar amounts by using the nationwide median compensation for academic and private-practice anesthesia providers. Differences between actual costs and the optimal OR allocation were determined. For Hospital A, estimated annual excess labor costs were $1.6 million (95% confidence interval, $1.5-$1.7 million) and $2.0 million ($1.89-$2.05 million) when academic and private-practice compensation, respectively, was calculated. For Hospital B, excess labor costs were $1.0 million ($1.08-$1.17 million) and $1.4 million ($1.32-1.43 million) for academic and private-practice compensation, respectively. This study demonstrates a methodology for an anesthesiology group to estimate its excess labor costs. The group can then use these estimates when negotiating for subsidies with its hospital, medical school, or multispecialty medical group. We describe a new application for a previously reported statistical method to calculate operating room (OR) allocations to maximize OR efficiency. When optimal OR allocations and case scheduling are not implemented, the resulting increase in labor costs can be used in negotiations as a statistically sound estimate for the increased labor cost to the anesthesiology department.
Investigation of Optimal Control Allocation for Gust Load Alleviation in Flight Control
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Bodson, Marc
2012-01-01
Advances in sensors and avionics computation power suggest real-time structural load measurements could be used in flight control systems for improved safety and performance. A conventional transport flight control system determines the moments necessary to meet the pilot's command, while rejecting disturbances and maintaining stability of the aircraft. Control allocation is the problem of converting these desired moments into control effector commands. In this paper, a framework is proposed to incorporate real-time structural load feedback and structural load constraints in the control allocator. Constrained optimal control allocation can be used to achieve desired moments without exceeding specified limits on monitored load points. Minimization of structural loads by the control allocator is used to alleviate gust loads. The framework to incorporate structural loads in the flight control system and an optimal control allocation algorithm will be described and then demonstrated on a nonlinear simulation of a generic transport aircraft with flight dynamics and static structural loads.
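A minimal sketch of constrained control allocation posed as bounded least squares; the effectiveness matrix B, the load map L and the penalty weight are illustrative assumptions, not the paper's exact formulation:

    import numpy as np
    from scipy.optimize import lsq_linear

    def allocate(B, m_des, u_min, u_max, L=None, lam=0.0):
        # Solve min ||B u - m_des||^2 + lam * ||L u||^2 s.t. u_min <= u <= u_max;
        # the L-term penalizes predicted structural loads (gust alleviation).
        if L is None or lam == 0.0:
            A, b = B, m_des
        else:
            A = np.vstack([B, np.sqrt(lam) * L])
            b = np.concatenate([m_des, np.zeros(L.shape[0])])
        return lsq_linear(A, b, bounds=(u_min, u_max)).x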
Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture
NASA Astrophysics Data System (ADS)
Weber, Raphael; Rettberg, Achim
A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.
Optimal Resource Allocation in Library Systems
ERIC Educational Resources Information Center
Rouse, William B.
1975-01-01
Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung
2017-04-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating natural phenomena into the optimization of complex functions. Many real-life applications of PSO involve stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
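A sketch of the classic OCBA ratios applied per iteration to decide how many extra fitness samples each particle receives; this is the textbook rule (assuming no ties for the best), not the paper's exact asymptotic allocation:

    import numpy as np

    def ocba_allocation(means, stds, budget):
        # Split `budget` samples among particles so the lowest-mean (best)
        # particle can be identified efficiently under noisy fitness.
        means, stds = np.asarray(means, float), np.asarray(stds, float)
        b = int(np.argmin(means))
        delta = means - means[b]                     # gaps to the current best
        nonbest = [i for i in range(len(means)) if i != b]
        ref = nonbest[0]
        ratios = np.ones(len(means))
        for i in nonbest:
            ratios[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
        ratios[b] = stds[b] * np.sqrt(np.sum((ratios[nonbest] / stds[nonbest]) ** 2))
        return np.maximum(1, np.round(budget * ratios / ratios.sum())).astype(int)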
Adaptive limited feedback for interference alignment in MIMO interference channels.
Zhang, Yang; Zhao, Chenglin; Meng, Juan; Li, Shibao; Li, Li
2016-01-01
It is very important that a radar sensor network has autonomous capabilities such as self-management. Quite often, MIMO interference channels are applied to radar sensor networks, and for self-management purposes, interference management in MIMO interference channels is critical. Interference alignment (IA) has the potential to dramatically improve system throughput by effectively mitigating interference in multi-user networks at high signal-to-noise ratio (SNR). However, the implementation of IA predominantly relies on perfect and global channel state information (CSI) at all transceivers. A large amount of CSI has to be fed back to all transmitters, resulting in a proliferation of feedback bits. Thus, IA with limited feedback has been introduced to reduce the sum feedback overhead. In this paper, by exploiting the advantage of heterogeneous path loss, we first investigate the throughput of IA with limited feedback in interference channels while each user transmits multiple streams simultaneously, and we obtain the upper bound of the sum rate in terms of the transmit power and feedback bits. Moreover, we propose a dynamic feedback scheme via bit allocation to reduce the throughput loss due to limited feedback. Simulation results demonstrate that the dynamic feedback scheme achieves better performance in terms of sum rate.
Payments for Ecosystem Services for watershed water resource allocations
NASA Astrophysics Data System (ADS)
Fu, Yicheng; Zhang, Jian; Zhang, Chunling; Zang, Wenbin; Guo, Wenxian; Qian, Zhan; Liu, Laisheng; Zhao, Jinyong; Feng, Jian
2018-01-01
Watershed water resource allocation focuses on concrete aspects of the sustainable management of Ecosystem Services (ES) that are related to water and examines the possibility of implementing Payment for Ecosystem Services (PES) for water ES. PES can be executed to satisfy both economic and environmental objectives and demands. Considering the importance of calculating PES schemes at the social equity and cooperative game (CG) levels, a water resources allocation model and multi-objective optimization are provided to quantitatively solve multi-objective problems. The model consists of three modules that address the following processes: ① social equity mechanisms used to study water consumer associations, ② an optimal decision-making process based on variable intervals and CG theory, and ③ the use of Shapley values of CGs for profit maximization. The effectiveness of the proposed methodology for realizing sustainable development was examined. First, an optimization model with a water allocation objective was developed based on a sustainable water resources allocation framework that maximizes the net benefit of water use. Then, to meet water quality requirements, the PES cost was estimated using trade-off curves among different pollution emission concentration permissions. Finally, to achieve equity and supply sufficient incentives for water resources protection, CG theory approaches were utilized to reallocate the PES benefits. The potential of the developed model was examined by its application to a case study in the Yongding River watershed of China. Approximately 128 Mm3 of water flowed from the upper reach (Shanxi and Hebei Provinces) sections of the Yongding River to the lower reach (Beijing) in 2013. According to the calculated results, Beijing should pay USD 6.31 M (¥39.03 M) for water-related ES to Shanxi and Hebei Provinces. The results reveal that the proposed methodology is a practical tool for resolving PES amounts among different regions under social and environmental constraints while accounting for the characteristics of social equity and CGs.
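For the profit-reallocation module, a minimal exact Shapley computation over the three regions; the coalition payoffs below are hypothetical placeholders, not the study's figures:

    from itertools import permutations

    def shapley(players, value):
        # Average each player's marginal contribution over all orderings;
        # value() maps a frozenset coalition to its payoff.
        phi = {p: 0.0 for p in players}
        perms = list(permutations(players))
        for order in perms:
            coalition = set()
            for p in order:
                before = value(frozenset(coalition))
                coalition.add(p)
                phi[p] += value(frozenset(coalition)) - before
        return {p: v / len(perms) for p, v in phi.items()}

    # Hypothetical coalition payoffs (illustration only, not the paper's data)
    v = {frozenset(): 0, frozenset({'Beijing'}): 2, frozenset({'Shanxi'}): 1,
         frozenset({'Hebei'}): 1, frozenset({'Beijing', 'Shanxi'}): 5,
         frozenset({'Beijing', 'Hebei'}): 5, frozenset({'Shanxi', 'Hebei'}): 2,
         frozenset({'Beijing', 'Shanxi', 'Hebei'}): 9}
    print(shapley(['Beijing', 'Shanxi', 'Hebei'], v.get))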
USDA-ARS?s Scientific Manuscript database
An improved ant colony optimization (ACO) formulation for the allocation of crops and water to different irrigation areas is developed. The formulation enables dynamic adjustment of decision variable options and makes use of visibility factors (VFs, the domain knowledge that can be used to identify ...
NASA Astrophysics Data System (ADS)
Allam, M.; Eltahir, E. A. B.
2017-12-01
Rapid population growth, hunger problems, increasing energy demands, persistent conflicts between the Nile basin riparian countries and the potential impacts of climate change highlight the urgent need for the conscious stewardship of the upper Blue Nile (UBN) basin resources. This study develops a framework for the optimal allocation of land and water resources to agriculture and hydropower production in the UBN basin. The framework consists of three optimization models that aim to: (a) provide accurate estimates of the basin water budget, (b) allocate land and water resources optimally to agriculture, and (c) allocate water to agriculture and hydropower production, and investigate trade-offs between them. First, a data assimilation procedure for data-scarce basins is proposed to deal with data limitations and produce estimates of the hydrologic components that are consistent with the principles of mass and energy conservation. Second, the most representative topography and soil properties datasets are objectively identified and used to delineate the agricultural potential in the basin. The agricultural potential is incorporated into a land-water allocation model that maximizes the net economic benefits from rain-fed agriculture while allowing for enhancing the soils from one suitability class to another to increase agricultural productivity in return for an investment in soil inputs. The optimal agricultural expansion is expected to reduce the basin flow by 7.6 cubic kilometres, impacting downstream countries. The optimization framework is expanded to include hydropower production. This study finds that allocating water to grow rain-fed teff in the basin is more profitable than allocating water for hydropower production. Optimal operation rules for the Grand Ethiopian Renaissance dam (GERD) are identified to maximize annual hydropower generation while achieving a relatively uniform monthly production rate. Trade-offs between agricultural expansion and hydropower generation are analysed in an attempt to define cooperation scenarios that would achieve win-win outcomes for all riparian countries.
Cross-Layer Resource Allocation for Wireless Visual Sensor Networks and Mobile Ad Hoc Networks
2014-10-01
MMD), minimizes the maximum dis- tortion among all nodes of the network, promoting a rather unbiased treatment of the nodes. We employed the Particle...achieve the ideal tradeoff between the transmitted video quality and energy consumption. Each sensor node has a bit rate that can be used for both...Distortion (MMD), minimizes the maximum distortion among all nodes of the network, promoting a rather unbiased treatment of the nodes. For both criteria
Distributed multiport memory architecture
NASA Technical Reports Server (NTRS)
Kohl, W. H. (Inventor)
1983-01-01
A multiport memory architecture is disclosed for each of a plurality of task centers connected to a command and data bus. Each task center includes a memory and a plurality of devices which request direct memory access as needed. The memory includes an internal data bus and an internal address bus to which the devices are connected, and direct timing and control logic comprised of a 10-state ring counter for allocating memory devices by enabling AND gates connected to the request signal lines of the devices. The outputs of AND gates connected to the same device are combined by OR gates to form an acknowledgement signal that enables the device to address the memory during the next clock period. The length of the ring counter may be effectively lengthened to any multiple of ten to allow for more direct memory access intervals in one repetitive sequence. One device is a network bus adapter (NBA) which serially shifts onto the command and data bus a data word (8 bits plus control and parity bits) during the next ten direct memory access intervals after it has been granted access. The NBA is therefore allocated only one access in every ten intervals, which is a predetermined interval for all centers. The ring counters of all centers are periodically synchronized by a DMA SYNC signal to ensure that all NBAs can function in synchronism for data transfer from one center to another.
Novel multireceiver communication systems configurations based on optimal estimation theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy per bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for the estimator of the signal model parameters, when these are not known a priori, is also presented.
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays, quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by varying three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and Analysis of Variance (ANOVA) is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the largest effect on material removal rate and surface roughness, followed by feed rate.
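A generic sketch of the grey relational step used for the simultaneous optimization (the Taguchi L16 runs and the normalization are omitted; the distinguishing coefficient zeta = 0.5 is the usual default, not necessarily the paper's):

    import numpy as np

    def grey_relational_grade(X, zeta=0.5):
        # X: runs x criteria, pre-normalized to [0, 1] with larger-is-better.
        X = np.asarray(X, float)
        diff = np.abs(X - X.max(axis=0))      # deviation from the ideal response
        coeff = (diff.min() + zeta * diff.max()) / (diff + zeta * diff.max())
        return coeff.mean(axis=1)             # one grade per run; rank runs by it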
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamoureux, Louis-Philippe; Navez, Patrick; Cerf, Nicolas J.
It is shown that any quantum operation that perfectly clones the entanglement of all maximally entangled qubit pairs cannot preserve separability. This 'entanglement no-cloning' principle naturally suggests that some approximate cloning of entanglement is nevertheless allowed by quantum mechanics. We investigate a separability-preserving optimal cloning machine that duplicates all maximally entangled states of two qubits, resulting in 0.285 bits of entanglement per clone, while a local cloning machine only yields 0.060 bits of entanglement per clone.
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Axelrod, David A; Vagefi, Parsia A; Roberts, John P
2015-08-01
The liver transplant allocation system has evolved to a “sickest-first” ranking system based on objective criteria. Yet organs continue to be distributed first within OPOs and regions that are largely based on historical practice patterns related to kidney transplantation and were never designed to minimize waitlist death or equalize opportunity for liver transplant. The current proposal is a move to enhance survival through the application of modern mathematical techniques to optimize liver distribution. Like MELD-based allocation, it will never be perfect and should be continually evaluated and revised. However, the disparity in access, which favors those residing in or able to travel to privileged areas, to the detriment of the patients dying on the list in underserved areas, is simply not defensible in 2015.
Surveillance versus Reconnaissance: An Entropy Based Model
2012-03-22
sensor detection since no new information is received. (Berry, Pontecorvo, & Fogg , Optimal Search, Location and Tracking of Surface Maritime Targets by...by Berry, Pontecorvo and Fogg (Berry, Pontecorvo, & Fogg , July, 2003) facilitates the optimal solutions to dynamically determining the allocation and...region (Berry, Pontecorvo, & Fogg , July, 2003). Phase II: Locate During the locate phase, the objective was to determine the location of the targets
Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System
NASA Astrophysics Data System (ADS)
Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju
2018-03-01
A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme spanning the physical layer and the data-link layer is proposed for cooperative relaying systems in this paper. Our goal is to find the optimal relay selection and power allocation scheme that maximizes the system achievable rate while satisfying a total transmit power constraint in the physical layer and a statistical delay quality-of-service (QoS) requirement in the data-link layer. Using the concept of effective capacity (EC), this goal can be formulated as a joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to the total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves a larger EC than other schemes while satisfying the delay QoS requirement. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between the source and destination, and the EC becomes smaller as the QoS exponent becomes larger.
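A sketch of the outer relay-selection step driven by effective capacity, with an equal power split standing in for the paper's Lagrangian PA solution; the channel model and names are assumptions:

    import numpy as np

    def effective_capacity(rates, theta):
        # EC = -(1/theta) * ln E[exp(-theta * R)], estimated from rate samples.
        return -np.log(np.mean(np.exp(-theta * np.asarray(rates)))) / theta

    def select_relay(channel_samples, theta, p_total):
        # channel_samples: relay -> (g_sr, g_rd) arrays of link-gain samples.
        best, best_ec = None, -np.inf
        for relay, (g_sr, g_rd) in channel_samples.items():
            snr = 0.5 * p_total * np.minimum(g_sr, g_rd)  # two-hop DF bottleneck
            rates = 0.5 * np.log2(1 + snr)                # half-slot relaying
            ec = effective_capacity(rates, theta)
            if ec > best_ec:
                best, best_ec = relay, ec
        return best, best_ec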
Robust allocation of a defensive budget considering an attacker's private information.
Nikoofal, Mohammad E; Zhuang, Jun
2012-05-01
Attackers' private information is one of the main issues in defensive resource allocation games in homeland security. The outcome of a defense resource allocation decision critically depends on the accuracy of estimations about the attacker's attributes. However, terrorists' goals may be unknown to the defender, necessitating robust decisions by the defender. This article develops a robust-optimization game-theoretical model for identifying optimal defense resource allocation strategies for a rational defender facing a strategic attacker while the attacker's valuation of targets, the most critical attribute of the attacker, is unknown but belongs to bounded distribution-free intervals. To the best of our knowledge, no previous research has applied robust optimization in homeland security resource allocation when uncertainty is defined in bounded distribution-free intervals. The key features of our model include (1) modeling uncertainty in attackers' attributes, where uncertainty is characterized by bounded intervals; (2) finding the robust-optimization equilibrium for the defender using concepts dealing with budget of uncertainty and price of robustness; and (3) applying the proposed model to real data. © 2011 Society for Risk Analysis.
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution is optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission setting. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code with the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images with the same overhead.
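For reference, the baseline the paper compares against can be generated directly; a sketch of Luby's robust soliton distribution (c and delta are conventional example values, not the paper's settings):

    import math

    def robust_soliton(k, c=0.03, delta=0.5):
        # Returns P(degree = d) for d = 1..k over k source symbols.
        R = c * math.log(k / delta) * math.sqrt(k)
        rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        tau = [0.0] * k
        spike = int(round(k / R))
        for d in range(1, k + 1):
            if d < spike:
                tau[d - 1] = R / (d * k)
            elif d == spike:
                tau[d - 1] = R * math.log(R / delta) / k
        Z = sum(rho) + sum(tau)               # normalization constant
        return [(r + t) / Z for r, t in zip(rho, tau)]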
NASA Astrophysics Data System (ADS)
Tran, T.
With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.
Xu, M; Li, Y; Kang, T Z; Zhang, T S; Ji, J H; Yang, S W
2016-11-14
Two orthogonal-modulation optical label switching (OLS) schemes, in which a payload of polarization-multiplexed differential quadrature phase shift keying (POLMUX-DQPSK, or PDQ) is modulated with either a duobinary (DB) label or a pulse position modulation (PPM) label, are investigated for high bit-rate OLS networks. The BER performance of the hybrid modulation with payload and label signals is discussed and evaluated in theory and simulation. Theoretical BER expressions for PDQ, PDQ-DB and PDQ-PPM are derived using an analysis of hybrid modulation encoding at different bit-rate ratios of payload and label. The theoretical derivations show that the payload under hybrid modulation obtains a certain receiver sensitivity gain over a payload without a label. The size of the payload BER gain obtained from hybrid modulation depends on the type of label. The simulation results are consistent with the theoretical conclusions. The extinction ratio (ER) conflict between the intensity and phase encoding types can be traded off and optimized in an OLS system with hybrid modulation. The BER analysis method for hybrid modulation encoding in OLS systems can be applied to other n-ary hybrid modulation or combined modulation systems.
Capacity-optimized mp2 audio watermarking
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Dittmann, Jana
2003-06-01
Today a number of audio watermarking algorithms have been proposed, some of them at a quality making them suitable for commercial applications. The focus of most of these algorithms is copyright protection; therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified that stress other parameters, like complexity or payload. In our paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit. Depending on the bit to embed, we change the scale factors by adding 1 where necessary until the group includes either more even or more uneven scale factors. An uneven group has a 1 embedded, an even group a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing perceived quality. As an application example, we introduce a prototypic karaoke system displaying song lyrics embedded as a watermark.
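A minimal sketch of the parity idea described above (the keyed pseudo-random grouping and masking are omitted; contiguous groups and the names are assumptions):

    def majority_bit(group):
        # 1 if the group contains more uneven (odd) scale factors than even.
        odd = sum(v % 2 for v in group)
        return 1 if odd > len(group) - odd else 0

    def embed(scale_factors, bits, group_size=9):
        # One bit per group: add 1 (a parity flip) to successive members
        # until the majority parity encodes the bit. An odd group_size keeps
        # the majority well defined and bounds the loop to one pass.
        sf = list(scale_factors)
        for g, bit in enumerate(bits):
            lo, i = g * group_size, g * group_size
            while majority_bit(sf[lo:lo + group_size]) != bit:
                sf[i] += 1
                i = lo + (i + 1 - lo) % group_size
        return sf  # detection recomputes majority_bit over the same groups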
NASA Astrophysics Data System (ADS)
Maslov, A. L.; Markova, I. Yu; Zakharova, E. S.; Polushin, N. I.; Laptev, A. I.
2017-05-01
It is known that a modern drilling bit body undergoes significant abrasive wear in the area of contact with the rock and the removed cuttings. To protect the body, it is rational to use a wear-resistant coating welded directly onto the bit body. Before mass use, the developed coatings need to be investigated by various methods so that they can be characterized and, on the basis of the obtained data, both the coating composition and the deposition technology can be optimized. Such methods include microstructural studies, tribological tests, crack resistance tests and others. This work is devoted to tribological tests of the imported coating WokaDur NiA and the domestic coating HR-6750 (both manufactured by Ltd "Oerlikon Metco Rus"), which are used to protect the bit body from abrasive wear.
A decomposition approach to the design of a multiferroic memory bit
NASA Astrophysics Data System (ADS)
Acevedo, Ruben; Liang, Cheng-Yen; Carman, Gregory P.; Sepulveda, Abdon E.
2017-06-01
The objective of this paper is to present a methodology for the design of a memory bit to minimize the energy required to write data at the bit level. By straining a ferromagnetic nickel nano-dot by means of a piezoelectric substrate, its magnetization vector rotates between two stable states defined as a 1 and 0 for digital memory. The memory bit geometry, actuation mechanism and voltage control law were used as design variables. The approach used was to decompose the overall design process into simpler sub-problems whose structure can be exploited for a more efficient solution. This method minimizes the number of fully dynamic coupled finite element analyses required to converge to a near optimal design, thus decreasing the computational time for the design process. An in-plane sample design problem is presented to illustrate the advantages and flexibility of the procedure.
Morrell, Roger J.; Larson, David A.; Ruzzi, Peter L.
1994-01-01
A double-acting bit holder that permits bits held in it to be resharpened during cutting action, increasing energy efficiency by reducing the amount of small chips produced. The holder consists of: a stationary base portion capable of being fixed to the cutter head of an excavation machine and having an integral extension with a bore hole therethrough to accommodate a pin shaft; a movable portion coextensive with the base, having a pin shaft integrally extending from it that is insertable in the bore hole of the base member to permit the movable portion to rotate about the axis of the pin shaft; a recess in the movable portion of the holder to accommodate the shank of a bit; and a biased spring disposed in adjoining openings in the base and movable portions of the holder to permit the movable portion to pivot around the pin shaft during the cutting action of a bit fixed in a turret, allowing front, mid and back positions of the bit during cutting to lessen the creation of small chips and to resharpen the bit during excavation use.
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1997-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality-control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.
Conditional Optimal Design in Three- and Four-Level Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.; Borenstein, Michael
2014-01-01
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Chaos-based wireless communication resisting multipath effects.
Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso
2017-09-01
In additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieve high bit transmission rate and low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.
NASA Technical Reports Server (NTRS)
Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul
1995-01-01
Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than complicated and expensive optimization algorithms, a simple heuristic is presented and shown, via an implementation study, to improve system throughput by 30 percent in a local area network based system. Based on artificial wide area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process: a simulation of a single dust storm event may take hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor that determines the feasibility of the parallelization. The allocation algorithm needs to carefully leverage the computing cost and communication cost for each computing node to minimize the total execution time and reduce the overall communication cost for the entire system. This presentation introduces two such allocation algorithms and compares them with an evenly distributed allocation method. Specifically: 1) to obtain optimized solutions, a quadratic programming based modeling method is proposed; this algorithm performs well with a small number of computing tasks, but its efficiency decreases significantly as the numbers of subdomains and computing nodes increase. 2) To compensate for the performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced; instead of seeking optimized solutions, this method obtains relatively good feasible solutions within acceptable time, but it may introduce imbalanced communication for nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain a better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
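A sketch of the K-Means option on a stand-in study area: subdomains are clustered by their center coordinates so geographically adjacent ones share a node (the quadratic-programming formulation and the load-balancing refinements are not reproduced here):

    import numpy as np
    from sklearn.cluster import KMeans

    n_nodes = 4
    # 16 x 16 grid of subdomain center coordinates (illustrative study area)
    xs, ys = np.meshgrid(np.arange(16), np.arange(16))
    centers = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

    # Cluster subdomains by location to keep halo exchanges node-local.
    labels = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit_predict(centers)
    allocation = {n: np.flatnonzero(labels == n) for n in range(n_nodes)}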
GIS and Game Theory for Water Resource Management
NASA Astrophysics Data System (ADS)
Ganjali, N.; Guney, C.
2017-11-01
In this study, aspects of game theory and its application to water resources management, combined with GIS techniques, are detailed. First, each term is explained and the advantages and limitations of each aspect are discussed. Then the nature of the combinations between each pair is described and literature on previous studies is given. Several cases were investigated and their results were examined in order to draw conclusions on the applicability and combination of GIS, game theory and water resources management. It is concluded that game theory has been used in relatively few areas of water management, such as cost/benefit allocation among users, water allocation among trans-boundary users of water resources, water quality management, groundwater management, analysis of water policies, and fair allocation of water resources development costs. Moreover, decision-making in environmental projects requires consideration of trade-offs between socio-political, environmental, and economic impacts and is often complicated by diverse stakeholder views. Most of the literature on water allocation and conflict problems uses traditional optimization models to identify the most efficient scheme, while game theory, as an optimization method, combined with GIS offers a beneficial platform for agent-based models to solve water resources management problems in further studies.
NASA Astrophysics Data System (ADS)
Kim, Hyo-Su; Kim, Dong-Hoi
The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that is able to minimize ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. In practice, however, as the search space of channel assignment for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells, the problem becomes NP-hard and it is too complicated to find an optimum channel allocation. In this paper, we propose an ant colony optimization (ACO) based DCA scheme using a low-complexity ACO algorithm, a kind of heuristic algorithm, in order to solve the aforementioned problem. Simulation results demonstrate significant performance improvements compared to existing schemes in terms of the grade of service (GoS) and the forced termination probability of existing calls, without degrading the average system throughput.
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem that becomes more complex in the case of cooperative tasks, because these introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of the mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
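In-place sketches of the four basic mutation operators on a task-sequence list (generic permutation operators; the paper's chromosome encoding for cooperative tasks is richer):

    import random

    def swap(route):
        i, j = random.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]

    def insertion(route):
        i, j = random.sample(range(len(route)), 2)
        route.insert(j, route.pop(i))

    def inversion(route):
        i, j = sorted(random.sample(range(len(route)), 2))
        route[i:j + 1] = reversed(route[i:j + 1])

    def displacement(route):
        i, j = sorted(random.sample(range(len(route)), 2))
        seg = route[i:j + 1]
        del route[i:j + 1]
        k = random.randint(0, len(route))
        route[k:k] = seg    # re-insert the segment at a random position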
Liang, Jie; Gao, Xiang; Zeng, Guangming; Hua, Shanshan; Zhong, Minzhou; Li, Xiaodong; Li, Xin
2018-01-09
Climate change and human activities cause uncertain changes to species biodiversity by altering their habitats. The uncertainty of climate change requires planners to balance the benefits and costs of making a conservation plan. Here, an optimal protection approach for the Lesser White-fronted Goose (LWfG) coupling Modern Portfolio Theory (MPT) and Marxan selection is proposed. MPT is used to provide suggested weights of investment for protected areas (PAs) and to reduce the influence of climatic uncertainty, while Marxan is utilized to choose a series of specific locations for PAs. We argue that by combining these two commonly used techniques in the conservation plan, covering both asset allocation and PA choice, the efficiency of the rare bird's protection would be enhanced. In the MPT analyses, the uncertainty of conservation outcomes can be reduced when conservation effort is allocated across Hunan, Jiangxi and the Yangtze River delta. In the Marxan model, the optimal locations for habitat restoration based on the existing nature reserves are identified. Clear priorities for the location and allocation of assets can be provided based on this research, helping decision makers build a conservation strategy for the LWfG.
NASA Astrophysics Data System (ADS)
Lim, Meng-Hui; Teoh, Andrew Beng Jin
2011-12-01
Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first and second criteria simultaneously is not feasible and that a tradeoff between them is inevitable. In this article, we propose an effective fixed bit allocation-based discretization approach which involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.
Unconditionally secure commitment in position-based quantum cryptography.
Nadeem, Muhammad
2014-10-27
A new commitment scheme based on position verification and non-local quantum correlations is presented here for the first time in the literature. The only credentials for unconditional security are the position of the committer and the non-local correlations generated; the receiver neither has any pre-shared data with the committer nor requires trusted and authenticated quantum/classical channels between himself and the committer. In the proposed scheme, the receiver trusts the commitment only if the scheme itself verifies the position of the committer and validates her commitment through non-local quantum correlations in a single round. The position-based commitment scheme binds the committer to reveal a valid commitment within the allocated time and guarantees that the receiver will not be able to get information about the commitment unless the committer reveals it. The scheme works for the commitment of both bits and qubits and is equally secure against the committer/receiver as well as against any third party who may have an interest in destroying the commitment. Our proposed scheme is unconditionally secure in general and evades the Mayers and Lo-Chau attacks in particular.
Achieving the Holevo bound via a bisection decoding protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosati, Matteo; Giovannetti, Vittorio
2016-06-15
We present a new decoding protocol to realize transmission of classical information through a quantum channel at asymptotically maximum capacity, achieving the Holevo bound and thus the optimal communication rate. At variance with previous proposals, our scheme recovers the message bit by bit, making use of a series of “yes-no” measurements, organized in bisection fashion, thus determining which codeword was sent in log_2 N steps, N being the number of codewords.
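A classical stand-in for the bisection search the protocol performs (the oracle abstracts the quantum yes-no measurement; this sketch only illustrates the log_2 N query count):

    def bisection_decode(oracle, n_codewords):
        # oracle(S) answers: "is the sent codeword's index in set S?"
        lo, hi = 0, n_codewords - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if oracle(set(range(lo, mid + 1))):
                hi = mid
            else:
                lo = mid + 1
        return lo   # identified after ~log2(n_codewords) queries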
Design of a reversible single precision floating point subtractor.
Anantha Lakshmi, Av; Sudha, Gf
2014-01-04
In recent years, reversible logic has emerged as a major area of research due to its ability to reduce power dissipation, the main requirement in low-power digital circuit design. It has wide applications in low-power CMOS design, nanotechnology, digital signal processing, communication, DNA computing and optical computing. Floating-point operations are needed very frequently in nearly all computing disciplines, and studies have shown floating-point addition/subtraction to be the most used floating-point operation. However, while a few designs exist for efficient reversible BCD subtractors, there has been no work on a reversible floating-point subtractor. In this paper, an efficient reversible single-precision floating-point subtractor is proposed. The design requires reversible implementations of an 8-bit and a 24-bit comparator unit, an 8-bit and a 24-bit subtractor, and a normalization unit. For normalization, a 24-bit reversible leading-zero detector and a 24-bit reversible shift register are implemented to shift the mantissas. To realize a reversible 1-bit comparator, two new 3x3 reversible gates are proposed. The proposed reversible 1-bit comparator is optimized in terms of the number of reversible gates used, the transistor count and the number of garbage outputs. The proposed work is analysed in terms of the number of reversible gates, garbage outputs, constant inputs and quantum cost. Using these modules, an efficient design of a reversible single-precision floating-point subtractor is proposed. The proposed circuits have been simulated using ModelSim and synthesized using a Xilinx Virtex5vlx30tff665-3. The total on-chip power consumed by the proposed 32-bit reversible floating-point subtractor is 0.410 W.
Li, Guangxia; An, Kang; Gao, Bin; Zheng, Gan
2017-01-01
This paper proposes novel satellite-based wireless sensor networks (WSNs), which integrate the WSN with the cognitive satellite terrestrial network. Having the ability to provide seamless network access and alleviate spectrum scarcity, cognitive satellite terrestrial networks are considered a promising candidate for future wireless networks, with emerging requirements for ubiquitous broadband applications and increasing demand for spectral resources. With emerging environmental and energy cost concerns in communication systems, energy-efficient resource allocation in satellite networks has also recently received considerable attention. In this regard, this paper proposes energy-efficient optimal power allocation schemes in cognitive satellite terrestrial networks for non-real-time and real-time applications, respectively, which maximize the energy efficiency (EE) of the cognitive satellite user while keeping the interference at the primary terrestrial user below an acceptable level. Specifically, an average interference power (AIP) constraint is employed to protect the communication quality of the primary terrestrial user, while an average transmit power (ATP) or peak transmit power (PTP) constraint is adopted to regulate the transmit power of the satellite user. Since the energy-efficient power allocation optimization problem is a nonlinear concave fractional programming problem, we solve it by combining Dinkelbach's method with the Lagrange duality method. Simulation results demonstrate that the fading severity of the terrestrial interference link is favorable to the satellite user, who can achieve an EE gain under the ATP constraint compared to the PTP constraint. PMID:28869546
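A generic Dinkelbach loop for the fractional EE objective; the grid of feasible powers stands in for the paper's AIP/ATP/PTP constraint sets, and rate()/power() are assumed callables:

    def dinkelbach(rate, power, p_grid, tol=1e-6, max_iter=50):
        # Maximize rate(p) / power(p) by iterating on the auxiliary
        # parameter q until rate(p*) - q * power(p*) is approximately 0.
        q = 0.0
        p_star = p_grid[0]
        for _ in range(max_iter):
            p_star = max(p_grid, key=lambda p: rate(p) - q * power(p))
            f = rate(p_star) - q * power(p_star)
            if abs(f) < tol:
                break
            q = rate(p_star) / power(p_star)
        return p_star, q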
A QoS Aware Resource Allocation Strategy for 3D A/V Streaming in OFDMA Based Wireless Systems
Chung, Young-uk; Choi, Yong-Hoon; Park, Suwon; Lee, Hyukjoon
2014-01-01
Three-dimensional (3D) video is expected to be a “killer app” for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes. PMID:25250377
Multiple Interacting Risk Factors: On Methods for Allocating Risk Factor Interactions.
Price, Bertram; MacNicoll, Michael
2015-05-01
A persistent problem in health risk analysis where it is known that a disease may occur as a consequence of multiple risk factors with interactions is allocating the total risk of the disease among the individual risk factors. This problem, referred to here as risk apportionment, arises in various venues, including: (i) public health management, (ii) government programs for compensating injured individuals, and (iii) litigation. Two methods have been described in the risk analysis and epidemiology literature for allocating total risk among individual risk factors. One method uses weights to allocate interactions among the individual risk factors. The other method is based on risk accounting axioms and finding an optimal and unique allocation that satisfies the axioms using a procedure borrowed from game theory. Where relative risk or attributable risk is the risk measure, we find that the game-theory-determined allocation is the same as the allocation where risk factor interactions are apportioned to individual risk factors using equal weights. Therefore, the apportionment problem becomes one of selecting a meaningful set of weights for allocating interactions among the individual risk factors. Equal weights and weights proportional to the risks of the individual risk factors are discussed. © 2015 Society for Risk Analysis.
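For the two-factor case, the game-theoretic (Shapley-value) allocation described above gives each factor its individual risk plus an equal share of the interaction term, which is exactly the equal-weights apportionment. A small Python sketch with hypothetical risk numbers makes the equivalence concrete:

    from itertools import permutations

    # excess risk v(S) for each subset S of factors {A, B}; the gap
    # v(AB) - v(A) - v(B) = 0.6 is the interaction term (hypothetical)
    v = {(): 0.0, ('A',): 0.8, ('B',): 0.5, ('A', 'B'): 1.9}

    def shapley(players, v):
        orders = list(permutations(players))
        phi = {p: 0.0 for p in players}
        for order in orders:            # average marginal contributions
            coalition = []
            for p in order:
                before = v[tuple(sorted(coalition))]
                coalition.append(p)
                phi[p] += (v[tuple(sorted(coalition))] - before) / len(orders)
        return phi

    print(shapley(['A', 'B'], v))
    # {'A': 1.1, 'B': 0.8}: each factor's own risk plus half of 0.6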
Two phase sampling for wheat acreage estimation. [Large area crop inventory experiment]
NASA Technical Reports Server (NTRS)
Thomas, R. W.; Hay, C. M.
1977-01-01
A two phase LANDSAT-based sample allocation and wheat proportion estimation method was developed. This technique employs manual, LANDSAT full frame-based wheat or cultivated land proportion estimates from a large number of segments comprising a first sample phase to optimally allocate a smaller phase two sample of computer or manually processed segments. Application to the Kansas Southwest CRD for 1974 produced a wheat acreage estimate for that CRD within 2.42 percent of the USDA SRS-based estimate using a lower CRD inventory budget than for a simulated reference LACIE system. Factor of 2 or greater cost or precision improvements relative to the reference system were obtained.
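A standard way to allocate the phase-two sample in such two-phase designs is Neyman-style allocation, with stratum sample sizes proportional to N_h * S_h. The sketch below is a generic illustration under that assumption; the stratum counts and standard deviations are hypothetical, not the LACIE figures:

    import numpy as np

    def neyman_allocation(N_h, S_h, n):
        """Allocate a phase-two sample of n segments across strata,
        proportional to stratum size N_h times stratum std. dev. S_h."""
        w = np.asarray(N_h, float) * np.asarray(S_h, float)
        return np.rint(n * w / w.sum()).astype(int)

    # strata formed from phase-one wheat-proportion estimates
    print(neyman_allocation(N_h=[120, 80, 40], S_h=[0.30, 0.15, 0.05], n=30))
    # -> [22  7  1]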
Spatiotemporal analysis of prior appropriations water calls
NASA Astrophysics Data System (ADS)
Elbakidze, Levan; Shen, Xiaozhe; Taylor, Garth; Mooney, Siân
2012-06-01
A spatiotemporal model is developed to examine prior appropriations-based water curtailment in Idaho's Snake River Plain Aquifer. Using a 100 year horizon, prior appropriations-based curtailment supplemented with optimized water use reductions is shown to produce a spatial distribution of water use reductions that differs from that produced by regulatory curtailment based strictly on initial water right assignments. Discounted profits over 100 years of crop production are up to 7% higher when allocation is optimized. Total pumping over 100 years is 0.3%, 3%, and 40% higher under 1, 10, and 100 year prior appropriations-based regulatory curtailment, respectively.
Faithful Transfer Arbitrary Pure States with Mixed Resources
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing; Li, Lin; Ma, Song-Ya; Chen, Xiu-Bo; Yang, Yi-Xian
2013-09-01
In this paper, we show that certain special mixed quantum resources exhibit the same properties as pure entanglement, such as the Bell state, for quantum teleportation. It is shown that one mixed state and three bits of classical communication can be used to teleport one unknown qubit, compared with two bits via pure resources. The schemes are easily implemented with modern physical techniques. Moreover, these resources are also optimal and typical for faithfully remotely preparing arbitrary one-qubit, two-qubit and three-qubit states. Our schemes perform the same as those with pure quantum entanglement resources, except for one additional bit of classical communication cost. The success probability is independent of the form of the mixed resources.
Implications of scaling on static RAM bit cell stability and reliability
NASA Astrophysics Data System (ADS)
Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael
1993-01-01
In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on bit cell stability. Smaller memory cells, with less capacitance and restoring current, make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long-term reliability while migrating to higher-density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high-density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. The results are used to provide comprehensive bit cell characterization, which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability, which can be accomplished during the early stages of product development.
Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization
NASA Astrophysics Data System (ADS)
Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan
This paper proposes a novel dynamic Scratch-Pad Memory (SPM) allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP), which avoids a time-consuming linearization process, is implemented to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages that cause severe cache conflicts within a time slot to SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of the system energy profit for different MMU page sizes and time-slot durations. According to our design-space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach can obtain 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.
An analytical optimization model for infrared image enhancement via local context
NASA Astrophysics Data System (ADS)
Xu, Yongjian; Liang, Kun; Xiong, Yiru; Wang, Hui
2017-12-01
The requirement for high-quality infrared images is constantly increasing in both military and civilian areas, and it is always associated with little distortion and appropriate contrast, while infrared images commonly have some shortcomings such as low contrast. In this paper, we propose a novel infrared image histogram enhancement algorithm based on local context. By constraining the enhanced image to have high local contrast, a regularized analytical optimization model is proposed to enhance infrared images. The local contrast is determined by evaluating whether two intensities are neighbors and calculating their differences. The comparison on 8-bit images shows that the proposed method can enhance the infrared images with more details and lower noise.
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
NASA Astrophysics Data System (ADS)
Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.
2017-05-01
In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This objective is secured by adopting a pricing scheme to develop a power allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-CRNs. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(NlogN), which makes the proposed algorithm suitable for more practical applications.
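A common realization of such low-complexity power allocation is water-filling with per-subcarrier caps derived from the interference limit of the primary users; the sort or bisection step dominates the cost, which is where O(N log N) behavior typically comes from. A hedged Python sketch with hypothetical gains and caps, not the paper's exact pricing algorithm:

    import numpy as np

    def capped_waterfilling(gain, p_total, p_cap, iters=100):
        """Water-filling over N subcarriers with per-subcarrier caps
        p_cap (e.g., from primary-user interference limits); bisection
        on the water level mu."""
        gain, p_cap = np.asarray(gain, float), np.asarray(p_cap, float)
        lo, hi = 0.0, 1.0 / gain.min() + p_cap.max() + p_total
        for _ in range(iters):
            mu = 0.5 * (lo + hi)
            p = np.clip(mu - 1.0 / gain, 0.0, p_cap)
            lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
        return p

    g = np.array([2.0, 1.0, 0.5, 0.25])      # subcarrier gains
    caps = np.array([1.0, 1.0, 0.5, 0.5])    # interference-driven caps
    print(capped_waterfilling(g, p_total=2.0, p_cap=caps))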
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasner, Evan; Bearden, Sean; Žutić, Igor, E-mail: zigor@buffalo.edu
Digital operation of lasers with injected spin-polarized carriers provides improved operation over their conventional counterparts with spin-unpolarized carriers. Such spin-lasers can attain much higher bit rates, crucial for optical communication systems. The overall quality of a digital signal in these two types of lasers is compared using eye diagrams and quantified by the improved Q-factors and bit-error rates of spin-lasers. Surprisingly, optimal performance of spin-lasers requires finite, not infinite, spin-relaxation times, giving guidance for the design of future spin-lasers.
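Under the usual Gaussian-noise assumption, the Q-factor read off an eye diagram maps to a bit-error-rate via BER = 0.5 * erfc(Q / sqrt(2)). A one-line check of this generic relation (not specific to the spin-laser model):

    from math import erfc, sqrt

    def ber_from_q(q):
        """Gaussian-noise relation between eye-diagram Q-factor and BER."""
        return 0.5 * erfc(q / sqrt(2.0))

    for q in (3.0, 6.0, 7.0):
        print(f"Q = {q}: BER ~ {ber_from_q(q):.1e}")
    # Q of about 7 corresponds to the often-quoted 1e-12 BER target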
Acetylcholine molecular arrays enable quantum information processing
NASA Astrophysics Data System (ADS)
Tamulis, Arvydas; Majauskaite, Kristina; Talaikis, Martynas; Zborowski, Krzysztof; Kairys, Visvaldas
2017-09-01
We have found self-assembly of four neurotransmitter acetylcholine (ACh) molecular complexes in a water-molecule environment by using geometry optimization with the DFT B97D method. These complexes organize into regular arrays of ACh molecules possessing electronic spins, i.e. quantum information bits. These spin arrays could potentially be controlled by the application of a non-uniform external magnetic field. The proper sequence of resonant electromagnetic pulses would then drive all the spin groups into the 3-spin entangled state and process large-scale quantum information bits.
2005-10-01
...late the difficulty of some basic 1-bit and n-bit quantum and classical operations in a simple unconstrained scenario. KEY WORDS: Time evolution... A quantum circuit and design are presented for an optimized entangling probe attacking the BB84 protocol of quantum key distribution (QKD) and yielding... unambiguous, at least some of the time. It follows that the BB84 (Bennett-Brassard 1984) protocol of quantum key distribution has a vulnerability similar to...
Allocating HIV prevention funds in the United States: recommendations from an optimization model.
Lasry, Arielle; Sansom, Stephanie L; Hicks, Katherine A; Uzunangelov, Vladislav
2012-01-01
The Centers for Disease Control and Prevention (CDC) had an annual budget of approximately $327 million to fund health departments and community-based organizations for core HIV testing and prevention programs domestically between 2001 and 2006. Annual HIV incidence has been relatively stable since the year 2000 and was estimated at 48,600 cases in 2006 and 48,100 in 2009. Using estimates on HIV incidence, prevalence, prevention program costs and benefits, and current spending, we created an HIV resource allocation model that can generate a mathematically optimal allocation of the Division of HIV/AIDS Prevention's extramural budget for HIV testing, and counseling and education programs. The model's data inputs and methods were reviewed by subject matter experts internal and external to the CDC via an extensive validation process. The model projects the HIV epidemic for the United States under different allocation strategies under a fixed budget. Our objective is to support national HIV prevention planning efforts and inform the decision-making process for HIV resource allocation. Model results can be summarized into three main recommendations. First, more funds should be allocated to testing and these should further target men who have sex with men and injecting drug users. Second, counseling and education interventions ought to provide a greater focus on HIV positive persons who are aware of their status. And lastly, interventions should target those at high risk for transmitting or acquiring HIV, rather than lower-risk members of the general population. The main conclusions of the HIV resource allocation model have played a role in the introduction of new programs and provide valuable guidance to target resources and improve the impact of HIV prevention efforts in the United States.
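At its core, such a budget allocation can be phrased as a linear program: maximize infections averted subject to the total budget and per-program caps. A deliberately stylized sketch with made-up effectiveness numbers; the model in the paper is an epidemic projection, not this simple LP:

    from scipy.optimize import linprog

    # hypothetical programs: infections averted per $1M, spending caps ($M)
    averted = [9.0, 7.5, 3.0]   # targeted testing, counseling HIV+, general
    caps = [180, 120, 100]
    budget = 327                # total extramural budget ($M)

    # linprog minimizes, so negate the objective to maximize
    res = linprog(c=[-a for a in averted],
                  A_ub=[[1.0, 1.0, 1.0]], b_ub=[budget],
                  bounds=list(zip([0.0] * 3, caps)))
    print(res.x)                # -> [180. 120. 27.]: fund high-yield first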
Optimal co-allocation of carbon and nitrogen in a forest stand at steady state
Annikki Makela; Harry T. Valentine; Helja-Sisko Helmisaari
2008-01-01
Nitrogen (N) is essential for plant production, but N uptake imposes carbon (C) costs through maintenance respiration and fine-root construction, suggesting that an optimal C:N balance can be found. Previous studies have elaborated this optimum under exponential growth; work on closed canopies has focused on foliage only. Here, the optimal co-allocation of C and N to...
Mathematical modeling of PDC bit drilling process based on a single-cutter mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojtanowicz, A.K.; Kuru, E.
1993-12-01
An analytical development of a new mechanistic drilling model for polycrystalline diamond compact (PDC) bits is presented. The derivation accounts for the static balance of forces acting on a single PDC cutter and is based on an assumed similarity between bit and cutter. The model is fully explicit, with physical meanings given to all constants and functions. Three equations constitute the mathematical model: torque, drilling rate, and bit life. The equations comprise the cutter's geometry, rock properties, drilling parameters, and four empirical constants. The constants are used to match the model to a PDC drilling process. Also presented are qualitative and predictive verifications of the model. Qualitative verification shows that the model's response to drilling process variables is similar to the behavior of full-size PDC bits. However, the accuracy of the model's predictions of PDC bit performance is limited primarily by the imprecision of bit-dull evaluation. The verification study is based upon reported laboratory and field drilling tests as well as field data collected by the authors.
Initial Effects of Heavy Vehicle Trafficking on Vegetated Soils
2012-08-01
ERDC/CRREL TR-12-6, Optimal Allocation of Land for Training and Non-training Uses (OPAL): Initial Effects of Heavy Vehicle Trafficking on Vegetated Soils... the outdoor loam test section... The work was conducted under the Optimal Allocation of Land for Training and Non-Training Uses (OPAL) Program by Nicole Buck and Sally Shoop of the Force...
2014-03-27
Their chromosome representation is a binary string of 13 actions or 39 bits. Plans consist of a limited number of build actions for the creation of... injected via case injection, which resembles case-based reasoning. Expert actions are recorded and then transformed into chromosomes for injection into GAPs... sites supply a finite amount of a resource. For example, a gold mine in AOE will disappear after a player's workers have extracted the finite amount.
NASA Astrophysics Data System (ADS)
Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik
2015-05-01
In this article, an energy-efficiency mechanism for next-generation passive optical networks is investigated through heuristic particle swarm optimization. 10-gigabit Ethernet WDM/optical code division multiplexing next-generation passive optical networks are based on the use of a legacy 10-gigabit Ethernet passive optical network with the advantage of using only one en/decoder pair of optical code division multiplexing technology, thus eliminating the en/decoder at each optical network unit. The proposed joint mechanism is based on the sleep-mode power-saving scheme for a 10-gigabit Ethernet passive optical network, combined with a power control procedure that adjusts the transmitted power of the active optical network units while maximizing the overall network energy efficiency. The particle swarm optimization-based power control algorithm establishes the optimal transmitted power of each optical network unit according to the network's predefined quality-of-service requirements. The objective is to control the power consumption of the optical network unit according to the traffic demand by adjusting its transmitter power in an attempt to maximize the number of transmitted bits with minimum energy consumption, achieving maximal system energy efficiency. Numerical results reveal that it is possible to save 75% of energy consumption with the proposed particle swarm optimization-based sleep-mode energy-efficiency mechanism, compared to 55% energy savings when only a sleep-mode-based mechanism is deployed.
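A bare-bones particle swarm optimizer for the power-control step might look as follows; the link model and the energy-efficiency objective here are toy stand-ins for the network model in the article:

    import numpy as np

    rng = np.random.default_rng(0)

    def energy_efficiency(P):
        """Toy EE objective: bits delivered per joule for each candidate
        power vector (rows = particles, columns = ONUs)."""
        return np.log2(1.0 + 50.0 * P).sum(axis=1) / (P.sum(axis=1) + 0.5)

    n_particles, n_onus, p_max = 20, 8, 1.0
    x = rng.uniform(0.0, p_max, (n_particles, n_onus))   # ONU powers
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), energy_efficiency(x)
    for _ in range(200):
        gbest = pbest[pbest_f.argmax()]                  # swarm-best powers
        v = (0.7 * v + 1.5 * rng.random(x.shape) * (pbest - x)
                     + 1.5 * rng.random(x.shape) * (gbest - x))
        x = np.clip(x + v, 0.0, p_max)
        f = energy_efficiency(x)
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
    print(pbest[pbest_f.argmax()], pbest_f.max())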
I/O-aware bandwidth allocation for petascale computing systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Zhou; Yang, Xu; Zhao, Dongfang
In the Big Data era, the gap between storage performance and an application's I/O requirements is increasing. I/O congestion caused by concurrent storage accesses from multiple applications is inevitable and severely harms performance. Conventional approaches either focus on optimizing an application's access pattern individually or handle I/O requests at a low-level storage layer without any knowledge from the upper-level applications. In this paper, we present a novel I/O-aware bandwidth allocation framework to coordinate ongoing I/O requests on petascale computing systems. The motivation behind this innovation is that the resource management system has a holistic view of both the system state and jobs' activities and can dynamically control the jobs' status or allocate resources on the fly during their execution. We treat a job's I/O requests as periodic subjobs within its lifecycle and transform the I/O congestion issue into a classical scheduling problem. Based on this model, we propose a bandwidth management mechanism as an extension to the existing scheduling system. We design several bandwidth allocation policies with different optimization objectives, either on user-oriented metrics or system performance. We conduct extensive trace-based simulations using real job traces and I/O traces from a production IBM Blue Gene/Q system at Argonne National Laboratory. Experimental results demonstrate that our new design can improve job performance by more than 30%, as well as increasing system performance.
NASA Astrophysics Data System (ADS)
Madani, Kaveh; Hooshyar, Milad
2014-11-01
Reservoir systems with multiple operators can benefit from coordination of operation policies. To maximize the total benefit of these systems the literature has normally used the social planner's approach. Based on this approach operation decisions are optimized using a multi-objective optimization model with a compound system's objective. While the utility of the system can be increased this way, fair allocation of benefits among the operators remains challenging for the social planner who has to assign controversial weights to the system's beneficiaries and their objectives. Cooperative game theory provides an alternative framework for fair and efficient allocation of the incremental benefits of cooperation. To determine the fair and efficient utility shares of the beneficiaries, cooperative game theory solution methods consider the gains of each party in the status quo (non-cooperation) as well as what can be gained through the grand coalition (social planner's solution or full cooperation) and partial coalitions. Nevertheless, estimation of the benefits of different coalitions can be challenging in complex multi-beneficiary systems. Reinforcement learning can be used to address this challenge and determine the gains of the beneficiaries for different levels of cooperation, i.e., non-cooperation, partial cooperation, and full cooperation, providing the essential input for allocation based on cooperative game theory. This paper develops a game theory-reinforcement learning (GT-RL) method for determining the optimal operation policies in multi-operator multi-reservoir systems with respect to fairness and efficiency criteria. As the first step to underline the utility of the GT-RL method in solving complex multi-agent multi-reservoir problems without a need for developing compound objectives and weight assignment, the proposed method is applied to a hypothetical three-agent three-reservoir system.
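The reinforcement-learning ingredient can be as simple as tabular Q-learning run once per coalition to estimate the benefit that coalition can secure, which then populates the characteristic function of the cooperative game. A toy sketch under that reading; the storage/release discretization and reward model are hypothetical, not the paper's case study:

    import numpy as np

    n_states, n_actions = 10, 4          # storage levels x release choices
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1
    rng = np.random.default_rng(1)

    def step(s, a):
        """Toy reservoir: random inflow, reward grows with release."""
        s2 = int(np.clip(s - a + rng.integers(0, 4), 0, n_states - 1))
        return s2, float(a)

    s = 5
    for _ in range(20000):
        a = int(rng.integers(n_actions)) if rng.random() < eps \
            else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
    # Q[s].max() estimates the coalition's attainable long-run benefit
    # from state s, one entry of the game's characteristic function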
Turbodrills and innovative PDC bits economically drilled hard formations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boudreaux, R.C.; Massey, K.
1994-03-28
The use of turbodrills and polycrystalline diamond compact (PDC) bits with an innovative, trackset cutting structure has improved drilling economics in medium and hard formations in the Gulf of Mexico. Field results have confirmed that turbodrilling with trackset PDC bits reduced drilling costs compared to offset wells. The combination of turbodrills and trackset bits has been used successfully in a broad range of applications and with various drilling parameters. Formations ranging from medium shales to hard, abrasive sands have been successfully and economically drilled. The tools have been used in both water-based and oil-based muds. Additionally, the turbodrill and trackset PDC bit combination has been stable in directional drilling applications. The locking effect of the cutting structure helps keep the bit on course.
Resource Allocation Algorithms for the Next Generation Cellular Networks
NASA Astrophysics Data System (ADS)
Amzallag, David; Raz, Danny
This chapter describes recent results addressing resource allocation problems in the context of current and future cellular technologies. We present models that capture several fundamental aspects of planning and operating these networks, and develop new approximation algorithms providing provable good solutions for the corresponding optimization problems. We mainly focus on two families of problems: cell planning and cell selection. Cell planning deals with choosing a network of base stations that can provide the required coverage of the service area with respect to the traffic requirements, available capacities, interference, and the desired QoS. Cell selection is the process of determining the cell(s) that provide service to each mobile station. Optimizing these processes is an important step towards maximizing the utilization of current and future cellular networks.
Modified allocation capacitated planning model in blood supply chain management
NASA Astrophysics Data System (ADS)
Mansur, A.; Vanany, I.; Arvitrida, N. I.
2018-04-01
Blood supply chain management (BSCM) is a complex management process that involves many cooperating stakeholders. BSCM involves four echelon processes: blood collection or procurement, production, inventory, and distribution. This research develops an optimization model for blood distribution planning. The efficiencies of decentralization and centralization policies in a blood distribution chain are compared by optimizing the amount of blood delivered from a blood center to a blood bank. The model is developed based on the allocation problem of a capacitated planning model. In the first stage, the capacity and the cost of transportation are considered to create an initial capacitated planning model. Then, the inventory holding and shortage costs are added to the model. These additional inventory cost parameters make the model more realistic and accurate.
Estimating risk of foreign exchange portfolio: Using VaR and CVaR based on GARCH-EVT-Copula model
NASA Astrophysics Data System (ADS)
Wang, Zong-Run; Chen, Xiao-Hong; Jin, Yan-Bo; Zhou, Yan-Ju
2010-11-01
This paper introduces GARCH-EVT-Copula model and applies it to study the risk of foreign exchange portfolio. Multivariate Copulas, including Gaussian, t and Clayton ones, were used to describe a portfolio risk structure, and to extend the analysis from a bivariate to an n-dimensional asset allocation problem. We apply this methodology to study the returns of a portfolio of four major foreign currencies in China, including USD, EUR, JPY and HKD. Our results suggest that the optimal investment allocations are similar across different Copulas and confidence levels. In addition, we find that the optimal investment concentrates on the USD investment. Generally speaking, t Copula and Clayton Copula better portray the correlation structure of multiple assets than Normal Copula.
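Once the GARCH-EVT-copula machinery has produced simulated portfolio losses, VaR and CVaR reduce to simple tail statistics. A generic empirical sketch; the Student-t scenarios below merely stand in for copula-simulated losses:

    import numpy as np

    def var_cvar(losses, alpha=0.99):
        """Empirical VaR and CVaR (expected shortfall) at level alpha."""
        q = np.quantile(losses, alpha)
        return q, losses[losses >= q].mean()

    rng = np.random.default_rng(0)
    losses = 0.01 * rng.standard_t(df=4, size=10_000)  # scenario losses
    var, cvar = var_cvar(losses)
    print(f"VaR(99%) = {var:.4f}, CVaR(99%) = {cvar:.4f}")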
NASA Astrophysics Data System (ADS)
Joa, Eunhyek; Park, Kwanwoo; Koh, Youngil; Yi, Kyongsu; Kim, Kilsoo
2018-04-01
This paper presents a tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel braking for enhanced performance from moderate driving to limit handling. The proposed algorithm adopts a hierarchical structure: supervisor, desired motion tracking controller, and optimisation-based control allocation. In the supervisor, the desired vehicle motion is calculated by considering transient cornering characteristics. In the desired motion tracking controller, the virtual control input required to track the desired vehicle motion is determined in the manner of sliding mode control. In the control allocation, the virtual control input is allocated to minimise a cost function. The cost function consists of two major parts. The first part is a slip-based quantification of tyre friction utilisation, which does not need tyre force estimation. The second part is an allocation guideline, which guides the optimally allocated inputs towards a predefined solution. The proposed algorithm has been investigated via simulation from moderate driving to limit handling scenarios. Compared to the baseline and a direct yaw moment control system, the proposed algorithm can effectively reduce tyre dissipation energy in moderate driving situations. Moreover, the proposed algorithm enhances limit handling performance compared to the baseline and the direct yaw moment control system. In addition, the proposed algorithm has been compared with a control algorithm based on known tyre force information. The results show that the performance of the proposed algorithm is similar to that of the control algorithm with known tyre force information.
Optimal H1N1 vaccination strategies based on self-interest versus group interest.
Shim, Eunha; Meyers, Lauren Ancel; Galvani, Alison P
2011-02-25
Influenza vaccination is vital for reducing H1N1 infection-mediated morbidity and mortality. To reduce transmission and achieve herd immunity during the initial 2009-2010 pandemic season, the US Centers for Disease Control and Prevention (CDC) recommended that initial priority for H1N1 vaccines be given to individuals under age 25, as these individuals are more likely to spread influenza than older adults. However, due to significant delay in vaccine delivery for the H1N1 influenza pandemic, a large fraction of population was exposed to the H1N1 virus and thereby obtained immunity prior to the wide availability of vaccines. This exposure affects the spread of the disease and needs to be considered when prioritizing vaccine distribution. To determine optimal H1N1 vaccine distributions based on individual self-interest versus population interest, we constructed a game theoretical age-structured model of influenza transmission and considered the impact of delayed vaccination. Our results indicate that if individuals decide to vaccinate according to self-interest, the resulting optimal vaccination strategy would prioritize adults of age 25 to 49 followed by either preschool-age children before the pandemic peak or older adults (age 50-64) at the pandemic peak. In contrast, the vaccine allocation strategy that is optimal for the population as a whole would prioritize individuals of ages 5 to 64 to curb a growing pandemic regardless of the timing of the vaccination program. Our results indicate that for a delayed vaccine distribution, the priorities that are optimal at a population level do not align with those that are optimal according to individual self-interest. Moreover, the discordance between the optimal vaccine distributions based on individual self-interest and those based on population interest is even more pronounced when vaccine availability is delayed. To determine optimal vaccine allocation for pandemic influenza, public health agencies need to consider both the changes in infection risks among age groups and expected patterns of adherence.
Mapping from multiple-control Toffoli circuits to linear nearest neighbor quantum circuits
NASA Astrophysics Data System (ADS)
Cheng, Xueyun; Guan, Zhijin; Ding, Weiping
2018-07-01
In recent years, quantum computing research has attracted more and more attention, but few studies have deeply investigated the limited interaction distance between quantum bits (qubits). This paper presents a mapping method for transforming multiple-control Toffoli (MCT) circuits into linear nearest neighbor (LNN) quantum circuits, instead of traditional decomposition-based methods. In order to reduce the number of inserted SWAP gates, a novel type of gate with an optimal LNN quantum realization is constructed, namely the NNTS gate. An MCT gate with multiple control bits can be cascaded from NNTS gates, in which the arrangement of the input lines is the LNN arrangement of the MCT gate. Then, a communication overhead measurement model for the inserted SWAP gate count from the original arrangement to the new arrangement is put forward, and we select one of the LNN arrangements with the minimum SWAP gate count. Moreover, an LNN arrangement-based mapping algorithm is given, which deals with the MCT gates in turn and maps each MCT gate into its LNN form by inserting the minimum number of SWAP gates. Finally, some simplification rules are used, which can further reduce the final quantum cost of the LNN quantum circuit. Experiments on some benchmark MCT circuits indicate that the direct mapping algorithm results in fewer additional SWAP gates in about 50% of cases, while the average improvement in quantum cost is 16.95% compared to the decomposition-based method. In addition, it has been verified that the proposed method has greater superiority for reversible circuits cascaded from MCT gates with more control bits.
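The communication-overhead model boils down to counting the SWAPs needed to gather a gate's qubits into adjacent positions on the line. A simple cost function of this kind, a generic median-gathering heuristic rather than the paper's NNTS construction, might be:

    def swaps_to_adjacent(layout, gate_qubits):
        """SWAPs needed to make gate_qubits contiguous on the 1-D
        layout (list of logical qubits in physical order), gathering
        them around their median position; one SWAP per step."""
        pos = sorted(layout.index(q) for q in gate_qubits)
        med = pos[len(pos) // 2]
        first = med - len(pos) // 2          # leftmost target slot
        return sum(abs(p - (first + i)) for i, p in enumerate(pos))

    layout = [0, 1, 2, 3, 4, 5]
    print(swaps_to_adjacent(layout, [0, 5]))   # 4 SWAPs to join 0 and 5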
Lee, Preston V; Dinu, Valentin
2015-11-04
Our publication of the BitTorious portal [1] demonstrated the ability to create a privatized distributed data warehouse of sufficient magnitude for real-world bioinformatics studies using minimal changes to the standard BitTorrent tracker protocol. In this second phase, we release a new server-side specification to accept anonymous philanthropic storage donations by the general public, wherein a small portion of each user's local disk may be used for archival of scientific data. We have implemented the server-side announcement and control portions of this BitTorrent extension in v3.0.0 of the BitTorious portal, upon which compatible clients may be built. Automated test cases for the BitTorious Volunteer extensions have been added to the portal's v3.0.0 release, supporting validation of the "peer affinity" concept and announcement protocol introduced by this specification. Additionally, a separate reference implementation of affinity calculation has been provided in C++ for informaticians wishing to integrate it into libtorrent-based projects. The BitTorrent "affinity" extensions as provided in the BitTorious portal reference implementation allow data publishers to crowdsource the extreme storage prerequisites for research in "big data" fields. With sufficient awareness and adoption of BitTorious Volunteer-based clients by the general public, the BitTorious portal may be able to provide peta-scale storage resources to the scientific community at relatively insignificant financial cost.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelength. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
ATCA digital controller hardware for vertical stabilization of plasmas in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, A. J. N.; Sousa, J.; Varandas, C. A. F.
2006-10-15
The efficient vertical stabilization (VS) of plasmas in tokamaks requires a fast reaction of the VS controller, for example, after detection of edge localized modes (ELMs). For controlling the effects of very large ELMs, new digital control hardware, based on the Advanced Telecommunications Computing Architecture (ATCA), is being developed, aiming to reduce the VS digital control loop cycle (down to an optimal value of 10 μs) and improve the algorithm performance. The system has one ATCA processor module and up to 12 ATCA control modules, each one with 32 analog input channels (12-bit resolution), 4 analog output channels (12-bit resolution), and 8 digital input/output channels. The Aurora and PCI Express communication protocols will be used for data transport between modules, with expected latencies below 2 μs. Control algorithms are implemented on an ix86-based processor with 6 Gflops and on field programmable gate arrays with 80 GMACS, interconnected by serial gigabit links in a full mesh topology.
Optimal allocation of industrial PV-storage micro-grid considering important load
NASA Astrophysics Data System (ADS)
He, Shaohua; Ju, Rong; Yang, Yang; Xu, Shuai; Liang, Lei
2018-03-01
At present, industrial PV-storage micro-grids are widely used. This paper presents an optimal allocation model for PV-storage micro-grid capacity that considers the important loads of industrial users. A multi-objective optimization model is established with the local consumption of PV generation and the maximum investment income of the enterprise as the objective functions. Particle swarm optimization (PSO) is used to solve the case of a city in Jiangsu Province, and the results are analyzed economically.
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane or monochrome image, CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.
Analysis of DNA Sequences by An Optical Time-Integrating Correlator: Proof-Of-Concept Experiments.
1992-05-01
Front-matter fragments (table of contents and figure/table captions): 1.0 Introduction; 2.0 DNA Analysis Strategy (2.1 Representation of DNA Bases; 2.2 DNA Analysis Strategy); a Mach-Zehnder architecture figure; Figure 3: short representations of the DNA bases, where each base is represented by a 7-bit-long pseudorandom sequence; Table 2: long representations of the DNA bases with 255-bit maximum...
Compositional Verification with Abstraction, Learning, and SAT Solving
2015-05-01
...arithmetic, and bit-vectors (currently, via bit-blasting). The front-end is based on an existing tool called UFO [8], which converts C programs to the Horn... supports propositional logic, linear arithmetic, and bit-vectors (via bit-blasting). The front-end is based on the tool UFO [8]. It encodes safety of... the tool UFO [8]. The encoding in Horn-SMT only uses the theory of Linear Rational Arithmetic. All experiments were carried out on an Intel® Core™2 Quad...
Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions
Guerrero, Jose; Oliver, Gabriel; Valero, Oscar
2017-01-01
Multi-robot task allocation is one of the main problems to address in order to design a multi-robot system, especially when robots form coalitions that must carry out tasks before a deadline. Many factors affect the performance of these systems and, among them, this paper focuses on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To the best of our knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained by solving an integer linear problem. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and to a new method proposed in this paper. The results obtained show how the new task allocation algorithms reach more than 80% of the median of the optimal solution, outperforming previous auction algorithms with a huge reduction of the execution time. PMID:28118384
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are becoming popular for solving multi-objective engineering design problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, namely a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA algorithm has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas for the original MOBA it is 85%. Finally, the implications for further research are also discussed.
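The defining property of a Golomb ruler, which any candidate produced by the optimizer must satisfy, is that all pairwise differences between marks are distinct. A compact checker:

    from itertools import combinations

    def is_golomb(marks):
        """True iff all pairwise differences between marks are distinct;
        the ruler's length is its largest mark."""
        diffs = [b - a for a, b in combinations(sorted(marks), 2)]
        return len(diffs) == len(set(diffs))

    print(is_golomb([0, 1, 4, 9, 11]))   # True: an optimal 5-mark ruler
    print(is_golomb([0, 1, 2, 5]))       # False: the difference 1 repeats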
Playing Games with Optimal Competitive Scheduling
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen
2005-01-01
This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource.
Deploying a quantum annealing processor to detect tree cover in aerial imagery of California
Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.
2017-01-01
Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regularized quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. PMID:28241028
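The stump-selection step maps naturally to a QUBO: a squared voting error plus a sparsity penalty over binary inclusion variables, whose quadratic couplings become the interactions programmed on the annealer. A toy construction; the matrix form and regularization weight are illustrative, and the paper's truncation/rescaling metaparameter is omitted:

    import numpy as np
    from itertools import product

    def build_qubo(H, y, lam):
        """QUBO for || H w / N - y ||^2 + lam * sum(w), w in {0,1}^n.
        H: (N, n) stump votes in {-1,+1}; y: labels in {-1,+1}.
        Returns symmetric Q with the linear terms on the diagonal."""
        N, n = H.shape
        Q = H.T @ H / N**2                    # quadratic couplings
        Q[np.diag_indices(n)] += -2.0 * (H.T @ y) / N + lam
        return Q

    rng = np.random.default_rng(0)
    H = rng.choice([-1.0, 1.0], size=(100, 6))
    y = rng.choice([-1.0, 1.0], size=100)
    Q = build_qubo(H, y, lam=0.05)
    # brute force stands in for the annealer at this toy size
    w = min(product([0, 1], repeat=6),
            key=lambda b: np.array(b, float) @ Q @ np.array(b, float))
    print(w)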
An interlaboratory study of TEX86 and BIT analysis of sediments, extracts, and standard mixtures
NASA Astrophysics Data System (ADS)
Schouten, Stefan; Hopmans, Ellen C.; Rosell-Melé, Antoni; Pearson, Ann; Adam, Pierre; Bauersachs, Thorsten; Bard, Edouard; Bernasconi, Stefano M.; Bianchi, Thomas S.; Brocks, Jochen J.; Carlson, Laura Truxal; Castañeda, Isla S.; Derenne, Sylvie; Selver, Ayça. Doǧrul; Dutta, Koushik; Eglinton, Timothy; Fosse, Celine; Galy, Valier; Grice, Kliti; Hinrichs, Kai-Uwe; Huang, Yongsong; Huguet, Arnaud; Huguet, Carme; Hurley, Sarah; Ingalls, Anitra; Jia, Guodong; Keely, Brendan; Knappy, Chris; Kondo, Miyuki; Krishnan, Srinath; Lincoln, Sara; Lipp, Julius; Mangelsdorf, Kai; Martínez-García, Alfredo; Ménot, Guillemette; Mets, Anchelique; Mollenhauer, Gesine; Ohkouchi, Naohiko; Ossebaar, Jort; Pagani, Mark; Pancost, Richard D.; Pearson, Emma J.; Peterse, Francien; Reichart, Gert-Jan; Schaeffer, Philippe; Schmitt, Gaby; Schwark, Lorenz; Shah, Sunita R.; Smith, Richard W.; Smittenberg, Rienk H.; Summons, Roger E.; Takano, Yoshinori; Talbot, Helen M.; Taylor, Kyle W. R.; Tarozo, Rafael; Uchida, Masao; van Dongen, Bart E.; Van Mooy, Benjamin A. S.; Wang, Jinxiang; Warren, Courtney; Weijers, Johan W. H.; Werne, Josef P.; Woltering, Martijn; Xie, Shucheng; Yamamoto, Masanobu; Yang, Huan; Zhang, Chuanlun L.; Zhang, Yige; Zhao, Meixun; Damsté, Jaap S. Sinninghe
2013-12-01
Two commonly used proxies based on the distribution of glycerol dialkyl glycerol tetraethers (GDGTs) are the TEX86 (TetraEther indeX of 86 carbon atoms) paleothermometer for sea surface temperature reconstructions and the BIT (Branched Isoprenoid Tetraether) index for reconstructing soil organic matter input to the ocean. An initial round-robin study of two sediment extracts, in which 15 laboratories participated, showed relatively consistent TEX86 values (reproducibility ±3-4°C when translated to temperature) but a large spread in BIT measurements (reproducibility ±0.41 on a scale of 0-1). Here we report results of a second round-robin study with 35 laboratories in which three sediments, one sediment extract, and two mixtures of pure, isolated GDGTs were analyzed. The results for TEX86 and BIT index showed improvement compared to the previous round-robin study. The reproducibility, indicating interlaboratory variation, of TEX86 values ranged from 1.3 to 3.0°C when translated to temperature. These results are similar to those of other temperature proxies used in paleoceanography. Comparison of the results obtained from one of the three sediments showed that TEX86 and BIT indices are not significantly affected by interlaboratory differences in sediment extraction techniques. BIT values of the sediments and extracts were at the extremes of the index with values close to 0 or 1, and showed good reproducibility (ranging from 0.013 to 0.042). However, the measured BIT values for the two GDGT mixtures, with known molar ratios of crenarchaeol and branched GDGTs, had intermediate BIT values and showed poor reproducibility and a large overestimation of the "true" (i.e., molar-based) BIT index. The latter is likely due to, among other factors, the higher mass spectrometric response of branched GDGTs compared to crenarchaeol, which also varies among mass spectrometers. Correction for this different mass spectrometric response showed a considerable improvement in the reproducibility of BIT index measurements among laboratories, as well as a substantially improved estimation of molar-based BIT values. This suggests that standard mixtures should be used in order to obtain consistent, and molar-based, BIT values.
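In code, the BIT index and the response-factor correction discussed above amount to the following; the peak areas are hypothetical, and rf_branched stands for the relative mass-spectrometric response of branched GDGTs to crenarchaeol, which each laboratory would calibrate against a standard mixture:

    def bit_index(a_I, a_II, a_III, a_cren, rf_branched=1.0):
        """BIT from integrated peak areas of branched GDGTs I-III and
        crenarchaeol; dividing the branched areas by their relative
        response converts areas to approximately molar amounts."""
        branched = (a_I + a_II + a_III) / rf_branched
        return branched / (branched + a_cren)

    print(bit_index(120.0, 90.0, 40.0, 500.0))                  # raw: 0.333
    print(bit_index(120.0, 90.0, 40.0, 500.0, rf_branched=2.0)) # corrected: 0.2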
Optimized maritime emergency resource allocation under dynamic demand
Zhang, Wenfen; Yan, Xinping; Yang, Jiaqi
2017-01-01
Emergency resources are important for evacuating people and rescuing property when an accident occurs. Relief efforts can be promoted by a reasonable emergency resource allocation schedule prepared in advance. As the marine environment is complicated and changeful, the place, type, and severity of a maritime accident are uncertain and stochastic, bringing about dynamic demand for emergency resources. Considering dynamic demand, making a reasonable emergency resource allocation schedule is challenging. The key problem is to determine the optimal stock of emergency resources at supplier centers to improve relief efforts. This paper studies the dynamic demand, which is defined as a set. Then a maritime emergency resource allocation model with uncertain data is presented. Afterwards, a robust approach is developed and used to make sure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible for maritime emergency resource allocation. The findings could help emergency managers schedule emergency resource allocation more flexibly in terms of dynamic demand. PMID:29240792
Joint optimization of regional water-power systems
NASA Astrophysics Data System (ADS)
Pereira-Cardenal, Silvio J.; Mo, Birger; Gjelsvik, Anders; Riegels, Niels D.; Arnbjerg-Nielsen, Karsten; Bauer-Gottwein, Peter
2016-06-01
Energy and water resources systems are tightly coupled; energy is needed to deliver water and water is needed to extract or produce energy. Growing pressure on these resources has raised concerns about their long-term management and highlights the need to develop integrated solutions. A method for joint optimization of water and electric power systems was developed in order to identify methodologies to assess the broader interactions between water and energy systems. The proposed method is to include water users and power producers into an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity was sub-optimal. Optimal allocation was achieved by managing reservoirs in very distinct ways, according to the local inflow, storage capacity, hydropower productivity, and irrigation demand and productivity. This highlights the importance of appropriately representing the water users' spatial distribution and marginal benefits and costs when allocating water resources optimally. The method can handle further spatial disaggregation and can be extended to include other aspects of the water-energy nexus.
Theoretical and subjective bit assignments in transform picture coding
NASA Technical Reports Server (NTRS)
Jones, H. W., Jr.
1977-01-01
It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes are significantly different from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
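For concreteness, the well-known rule referred to here (for a Gaussian input under mean-square error) assigns each transform coefficient the average rate plus half the log-ratio of its variance to the geometric mean of all variances. A minimal sketch, with illustrative variances and bit budget (practical coders re-normalize after clipping negative assignments):

```python
import numpy as np

def bit_assignment(variances, total_bits):
    # b_k = B/N + 0.5 * log2(sigma_k^2 / geometric_mean(sigma^2))
    v = np.asarray(variances, dtype=float)
    geo_mean = np.exp(np.mean(np.log(v)))
    bits = total_bits / len(v) + 0.5 * np.log2(v / geo_mean)
    return np.clip(bits, 0.0, None)   # truncate negative assignments to zero

# Four coefficients, 8 bits to spend: yields [3.5, 2.5, 1.5, 0.5]
print(bit_assignment([16.0, 4.0, 1.0, 0.25], 8))
```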
Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He
2013-09-01
In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to explore locally optimal solutions. Simulation results show that the robust transceiver design effectively overcomes the bit-error-rate (BER) loss caused by correlated channel uncertainties and residual self-interference.
Area/latency optimized early output asynchronous full adders and relative-timed ripple carry adders.
Balasubramanian, P; Yamashita, S
2016-01-01
This article presents two area/latency optimized gate level asynchronous full adder designs which correspond to early output logic. The proposed full adders are constructed using the delay-insensitive dual-rail code and adhere to the four-phase return-to-zero handshaking. For an asynchronous ripple carry adder (RCA) constructed using the proposed early output full adders, the relative-timing assumption becomes necessary and the inherent advantages of the relative-timed RCA are: (1) computation with valid inputs, i.e., forward latency is data-dependent, and (2) computation with spacer inputs involves a bare minimum constant reverse latency of just one full adder delay, thus resulting in the optimal cycle time. With respect to different 32-bit RCA implementations, and in comparison with the optimized strong-indication, weak-indication, and early output full adder designs, one of the proposed early output full adders achieves respective reductions in latency by 67.8, 12.3 and 6.1 %, while the other proposed early output full adder achieves corresponding reductions in area by 32.6, 24.6 and 6.9 %, with practically no power penalty. Further, the proposed early output full adders based asynchronous RCAs enable minimum reductions in cycle time by 83.4, 15, and 8.8 % when considering carry-propagation over the entire RCA width of 32-bits, and maximum reductions in cycle time by 97.5, 27.4, and 22.4 % for the consideration of a typical carry chain length of 4 full adder stages, when compared to the least of the cycle time estimates of various strong-indication, weak-indication, and early output asynchronous RCAs of similar size. All the asynchronous full adders and RCAs were realized using standard cells in a semi-custom design fashion based on a 32/28 nm CMOS process technology.
Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations
Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad
2013-01-01
Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
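The ε-optimal parametric iteration mentioned in the abstract is, in essence, a Dinkelbach-type loop: repeatedly solve a subtractive problem max f(p) - λg(p), then update λ to the achieved throughput/power ratio. A toy single-variable sketch, where the rate and power functions and all constants are invented stand-ins for the paper's network model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rate(p):                        # throughput f(p), invented single-link model
    return np.log2(1.0 + 10.0 * p)

def total_power(p, p_circuit=0.1):  # consumed power g(p), invented
    return p + p_circuit

lam, eps, p_star = 0.0, 1e-6, 0.0
for _ in range(50):
    res = minimize_scalar(lambda p: -(rate(p) - lam * total_power(p)),
                          bounds=(0.0, 1.0), method="bounded")
    p_star = res.x
    if rate(p_star) - lam * total_power(p_star) < eps:   # epsilon-optimal stop
        break
    lam = rate(p_star) / total_power(p_star)             # update the ratio parameter

print(f"power {p_star:.4f}, energy efficiency {lam:.4f} bits per unit power")
```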
Design of clinical trials involving multiple hypothesis tests with a common control.
Schou, I Manjula; Marschner, Ian C
2017-07-01
Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm, however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for optimal design of such single-control multiple-comparator studies. We consider variance optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance optimal designs are also discussed, which, like power optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
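One concrete instance of how the criterion shapes the allocation is the classical square-root rule: with equal outcome variances, the A-optimal design for k treatments sharing one control assigns the control arm sqrt(k) times the per-treatment sample size. A minimal sketch under that homoscedasticity assumption (the paper's general heteroscedastic results refine this):

```python
import numpy as np

def a_optimal_fractions(k):
    # Minimizing the summed variances of the k treatment-vs-control
    # contrasts puts sqrt(k) times more subjects on the shared control.
    w_control = np.sqrt(k)
    total = w_control + k
    return w_control / total, 1.0 / total   # control fraction, per-treatment fraction

for k in (2, 3, 4):
    ctrl, trt = a_optimal_fractions(k)
    print(f"k={k}: control {ctrl:.3f}, each treatment {trt:.3f}")
```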
Phylogeny determines flower size-dependent sex allocation at flowering in a hermaphroditic family.
Teixido, A L; Guzmán, B; Staggemeier, V G; Valladares, F
2017-11-01
In animal-pollinated hermaphroditic plants, optimal floral allocation determines relative investment into sexes, which is ultimately dependent on flower size. Larger flowers disproportionally increase maleness whereas smaller and less rewarding flowers favour female function. Although floral traits are considered strongly conserved, phylogenetic relationships in the interspecific patterns of resource allocation to floral sex remain overlooked. We investigated these patterns in Cistaceae, a hermaphroditic family. We reconstructed phylogenetic relationships among Cistaceae species and quantified phylogenetic signal for flower size, dry mass and nutrient allocation to floral structures in 23 Mediterranean species using Blomberg's K-statistic. Lastly, phylogenetically-controlled correlational and regression analyses were applied to examine flower size-based allometry in resource allocation to floral structures. Sepals received the highest dry mass allocation, followed by petals, whereas sexual structures increased nutrient allocation. Flower size and resource allocation to floral structures, except for carpels, showed a strong phylogenetic signal. Larger-flowered species allometrically allocated more resources to maleness, by increasing allocation to corollas and stamens. Our results suggest a major role of phylogeny in determining interspecific changes in flower size and subsequent floral sex allocation. This implies that flower size balances the male-female function over the evolutionary history of Cistaceae. While allometric resource investment in maleness is inherited across species diversification, allocation to the female function seems a labile trait that varies among closely related species that have diversified into different ecological niches. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
NASA Astrophysics Data System (ADS)
Harney, Robert C.
1997-03-01
A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.
NASA Astrophysics Data System (ADS)
Menshikh, V.; Samorokovskiy, A.; Avsentev, O.
2018-03-01
A mathematical model is presented for optimizing the allocation of resources to reduce the time needed for management decisions, together with algorithms for solving the general resource allocation problem. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem, whose solution requires addressing several specific subproblems: estimating the duration of each action as a function of the number of performers within the group that performs it; estimating the total execution time of all actions as a function of the quantitative composition of the groups of performers; and finding a distribution of the available pool of performers among groups that minimizes the total execution time of all actions. In addition, algorithms to solve the general problem of resource allocation are proposed.
Sharing the Wealth: Factors Influencing Resource Allocation in the Sharing Game
ERIC Educational Resources Information Center
Fantino, Edmund; Kennelly, Arthur
2009-01-01
Students chose between two allocation options, one that gave the allocator more and another participant still more (the "optimal" choice) and one which gave the allocator less and the other participant still less (the "competitive" choice). In a within-subjects design, students' behavior patterns were significantly correlated across the two rounds…
Energy Technology Allocation for Distributed Energy Resources: A Technology-Policy Framework
NASA Astrophysics Data System (ADS)
Mallikarjun, Sreekanth
Distributed energy resources (DER) are emerging rapidly. New engineering technologies, materials, and designs improve the performance and extend the range of locations for DER. In contrast, constructing new or modernizing existing high voltage transmission lines for centralized generation is expensive and challenging. In addition, customer demand for reliability has increased and concerns about climate change have created a pull for swift renewable energy penetration. In this context, DER policy makers, developers, and users are interested in determining which energy technologies to use to accommodate different end-use energy demands. We present a two-stage multi-objective strategic technology-policy framework for determining the optimal energy technology allocation for DER. The framework simultaneously considers economic, technical, and environmental objectives. The first stage utilizes a Data Envelopment Analysis model for each end-use to evaluate the performance of each energy technology based on the three objectives. The second stage incorporates factor efficiencies determined in the first stage, capacity limitations, dispatchability, and renewable penetration for each technology, and demand for each end-use into a bottleneck multi-criteria decision model which provides the Pareto-optimal energy resource allocation. We conduct several case studies to understand the roles of various distributed energy technologies in different scenarios. We construct some policy implications based on the model results of the set of case studies.
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
NASA Astrophysics Data System (ADS)
Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu
2017-09-01
In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For general group setup times, a heuristic algorithm and a branch-and-bound algorithm are proposed. Computational experiments show that the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.
Allocating conservation resources between areas where persistence of a species is uncertain.
McDonald-Madden, Eve; Chadès, Iadine; McCarthy, Michael A; Linkie, Matthew; Possingham, Hugh P
2011-04-01
Research on the allocation of resources to manage threatened species typically assumes that the state of the system is completely observable; for example whether a species is present or not. The majority of this research has converged on modeling problems as Markov decision processes (MDP), which give an optimal strategy driven by the current state of the system being managed. However, the presence of threatened species in an area can be uncertain. Typically, resource allocation among multiple conservation areas has been based on the biggest expected benefit (return on investment) but fails to incorporate the risk of imperfect detection. We provide the first decision-making framework for confronting the trade-off between information and return on investment, and we illustrate the approach for populations of the Sumatran tiger (Panthera tigris sumatrae) in Kerinci Seblat National Park. The problem is posed as a partially observable Markov decision process (POMDP), which extends MDP to incorporate incomplete detection and allows decisions based on our confidence in particular states. POMDP has previously been used for making optimal management decisions for a single population of a threatened species. We extend this work by investigating two populations, enabling us to explore the importance of variation in expected return on investment between populations on how we should act. We compare the performance of optimal strategies derived assuming complete (MDP) and incomplete (POMDP) observability. We find that uncertainty about the presence of a species affects how we should act. Further, we show that assuming full knowledge of a species presence will deliver poorer strategic outcomes than if uncertainty about a species status is explicitly considered. MDP solutions perform up to 90% worse than the POMDP for highly cryptic species, and they only converge in performance when we are certain of observing the species during management: an unlikely scenario for many threatened species. This study illustrates an approach to allocating limited resources to threatened species where the conservation status of the species in different areas is uncertain. The results highlight the importance of including partial observability in future models of optimal species management when the species of concern is cryptic in nature.
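The ingredient POMDPs add over MDPs is a belief state updated from imperfect surveys. A two-state sketch (species present or locally extinct) shows the update; all transition and detection probabilities below are invented placeholders, not the tiger-population parameterization:

```python
import numpy as np

T = np.array([[0.95, 0.05],    # P(s' | s): presence tends to persist,
              [0.00, 1.00]])   # absence is absorbing (local extinction)
O = np.array([[0.4, 0.6],      # P(o | s'): detect / not-detect given present
              [0.0, 1.0]])     # never a detection given absent

def update_belief(b, obs):     # obs: 0 = detected, 1 = not detected
    b_pred = T.T @ b                    # predict one step ahead
    b_post = O[:, obs] * b_pred         # weight by observation likelihood
    return b_post / b_post.sum()        # renormalize

b = np.array([0.5, 0.5])
for obs in [1, 1, 0]:                   # two failed surveys, then a sighting
    b = update_belief(b, obs)
    print(f"P(present) = {b[0]:.3f}")
```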
Successive equimarginal approach for optimal design of a pump and treat system
NASA Astrophysics Data System (ADS)
Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.
2007-08-01
An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
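A toy version of the successive equimarginal loop conveys the mechanics: repeatedly move a small increment of pumping from the well with the lowest marginal benefit to the well with the highest, until marginal productivities (nearly) equalize. The concave benefit curves, step size, and tolerance below are assumptions for illustration, not the authors' groundwater model:

```python
import numpy as np

coeff = np.array([1.0, 2.0, 3.0])      # per-well effectiveness weights (invented)

def marginal(q):                       # derivative of coeff * log(1 + q)
    return coeff / (1.0 + q)

q = np.full(3, 10.0)                   # start from equal rates, total = 30
step, tol = 0.01, 1e-3
for _ in range(20000):
    m = marginal(q)
    lo, hi = np.argmin(m), np.argmax(m)
    if m[hi] - m[lo] < tol:            # equimarginal condition reached
        break
    shift = min(step, q[lo])           # keep pumping rates non-negative
    q[lo] -= shift
    q[hi] += shift

print(np.round(q, 2), "marginals:", np.round(marginal(q), 4))
```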
Particle swarm optimization based space debris surveillance network scheduling
NASA Astrophysics Data System (ADS)
Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao
2017-02-01
The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecraft, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
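For reference, the underlying metaheuristic can be written in a few lines. This is a generic textbook PSO minimizing a stand-in objective, not the paper's scheduling formulation; the inertia and acceleration constants are common defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val                # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]        # update global best
    return gbest, pbest_val.min()

best, val = pso(lambda z: np.sum(z**2))            # sphere function as a stand-in
print(f"best cost {val:.2e}")
```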
Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.
Maani, Ehsan; Katsaggelos, Aggelos K
2009-09-01
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
Efficient Computing Budget Allocation for Finding Simplest Good Designs
Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung
2012-01-01
In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than the others. Such designs are called simple designs, and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there exist few studies on finding simplest good designs. We consider this important problem in this paper, and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods to find m simplest good designs and to find the best m such designs, respectively; and show their asymptotic optimalities. Third, we compare the performance of the two methods with equal allocations over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight to finding the simplest good designs in general. PMID:23687404
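For context, the classical OCBA allocation that this line of work builds on distributes replications in closed form: each non-best design gets budget proportional to (sigma_i/delta_i)^2, and the best design gets sigma_b * sqrt(sum N_i^2/sigma_i^2). A sketch with toy statistics:

```python
import numpy as np

means = np.array([1.0, 1.4, 1.8, 2.5])    # sample means, smaller is better (toy)
stds  = np.array([0.8, 0.9, 1.0, 1.1])    # sample standard deviations (toy)

b = np.argmin(means)                      # current best design
delta = means - means[b]                  # optimality gaps to the best
ratio = np.ones_like(means)
others = np.arange(len(means)) != b
ratio[others] = (stds[others] / delta[others]) ** 2    # N_i ~ (sigma_i/delta_i)^2
ratio[b] = stds[b] * np.sqrt(np.sum(ratio[others] ** 2 / stds[others] ** 2))

budget = 1000
alloc = np.round(budget * ratio / ratio.sum()).astype(int)
print(alloc)                              # most replications go to close competitors
```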
NASA Astrophysics Data System (ADS)
Sun, Xiuqiao; Wang, Jian
2018-07-01
Freeway service patrol (FSP) is considered to be an effective method for incident management and can help transportation agency decision-makers alter existing route coverage and fleet allocation. This paper investigates the FSP problem of patrol routing design and fleet allocation, with the objective of minimizing the overall average incident response time. While the simulated annealing (SA) algorithm and its improvements have been applied to solve this problem, they often become trapped in a local optimum. Moreover, the issue of search efficiency remains to be further addressed. In this paper, we employ the genetic algorithm (GA) and SA to solve the FSP problem. To maintain population diversity and avoid premature convergence, a niche strategy is incorporated into the traditional genetic algorithm. We also employ an elitist strategy to speed up convergence. Numerical experiments have been conducted on the Sioux Falls network. Results show that the GA slightly outperforms the dual-based greedy (DBG) algorithm, the very large-scale neighborhood searching (VLNS) algorithm, the SA algorithm and the scenario algorithm.
Tolerance allocation for an electronic system using neural network/Monte Carlo approach
NASA Astrophysics Data System (ADS)
Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque
2001-12-01
The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerances as a key factor for reducing cost as well as remaining competitive. At present, tolerance allocation is applied mainly to mechanical systems. To study tolerances in the electronic domain, the Monte Carlo method is commonly used, but it is computationally time-consuming. This paper reviews several methods (worst-case, statistical method, least-cost allocation by optimization methods) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte Carlo method providing the training data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can be easily extended to a complex system of n components.
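The proposed pipeline, one expensive Monte Carlo pass used as training data for a neural-network surrogate, can be sketched with a stand-in circuit. The voltage divider, sample size, and network shape below are assumptions, not the paper's amplifier model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# One Monte Carlo pass over toleranced component values (assumed spreads).
n = 2000
R1 = rng.normal(1000, 50, n)            # nominal 1 kOhm resistor
R2 = rng.normal(2000, 100, n)           # nominal 2 kOhm resistor
gain = R2 / (R1 + R2)                   # stand-in circuit response

X = np.column_stack([R1, R2])
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                         random_state=0).fit(X, gain)

# The trained network now replaces the slow simulation inside any
# tolerance-optimization loop.
print(surrogate.predict(np.array([[1000.0, 2000.0]])), "vs exact", 2000 / 3000)
```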
Utilization-Based Modeling and Optimization for Cognitive Radio Networks
NASA Astrophysics Data System (ADS)
Liu, Yanbing; Huang, Jun; Liu, Zhangxiong
The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues for the primary (licensed) users and cognitive (unlicensed) users. Based on a Markov process model, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated through calculations that show the rationality of our system model. Furthermore, discussions of different system parameters are presented based on the experimental results.
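The "system state equations" of such a model reduce to solving pi Q = 0 (with pi summing to one) for the stationary distribution of a Markov chain. A sketch with a four-state chain whose states and rates are invented for illustration:

```python
import numpy as np

states = ["idle", "cognitive", "primary", "both-queued"]   # invented state space
Q = np.array([[-1.5,  1.0,  0.5,  0.0],
              [ 2.0, -2.5,  0.0,  0.5],
              [ 3.0,  0.0, -4.0,  1.0],
              [ 0.0,  3.0,  2.0, -5.0]])                   # generator: rows sum to 0

A = np.vstack([Q.T, np.ones(len(states))])   # pi @ Q = 0 plus normalization row
b = np.append(np.zeros(len(states)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solve of the system
print(dict(zip(states, np.round(pi, 4))))
```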
Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems
NASA Astrophysics Data System (ADS)
Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo
2017-07-01
In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their tumid size and interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate the multi-level cache resources to many cores, greatly improving the efficiency of the cache hierarchy and resulting in low energy consumption. The BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the cache allocation bases, so that we can optimally configure the cache hierarchy to meet the runtime demand. The BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by between 5.29% and 27.94% compared with other key approaches, while the performance of the multi-core system even shows a slight improvement after accounting for hardware overhead.
[Organ allocation system: between efficiency and equity].
Antoine, Corinne
2007-02-15
Despite considerable efforts to promote organ donation and increase organ retrieval, demand for grafts is increasing and remains much higher than availability. This shortage affects all organ transplantation, whether heart, lungs, liver or pancreas, but mainly kidneys. The objective of graft allocation and attribution rules is to ensure an allocation that is as fair as possible, to find the best recipient, to take into account the urgency of the need for grafting or the access difficulty for certain patients, and to seek optimal graft usage. These rules are based on the setting up of priority categories for patients whose lives are threatened on a very short-term basis or who have difficult access to transplantation. This raises the issue of seeking a balance between an allocation that is as fair as possible and the technical constraints associated with organ retrieval, transportation and graft quality preservation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hariharan, P.R.; Azar, J.J.
1996-09-01
A good majority of all oilwell drilling occurs in shale and other clay-bearing rocks. In light of the relatively few studies conducted, the problem of bit-balling in PDC bits while drilling shale has been addressed with the primary intention of attempting to quantify the degree of balling, as well as to investigate the influence of bit design and confining pressures. A series of full-scale laboratory drilling tests under simulated downhole conditions were conducted utilizing seven different PDC bits in Catoosa shale. Test results have indicated that the non-dimensional parameter R_d [(bit torque)·(weight-on-bit)/(bit diameter)] is a good indicator of the degree of bit-balling and that it correlates well with specific energy. Furthermore, test results have shown bit profile and bit-hydraulic design to be key parameters of bit design that dictate the tendency of balling in shales under a given set of operating conditions. A bladed bit was noticed to ball less compared to a ribbed or open-faced bit. Likewise, related to bit profile, test results have indicated that the parabolic profile has a lesser tendency to ball compared to round and flat profiles. The tendency of PDC bits to ball was noticed to increase with increasing confining pressures for the set of drilling conditions used.
NASA Astrophysics Data System (ADS)
Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard
2017-07-01
Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user-interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E
2006-03-01
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain ΔG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k) of the best 5% was similar to that for ΔG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on the values of the optimization criteria.
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
Optimized design and analysis of preclinical intervention studies in vivo
Laajala, Teemu D.; Jumppanen, Mikael; Huhtaniemi, Riikka; Fey, Vidal; Kaur, Amanpreet; Knuuttila, Matias; Aho, Eija; Oksala, Riikka; Westermarck, Jukka; Mäkelä, Sari; Poutanen, Matti; Aittokallio, Tero
2016-01-01
Recent reports have called into question the reproducibility, validity and translatability of the preclinical animal studies due to limitations in their experimental design and statistical analysis. To this end, we implemented a matching-based modelling approach for optimal intervention group allocation, randomization and power calculations, which takes full account of the complex animal characteristics at baseline prior to interventions. In prostate cancer xenograft studies, the method effectively normalized the confounding baseline variability, and resulted in animal allocations which were supported by RNA-seq profiling of the individual tumours. The matching information increased the statistical power to detect true treatment effects at smaller sample sizes in two castration-resistant prostate cancer models, thereby leading to saving of both animal lives and research costs. The novel modelling approach and its open-source and web-based software implementations enable the researchers to conduct adequately-powered and fully-blinded preclinical intervention studies, with the aim to accelerate the discovery of new therapeutic interventions. PMID:27480578
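A much-simplified sketch of the matching idea: pair animals with similar baseline tumour burden, then randomize within pairs so the arms start balanced. This illustrates the core step under assumed univariate data; the authors' tool handles richer, multivariate baselines:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)

baseline = rng.lognormal(mean=4.0, sigma=0.5, size=20).reshape(-1, 1)
half = len(baseline) // 2
cost = cdist(baseline[:half], baseline[half:])   # pairwise baseline dissimilarity
rows, cols = linear_sum_assignment(cost)         # optimal one-to-one pairing

treatment, control = [], []
for i, j in zip(rows, cols):
    a, b = i, half + j
    if rng.random() < 0.5:                       # randomize within each matched pair
        a, b = b, a
    treatment.append(a)
    control.append(b)

print("baseline means:", baseline[treatment].mean(), baseline[control].mean())
```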
NASA Astrophysics Data System (ADS)
Girard, Corentin; Rinaudo, Jean-Daniel; Pulido-Velázquez, Manuel
2015-04-01
Adaptation to global change is a key issue in the planning of water resource systems in a changing world. Adaptation has to be efficient, but also equitable in how the costs of joint adaptation are shared at the river basin scale. Least-cost hydro-economic optimization models have been helpful in defining efficient adaptation strategies. However, they often rely on the assumption of "perfect cooperation" among the stakeholders, required for reaching the optimal solution. Nowadays, most adaptation decisions have to be agreed among the different actors in charge of their implementation, thus challenging the validity of a perfect command-and-control solution. As a first attempt to overcome this limitation, our work presents a method to allocate the cost of an efficient adaptation programme of measures among the different stakeholders at the river basin scale. Principles of equity are used to define cost allocation scenarios from different perspectives, combining elements from cooperative game theory and axioms from social justice to bring some "food for thought" to the decision-making process of adaptation. To illustrate the type of interactions between stakeholders in a river basin, the method has been applied to a French case study, the Orb river basin. Located on the northern rim of the Mediterranean Sea, this river basin is experiencing changes in demand patterns, and its water resources will be impacted by climate change, calling for the design of an adaptation plan. A least-cost river basin optimization model (LCRBOM) has been developed under GAMS to select the combination of demand- and supply-side adaptation measures that allows meeting quantitative water management targets at the river basin scale in a global change context. The optimal adaptation plan encompasses measures in both the agricultural and urban sectors, upstream and downstream of the basin, disregarding the individual interests of the stakeholders. In order to ensure equity in the cost allocation of the adaptation plan, different allocation scenarios are considered. The LCRBOM allows defining a solution space based on economic rationality concepts from cooperative game theory (the core of the game), and then defining equitable allocations of the cost of the programme of measures (the Shapley value and the nucleolus). Moreover, alternative allocation scenarios have been considered based on axiomatic principles of social justice, such as "utilitarian", "prior rights" or "strict equality", applied in the case study area. The comparison of the cost allocation scenarios brings insight to inform the decision-making process at the river basin scale and potentially reap the efficiency gains from cooperation in the design of the adaptation plan. The study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) from the Spanish ministry MINECO (Ministerio de Economía y Competitividad) and European FEDER funds. Corentin Girard is supported by a grant from the University Lecturer Training Program (FPU12/03803) of the Ministry of Education, Culture and Sports of Spain.
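The cooperative-game tools named here are standard; for example, the Shapley value of a small cost-sharing game is the average of each actor's marginal cost over all join orders. The three actors and coalition costs below are invented numbers, not Orb basin figures:

```python
from itertools import permutations

cost = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 8, frozenset("C"): 6,
        frozenset("AB"): 15, frozenset("AC"): 13, frozenset("BC"): 11,
        frozenset("ABC"): 18}                   # invented coalition costs

players = "ABC"
shapley = dict.fromkeys(players, 0.0)
orders = list(permutations(players))
for order in orders:                            # average marginal cost over join orders
    coalition = set()
    for p in order:
        before = cost[frozenset(coalition)]
        coalition.add(p)
        shapley[p] += (cost[frozenset(coalition)] - before) / len(orders)

print(shapley)                                  # shares sum to the grand-coalition cost
```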
NASA Astrophysics Data System (ADS)
Jahangoshai Rezaee, Mustafa; Yousefi, Samuel; Hayati, Jamileh
2017-06-01
Supplier selection and allocation of optimal order quantities are two of the most important processes in closed-loop supply chains (CLSC) and reverse logistics (RL). Providing high-quality raw material is considered a basic requirement for a manufacturer to produce popular products and achieve greater market share. On the other hand, given the competitive environment, suppliers have to offer customers incentives such as discounts and enhance the quality of their products in competition with other manufacturers. Therefore, in this study, a model is presented for CLSC optimization, efficient supplier selection, and order allocation considering a quantity discount policy. It is modeled using multi-objective programming based on an integrated simultaneous data envelopment analysis-Nash bargaining game. Maximizing profit and efficiency and minimizing the defective rate and delivery delay rate functions are taken into account. Besides supplier selection, the suggested model selects refurbishing sites and determines the number of products and parts in each of the network's sectors. The suggested model is solved using the global criteria method. Furthermore, based on related studies, a numerical example is examined to validate it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karve, Abhijit A.; Alexoff, David; Kim, Dohyun
2015-11-09
Although important aspects of whole-plant carbon allocation in crop plants (e.g., to grain) occur late in development when the plants are large, techniques to study carbon transport and allocation processes have not been adapted for large plants. Positron emission tomography (PET), developed for dynamic imaging in medicine, has been applied in plant studies to measure the transport and allocation patterns of carbohydrates, nutrients, and phytohormones labeled with positron-emitting radioisotopes. However, the cost of PET and its limitation to smaller plants has restricted its use in plant biology. Here we describe the adaptation and optimization of a commercial clinical PET scanner to measure transport dynamics and allocation patterns of 11C-photoassimilates in large crops. Based on measurements of a phantom, we optimized instrument settings, including use of 3-D mode and attenuation correction, to maximize the accuracy of measurements. To demonstrate the utility of PET, we measured 11C-photoassimilate transport and allocation in Sorghum bicolor, an important staple crop, at vegetative and reproductive stages (40 and 70 days after planting; DAP). The 11C-photoassimilate transport speed did not change over the two developmental stages. However, within a stem, transport speeds were reduced across nodes, likely due to higher 11C-photoassimilate unloading in the nodes. Photosynthesis in leaves and the amount of 11C that was exported to the rest of the plant decreased as plants matured. In young plants, exported 11C was allocated mostly (88%) to the roots and stem, but in flowering plants (70 DAP) the majority of the exported 11C (64%) was allocated to the apex. Our results show that commercial PET scanners can be used reliably to measure whole-plant C-allocation in large plants nondestructively including, importantly, allocation to roots in soil. This capability revealed extreme changes in carbon allocation in sorghum plants as they advanced to maturity. Further, our results suggest that nodes may be important control points for photoassimilate distribution in crops of the family Poaceae. In conclusion, quantifying real-time carbon allocation and photoassimilate transport dynamics, as demonstrated here, will be important for functional genomic studies to unravel the mechanisms controlling phloem transport in large crop plants, which will provide crucial insights for improving yields.
Liu, Yaolin; Peng, Jinjin; Jiao, Limin; Liu, Yanfang
2016-01-01
Optimizing land-use allocation is important to regional sustainable development, as it promotes the social equality of public services, increases the economic benefits of land-use activities, and reduces the ecological risk of land-use planning. Most land-use optimization models allocate land-use using cell-level operations that fragment land-use patches. These models do not cooperate well with land-use planning knowledge, leading to irrational land-use patterns. This study focuses on building a heuristic land-use allocation model (PSOLA) using particle swarm optimization. The model allocates land-use with patch-level operations to avoid fragmentation. The patch-level operations include a patch-edge operator, a patch-size operator, and a patch-compactness operator that constrain the size and shape of land-use patches. The model is also integrated with knowledge-informed rules to provide auxiliary knowledge of land-use planning during optimization. The knowledge-informed rules consist of suitability, accessibility, land use policy, and stakeholders’ preference. To validate the PSOLA model, a case study was performed in Gaoqiao Town in Zhejiang Province, China. The results demonstrate that the PSOLA model outperforms a basic PSO (Particle Swarm Optimization) in the terms of the social, economic, ecological, and overall benefits by 3.60%, 7.10%, 1.53% and 4.06%, respectively, which confirms the effectiveness of our improvements. Furthermore, the model has an open architecture, enabling its extension as a generic tool to support decision making in land-use planning. PMID:27322619
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macevicz, S.C.
1979-05-09
This thesis attempts to explain the evolution of certain features of social insect colony population structure by the use of optimization models. Two areas are examined in detail. First, the optimal reproductive strategies of annual eusocial insects are considered. A model is constructed for the growth of workers and reproductives as a function of the resources allocated to each. Next the allocation schedule is computed which yields the maximum number of reproductives by season's end. The results indicate that if there is constant return to scale for allocated resources the optimal strategy is to invest in colony growth until approximately one generation before season's end, whereupon worker production ceases and reproductive effort is switched entirely to producing queens and males. Furthermore, the results indicate that if there is decreasing return to scale for allocated resources then simultaneous production of workers and reproductives is possible. The model is used to explain the colony demography of two species of wasp, Polistes fuscatus and Vespa orientalis. Colonies of these insects undergo a sudden switch from the production of workers to the production of reproductives. The second area examined concerns optimal forager size distributions for monomorphic ant colonies. A model is constructed that describes the colony's energetic profit as a function which depends on the size distribution of food resources as well as forager efficiency, metabolic costs, and manufacturing costs.
On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit
2012-12-01
This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
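The per-BS step that an iterative water-filling algorithm repeats is the classic single-constraint water-filling, solvable by bisection on the water level. The eigen-channel gains and power budget below are illustrative:

```python
import numpy as np

def waterfill(g, p_total, tol=1e-9):
    lo, hi = 0.0, p_total + 1.0 / g.min()       # bracket the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() < p_total:
            lo = mu                             # too little power poured: raise level
        else:
            hi = mu
    return np.maximum(0.0, mu - 1.0 / g)        # p_k = max(0, mu - 1/g_k)

g = np.array([2.0, 1.0, 0.5, 0.1])              # eigen-channel gains (illustrative)
p = waterfill(g, p_total=4.0)
print(np.round(p, 3), "total =", round(p.sum(), 3))   # [2, 1.5, 0.5, 0]
```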
Optimal fire histories for biodiversity conservation.
Kelly, Luke T; Bennett, Andrew F; Clarke, Michael F; McCarthy, Michael A
2015-04-01
Fire is used as a management tool for biodiversity conservation worldwide. A common objective is to avoid population extinctions due to inappropriate fire regimes. However, in many ecosystems, it is unclear what mix of fire histories will achieve this goal. We determined the optimal fire history of a given area for biological conservation with a method that links tools from 3 fields of research: species distribution modeling, composite indices of biodiversity, and decision science. We based our case study on extensive field surveys of birds, reptiles, and mammals in fire-prone semi-arid Australia. First, we developed statistical models of species' responses to fire history. Second, we determined the optimal allocation of successional states in a given area, based on the geometric mean of species relative abundance. Finally, we showed how conservation targets based on this index can be incorporated into a decision-making framework for fire management. Pyrodiversity per se did not necessarily promote vertebrate biodiversity. Maximizing pyrodiversity by having an even allocation of successional states did not maximize the geometric mean abundance of bird species. Older vegetation was disproportionately important for the conservation of birds, reptiles, and small mammals. Because our method defines fire management objectives based on the habitat requirements of multiple species in the community, it could be used widely to maximize biodiversity in fire-prone ecosystems. © 2014 Society for Conservation Biology.
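The optimization at the heart of the method, choosing the mix of successional states (summing to one) that maximizes the geometric mean of expected abundances, can be miniaturized as follows; the species-response matrix is invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

R = np.array([[0.9, 0.4, 0.1],    # early-successional specialist (invented)
              [0.3, 0.8, 0.5],    # mid-successional species
              [0.1, 0.3, 1.0],    # old-growth specialist
              [0.2, 0.2, 0.9]])   # another late-successional species

def neg_log_gmean(x):             # maximizing the geometric mean of abundances
    return -np.mean(np.log(R @ x + 1e-12))

res = minimize(neg_log_gmean, x0=np.full(3, 1/3), bounds=[(0, 1)] * 3,
               constraints=({"type": "eq", "fun": lambda x: x.sum() - 1.0},),
               method="SLSQP")
print(np.round(res.x, 3))         # optimal share of each successional state
```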
Channel Allocation in Wireless Integrated Services Networks for Low-Bit-Rate Applications.
1998-06-01
server remains idle until the beginning of the next slot, even if cells arrive in the meanwhile. The server is assumed to be non-preemptive, i.e., ... If the ToE of the cell is smaller than 1/C (the service time): 1) discard the cell; 2) sort the remaining cells in the queue in a non-decreasing ... Next, the cell-loss-probability ratios (CLPR) of non-empty sources (i.e., having at least one cell in the queue), defined as ratios between the
Graded bit patterned magnetic arrays fabricated via angled low-energy He ion irradiation.
Chang, L V; Nasruallah, A; Ruchhoeft, P; Khizroev, S; Litvinov, D
2012-07-11
A bit patterned magnetic array based on Co/Pd magnetic multilayers with a binary perpendicular magnetic anisotropy distribution was fabricated. The binary anisotropy distribution was attained through angled helium ion irradiation of a bit edge using hydrogen silsesquioxane (HSQ) resist as an ion stopping layer to protect the rest of the bit. The viability of this technique was explored numerically and evaluated through magnetic measurements of the prepared bit patterned magnetic array. The resulting graded bit patterned magnetic array showed a 35% reduction in coercivity and a 9% narrowing of the standard deviation of the switching field.
Design and implementation of the next generation Landsat satellite communications system
Mah, Grant R.; O'Brien, Michael; Garon, Howard; Mott, Claire; Ames, Alan; Dearth, Ken
2012-01-01
The next generation Landsat satellite, Landsat 8 (L8), also known as the Landsat Data Continuity Mission (LDCM), uses a highly spectrally efficient modulation and data formatting approach to provide large amounts of downlink (D/L) bandwidth in a limited X-Band spectrum allocation. In addition to pure data-throughput and bandwidth considerations, there were a number of additional operational constraints: prevention of interference with the NASA Deep-Space Network (DSN) band just above the L8 D/L band, minimization of jitter contributions to prevent impacts to instrument performance, and the need to provide an interface to the Landsat International Cooperator (IC) community. A series of trade studies considering X- versus Ka-Band, modulation type, and antenna coverage type were conducted prior to the release of the request for proposal (RFP) for the spacecraft. Through use of spectrally efficient rate-7/8 Low-Density Parity-Check error-correction coding and novel filtering, an X-Band frequency plan was developed that balances all the constraints and considerations while providing world-class link performance, fitting 384 Mbit/s of data into the 375 MHz X-Band allocation with bit-error rates better than 10⁻¹² using an earth-coverage antenna.
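A quick back-of-the-envelope check (not from the paper) shows how the quoted rate, the 375 MHz allocation, and the rate-7/8 code relate:

```python
# Back-of-the-envelope check of the quoted L8 downlink numbers.
data_rate_mbps = 384.0      # information rate fitted into the allocation
bandwidth_mhz = 375.0       # X-Band spectrum allocation
code_rate = 7.0 / 8.0       # rate-7/8 LDPC error-correction coding

spectral_eff = data_rate_mbps / bandwidth_mhz   # ~1.02 bit/s/Hz net
channel_rate = data_rate_mbps / code_rate       # ~439 Mbit/s coded on channel
print(f"{spectral_eff:.2f} bit/s/Hz, {channel_rate:.0f} Mbit/s coded")
```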
Acquisition and Retaining Granular Samples via a Rotating Coring Bit
NASA Technical Reports Server (NTRS)
Bar-Cohen, Yoseph; Badescu, Mircea; Sherrit, Stewart
2013-01-01
This device takes advantage of the centrifugal forces generated when a coring bit is rotated: a granular sample that enters the bit while it is spinning adheres to, and compacts itself against, the internal wall of the bit. The bit can be specially designed to increase the effectiveness of regolith capture while turning and penetrating the subsurface. The bit teeth can be oriented such that they direct the regolith toward the bit axis during rotation. The bit can be designed with an internal flute that directs the regolith upward inside the bit, and the teeth and flute can be combined in the same bit. The bit can also be designed with an internal spiral into which the various particles wedge. In another implementation, the bit can be designed to collect regolith primarily from a specific depth. For that implementation, the bit can be designed such that when turning one way the teeth guide the regolith outward of the bit, and when turning in the opposite direction the teeth guide the regolith inward into the bit's internal section. This mechanism can be implemented with or without an internal flute. The device is based on the use of a spinning coring bit (hollow interior) as a means of retaining a granular sample; acquisition is done by inserting the bit into the subsurface of a regolith, soil, or powder. To demonstrate the concept, a commercial drill and a coring bit were used. The bit was turned and inserted into soil contained in a bucket. While spinning the bit (at speeds of 600 to 700 RPM), the drill was lifted and the soil was retained inside the bit; even with the drill turned horizontally, the acquired soil remained inside the bit. The basic theory behind retaining an unconsolidated mass by the centrifugal forces of the bit follows from noting that, for the sample to stay inside the bit, the frictional force must be greater than the weight of the sample (this condition is written out in the sketch below). The bit can be designed with an internal sleeve to serve as a container for granular samples. This tube-shaped component can be extracted upon completion of the sampling, and the bottom can be capped by placing the bit onto a cork-like component. Then, upon removal of the internal tube, the top section can be sealed. The novel features of this device are: • a mechanism for acquiring and retaining granular samples using a coring bit without a closed door; • an acquisition bit with internal structure, such as a waffle pattern for compartmentalizing or a helical internal flute, to propel the sample inside the bit and help acquire and retain granular samples; • a bit with an internal spiral into which the various particles wedge; • a design that provides a method of testing the frictional properties of the granular samples and potentially segregating particles based on size and density, where a controlled acceleration or deceleration may be used to drop the least-frictional particles or to eventually shear the unconsolidated material near the bit center.
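The retention condition stated in words above can be written as a simple inequality, assuming Coulomb friction with coefficient μ against a bit wall at radius r; this is a sketch, not the paper's full analysis:

```latex
% Retention condition for a granular sample held by friction against
% the inner wall of a bit spinning at angular rate \omega (sketch,
% assuming Coulomb friction with coefficient \mu at wall radius r):
% the centrifugal normal force is N = m \omega^2 r, and the friction
% \mu N must exceed the sample weight m g:
\mu\, m\, \omega^{2} r \ge m g
\quad\Longrightarrow\quad
\omega \ge \sqrt{\frac{g}{\mu r}}
% At 600--700 RPM (\omega \approx 63--73 rad/s), a wall radius of
% about 1 cm requires \mu \gtrsim 0.2 for the sample to be retained.
```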
Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation
NASA Astrophysics Data System (ADS)
Huang, Aiping; Tao, Linwei; Niu, Yilong
2018-04-01
In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining, and selection combining. A novel adaptive power allocation algorithm (PAA) is proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of the derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA achieves even better BER performance than the MIMO one, while effectively reducing receiver complexity.
NASA Astrophysics Data System (ADS)
Damgård, Ivan; Keller, Marcel
We propose several variants of a secure multiparty computation protocol for AES encryption. The best variant requires 2200 + 400/255 expected elementary operations in expected 70 + 20/255 rounds to encrypt one 128-bit block with a 128-bit key. We implemented the variants using VIFF, a software framework for implementing secure multiparty computation (MPC). Tests with three players (passive security against at most one corrupted player) in a local network showed that one block can be encrypted in 2 seconds. We also argue that this result could be improved by an optimized implementation.
Using a cVEP-Based Brain-Computer Interface to Control a Virtual Agent.
Riechmann, Hannes; Finke, Andrea; Ritter, Helge
2016-06-01
Brain-computer interfaces provide a means for controlling a device by brain activity alone. One major drawback of noninvasive BCIs is their low information transfer rate, obstructing a wider deployment outside the lab. BCIs based on codebook visually evoked potentials (cVEP) outperform all other state-of-the-art systems in that regard. Previous work investigated cVEPs for spelling applications. We present the first cVEP-based BCI for use in real-world settings to accomplish everyday tasks such as navigation or action selection. To this end, we developed and evaluated a cVEP-based on-line BCI that controls a virtual agent in a simulated, but realistic, 3-D kitchen scenario. We show that cVEPs can be reliably triggered with stimuli in less restricted presentation schemes, such as on dynamic, changing backgrounds. We introduce a novel, dynamic repetition algorithm that allows for optimizing the balance between accuracy and speed individually for each user. Using these novel mechanisms in a 12-command cVEP-BCI in the 3-D simulation results in ITRs of 50 bits/min on average and 68 bits/min maximum. Thus, this work supports the notion of cVEP-BCIs as a particularly fast and robust approach suitable for real-world use.
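The reported bits/min figures can be related to command-set size, accuracy, and selection rate through the standard Wolpaw ITR formula. In the sketch below, the 90% accuracy and 20 selections/min are illustrative stand-ins, not values from the study.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Information transfer rate (bits/min) via the Wolpaw formula."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Illustrative: a 12-command BCI at 90% accuracy and 20 selections/min.
print(f"{wolpaw_itr(12, 0.90, 20):.1f} bits/min")
```

At these illustrative settings the formula gives roughly 55 bits/min, in the range reported above.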
NASA Astrophysics Data System (ADS)
Bourdine, Anton V.; Zhukov, Alexander E.
2017-04-01
High bit rate laser-based data transmission over silica optical fibers with enlarged core diameter, in comparison with standard singlemode fibers, has found a variety of infocommunication applications. Since the IEEE 802.3z standard was ratified in 1998, this technique has been widely used for short-range in-premises multi-gigabit networks based on new-generation laser-optimized 50/125 multimode fibers of Cat. OM2…OM4. Nowadays it is also in demand for on-board cable systems and industrial network applications requiring bit rates of 1 Gbps and more over fibers with core diameters enlarged up to 100 μm. This work presents an alternative method for designing special refractive-index profiles of silica few-mode fibers with extremely enlarged core diameters that enhance modal bandwidth under a few-mode regime of laser-based optical data transmission. Results are presented for refractive-index profile synthesis of few-mode fibers with reduced differential mode delay in the central region of the "O" band, together with computed differential-mode-delay spectral curves for 50/125 and 100/125 fibers intended for in-premises and on-board/industrial cable systems.
Multi-strategy based quantum cost reduction of linear nearest-neighbor quantum circuit
NASA Astrophysics Data System (ADS)
Tan, Ying-ying; Cheng, Xue-yun; Guan, Zhi-jin; Liu, Yang; Ma, Haiying
2018-03-01
With the development of reversible and quantum computing, the study of reversible and quantum circuits has also developed rapidly. Due to physical constraints, most quantum circuits require quantum gates to interact only on adjacent quantum bits. However, many existing nearest-neighbor quantum circuits have a large quantum cost. Therefore, how to effectively reduce quantum cost has become a popular research topic. In this paper, we propose multiple optimization strategies to reduce the quantum cost of a circuit: quantum cost is reduced through MCT gate decomposition, nearest-neighbor transformation, and circuit simplification, respectively. The experimental results show that the proposed strategies can effectively reduce the quantum cost, with a maximum optimization rate of 30.61% compared to the corresponding previous results.
The Quanta Image Sensor: Every Photon Counts
Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel
2016-01-01
The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
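A minimal sketch of forming one output pixel from a cubicle of single-bit jot data might look as follows; the 4 × 4 × 16 cubicle, jot density, and array sizes are illustrative assumptions, not QIS design values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack of single-bit planes: (time, y, x), each element
# is 1 if a jot detected at least one photoelectron in that field.
bit_planes = rng.random((16, 256, 256)) < 0.1   # ~10% jot "hit" density

def cubicle_image(planes, kx=4, ky=4, kt=16):
    """Sum bits over a (kt, ky, kx) cubicle to form one output pixel;
    the cubicle size can be chosen post-acquisition."""
    t, h, w = planes.shape
    p = planes[:kt, :h - h % ky, :w - w % kx].astype(np.uint16)
    p = p.reshape(kt, h // ky, ky, w // kx, kx)
    return p.sum(axis=(0, 2, 4))    # pixel value: photon-count proxy

img = cubicle_image(bit_planes)     # 64x64 image, values in [0, 256]
print(img.shape, img.max())
```

Re-running `cubicle_image` with different kernel sizes on the same bit planes is the post-acquisition quality/resolution trade-off described above.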
The past, present and future of HIV, AIDS and resource allocation
2009-01-01
Background: How should HIV and AIDS resources be allocated to achieve the greatest possible impact? This paper begins with a theoretical discussion of this issue, describing the key elements of an "evidence-based allocation strategy". While the quality of epidemiological and economic data remains inadequate to define such an optimal strategy, there do exist tools and research that can guide countries in making allocation decisions. Furthermore, there are clear indications that most countries are not allocating their HIV and AIDS resources in a way that is likely to achieve the greatest possible impact. For example, neighboring countries, even when they have a similar prevalence of HIV, often allocate their resources in radically different ways. These differing allocation patterns appear to be attributable to a number of issues, including a lack of data, contradictory results in existing data, an overemphasis on a multisectoral response, a lack of political will, general inefficiency in the use of resources once they are allocated, poor planning, and a lack of control over the way resources get allocated. Methods: A number of tools are currently available that can improve the resource-allocation process. Tools such as the Resource Needs Model (RNM) can provide policymakers with a clearer idea of resource requirements, whereas other tools such as Goals and the Allocation by Cost-Effectiveness (ABCE) models can provide countries with a clearer vision of how they might reallocate funds. Results: Examples from nine different countries show how policymakers are trying to make their resource-allocation strategies more "evidence based". By identifying the challenges and successes of these nine countries in making more informed allocation decisions, it is hoped that future resource-allocation decisions for all countries can be improved. Conclusion: We discuss the future of resource allocation, noting the types of additional data that will be required and the improvements that could be made to existing tools. PMID:19922688
Learning to assign binary weights to binary descriptor
NASA Astrophysics Data System (ADS)
Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun
2016-10-01
Constructing robust binary local feature descriptors is receiving increasing interest because their binary nature enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit may contribute differently to distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is computed using an efficient alternating greedy strategy, which significantly improves the discriminative power while preserving the fast-matching advantage (see the sketch below). Extensive experimental results on two challenging datasets (the Brown dataset and the Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.
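Matching with per-bit weights reduces to a weighted Hamming distance. The sketch below is illustrative only: the median-threshold binarization is a crude stand-in, not the paper's alternating greedy approximation.

```python
import numpy as np

def weighted_hamming(d1, d2, w):
    """Weighted Hamming distance between two binary descriptors:
    bits where the descriptors differ contribute their weight."""
    return np.sum(w * (d1 != d2))

rng = np.random.default_rng(1)
d1 = rng.integers(0, 2, 256)          # 256-bit binary descriptors
d2 = rng.integers(0, 2, 256)

float_w = rng.random(256)             # learned float weights (stand-in)
binary_w = (float_w > np.median(float_w)).astype(int)  # crude 1-bit approx.

print(weighted_hamming(d1, d2, float_w))   # float-weighted distance
print(weighted_hamming(d1, d2, binary_w))  # binary weights: can be computed
                                           # as mask-then-XOR-then-popcount
```

The appeal of binary weights is the last line: masking followed by XOR and popcount keeps matching at the usual Hamming-distance speed.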
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
NASA Astrophysics Data System (ADS)
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit-energy distribution, this approach gives accurate results at low computational cost compared with other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit-energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
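The general flavor of averaging a Gaussian-noise BER over a bit-energy distribution can be sketched as follows; the logistic-map energies and single-user AWGN setting are simplifying assumptions, not the paper's multiuser multipath derivation.

```python
import numpy as np
from scipy.stats import norm

def logistic_map_energies(n_bits, spreading=64, seed=7):
    """Per-bit energies of a chaotic spreading sequence (logistic map);
    with chaotic chips, the bit energy is itself a random variable."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99)
    chips = []
    for _ in range(n_bits * spreading):
        x = 4.0 * x * (1.0 - x)        # logistic map on (0, 1)
        chips.append(2.0 * x - 1.0)    # center chips around zero
    chips = np.array(chips).reshape(n_bits, spreading)
    return (chips ** 2).sum(axis=1)

# Average BER = E[ Q( sqrt(2 * E_b / N0) ) ] over the energy distribution,
# where Q is the Gaussian tail function (scipy's norm.sf).
N0 = 20.0
Eb = logistic_map_energies(10000)
ber = norm.sf(np.sqrt(2.0 * Eb / N0)).mean()
print(f"average BER ~ {ber:.2e}")
```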
NASA Astrophysics Data System (ADS)
Li, Xinying; Xiao, Jiangnan
2015-06-01
We propose a novel scheme for optical frequency-locked multi-carrier generation based on one electro-absorption modulated laser (EML) and one phase modulator (PM) in cascade, driven by different sinusoidal radio-frequency (RF) clocks. The optimal operating zone for the cascaded EML and PM is identified through theoretical analysis and numerical simulation. We experimentally demonstrate that 25 optical subcarriers with a frequency spacing of 12.5 GHz and a power difference of less than 5 dB can be generated by the cascaded EML and PM operating in the optimal zone, which agrees well with the numerical simulation. We also experimentally demonstrate 28-Gbaud polarization division multiplexing quadrature phase shift keying (PDM-QPSK) modulated coherent optical transmission based on the cascaded EML and PM. The bit error ratio (BER) can be below the pre-forward-error-correction (pre-FEC) threshold of 3.8 × 10⁻³ after 80-km single-mode fiber-28 (SMF-28) transmission.
Skyrmion-skyrmion and skyrmion-edge repulsions in skyrmion-based racetrack memory
NASA Astrophysics Data System (ADS)
Zhang, Xichao; Zhao, G. P.; Fangohr, Hans; Liu, J. Ping; Xia, W. X.; Xia, J.; Morvan, F. J.
2015-01-01
Magnetic skyrmions are promising for building next-generation magnetic memories and spintronic devices due to their stability, small size and the extremely low currents needed to move them. In particular, skyrmion-based racetrack memory is attractive for information technology, where skyrmions are used to store information as data bits instead of traditional domain walls. Here we numerically demonstrate the impacts of skyrmion-skyrmion and skyrmion-edge repulsions on the feasibility of skyrmion-based racetrack memory. The reliable and practicable spacing between consecutive skyrmionic bits on the racetrack as well as the ability to adjust it are investigated. Clogging of skyrmionic bits is found at the end of the racetrack, leading to the reduction of skyrmion size. Further, we demonstrate an effective and simple method to avoid the clogging of skyrmionic bits, which ensures the elimination of skyrmionic bits beyond the reading element. Our results give guidance for the design and development of future skyrmion-based racetrack memory.
High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.
Wang, Fei; Xie, Zhaoxin; Chen, Zuo
2014-01-01
Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieves lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data (the core expansion step is sketched below). Second, in order to reduce the length of the location map bits, we introduce a histogram shifting scheme. Meanwhile, our scheme can compute the prediction-error modification threshold for a given embedding capacity. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control capability.
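The core step is the textbook prediction-error expansion e' = 2e + b. The sketch below shows embedding and lossless extraction for a single sample; it leaves out the paper's optimized predictors, histogram shifting, threshold control, and location map.

```python
def pee_embed(sample, predicted, bit):
    """Embed one bit by expanding the prediction error: e' = 2e + b."""
    e = sample - predicted
    return predicted + 2 * e + bit

def pee_extract(marked, predicted):
    """Recover the bit and the original sample losslessly."""
    e_marked = marked - predicted
    bit = e_marked & 1          # LSB of the expanded error
    e = e_marked >> 1           # original prediction error
    return bit, predicted + e

s, pred = 1000, 998             # audio sample and its predicted value
marked = pee_embed(s, pred, 1)  # error 2 expands to 5 -> marked = 1003
bit, restored = pee_extract(marked, pred)
assert (bit, restored) == (1, 1000)   # perfectly reversible
```

The distortion grows with |e|, which is why better predictors (here, the differential-evolution-optimized coefficients) directly improve the SNR of the stego audio.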
Liang, Lanju; Wei, Minggui; Yan, Xin; Wei, Dequan; Liang, Dachuan; Han, Jiaguang; Ding, Xin; Zhang, GaoYa; Yao, Jianquan
2016-01-01
A novel broadband and wide-angle 2-bit coding metasurface for radar cross section (RCS) reduction is proposed and characterized at terahertz (THz) frequencies. The ultrathin metasurface is composed of four digital elements based on a metallic double cross line structure. The reflection phase difference of neighboring elements is approximately 90° over a broadband THz frequency. The mechanism of RCS reduction is achieved by optimizing the coding element sequences, which redirects the electromagnetic energies to all directions in broad frequencies. An RCS reduction of less than −10 dB bandwidth from 0.7 THz to 1.3 THz is achieved in the experimental and numerical simulations. The simulation results also show that broadband RCS reduction can be achieved at an incident angle below 60° for TE and TM polarizations under flat and curve coding metasurfaces. These results open a new approach to flexibly control THz waves and may offer widespread applications for novel THz devices. PMID:27982089
Design and implementation of power efficient 10-bit dual port SRAM on 28 nm technology
NASA Astrophysics Data System (ADS)
Gulati, Anmol; Gupta, Ashutosh; Murgai, Shruti; Bhaskar, Lala
2016-03-01
In this paper, a 10-bit synchronous clock-gated dual-port RAM is designed. A negative-latch-based clock-gating technique is employed to optimize the power of the design. The design has been implemented on an XC7K70T device (-3 speed grade) of the Kintex-7 FPGA family in Xilinx ISE Design Suite 14.7, using 28 nm technology, and is described in Verilog HDL. We achieve approximately 55% reduction in total clock power, 81.55% reduction in BRAM power, 82.65%, 0.07%, 1.04% and 11.31% reductions in static power, 72.32%, 38.60%, 68.74% and 71.97% reductions in dynamic power, and 72.44%, 16.96%, 60.88% and 71.06% reductions in total supply power at 1 THz, 1 GHz, 100 GHz and 1000 GHz, respectively. The power of the device has been calculated using the XPower Analyzer tool of Xilinx ISE Design Suite 14.7.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generation units to meet the required load demand such that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization algorithm named swarm-based mean-variance mapping optimization (MVMOS), an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, with 3, 13 and 20 thermal generation units and quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be applied efficiently to solving economic dispatch.
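To make the ED objective concrete, the toy instance below solves the quadratic-cost case by classical lambda iteration (bisection on the incremental cost); the coefficients are made up, and this stands in for neither MVMOS itself nor non-convex cost handling.

```python
import numpy as np

# Quadratic cost C_i(P) = a_i + b_i*P + c_i*P^2 (illustrative coefficients)
a = np.array([500.0, 400.0, 200.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
pmin = np.array([50.0, 50.0, 50.0])
pmax = np.array([300.0, 250.0, 150.0])
demand = 500.0

def dispatch(lmbda):
    """Optimal output per unit at incremental cost lambda, clamped
    to the unit's operating limits (dC_i/dP = b_i + 2 c_i P = lambda)."""
    return np.clip((lmbda - b) / (2 * c), pmin, pmax)

lo, hi = 0.0, 50.0
for _ in range(100):                 # bisection on the incremental cost
    lmbda = 0.5 * (lo + hi)
    if dispatch(lmbda).sum() < demand:
        lo = lmbda
    else:
        hi = lmbda

P = dispatch(lmbda)
cost = (a + b * P + c * P**2).sum()
print(P, P.sum(), cost)
```

Metaheuristics such as MVMOS target the cases this sketch ignores: valve-point effects, prohibited zones, and other non-convexities where equal incremental cost no longer characterizes the optimum.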
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable, and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
Framework for power and activity factor allocation in a multiclass CDMA system
NASA Astrophysics Data System (ADS)
Wu, Xinzhou; Srikant, Rayadurgam
2001-07-01
We consider a multimedia CDMA uplink where there are multiple classes of users with different Quality-of-Service (QoS) requirements. Each user is modeled as an ON-OFF source, where in the ON state, the user transmits a fixed number of bits in each time slot and in the OFF state, the user is silent. The probability of being in the ON state, known as the activity factor, could be different for different users. Assuming a constant channel gain, we first characterize the set of transmit power levels, activity factors and number of users in each class that can be supported by a system with a given spreading gain under the constraint that each user's QoS requirement must be met. Using this characterization, we then present a utility function-based algorithm for choosing the activity factors of elastic users in the network.
Determining Optimal Allocation of Naval Obstetric Resources with Linear Programming
2013-12-01
Xu, Lingwei; Zhang, Hao; Gulliver, T. Aaron
2016-01-01
The outage probability (OP) performance of multiple-relay incremental-selective decode-and-forward (ISDF) relaying mobile-to-mobile (M2M) sensor networks with transmit antenna selection (TAS) over N-Nakagami fading channels is investigated. Exact closed-form OP expressions for both optimal and suboptimal TAS schemes are derived. The power allocation problem is formulated to determine the optimal division of transmit power between the broadcast and relay phases. The OP performance under different conditions is evaluated via numerical simulation to verify the analysis. These results show that the optimal TAS scheme has better OP performance than the suboptimal scheme. Further, the power allocation parameter has a significant influence on the OP performance. PMID:26907282
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2014-05-01
Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
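The one-step-ahead structure described above can be illustrated on a single discretized reservoir, replacing the paper's GA and linear allocation sub-problem with brute-force enumeration; all quantities below are illustrative.

```python
import numpy as np

S = np.arange(0, 11)             # discretized reservoir storage levels
inflows = [2, 4]                 # equally likely inflow scenarios (i.i.d.)
demand, curtail_cost = 5, 10.0   # fixed demand and curtailment cost/unit
T = 12                           # planning stages

V = np.zeros(len(S))             # future cost function at the horizon
for t in reversed(range(T)):
    V_new = np.empty(len(S))
    for s in S:
        exp_cost = 0.0
        for q in inflows:        # inflow scenario known at decision time
            best = np.inf
            for a in range(0, min(demand, s + q) + 1):   # water allocated
                s_next = min(s + q - a, S[-1])           # overflow spills
                cost = curtail_cost * (demand - a) + V[s_next]
                best = min(best, cost)                   # one-step problem
            exp_cost += best / len(inflows)
        V_new[s] = exp_cost
    V = V_new                    # cost-to-go matrix for the previous stage

print(V)                         # expected total cost per storage level
```

In the paper's setting the inner minimization is non-linear because of head-dependent pumping costs, which is what motivates solving it with a GA instead of this enumeration or linear interpolation.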
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
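Setting the fuzzy credibility machinery aside, the deterministic LFP core can be reduced to a linear program by the Charnes-Cooper transformation; the sketch below uses made-up benefit/consumption data, not the case-study model.

```python
import numpy as np
from scipy.optimize import linprog

# maximize (c.x + alpha) / (d.x + beta)  s.t.  A x <= b, x >= 0,
# assuming d.x + beta > 0 over the feasible set.
c = np.array([3.0, 1.0]); alpha = 0.0     # e.g. economic benefit
d = np.array([1.0, 2.0]); beta = 1.0      # e.g. resources consumed
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([10.0, 15.0])

# Charnes-Cooper: y = t*x with t = 1/(d.x + beta) turns the ratio into
#   max c.y + alpha*t  s.t.  A y - b t <= 0,  d.y + beta*t = 1,  y, t >= 0
n = len(c)
obj = -np.concatenate([c, [alpha]])            # linprog minimizes
A_ub = np.hstack([A, -b[:, None]])
A_eq = np.concatenate([d, [beta]])[None, :]
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(len(b)),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
x = y / t                                      # recover the LFP solution
print(x, (c @ x + alpha) / (d @ x + beta))     # allocation and ratio value
```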
Optimizing prescribed fire allocation for managing fire risk in central Catalonia.
Alcasena, Fermín J; Ager, Alan A; Salis, Michele; Day, Michelle A; Vega-Garcia, Cristina
2018-04-15
We used spatial optimization to allocate and prioritize prescribed fire treatments in the fire-prone Bages County, central Catalonia (northeastern Spain). The goal of this study was to identify suitable strategic locations on forest lands for fuel treatments in order to: 1) disrupt major fire movements, 2) reduce ember emissions, and 3) reduce the likelihood of large fires burning into residential communities. We first modeled fire spread, hazard and exposure metrics under historical extreme fire weather conditions, including node influence grid for surface fire pathways, crown fraction burned and fire transmission to residential structures. Then, we performed an optimization analysis on individual planning areas to identify production possibility frontiers for addressing fire exposure and explore alternative prescribed fire treatment configurations. The results revealed strong trade-offs among different fire exposure metrics, showed treatment mosaics that optimize the allocation of prescribed fire, and identified specific opportunities to achieve multiple objectives. Our methods can contribute to improving the efficiency of prescribed fire treatment investments and wildfire management programs aimed at creating fire resilient ecosystems, facilitating safe and efficient fire suppression, and safeguarding rural communities from catastrophic wildfires. The analysis framework can be used to optimally allocate prescribed fire in other fire-prone areas within the Mediterranean region and elsewhere. Copyright © 2017 Elsevier B.V. All rights reserved.
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, samples sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
Task allocation among multiple intelligent robots
NASA Technical Reports Server (NTRS)
Gasser, L.; Bekey, G.
1987-01-01
Researchers describe the design of a decentralized mechanism for allocating assembly tasks in a multiple-robot assembly workstation. Currently, the approach focuses on distributed allocation to explore its feasibility and its potential for adaptability to changing circumstances, rather than on optimizing throughput. Individual greedy robots make their own local allocation decisions using both dynamic allocation policies, which propagate through a network of allocation goals, and local static and dynamic constraints describing which robots are eligible for which assembly tasks. Global coherence is achieved by proper weighting of allocation pressures propagating through the assembly plan. Deadlock avoidance and synchronization are achieved using periodic reassessments of local allocation decisions, ageing of allocation goals, and short-term allocation locks on goals.
2007-09-01
Power Control and Filter Boards (PCFB) are powered. The anticipated temperature range is based on a model, and like all models, it is subject to... voltage regulation, filtering, or averaging at room temperature, and with no rate applied. This data was taken at 1K samples/sec, and resulted in an... buffering or amplification should be done as near to the signal source as possible. The low pass filter was added to the rate, BIT, and temperature
Bit storage and bit flip operations in an electromechanical oscillator.
Mahboob, I; Yamaguchi, H
2008-05-01
The Parametron was first proposed as a logic-processing system almost 50 years ago. In this approach the two stable phases of an excited harmonic oscillator provide the basis for logic operations. Computer architectures based on LC oscillators were developed for this approach, but high power consumption and difficulties with integration meant that the Parametron was rendered obsolete by the transistor. Here we propose an approach to mechanical logic based on nanoelectromechanical systems that is a variation on the Parametron architecture and, as a first step towards a possible nanomechanical computer, we demonstrate both bit storage and bit flip operations.
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable or superior to those of existing coders, particularly for some images with low correlation.
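The bit-plane view that such coders operate on can be sketched as follows: quantized coefficients split losslessly into a sign plane plus magnitude bit planes. The shuffling step and context selection themselves are not reproduced, and the array contents are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
coeffs = rng.integers(-31, 32, size=(4, 4))   # stand-in quantized coefficients

sign_plane = (coeffs < 0).astype(np.uint8)    # the plane the shuffling removes
magnitude = np.abs(coeffs)

# Magnitude bit planes, most significant first (5 planes cover |c| <= 31).
planes = [(magnitude >> k) & 1 for k in range(4, -1, -1)]

# Lossless reconstruction from magnitude planes plus the sign plane:
rebuilt = sum(p.astype(int) << k for p, k in zip(planes, range(4, -1, -1)))
rebuilt = np.where(sign_plane == 1, -rebuilt, rebuilt)
assert np.array_equal(rebuilt, coeffs)
```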
Pattern-based integer sample motion search strategies in the context of HEVC
NASA Astrophysics Data System (ADS)
Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas
2015-09-01
The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is part of the inter-picture prediction process, typically consumes a large share of computational resources while significantly increasing coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information at the fractional-sample level, motion search algorithms operating at the integer-sample level remain an integral part of ME. In this paper, a flexible integer-sample ME framework is proposed, allowing a significant reduction of ME computation time to be traded off against a coding-efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer-sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based and early-termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high-resolution sequences, the integer-sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content, while trading off a bit rate overhead of up to 5.2%.
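As an illustration of pattern-based integer search, the sketch below implements a small-diamond refinement with SAD cost and early termination; it is a simplified stand-in, not the HM fast search or the paper's combined framework, and the test frames are synthetic.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n=8):
    """Sum of absolute differences for the n x n block at (bx, by)
    against the reference displaced by motion vector (dx, dy)."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + n > w or y + n > h:
        return np.inf                      # candidate outside the frame
    return np.abs(cur[by:by+n, bx:bx+n].astype(int)
                  - ref[y:y+n, x:x+n].astype(int)).sum()

def small_diamond_search(cur, ref, bx, by, mv_pred=(0, 0), max_steps=32):
    """Refine a predicted motion vector with the small diamond pattern;
    stop early when the center is already the best candidate."""
    best = mv_pred
    best_cost = sad(cur, ref, bx, by, *best)
    for _ in range(max_steps):
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (best[0] + dx, best[1] + dy)
            cost = sad(cur, ref, bx, by, *cand)
            if cost < best_cost:
                best, best_cost, improved = cand, cost, True
        if not improved:                   # early termination
            break
    return best, best_cost

yy, xx = np.mgrid[0:64, 0:64]              # smooth synthetic frame pair
ref = (128 + 60 * np.sin(xx / 5.0) + 60 * np.sin(yy / 7.0)).astype(np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # true motion: dx=3, dy=-2
print(small_diamond_search(cur, ref, bx=16, by=16))   # -> ((3, -2), 0)
```

In a real encoder the predicted start vector comes from neighboring blocks, which is why combining prediction with a cheap pattern refinement recovers most of the speed-up reported above.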
Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.
Goto, Hayato
2016-02-22
The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.