Sample records for optimal sequential transmission

  1. Optimization of transmission scan duration for 15O PET study with sequential dual tracer administration using N-index.

    PubMed

    Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Oka, Hisashi; Miyake, Yoshinori; Iida, Hidehiro

    2010-06-01

    Cerebral blood flow (CBF), oxygen extraction fraction (OEF) and the cerebral metabolic rate of O(2) (CMRO(2)) can be quantified by PET with the administration of H(2)(15)O and (15)O(2). Recently, a shortening in the duration of these measurements was achieved by the sequential administration of the dual tracers (15)O(2) and H(2)(15)O within a single PET acquisition, combined with an integration method (DARG method). A transmission scan is generally required in advance of the PET scan to correct for photon attenuation. Although the DARG method can shorten the total study duration to around 30 min, the transmission scan duration has not been optimized and could potentially be shortened. The aim of this study was to determine the optimal duration for the transmission scan. We introduced the 'N-index', which estimates the noise level on an image obtained by subtracting two statistically independent and physiologically equivalent images. The relationship between noise on functional images and the duration of the transmission scan was investigated using the N-index. We performed phantom studies to test whether the N-index reflects the pixel noise in a PET image. We also estimated the noise level with the N-index on CBF, OEF and CMRO(2) images from the DARG method in clinical patients, and investigated the optimal true count of the transmission scan. We found a tight correlation between pixel noise and the N-index in the phantom study. By investigating the relationship between the transmission scan duration and the N-index value for functional images obtained by the DARG method, we found that transmission data with true counts of more than 40 Mcounts result in CBF, OEF, and CMRO(2) images of reasonable quantitative accuracy and quality. The present study suggests that further shortening of the DARG measurement is possible by abridging the transmission scan. The N-index could be used to determine the optimal measurement condition when examining image quality.
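The N-index described above can be sketched numerically: subtracting two statistically independent images of the same object cancels the common signal, and the residual spread estimates the noise. A minimal illustration; the normalization used here is an assumption, and the paper's exact definition may differ:

```python
import numpy as np

def n_index(img_a, img_b):
    """Noise estimate from two independent, physiologically equivalent images.

    Subtraction cancels the shared signal; dividing the standard deviation
    of the difference by sqrt(2) recovers the per-image noise, which is
    then normalized by the mean signal level.  (Sketch only; the paper's
    exact normalization may differ.)
    """
    diff = img_a - img_b
    noise = diff.std() / np.sqrt(2.0)          # per-image pixel noise
    signal = 0.5 * (img_a.mean() + img_b.mean())
    return noise / signal

rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)               # uniform phantom
scan1 = truth + rng.normal(0, 5.0, truth.shape)
scan2 = truth + rng.normal(0, 5.0, truth.shape)
print(round(n_index(scan1, scan2), 3))         # ≈ 0.05 (5% noise level)
```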

  2. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    NASA Astrophysics Data System (ADS)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, many documents in cloud storage are handled by packaging only after all packets have been received. In this stored procedure from the local transmitter to the server, packing and unpacking consume a great deal of time, and transmission efficiency is low. A new parallel processing algorithm is proposed to optimize the transmission mode: following the MapReduce model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. In simulation experiments on a Hadoop cloud computing platform, the algorithm not only accelerates the file transfer rate but also shortens the waiting time of the Reducer mechanism. It breaks through the traditional sequential transmission constraint and reduces storage coupling to improve transmission efficiency.
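The core idea, overlapping transfer and processing instead of waiting for all packets, can be sketched without MPI or Hadoop; this toy substitutes Python threads for the paper's MPI-based Mapper/Reducer processes:

```python
from concurrent.futures import ThreadPoolExecutor

def mapper(chunk):
    """Per-packet work (stand-in for the paper's Mapper mechanism)."""
    return sum(chunk)

def reducer(partials):
    """Combine partial results as they arrive (stand-in Reducer)."""
    return sum(partials)

# Ten "packets" carrying the numbers 0..99.
chunks = [list(range(i, i + 10)) for i in range(0, 100, 10)]

# Baseline: receive everything, then process sequentially.
sequential = reducer([mapper(c) for c in chunks])

# Overlapped: mappers run in parallel, and executor.map yields results
# in order, so the reducer starts consuming before all chunks are done.
with ThreadPoolExecutor(max_workers=4) as pool:
    overlapped = reducer(pool.map(mapper, chunks))

print(sequential, overlapped)   # → 4950 4950
```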

  3. Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching

    NASA Astrophysics Data System (ADS)

    Lipka, Paula Ann

    This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow solution, and allows for solving transmission switching problems in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem often solve the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. Transmission networks in practice often have lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power---as is the case in the Polish test systems---to limits on current. The main consideration in setting line limits is temperature, which relates linearly to current. Setting limits on real or apparent power is actually a proxy for using limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the Alternating Current Optimal Power Flow (ACOPF) problem is used to find an AC-feasible generator dispatch. In this sequential linearization, there are parameters that are set to the previous optimal solution. Additionally, to improve the accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted. 
The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction corresponding to the accuracy and AC-feasibility of the solution. This linearization was tested on the IEEE and Polish systems, which range from 14 to 3375 buses and 20 to 4161 transmission lines. It had an accuracy of 0.5% or less for all but the 30-bus system. It also solved in linear time with CPLEX, while the non-linear version solved in O(n^1.11) to O(n^1.39). The sequential linearization is slower than the nonlinear formulation for smaller problems, but faster for larger problems, and its linear computational time means it would continue solving faster for larger problems. A major consideration in implementing algorithms to solve the optimal generator dispatch is ensuring that the resulting prices will support the market. Since the sequential linearization is linear, it is convex, its marginal values are well-defined, and there is no duality gap. The prices and settlements obtained from the sequential linearization can therefore be used to run a market. This market will include extra prices and settlements for reactive power and voltage, compared to the present-day market, which is based on real power. An advantage of this is that there is a very clear pool that can be used for reactive power/voltage support payments, while presently there is not a clear pool to take them out of. This method also reveals how valuable reactive power and voltage are at different locations, which can enable better planning of reactive resource construction. Transmission switching increases the feasible region of the generator dispatch, which means there may be a better solution than without transmission switching. Power flows on transmission lines are not directly controllable; rather, the power flows according to how it is injected and the physical characteristics of the lines. 
Changing the network topology changes the physical characteristics, which changes the flows. This means that sets of generator dispatch that may have previously been infeasible due to the flow exceeding line constraints may be feasible, since the flows will be different and may meet line constraints. However, transmission switching is a mixed integer problem, which may have a very slow solution time. For economic switching, we examine a series of heuristics. We examine the congestion rent heuristic in detail and then examine many other heuristics at a higher level. Post-contingency corrective switching aims to fix issues in the power network after a line or generator outage. In Chapter 7, we show that using the sequential linear program with corrective switching helps solve voltage and excessive flow issues. (Abstract shortened by UMI.).
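The sequential-linearization idea in this dissertation (linearize the nonlinear terms around the previous solution, restrict the step, and shrink the restriction each iteration) can be sketched on a toy dispatch problem; all numbers below are illustrative, not from the thesis:

```python
# Toy analogue of sequential linearization with a shrinking step restriction:
# dispatch two generators to meet demand plus a nonlinear line loss,
# re-linearizing the loss around the previous solution each iteration.
c1, c2 = 1.0, 2.0                  # generator costs; generator 1 is cheaper
g1_max, demand = 8.0, 10.0
loss = lambda g: 0.01 * g ** 2     # nonlinear loss on generator 1's path

g1, step = 5.0, 4.0                # initial point and allowed movement
for _ in range(30):
    dloss = 0.02 * g1              # loss gradient at the linearization point
    # With g2 = demand + loss_lin(g1) - g1, the linearized marginal cost
    # of raising g1 is c1 + c2 * (dloss - 1): negative means "raise g1".
    marginal = c1 + c2 * (dloss - 1.0)
    g1 = min(g1_max, g1 + step) if marginal < 0 else max(0.0, g1 - step)
    step = max(0.5 * step, 1e-6)   # shrink the movement restriction
g2 = demand + loss(g1) - g1        # slack generator covers demand + true loss

print(g1, round(g2, 2))            # → 8.0 2.64
```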

  4. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting-plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  5. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting-plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  6. Cost Optimal Design of a Power Inductor by Sequential Gradient Search

    NASA Astrophysics Data System (ADS)

    Basak, Raju; Das, Arabinda; Sanyal, Amarnath

    2018-05-01

    Power inductors are used for compensating the VAR generated by long EHV transmission lines and in electronic circuits. For EHV lines, the rating of the inductor is decided by techno-economic considerations on the basis of the line susceptance. It is a high-voltage, high-current device, absorbing little active power and large reactive power. The cost is quite high; hence the design should be made cost-optimally. The 3-phase power inductor is similar in construction to a 3-phase core-type transformer, with the exception that it has only one winding per phase and each limb is provided with an air-gap, the length of which is determined by the required inductance. In this paper, a design methodology based on a sequential gradient search technique, and the corresponding algorithm leading to a cost-optimal design of a 3-phase EHV power inductor, is presented. A case study has been made on a 220 kV long line of NHPC running from Chukha HPS to Birpara in Coochbihar.
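A sequential gradient search of the kind named in the title updates one design variable at a time along the negative partial derivative of the cost. A toy sketch on a stand-in quadratic cost surface (the actual inductor cost model is not reproduced here, and the variable names are illustrative):

```python
def cost(x):
    """Stand-in design cost: the real inductor cost model would go here."""
    b, j = x                        # e.g. flux density, current density
    return (b - 1.5) ** 2 + 2.0 * (j - 3.0) ** 2 + 10.0

def partial(x, k, h=1e-6):
    """Numerical partial derivative of the cost w.r.t. variable k."""
    xp = list(x)
    xp[k] += h
    return (cost(xp) - cost(x)) / h

x, lr = [0.0, 0.0], 0.2
for _ in range(200):                # sequential sweeps over the variables
    for k in range(len(x)):
        x[k] -= lr * partial(x, k)  # move one variable at a time

print([round(v, 3) for v in x], round(cost(x), 3))   # → [1.5, 3.0] 10.0
```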

  7. Focusing light through random photonic layers by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-02-01

    The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years, owing to the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, which can trap the optimization in local maxima. Whole-element optimization approaches employ all elements for the optimization, which improves the signal-to-noise ratio; however, because more randomness is introduced into the process, such optimizations take longer to converge than single-element-based approaches. Drawing on the advantages of both single-element-based and whole-element optimization approaches, we propose the four-element division algorithm (FEDA). Comparisons with existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
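The element-based family the abstract describes can be sketched in a few lines: for each modulator element, try a small set of candidate phases and keep the one that maximizes the measured focal intensity. This sketch uses a random transmission vector and four candidate phases per element; FEDA's specific element grouping is not modeled:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32                                             # modulator elements
t = rng.normal(size=n) + 1j * rng.normal(size=n)   # unknown medium response
phi = np.zeros(n)                                  # phases written to the SLM

def intensity(phi):
    """Measured focal intensity |sum_j t_j exp(i phi_j)|^2 (the "camera")."""
    return abs(np.sum(t * np.exp(1j * phi))) ** 2

candidates = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
before = intensity(phi)
for j in range(n):                                 # one sequential pass
    scores = []
    for c in candidates:                           # try each candidate phase
        phi[j] = c
        scores.append(intensity(phi))
    phi[j] = candidates[int(np.argmax(scores))]    # keep the best one
after = intensity(phi)

# The current phase (0) is always among the candidates, so the focal
# intensity can never decrease during the pass.
print(round(after / before, 1), after >= before)
```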

  8. The potential impact of immunization campaign budget re-allocation on global eradication of paediatric infectious diseases

    PubMed Central

    2011-01-01

    Background The potential benefits of coordinating infectious disease eradication programs that use campaigns such as supplementary immunization activities (SIAs) should not be overlooked. One example of a coordinated approach is an adaptive "sequential strategy": first, the entire annual SIA budget is dedicated to the eradication of a single infectious disease; once that disease is eradicated, the annual SIA budget is refocused on eradicating a second disease, and so on. Herd immunity suggests that a sequential strategy may eradicate several infectious diseases faster than a non-adaptive "simultaneous strategy" of dividing the annual budget equally among eradication programs for those diseases. However, mathematical modeling is required to understand the potential extent of this effect. Methods Our objective was to illustrate how budget allocation strategies can interact with the nonlinear nature of disease transmission to determine the time to eradication of several infectious diseases under different budget allocation strategies. Using a mathematical transmission model, we analyzed three hypothetical vaccine-preventable infectious diseases in three different countries. A central decision-maker can distribute funding among SIA programs for these three diseases according to either a sequential strategy or a simultaneous strategy. We explored the time to eradication under these two strategies under a range of scenarios. Results For a certain range of annual budgets, all three diseases can be eradicated relatively quickly under the sequential strategy, whereas eradication never occurs under the simultaneous strategy. However, moderate changes to the total SIA budget, SIA frequency, order of eradication, or funding disruptions can create disproportionately large differences in the time and budget required for eradication under the sequential strategy. 
We find that the predicted time to eradication can be very sensitive to small differences in the rate of case importation between the countries. We also find that the time to eradication of all three diseases is not necessarily lowest when the least transmissible disease is targeted first. Conclusions Relatively modest differences in budget allocation strategies in the near-term can result in surprisingly large long-term differences in time required to eradicate, as a result of the amplifying effects of herd immunity and the nonlinearities of disease transmission. More sophisticated versions of such models may be useful to large international donors or other organizations as a planning or portfolio optimization tool, where choices must be made regarding how much funding to dedicate to different infectious disease eradication efforts. PMID:21955853
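The headline result, that splitting the budget can eradicate nothing while sequential focusing eradicates everything, can be illustrated with a deliberately crude threshold model (all figures below are invented for illustration, not the paper's fitted values):

```python
THRESHOLD = 40   # minimum annual spend per disease for net progress (herd effect)
EFFORT = 120     # cumulative above-threshold spend needed to eradicate
BUDGET = 60      # total annual SIA budget
DISEASES = 3

def sequential_alloc(remaining):
    """All budget to the first disease not yet eradicated."""
    spends = [0.0] * DISEASES
    for i, r in enumerate(remaining):
        if r > 0:
            spends[i] = BUDGET
            break
    return spends

def simultaneous_alloc(remaining):
    """Budget split equally, regardless of progress."""
    return [BUDGET / DISEASES] * DISEASES

def years_to_eradicate_all(alloc, horizon=100):
    remaining = [EFFORT] * DISEASES
    for year in range(1, horizon + 1):
        for i, spend in enumerate(alloc(remaining)):
            # Below-threshold effort is wasted: transmission rebounds.
            if remaining[i] > 0 and spend >= THRESHOLD:
                remaining[i] -= spend
        if all(r <= 0 for r in remaining):
            return year
    return None  # never eradicated within the horizon

print(years_to_eradicate_all(sequential_alloc),
      years_to_eradicate_all(simultaneous_alloc))   # → 6 None
```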

  9. The potential impact of immunization campaign budget re-allocation on global eradication of paediatric infectious diseases.

    PubMed

    Fitzpatrick, Tiffany; Bauch, Chris T

    2011-09-28

    The potential benefits of coordinating infectious disease eradication programs that use campaigns such as supplementary immunization activities (SIAs) should not be overlooked. One example of a coordinated approach is an adaptive "sequential strategy": first, the entire annual SIA budget is dedicated to the eradication of a single infectious disease; once that disease is eradicated, the annual SIA budget is refocused on eradicating a second disease, and so on. Herd immunity suggests that a sequential strategy may eradicate several infectious diseases faster than a non-adaptive "simultaneous strategy" of dividing the annual budget equally among eradication programs for those diseases. However, mathematical modeling is required to understand the potential extent of this effect. Our objective was to illustrate how budget allocation strategies can interact with the nonlinear nature of disease transmission to determine the time to eradication of several infectious diseases under different budget allocation strategies. Using a mathematical transmission model, we analyzed three hypothetical vaccine-preventable infectious diseases in three different countries. A central decision-maker can distribute funding among SIA programs for these three diseases according to either a sequential strategy or a simultaneous strategy. We explored the time to eradication under these two strategies under a range of scenarios. For a certain range of annual budgets, all three diseases can be eradicated relatively quickly under the sequential strategy, whereas eradication never occurs under the simultaneous strategy. However, moderate changes to the total SIA budget, SIA frequency, order of eradication, or funding disruptions can create disproportionately large differences in the time and budget required for eradication under the sequential strategy. We find that the predicted time to eradication can be very sensitive to small differences in the rate of case importation between the countries. 
We also find that the time to eradication of all three diseases is not necessarily lowest when the least transmissible disease is targeted first. Relatively modest differences in budget allocation strategies in the near-term can result in surprisingly large long-term differences in time required to eradicate, as a result of the amplifying effects of herd immunity and the nonlinearities of disease transmission. More sophisticated versions of such models may be useful to large international donors or other organizations as a planning or portfolio optimization tool, where choices must be made regarding how much funding to dedicate to different infectious disease eradication efforts.

  10. Information transmission in bosonic memory channels using Gaussian matrix-product states as near-optimal symbols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schäfer, Joachim; Karpov, Evgueni; Cerf, Nicolas J.

    2014-12-04

    We seek a realistic implementation of multimode Gaussian entangled states that can realize the optimal encoding for quantum bosonic Gaussian channels with memory. For a Gaussian channel with classical additive Markovian correlated noise and a lossy channel with non-Markovian correlated noise, we demonstrate the usefulness of Gaussian matrix-product states (GMPS). These states can be generated sequentially and may, in principle, approximate well any Gaussian state. We show that we can achieve up to 99.9% of the classical Gaussian capacity with GMPS requiring squeezing parameters that are reachable with current technology. This may offer a way towards an experimental realization.

  11. Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.

    PubMed

    Carrière, Olivier; Hermand, Jean-Pierre

    2012-04-01

    Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment, with an acoustic propagation code serving as the measurement model. The approach is tested on data from the MREA/BP07 sea trials, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over ranges of 0.7-1.6 km. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is more accurate and also more efficient. Owing to frequency diversity, the processing of modulated signals produces more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core-logging P-wave velocity, and previous inversion results with fixed geometries.
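The filtering machinery of the paper can be sketched in scalar form: a random-walk forecast step plus an ensemble Kalman update through a nonlinear measurement function standing in for the acoustic propagation code (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
h = lambda c: 1e-3 * c ** 2           # nonlinear forward model (stand-in for
                                      # the acoustic propagation code)
true_c = 1500.0                       # e.g. sediment sound speed (m/s)
ens = rng.normal(1450.0, 40.0, 200)   # prior ensemble (biased, diffuse)

for _ in range(30):
    true_c += rng.normal(0, 1.0)                 # slow environmental drift
    y = h(true_c) + rng.normal(0, 0.5)           # noisy observation
    ens += rng.normal(0, 1.0, ens.size)          # random-walk forecast step
    hy = h(ens) + rng.normal(0, 0.5, ens.size)   # perturbed predicted obs
    cov = np.cov(ens, hy)                        # 2x2 sample covariance
    gain = cov[0, 1] / cov[1, 1]                 # ensemble Kalman gain
    ens += gain * (y - hy)                       # analysis update

print(round(abs(ens.mean() - true_c), 2), "m/s tracking error")
```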

  12. Optimal reactive planning with security constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.

    1995-12-31

    The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor-intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.

  13. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.

  14. Optimal Sequential Rules for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  15. An all-digital receiver for satellite audio broadcasting signals using trellis coded quasi-orthogonal code-division multiplexing

    NASA Astrophysics Data System (ADS)

    Braun, Walter; Eglin, Peter; Abello, Ricard

    1993-02-01

    Spread-spectrum code-division multiplex is an attractive scheme for the transmission of multiple signals over a satellite transponder. By using orthogonal or quasi-orthogonal spreading codes, the interference between the users can be virtually eliminated. However, the acquisition and tracking of the spreading code phase cannot take advantage of the code orthogonality, since sequential acquisition and delay-locked loop tracking depend on correlation with code phases other than the optimal despreading phase. Hence, synchronization is a critical issue in such a system. Demonstration hardware for the verification of the orthogonal CDM synchronization and data transmission concept is being designed and implemented. The system concept, the synchronization scheme, and the implementation are described. The performance of the system is discussed based on computer simulations.

  16. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
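The comparison the paper formalizes can be reproduced numerically for a simple example: two equiprobable pure product states, measured either jointly (Helstrom bound) or sequentially with feed-forward. The sequential strategy below (local Helstrom for Alice, posterior-updated local Helstrom for Bob) is one natural choice, not necessarily the optimal sequential measurement the paper constructs:

```python
import numpy as np

def proj(v):
    return np.outer(v, v.conj())

def helstrom(p, va, vb):
    """Minimum-error success probability for pure states with priors (p, 1-p)."""
    gamma = p * proj(va) - (1 - p) * proj(vb)
    return 0.5 * (1.0 + np.abs(np.linalg.eigvalsh(gamma)).sum())

a1 = a2 = np.array([1.0, 0.0])                    # |0>
b1 = b2 = np.array([1.0, 1.0]) / np.sqrt(2)       # |+>

# Joint (global) Helstrom bound on the two-qubit product states.
overlap = abs(a1 @ b1) ** 2 * abs(a2 @ b2) ** 2
global_p = 0.5 * (1.0 + np.sqrt(1.0 - overlap))

# Sequential: Alice measures qubit 1 in her local Helstrom basis, then
# Bob measures qubit 2 with priors updated by Alice's outcome.
gamma1 = 0.5 * proj(a1) - 0.5 * proj(b1)
_, vecs = np.linalg.eigh(gamma1)
seq_p = 0.0
for k in range(2):                                # Alice's two outcomes
    po = proj(vecs[:, k])
    pa = (a1 @ po @ a1).real                      # P(outcome | state a)
    pb = (b1 @ po @ b1).real                      # P(outcome | state b)
    pr = 0.5 * pa + 0.5 * pb
    posterior = 0.5 * pa / pr                     # feed-forward update
    seq_p += pr * helstrom(posterior, a2, b2)     # Bob's best local guess

print(round(global_p, 4), round(seq_p, 4))        # sequential <= global
```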

  17. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    PubMed

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
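Backward induction over the five decisions is the "optimal policy" computation referred to above. A toy version of the task (parameters invented, not the paper's) comparing it against a fixed heuristic:

```python
from functools import lru_cache

TRIALS, START, TARGET = 5, 3, 3
SAFE = ((1.0, 0),)                  # certain zero net energy change
RISKY = ((0.5, -2), (0.5, 3))       # equal-odds loss or larger gain

@lru_cache(maxsize=None)
def optimal(t, energy):
    """Backward induction: max success probability from (trial, energy)."""
    if energy <= 0:
        return 0.0                  # starved: absorbing failure
    if t == TRIALS:
        return 1.0 if energy >= TARGET else 0.0
    return max(sum(p * optimal(t + 1, energy + d) for p, d in option)
               for option in (SAFE, RISKY))

def heuristic(t, energy):
    """Fixed heuristic: always take the higher-expected-value gamble."""
    if energy <= 0:
        return 0.0
    if t == TRIALS:
        return 1.0 if energy >= TARGET else 0.0
    return sum(p * heuristic(t + 1, energy + d) for p, d in RISKY)

# Here the optimal policy survives for sure (play safe), while the
# greedy expected-value heuristic risks starvation.
print(optimal(0, START), round(heuristic(0, START), 3))
```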

  18. On sequential data assimilation for scalar macroscopic traffic flow models

    NASA Astrophysics Data System (ADS)

    Blandin, Sébastien; Couque, Adrien; Bayen, Alexandre; Work, Daniel

    2012-09-01

    We consider the problem of sequential data assimilation for transportation networks using optimal filtering with a scalar macroscopic traffic flow model. Properties of the distribution of the uncertainty on the true state, related to the nonlinearity and non-differentiability inherent to macroscopic traffic flow models, are derived analytically and analyzed. We show that nonlinear dynamics, by creating discontinuities in the traffic state, affect the performance of classical filters, and in particular that the distribution of the uncertainty on the traffic state at shock waves is a mixture distribution. The non-differentiability of traffic dynamics around stationary shock waves is also proved, and the resulting optimality loss of the estimates is quantified numerically. The properties of the estimates are studied explicitly for the Godunov scheme (and thus the Cell-Transmission Model), leading to specific conclusions about their use in the context of filtering, which is a significant contribution of this article. Analytical proofs and numerical tests are introduced to support the results presented. A Java implementation of the classical filters used in this work is available online at http://traffic.berkeley.edu to facilitate further efforts on this topic and foster reproducible research.
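The Godunov scheme / Cell-Transmission Model the analysis targets can be sketched directly: the interface flux is the minimum of upstream demand and downstream supply for a triangular fundamental diagram (parameters illustrative):

```python
import numpy as np

V, W = 1.0, 0.5             # free-flow and congestion wave speeds (cells/step)
RHO_JAM, Q_MAX = 1.0, 0.25  # jam density and capacity (per cell, per step)

demand = lambda r: np.minimum(V * r, Q_MAX)              # what upstream sends
supply = lambda r: np.minimum(W * (RHO_JAM - r), Q_MAX)  # what downstream accepts

# Light traffic running into a heavy queue: a shock wave forms and travels.
rho = np.where(np.arange(50) < 25, 0.1, 0.8)
for _ in range(40):
    flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))  # Godunov interface flux
    rho[1:-1] += flux[:-1] - flux[1:]   # conservation update, interior cells
    rho[0] -= flux[0]                   # closed left boundary (no inflow)
    rho[-1] += flux[-1]                 # closed right boundary (no outflow)

print(round(rho.sum(), 6), float(rho.min()) >= 0.0)   # → 22.5 True (mass conserved)
```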

  19. Distributed Wireless Power Transfer With Energy Feedback

    NASA Astrophysics Data System (ADS)

    Lee, Seunghyun; Zhang, Rui

    2017-04-01

    Energy beamforming (EB) is a key technique for achieving efficient radio-frequency (RF) transmission enabled wireless energy transfer (WET). By optimally designing the waveforms from multiple energy transmitters (ETs) over the wireless channels, they can be constructively combined at the energy receiver (ER) to achieve an EB gain that scales with the number of ETs. However, the optimal design of EB waveforms requires accurate channel state information (CSI) at the ETs, which is challenging to obtain practically, especially in a distributed system with ETs at separate locations. In this paper, we study practical and efficient channel training methods to achieve optimal EB in a distributed WET system. We propose two protocols with and without centralized coordination, respectively, where distributed ETs either sequentially or in parallel adapt their transmit phases based on a low-complexity energy feedback from the ER. The energy feedback only depends on the received power level at the ER, where each feedback indicates one particular transmit phase that results in the maximum harvested power over a set of previously used phases. Simulation results show that the two proposed training protocols converge very fast in practical WET systems even with a large number of distributed ETs, while the protocol with sequential ET phase adaptation is also analytically shown to converge to the optimal EB design with perfect CSI by increasing the training time. Numerical results are also provided to evaluate the performance of the proposed distributed EB and training designs as compared to other benchmark schemes.

  20. Constrained optimization of sequentially generated entangled multiqubit states

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique

    2009-08-01

    We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.

  1. Irredundant Sequential Machines Via Optimal Logic Synthesis

    DTIC Science & Technology

    1989-10-01

    Irredundant Sequential Machines Via Optimal Logic Synthesis (1989). Srinivas Devadas, Hi-Keung Tony Ma, A. Richard Newton, and Alberto Sangiovanni-Vincentelli. Supported in part by the ...Agency under contract N00014-87-K-0825, and a grant from AT&T Bell Laboratories. Author information: Devadas: Department of Electrical Engineering...

  2. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem, and we propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first. C-learning is a direct optimization method that directly targets decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment history to improve performance, hence enjoying the advantages of both traditional outcome regression-based methods (Q- and A-learning) and more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
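    The classification recasting can be illustrated in a single-stage, randomized-trial toy example; the linear outcome model and the decision-stump classifier below are illustrative stand-ins, not the authors' full backward-recursive C-learning algorithm. The estimated treatment contrast supplies both the classification label (its sign) and the weight (its magnitude).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = rng.uniform(0, 1, n)            # patient covariate
a = rng.integers(0, 2, n)           # randomized treatment (P = 0.5)
# Outcome: treatment helps when x > 0.5, harms when x < 0.5
y = 1.0 + a * (x - 0.5) + rng.normal(0, 0.2, n)

# Regression-based contrast estimate C(x) = E[Y|A=1,x] - E[Y|A=0,x],
# here via a linear model in (1, x, a, a*x) fit by least squares
design = np.column_stack([np.ones(n), x, a, a * x])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
contrast = beta[2] + beta[3] * x    # estimated treatment effect at each x

# Classification view: label = sign of the contrast, weight = |contrast|;
# the optimal rule minimizes the weighted misclassification error.
label = (contrast > 0).astype(int)
weight = np.abs(contrast)

# Learn a decision stump "treat if x > t" minimizing the weighted error
thresholds = np.linspace(0, 1, 101)
errors = [np.sum(weight * (label != (x > t))) for t in thresholds]
t_hat = thresholds[int(np.argmin(errors))]
```

Since the simulated effect changes sign at x = 0.5, the learned stump threshold `t_hat` recovers that boundary up to estimation noise.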

  3. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    ERIC Educational Resources Information Center

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  4. Sequential quantum cloning under real-life conditions

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Mardoukhi, Yousof

    2012-05-01

    We consider a sequential implementation of the optimal quantum cloning machine of Gisin and Massar and propose optimization protocols for experimental realization of such a quantum cloner subject to the real-life restrictions. We demonstrate how exploiting the matrix-product state (MPS) formalism and the ensuing variational optimization techniques reveals the intriguing algebraic structure of the Gisin-Massar output of the cloning procedure and brings about significant improvements to the optimality of the sequential cloning prescription of Delgado [Phys. Rev. Lett. 98, 150502 (2007)]. Our numerical results show that the orthodox paradigm of optimal quantum cloning can in practice be realized in a much more economical manner by utilizing a considerably lesser amount of informational and numerical resources than hitherto estimated. Instead of the previously predicted linear scaling of the required ancilla dimension D with the number of qubits n, our recipe allows a realization of such a sequential cloning setup with an experimentally manageable ancilla of dimension at most D=3 up to n=15 qubits. We also address satisfactorily the possibility of providing an optimal range of sequential ancilla-qubit interactions for optimal cloning of arbitrary states under realistic experimental circumstances when only a restricted class of such bipartite interactions can be engineered in practice.

  5. Compensation of modal dispersion in multimode fiber systems using adaptive optics via convex optimization

    NASA Astrophysics Data System (ADS)

    Panicker, Rahul Alex

    Multimode fibers (MMF) are widely deployed in local-, campus-, and storage-area-networks. Achievable data rates and transmission distances are, however, limited by the phenomenon of modal dispersion. We propose a system to compensate for modal dispersion using adaptive optics. This leads to a 10- to 100-fold improvement in performance over current standards. We propose a provably optimal technique for minimizing inter-symbol interference (ISI) in MMF systems using adaptive optics via convex optimization. We use a spatial light modulator (SLM) to shape the spatial profile of light launched into an MMF. We derive an expression for the system impulse response in terms of the SLM reflectance and the field patterns of the MMF principal modes. Finding optimal SLM settings to minimize ISI, subject to physical constraints, is posed as an optimization problem. We observe that our problem can be cast as a second-order cone program, which is a convex optimization problem. Its global solution can, therefore, be found with minimal computational complexity. Simulations show that this technique opens up an eye pattern originally closed due to ISI. We then propose fast, low-complexity adaptive algorithms for optimizing the SLM settings. We show that some of these converge to the global optimum in the absence of noise. We also propose modified versions of these algorithms to improve resilience to noise and speed of convergence. Next, we experimentally compare the proposed adaptive algorithms in 50-μm graded-index (GRIN) MMFs using a liquid-crystal SLM. We show that continuous-phase sequential coordinate ascent (CPSCA) gives better bit-error-ratio performance than 2- or 4-phase sequential coordinate ascent, in concordance with simulations. We evaluate the bandwidth characteristics of CPSCA, and show that a single SLM is able to simultaneously compensate over up to 9 wavelength-division-multiplexed (WDM) 10-Gb/s channels, spaced by 50 GHz, over a total bandwidth of 450 GHz.
We also show that CPSCA is able to compensate for modal dispersion over up to 2.2 km, even in the presence of mid-span connector offsets up to 4 μm (simulated in experiment by offset splices). A known non-adaptive launching technique using a fusion-spliced single-mode-to-multimode patchcord is shown to fail under these conditions. Finally, we demonstrate 10 x 10 Gb/s dense WDM transmission over 2.2 km of 50-μm GRIN MMF. We combine transmitter-based adaptive optics and receiver-based single-mode filtering, and control the launched field pattern for ten 10-Gb/s non-return-to-zero channels, wavelength-division multiplexed on a 200-GHz grid in the C band. We achieve error-free transmission through 2.2 km of 50-μm GRIN MMF for launch offsets up to 10 μm and for worst-case launched polarization. We employ a ten-channel transceiver based on parallel integration of electronics and photonics.
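    A toy version of sequential coordinate ascent over launch phases conveys the flavor of CPSCA; the random complex matrix below is a hypothetical stand-in for the principal-mode impulse responses, whereas the real system optimizes SLM pixel phases against measured responses. Each pass scans one phase on a fine grid while holding the others fixed, so the ISI metric can only decrease.

```python
import numpy as np

rng = np.random.default_rng(2)
n_modes, n_taps = 6, 5
# Each principal mode contributes a complex impulse response (delay spread -> ISI)
H = rng.normal(size=(n_modes, n_taps)) + 1j * rng.normal(size=(n_modes, n_taps))

def isi_metric(phases):
    """Ratio of off-peak (ISI) energy to main-tap energy."""
    h = np.exp(1j * phases) @ H          # composite system impulse response
    p = np.abs(h) ** 2
    main = int(np.argmax(p))
    return (p.sum() - p[main]) / p[main]

# Continuous-phase sequential coordinate ascent: optimize one phase at a
# time by a fine 1-D scan, holding the others fixed.
phases = np.zeros(n_modes)
grid = np.linspace(0, 2 * np.pi, 64, endpoint=False)
start = isi_metric(phases)
for _ in range(5):
    for m in range(n_modes):
        phases[m] = min(grid, key=lambda c: isi_metric(
            np.concatenate([phases[:m], [c], phases[m + 1:]])))
end = isi_metric(phases)
```

Because the current phase value is always on the scan grid, every coordinate update is non-increasing in the metric, which is the monotonicity property that makes sequential coordinate ascent attractive here.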

  6. Quantifying human-environment interactions using videography in the context of infectious disease transmission.

    PubMed

    Julian, Timothy R; Bustos, Carla; Kwong, Laura H; Badilla, Alejandro D; Lee, Julia; Bischel, Heather N; Canales, Robert A

    2018-05-08

    Quantitative data on human-environment interactions are needed to fully understand infectious disease transmission processes and conduct accurate risk assessments. Interaction events occur during an individual's movement through, and contact with, the environment, and can be quantified using diverse methodologies. Methods that utilize videography, coupled with specialized software, can provide a permanent record of events, collect detailed interactions in high resolution, be reviewed for accuracy, capture events difficult to observe in real-time, and gather multiple concurrent phenomena. In the accompanying video, the use of specialized software to capture human-environment interactions for human exposure and disease transmission is highlighted. Use of videography, combined with specialized software, allows for the collection of accurate quantitative representations of human-environment interactions in high resolution. Two specialized programs include the Virtual Timing Device for the Personal Computer, which collects sequential microlevel activity time series of contact events and interactions, and LiveTrak, which is optimized to facilitate annotation of events in real-time. Opportunities to annotate behaviors at high resolution using these tools are promising, permitting detailed records that can be summarized to gain information on infectious disease transmission and incorporated into more complex models of human exposure and risk.

  7. Sequential Injection Analysis for Optimization of Molecular Biology Reactions

    PubMed Central

    Allen, Peter B.; Ellington, Andrew D.

    2011-01-01

    In order to automate the optimization of complex biochemical and molecular biology reactions, we developed a Sequential Injection Analysis (SIA) device and combined this with a Design of Experiment (DOE) algorithm. This combination of hardware and software automatically explores the parameter space of the reaction and provides continuous feedback for optimizing reaction conditions. As an example, we optimized the endonuclease digest of a fluorogenic substrate, and showed that the optimized reaction conditions also applied to the digest of the substrate outside of the device, and to the digest of a plasmid. The sequential technique quickly arrived at optimized reaction conditions with less reagent use than a batch process (such as a fluid handling robot exploring multiple reaction conditions in parallel) would have. The device and method should now be amenable to much more complex molecular biology reactions whose variable spaces are correspondingly larger. PMID:21338059

  8. Data Networks Reliability

    DTIC Science & Technology

    1988-10-03

    ...full achievable region is achievable if there is only a bounded degree of asynchronism. E. Arikan, in a Ph.D. thesis [Ari85], extended sequential... real co-operation is required to reduce the number of transmissions to O(log log N). REFERENCES: [Ari85] E. Arikan, "Sequential Decoding for Multiple...

  9. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
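    The sequential EnKF analysis step at the heart of such a scheme can be sketched for a single parameter; the exponential-decay forward model, prior, and noise levels below are hypothetical, and the design-optimization layer (choosing *which* location to sample next) is omitted. Each measurement updates the parameter ensemble through the scalar Kalman gain with perturbed observations.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(theta, x):
    """Hypothetical forward model: exponential decay observed at location x."""
    return theta * np.exp(-x)

true_theta = 1.2                        # parameter to be estimated
x_locations = [0.0, 0.5, 1.0, 1.5]      # sequentially visited measurement sites
obs_sd = 0.05                           # measurement-error standard deviation

ensemble = rng.normal(0.0, 1.0, 200)    # prior parameter ensemble

# Sequential EnKF analysis: assimilate one measurement at a time
for x in x_locations:
    y_obs = forward(true_theta, x) + rng.normal(0, obs_sd)
    predictions = forward(ensemble, x)
    c_ty = np.cov(ensemble, predictions)[0, 1]      # parameter-obs cross-covariance
    c_yy = predictions.var(ddof=1) + obs_sd ** 2    # innovation variance
    gain = c_ty / c_yy                              # scalar Kalman gain
    perturbed_obs = y_obs + rng.normal(0, obs_sd, ensemble.size)
    ensemble = ensemble + gain * (perturbed_obs - predictions)
```

After the four updates the ensemble mean concentrates near the true parameter and the spread collapses well below the prior standard deviation.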

  10. Optimization of the gypsum-based materials by the sequential simplex method

    NASA Astrophysics Data System (ADS)

    Doleželová, Magdalena; Vimmrová, Alena

    2017-11-01

    The application of the sequential simplex optimization method to the design of gypsum-based materials is described. The principles of the simplex method are explained, and several examples of its use for the optimization of lightweight gypsum and ternary gypsum-based materials are given. By this method, lightweight gypsum-based materials with desired properties and a ternary gypsum-based material with higher strength (16 MPa) were successfully developed. The simplex method is a useful tool for optimizing gypsum-based materials, but the objective of the optimization has to be formulated appropriately.
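    A minimal sketch of the basic sequential simplex (reflection rule with shrinkage), maximizing a mock strength response; the quadratic surface and its peak at composition (0.3, 0.6) are hypothetical, not the paper's measured data. The worst vertex is repeatedly reflected through the centroid of the others, and the simplex shrinks toward the best vertex when reflection stops paying off.

```python
import numpy as np

def strength(x):
    """Mock response surface: compressive strength (MPa) of a two-component
    gypsum mix, peaking at the hypothetical composition (0.3, 0.6)."""
    a, b = x
    return 16.0 - 40.0 * (a - 0.3) ** 2 - 25.0 * (b - 0.6) ** 2

# Basic sequential simplex: reflect the worst vertex through the centroid
# of the remaining vertices; shrink toward the best vertex on failure.
simplex = np.array([[0.1, 0.1], [0.2, 0.1], [0.1, 0.2]])
for _ in range(100):
    scores = np.array([strength(v) for v in simplex])
    worst = int(np.argmin(scores))
    centroid = simplex[np.arange(3) != worst].mean(axis=0)
    reflected = centroid + (centroid - simplex[worst])
    if strength(reflected) > scores[worst]:
        simplex[worst] = reflected
    else:
        best = int(np.argmax(scores))
        simplex = simplex[best] + 0.5 * (simplex - simplex[best])

best_mix = simplex[np.argmax([strength(v) for v in simplex])]
```

Starting far from the peak, the simplex marches uphill by reflections and then contracts around the optimum, which is exactly the "sequential experiment" behavior the method exploits when each evaluation is a physical mix.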

  11. Ultra-Wide Band Non-reciprocity through Sequentially-Switched Delay Lines.

    PubMed

    Biedka, Mathew M; Zhu, Rui; Xu, Qiang Mark; Wang, Yuanxun Ethan

    2017-01-06

    Achieving non-reciprocity through unconventional methods without the use of magnetic material has recently become a subject of great interest. Towards this goal, a time-switching strategy known as the Sequentially-Switched Delay Line (SSDL) is proposed. The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and receiving of signals through the device. Preliminary experimental results with commercial off-the-shelf parts are presented which demonstrated non-reciprocal behavior with greater than 40 dB isolation from 200 kHz to 200 MHz. The theory and experimental results demonstrated that the SSDL concept may lead to future on-chip circulators over multi-octaves of frequency.

  12. Ultra-Wide Band Non-reciprocity through Sequentially-Switched Delay Lines

    PubMed Central

    Biedka, Mathew M.; Zhu, Rui; Xu, Qiang Mark; Wang, Yuanxun Ethan

    2017-01-01

    Achieving non-reciprocity through unconventional methods without the use of magnetic material has recently become a subject of great interest. Towards this goal, a time-switching strategy known as the Sequentially-Switched Delay Line (SSDL) is proposed. The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and receiving of signals through the device. Preliminary experimental results with commercial off-the-shelf parts are presented which demonstrated non-reciprocal behavior with greater than 40 dB isolation from 200 kHz to 200 MHz. The theory and experimental results demonstrated that the SSDL concept may lead to future on-chip circulators over multi-octaves of frequency. PMID:28059132

  13. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    DTIC Science & Technology

    2013-08-01

    in Sequential Design Optimization with Concurrent Calibration-Based Model Validation. Dorin Drignei, Mathematics and Statistics Department... Authors: Dorin Drignei; Zissimos Mourelatos; Vijitashwa Pandey

  14. Differential-Game Examination of Optimal Time-Sequential Fire-Support Strategies

    DTIC Science & Technology

    1976-09-01

    NPS-55Tw76091, Naval Postgraduate School, Monterey, California. Differential-Game Examination of Optimal Time-Sequential Fire-Support Strategies. Technical Report. Keywords: Differential Games; Lanchester Theory of Combat; Military Tactics

  15. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to settle the trajectory optimization problem with parametric uncertainties in entry dynamics for Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the approximation of trajectory solution efficiently. The MPP method, which is used for assessing the reliability of constraints satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraints update is repeated in the RBSO until the reliability requirements of constraints satisfaction are satisfied. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  16. Real time on-chip sequential adaptive principal component analysis for data feature extraction and image compression

    NASA Technical Reports Server (NTRS)

    Duong, T. A.

    2004-01-01

    In this paper, we present a new, simple, and optimized hardware-architecture sequential learning technique for adaptive Principal Component Analysis (PCA), which will help optimize the hardware implementation in VLSI and overcome the difficulties of traditional gradient descent in learning convergence and hardware implementation.
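    The paper's specific VLSI-oriented learning rule is not reproduced here, but the general idea of sequential (sample-by-sample) PCA without batch eigendecomposition can be illustrated with Oja's rule, a standard update of this kind; the 2-D data stream and learning rate below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
# Correlated 2-D data stream whose principal axis lies along (1, 1)/sqrt(2)
cov = np.array([[3.0, 2.0], [2.0, 3.0]])
data = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Oja's rule: a Hebbian update with built-in normalization that makes the
# weight vector track the leading eigenvector, one sample at a time.
w = np.array([1.0, 0.0])
eta = 0.01
for x in data:
    y = w @ x                     # projection onto the current component
    w += eta * y * (x - y * w)    # Hebbian term minus normalizing decay

w /= np.linalg.norm(w)
principal = np.array([1.0, 1.0]) / np.sqrt(2)   # true first eigenvector
```

Each update touches only one sample, which is what makes this family of rules attractive for streaming or on-chip settings where storing the full covariance is impractical.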

  17. Estimation of parameters and basic reproduction ratio for Japanese encephalitis transmission in the Philippines using sequential Monte Carlo filter

    USDA-ARS?s Scientific Manuscript database

    We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...
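    A bootstrap sequential Monte Carlo (particle) filter of the general kind described can be sketched on a toy susceptible-infected model; the logistic dynamics, noise levels, and parameter-jitter scheme below are illustrative assumptions, not the authors' JE model. Static parameter particles are carried alongside the state and refreshed by resampling plus a small jitter.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy discrete-time SI dynamics: i_{t+1} = i_t + beta*i_t*(1-i_t) + noise
def step(i, beta):
    noise = rng.normal(0, 0.005, np.shape(i))
    return np.clip(i + beta * i * (1.0 - i) + noise, 0.0, 1.0)

true_beta, obs_sd, T = 0.3, 0.02, 60

# Generate synthetic noisy prevalence observations
i_true, ys = 0.01, []
for _ in range(T):
    i_true = step(i_true, true_beta)
    ys.append(i_true + rng.normal(0, obs_sd))

# Bootstrap particle filter over (beta, i) particles
n = 2000
betas = rng.uniform(0.05, 0.8, n)       # prior on the transmission parameter
states = np.full(n, 0.01)
for y in ys:
    states = step(states, betas)
    w = np.exp(-0.5 * ((y - states) / obs_sd) ** 2) + 1e-300
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)    # multinomial resampling
    betas, states = betas[idx], states[idx]
    betas += rng.normal(0, 0.005, n)    # jitter to fight particle degeneracy
```

The informative growth phase of the epidemic concentrates the parameter particles around the true transmission rate, which is the adaptability-to-new-incidence-data property the abstract emphasizes.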

  18. Optimization of a novel sequential alkalic and metal salt pretreatment for enhanced delignification and enzymatic saccharification of corn cobs.

    PubMed

    Sewsynker-Sukai, Yeshona; Gueguim Kana, E B

    2017-11-01

    This study presents a sequential sodium phosphate dodecahydrate (Na3PO4·12H2O) and zinc chloride (ZnCl2) pretreatment to enhance delignification and enzymatic saccharification of corn cobs. The effects of the process parameters of Na3PO4·12H2O concentration (5-15%), ZnCl2 concentration (1-5%) and solid to liquid ratio (5-15%) on reducing sugar yield from corn cobs were investigated. The sequential pretreatment model was developed and optimized with a high coefficient of determination value (0.94). Maximum reducing sugar yield of 1.10 ± 0.01 g/g was obtained with 14.02% Na3PO4·12H2O, 3.65% ZnCl2 and 5% solid to liquid ratio. Scanning electron microscopy (SEM) and Fourier transform infrared (FTIR) analysis showed major lignocellulosic structural changes after the optimized sequential pretreatment, with 63.61% delignification. In addition, a 10-fold increase in the sugar yield was observed compared to previous reports on the same substrate. This sequential pretreatment strategy was efficient for enhancing enzymatic saccharification of corn cobs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. C-SPECT - a Clinical Cardiac SPECT/Tct Platform: Design Concepts and Performance Potential

    PubMed Central

    Chang, Wei; Ordonez, Caesar E.; Liang, Haoning; Li, Yusheng; Liu, Jingai

    2013-01-01

    Because of scarcity of photons emitted from the heart, clinical cardiac SPECT imaging is mainly limited by photon statistics. The sub-optimal detection efficiency of current SPECT systems not only limits the quality of clinical cardiac SPECT imaging but also makes more advanced potential applications difficult to be realized. We propose a high-performance system platform - C-SPECT, which has its sampling geometry optimized for detection of emitted photons in quality and quantity. The C-SPECT has a stationary C-shaped gantry that surrounds the left-front side of a patient’s thorax. The stationary C-shaped collimator and detector systems in the gantry provide effective and efficient detection and sampling of photon emission. For cardiac imaging, the C-SPECT platform could achieve 2 to 4 times the system geometric efficiency of conventional SPECT systems at the same sampling resolution. This platform also includes an integrated transmission CT for attenuation correction. The ability of C-SPECT systems to perform sequential high-quality emission and transmission imaging could bring cost-effective high-performance to clinical imaging. In addition, a C-SPECT system could provide high detection efficiency to accommodate fast acquisition rate for gated and dynamic cardiac imaging. This paper describes the design concepts and performance potential of C-SPECT, and illustrates how these concepts can be implemented in a basic system. PMID:23885129

  20. Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex Optimization

    PubMed Central

    Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.

    2008-01-01

    The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and optimize new nanoparticles, experimental design was performed combining Taguchi array and sequential simplex optimization. The combination of Taguchi array and sequential simplex optimization efficiently directed the design of paclitaxel nanoparticles. Two optimized paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78 and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes less than 200 nm and over 85% of entrapment efficiency. These novel paclitaxel nanoparticles were stable at 4°C over three months and in PBS at 37°C over 102 hours as measured by physical stability. Release of paclitaxel was slow and sustained without initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have similar anticancer activities compared to Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder comprised only of PX BTM NPs in water could be rapidly rehydrated with complete retention of original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential Simplex Optimization has been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929

  1. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
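    For intuition on why effect-size uncertainty drives the design choice, a fixed-design two-sample z-test sketch (standard textbook formulas, unit-variance outcomes assumed) shows how power degrades when the true effect is smaller than planned; the adaptive designs discussed above exist precisely to hedge against this.

```python
from statistics import NormalDist

nd = NormalDist()

def n_per_arm(delta, alpha=0.05, power=0.8):
    """Fixed-design sample size per arm for a two-sample z-test
    (unit-variance outcome, two-sided alpha)."""
    za = nd.inv_cdf(1 - alpha / 2)
    zb = nd.inv_cdf(power)
    return 2 * (za + zb) ** 2 / delta ** 2

def achieved_power(n, delta, alpha=0.05):
    """Power of the z-test with n per arm at true effect delta."""
    za = nd.inv_cdf(1 - alpha / 2)
    return 1 - nd.cdf(za - delta * (n / 2) ** 0.5)

# Plan at a presumed effect of 0.3, then check power over a range of
# plausible true effects -- the robustness notion discussed above.
n = n_per_arm(0.3)
for delta in (0.2, 0.3, 0.4, 0.5):
    print(delta, round(achieved_power(n, delta), 3))
```

By construction the design hits its nominal 80% power at the planned effect, exceeds it for larger effects, and falls well short for smaller ones.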

  2. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
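    The eigenvalue-sensitivity application is not reproduced here, but the core idea, converting a nonlinear design problem into a sequence of linear programs with move limits, can be sketched on a toy two-variable problem; scipy's HiGHS-based `linprog` is assumed available, and the shrinking trust region plays the role of the continuation procedure.

```python
import numpy as np
from scipy.optimize import linprog

# Sequential linear programming (SLP) sketch: linearize the objective at
# the current iterate and solve an LP with a per-iteration move limit.
def f(x):
    return x[0] ** 2 + x[1] ** 2           # nonlinear objective

def grad(x):
    return np.array([2 * x[0], 2 * x[1]])  # its first-order sensitivity

x = np.array([3.0, 0.0])                   # feasible starting point
for k in range(60):
    g = grad(x)
    step = max(0.5 * 0.9 ** k, 1e-3)       # shrinking move limit (continuation)
    # LP: minimize g . d  subject to  (x+d)[0] + (x+d)[1] >= 1, |d_i| <= step
    res = linprog(g,
                  A_ub=[[-1.0, -1.0]], b_ub=[x[0] + x[1] - 1.0],
                  bounds=[(-step, step), (-step, step)],
                  method="highs")
    x = x + res.x
```

The linearized constraint keeps every iterate feasible, and the shrinking move limit damps the vertex-to-vertex oscillation that plain SLP would otherwise exhibit near the optimum at (0.5, 0.5).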

  3. SU-E-T-512: Electromagnetic Simulations of the Dielectric Wall Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselmann, A; Mackie, T

    Purpose: To characterize and parametrically study the key components of a dielectric wall accelerator through electromagnetic modeling and particle tracking. Methods: Electromagnetic and particle tracking simulations were performed using a commercial code (CST Microwave Studio, CST Inc.) utilizing the finite integration technique. A dielectric wall accelerator consists of a series of stacked transmission lines sequentially fired in synchrony with an ion pulse. Numerous properties of the stacked transmission lines, including geometric, material, and electronic properties, were analyzed and varied in order to assess their impact on the transverse and axial electric fields. Additionally, stacks of transmission lines were simulated in order to quantify the parasitic effect observed in closely packed lines. Particle tracking simulations using the particle-in-cell method were performed on the various stacks to determine the impact of the above properties on the resultant phase space of the ions. Results: Examination of the simulation results shows that novel geometries can shape the accelerating pulse in order to reduce the energy spread and increase the average energy of accelerated ions. Parasitic effects were quantified for various geometries and found to vary with distance from the end of the transmission line and along the beam axis. An optimal arrival time of an ion pulse relative to the triggering of the transmission lines for a given geometry was determined through parametric study. Benchmark simulations of single transmission lines agree well with published experimental results. Conclusion: This work characterized the behavior of the transmission lines used in a dielectric wall accelerator and used this information to improve them in novel ways. Utilizing novel geometries, we were able to improve the accelerating gradient and phase space of the accelerated particle bunch.
Through simulation, we were able to discover and optimize design issues with the device at low cost. Funding: Morgridge Institute for Research, Madison WI; Conflict of Interest: Dr. Mackie is an investor and board member at CPAC, a company developing compact accelerator designs similar to those discussed in this work, but designs discussed are not directed by CPAC.

  4. Experimental transmission of avian-like swine H1N1 influenza virus between immunologically naïve and vaccinated pigs.

    PubMed

    Lloyd, Lucy E; Jonczyk, Magdalena; Jervis, Carley M; Flack, Deborah J; Lyall, John; Foote, Alasdair; Mumford, Jennifer A; Brown, Ian H; Wood, James L; Elton, Debra M

    2011-09-01

    Infection of pigs with swine influenza has been studied experimentally and in the field; however, little information is available on the natural transmission of this virus in pigs. Two studies in an experimental transmission model are presented here, one in immunologically naïve and one in a combination of vaccinated and naïve pigs. To investigate the transmission of a recent 'avian-like' swine H1N1 influenza virus in naive piglets, to assess the antibody response to a commercially available vaccine and to determine the efficiency of transmission in pigs after vaccination. Transmission chains were initiated by intranasal challenge of two immunologically naïve pigs. Animals were monitored daily for clinical signs and virus shedding. Pairs of pigs were sequentially co-housed, and once virus was detected in recipients, prior donors were removed. In the vaccination study, piglets were vaccinated and circulating antibody levels were monitored by haemagglutination inhibition assay. To study transmission in vaccinates, a pair of infected immunologically naïve animals was co-housed with vaccinated recipient pigs and further pairs of vaccinates were added sequentially as above. The chain was completed by the addition of naive pigs. Transmission of the H1N1 virus was achieved through a chain of six pairs of naïve piglets and through four pairs of vaccinated animals. Transmission occurred with minimal clinical signs and, in vaccinates, at antibody levels higher than previously reported to protect against infection. © 2011 Blackwell Publishing Ltd.

  5. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
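
The CTMI referenced in this record (Rosso et al., 1993) is compact enough to state directly. The sketch below implements the four-parameter growth-rate curve; the cardinal temperatures and optimal rate used in the test are illustrative values, not the paper's fitted estimates.

```python
def ctmi_growth_rate(T, T_min, T_opt, T_max, mu_opt):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1993):
    microbial growth rate as a function of temperature, zero outside
    the interval (T_min, T_max)."""
    if T <= T_min or T >= T_max:
        return 0.0
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                             - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
    return mu_opt * num / den
```

By construction the model returns mu_opt exactly at T = T_opt, which makes a convenient sanity check when estimating the four parameters from designed experiments.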

  6. Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations

    ERIC Educational Resources Information Center

    Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad

    2016-01-01

In a sequential OSCE, which has been suggested as a way to reduce testing costs, candidates take a short screening test, and those who fail it are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…

  7. Multiuser signal detection using sequential decoding

    NASA Astrophysics Data System (ADS)

    Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.

    1990-05-01

The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
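
The metric modification in this record is specific to the multiuser setting, but the generic ingredients it builds on can be sketched: a stack (best-first) decoder driven by the Fano metric, shown here for a toy single-user rate-1/2 convolutional code on a binary symmetric channel. The code generators, channel parameter, and message are illustrative assumptions, not the paper's system.

```python
import heapq
import itertools
import math

# Toy rate-1/2 convolutional code: constraint length 3, generators (7, 5) octal.
G = (0b111, 0b101)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = ((state << 1) | b) & 0b111
        out += [parity(reg & g) for g in G]
        state = reg & 0b11
    return out

def stack_decode(received, n_bits, p=0.05, rate=0.5):
    """Best-first (stack) sequential decoding with the Fano metric on a BSC(p).
    Per-bit metric: log2 P(r|c) - log2 P(r) - R, with P(r) = 1/2."""
    match = math.log2(2 * (1 - p)) - rate        # hypothesized bit agrees
    mismatch = math.log2(2 * p) - rate           # hypothesized bit disagrees
    tie = itertools.count()
    heap = [(0.0, next(tie), [], 0)]             # (-metric, tiebreak, path, state)
    while heap:
        neg_m, _, path, state = heapq.heappop(heap)
        if len(path) == n_bits:
            return path                          # best full-length path found
        for b in (0, 1):                         # extend the top path both ways
            reg = ((state << 1) | b) & 0b111
            m = -neg_m
            for i, g in enumerate(G):
                r = received[2 * len(path) + i]
                m += match if parity(reg & g) == r else mismatch
            heapq.heappush(heap, (-m, next(tie), path + [b], reg & 0b11))
    return None
```

The rate bias in the Fano metric keeps long partial paths from being favored automatically, so the stack explores short promising paths and long noisy ones on an even footing.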

  8. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    PubMed

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  9. Studying Pulsed Laser Deposition conditions for Ni/C-based multi-layers

    NASA Astrophysics Data System (ADS)

    Bollmann, Tjeerd R. J.

    2018-04-01

Nickel carbon based multi-layers are a viable route towards future hard X-ray and soft γ-ray focusing telescopes. Here, we study the Pulsed Laser Deposition growth conditions of such bilayers by Reflective High Energy Electron Diffraction, X-ray Reflectivity and Diffraction, Atomic Force Microscopy, X-ray Photoelectron Spectroscopy and cross-sectional Transmission Electron Microscopy analysis, with emphasis on optimization of process pressure and substrate temperature during growth. The thin multi-layers are grown on a treated SiO substrate, resulting in Ni and C layers with surface roughnesses (RMS) of ≤0.2 nm. Small droplets, formed during melting of the target surface, increase the roughness, however, and cannot be avoided. The sequential process at temperatures beyond 300 °C results in intermixing between the two layers, which is destructive to the reflectivity of the multi-layer.

  10. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  11. Optimizing Standard Sequential Extraction Protocol With Lake And Ocean Sediments

    EPA Science Inventory

    The environmental mobility/availability behavior of radionuclides in soils and sediments depends on their speciation. Experiments have been carried out to develop a simple but robust radionuclide sequential extraction method for identification of radionuclide partitioning in sed...

  12. Distributed Immune Systems for Wireless Network Information Assurance

    DTIC Science & Technology

    2010-04-26

…ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the … using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio … the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability…
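
The snippet above names CUSUM and SPRT procedures without detail. As a generic illustration (not the report's algorithm), a one-sided CUSUM for detecting an upward mean shift in Gaussian data accumulates log-likelihood-ratio increments, clipped at zero, and raises an alarm when the statistic crosses a threshold:

```python
import random

def cusum_alarm(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """One-sided CUSUM: detect a mean shift from mu0 to mu1 in N(mu, sigma) data.
    Returns the index of the first alarm, or None if no crossing occurs."""
    s = 0.0
    for i, x in enumerate(samples):
        # log-likelihood-ratio increment of N(mu1, sigma) against N(mu0, sigma)
        llr = (mu1 - mu0) / sigma ** 2 * (x - (mu0 + mu1) / 2.0)
        s = max(0.0, s + llr)  # clip at zero: restart when evidence favors H0
        if s > threshold:
            return i
    return None

random.seed(1)
pre = [random.gauss(0.0, 1.0) for _ in range(100)]   # in-control traffic
post = [random.gauss(1.0, 1.0) for _ in range(50)]   # after the mean shift
```

The threshold trades false-alarm rate against detection delay; sequential formulations of the intrusion-detection problem tune it against an average-run-length constraint.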

  13. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.

  14. Analyzing multicomponent receptive fields from neural responses to natural stimuli

    PubMed Central

    Rowekamp, Ryan; Sharpee, Tatyana O

    2011-01-01

    The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916

  15. Millimeter wave transmission studies of YBa2Cu3O7-delta thin films in the 26.5 to 40.0 GHz frequency range

    NASA Technical Reports Server (NTRS)

    Miranda, F. A.; Gordon, W. L.; Bhasin, K. B.; Heinen, V. O.; Warner, J. D.; Valco, G. J.

    1989-01-01

Millimeter wave transmission measurements through YBa2Cu3O(7-delta) thin films on MgO, ZrO2 and LaAlO3 substrates are reported. The films (approx. 1 micron) were deposited by sequential evaporation and laser ablation techniques. Transition temperatures T_c ranging from 89.7 K for the laser-ablated film on LaAlO3 to approximately 72 K for the sequentially evaporated film on MgO were obtained. The values of the real and imaginary parts of the complex conductivity, σ1 and σ2, are obtained from the transmission data, assuming a two-fluid model. The BCS approach is used to calculate values for an effective energy gap from the obtained values of σ1. A range of gap values from 2Δ0/k_B·T_c = 4.19 to 4.35 was obtained. The magnetic penetration depth is evaluated from the deduced values of σ2. These results are discussed together with the frequency dependence of the normalized transmission amplitude, P/P_c, below and above T_c.

  16. Parameters optimization for magnetic resonance coupling wireless power transmission.

    PubMed

    Li, Changsheng; Zhang, He; Jiang, Xiaohua

    2014-01-01

Taking maximum power transmission and power transmission stability as research objectives, an optimal design for a wireless power transmission system based on magnetic resonance coupling is carried out in this paper. Firstly, based on the mutual coupling model, mathematical expressions of the optimal coupling coefficients for the maximum power transmission target are deduced. Then, methods of enhancing power transmission stability through optimal parameter design are investigated. It is found that the sensitivity of the load power to the transmission parameters can be reduced, and the power transmission stability enhanced, by improving the system resonance frequency or the coupling coefficient between the driving/pick-up coil and the transmission/receiving coil. Experimental results conform well to the theoretical analysis.
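
For a series-series resonant link driven at resonance, the mutual-coupling model this abstract refers to reduces to a closed-form load power and an optimal ("critical") coupling coefficient where (ωM)² = R1·(R2 + RL). The sketch below uses that textbook reduction with illustrative component values; it is not the authors' specific circuit.

```python
import math

def load_power(k, V=10.0, f=100e3, L1=24e-6, L2=24e-6,
               R1=0.5, R2=0.5, RL=10.0):
    """Load power of a series-series resonant link at resonance.
    The secondary reflects an impedance (wM)^2/(R2+RL) onto the primary."""
    w = 2.0 * math.pi * f
    wM = w * k * math.sqrt(L1 * L2)          # M = k * sqrt(L1 * L2)
    return V ** 2 * wM ** 2 * RL / (R1 * (R2 + RL) + wM ** 2) ** 2

def critical_coupling(f=100e3, L1=24e-6, L2=24e-6, R1=0.5, R2=0.5, RL=10.0):
    """Coupling coefficient maximizing load power: (wM)^2 = R1 * (R2 + RL)."""
    w = 2.0 * math.pi * f
    return math.sqrt(R1 * (R2 + RL)) / (w * math.sqrt(L1 * L2))
```

Above the critical coupling the reflected impedance over-loads the source and delivered power falls again, which is why the coupling coefficient is treated as a design variable rather than simply being maximized.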

  17. Co-Optimization of Electricity Transmission and Generation Resources for Planning and Policy Analysis: Review of Concepts and Modeling Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat; Ho, Jonathan; Hobbs, Benjamin F.

    2016-05-01

The recognition of transmission's interaction with other resources has motivated the development of co-optimization methods to optimize transmission investment while simultaneously considering tradeoffs with investments in electricity supply, demand, and storage resources. For a given set of constraints, co-optimized planning models provide solutions that have lower costs than solutions obtained from decoupled optimization (transmission-only, generation-only, or iterations between them). This paper describes co-optimization and provides an overview of approaches to co-optimizing transmission options, supply-side resources, demand-side resources, and natural gas pipelines. In particular, the paper provides an up-to-date assessment of the present and potential capabilities of existing co-optimization tools, and it discusses needs and challenges for developing advanced co-optimization models.

  18. Home | BEopt

    Science.gov Websites

BEopt (Building Energy Optimization) software, from NREL (the National Renewable Energy Laboratory), provides capabilities to evaluate residential building designs and identify cost-optimal efficiency packages. The sequential search optimization technique used by BEopt finds minimum-cost building designs at different levels of energy savings.
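
The sequential search technique the BEopt page refers to can be illustrated with a toy greedy version: starting from a reference design, repeatedly apply whichever single upgrade saves the most energy per added dollar. The option data below are invented for illustration; BEopt's actual algorithm also handles option interactions and traces the full cost/savings frontier.

```python
# Hypothetical upgrade options per category: cumulative (added_cost_usd,
# annual_energy_savings_kwh) relative to the level-0 reference design.
OPTIONS = {
    "insulation": [(0, 0), (900, 1400), (2200, 2100)],
    "windows":    [(0, 0), (1500, 900), (3200, 1300)],
    "furnace":    [(0, 0), (1100, 1600)],
}

def sequential_search(options):
    """Greedy sequential search: at each step take the single upgrade with the
    best marginal savings-per-dollar; returns the path of (cost, savings)."""
    level = {cat: 0 for cat in options}
    cost = savings = 0
    path = [(0, 0)]
    while True:
        best = None
        for cat, opts in options.items():
            i = level[cat]
            if i + 1 < len(opts):                       # an upgrade remains
                d_cost = opts[i + 1][0] - opts[i][0]
                d_save = opts[i + 1][1] - opts[i][1]
                if best is None or d_save / d_cost > best[0]:
                    best = (d_save / d_cost, cat, d_cost, d_save)
        if best is None:                                # everything maxed out
            break
        _, cat, d_cost, d_save = best
        level[cat] += 1
        cost += d_cost
        savings += d_save
        path.append((cost, savings))
    return path
```

Each point on the returned path is the cheapest design found at that level of savings, which is the shape of result the sequential search is designed to produce.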

  19. Random Boolean networks for autoassociative memory: Optimization and sequential learning

    NASA Astrophysics Data System (ADS)

    Sherrington, D.; Wong, K. Y. M.

    Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some aspects of the properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised; (i) optimization in the presence of noise and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.

  20. Optimality of affine control system of several species in competition on a sequential batch reactor

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. C.; Ramírez, H.; Gajardo, P.; Rapaport, A.

    2014-09-01

In this paper, we analyse the optimality of an affine control system of several species in competition for a single substrate on a sequential batch reactor, with the objective being to reach a given (low) level of the substrate. We allow controls to be bounded measurable functions of time plus possible impulses. A suitable modification of the dynamics leads to a slightly different optimal control problem, without impulsive controls, for which we apply different optimality conditions derived from the Pontryagin principle and the Hamilton-Jacobi-Bellman equation. We thus characterise the singular trajectories of our problem as the extremal trajectories keeping the substrate at a constant level. We also establish conditions under which an immediate one impulse (IOI) strategy is optimal. Some numerical experiments are then included in order to illustrate our study and show that those conditions are also necessary to ensure the optimality of the IOI strategy.

  1. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  2. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Under this method, metamodels are constructed repeatedly through the addition of sampling points, namely the extremum points of the metamodel and the minimum points of a density function, yielding progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
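
A minimal 1-D sketch of the idea, under simplifying assumptions: fit a Gaussian-kernel RBF interpolant to the samples so far, then alternately add the metamodel's current minimizer (the paper also uses extrema) and a space-filling point, which here stands in for the density-function criterion. The kernel width, grid, and point counts are arbitrary illustrative choices, not the paper's settings.

```python
import math

def rbf_fit(xs, ys, eps=2.0):
    """Solve the Gaussian-kernel RBF interpolation system for the weights
    (plain Gaussian elimination with partial pivoting, stdlib only)."""
    n = len(xs)
    M = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

def rbf_eval(x, xs, w, eps=2.0):
    return sum(wi * math.exp(-(eps * (x - xi)) ** 2) for wi, xi in zip(w, xs))

def sequential_sample(f, lo, hi, n_init=4, n_add=6, grid=201, eps=2.0):
    """Sequential sampling: alternate between the metamodel's minimizer
    (exploitation) and the point farthest from existing samples (exploration)."""
    xs = [lo + i * (hi - lo) / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    pts = [lo + i * (hi - lo) / (grid - 1) for i in range(grid)]
    for it in range(n_add):
        w = rbf_fit(xs, ys, eps)
        if it % 2 == 0:
            x_new = min(pts, key=lambda x: rbf_eval(x, xs, w, eps))
        else:
            x_new = max(pts, key=lambda x: min(abs(x - xi) for xi in xs))
        if min(abs(x_new - xi) for xi in xs) > 1e-9:  # skip exact duplicates
            xs.append(x_new)
            ys.append(f(x_new))
    return xs, ys
```

Alternating the two criteria is what keeps the sample set from collapsing onto the current metamodel minimum, which is the failure mode the density-function points guard against.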

  3. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and the alleviation of this discrepancy can improve the efficiency of optimizers.

  4. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess performance of different optimizers through the development of a computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimizations technique SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization and the alleviation of this discrepancy can improve the efficiency of optimizers.

  5. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate.
The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.

  6. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116

  7. Three parameters optimizing closed-loop control in sequential segmental neuromuscular stimulation.

    PubMed

    Zonnevijlle, E D; Somia, N N; Perez Abadia, G; Stremel, R W; Maldonado, C J; Werker, P M; Kon, M; Barker, J H

    1999-05-01

    In conventional dynamic myoplasties, the force generation is poorly controlled. This causes unnecessary fatigue of the transposed/transplanted electrically stimulated muscles and causes damage to the involved tissues. We introduced sequential segmental neuromuscular stimulation (SSNS) to reduce muscle fatigue by allowing part of the muscle to rest periodically while the other parts work. Despite this improvement, we hypothesize that fatigue could be further reduced in some applications of dynamic myoplasty if the muscles were made to contract according to need. The first necessary step is to gain appropriate control over the contractile activity of the dynamic myoplasty. Therefore, closed-loop control was tested on a sequentially stimulated neosphincter to strive for the best possible control over the amount of generated pressure. A selection of parameters was validated for optimizing control. We concluded that the frequency of corrections, the threshold for corrections, and the transition time are meaningful parameters in the controlling algorithm of the closed-loop control in a sequentially stimulated myoplasty.
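
The three parameters validated in this abstract (correction frequency, correction threshold, and transition time) can be seen at work in a toy closed-loop simulation. The plant below is a generic first-order pressure response to the stimulation level, not a physiological muscle model; all numerical values are illustrative assumptions.

```python
def simulate_closed_loop(target=50.0, t_end=10.0, dt=0.01,
                         correction_interval=0.2, threshold=2.0,
                         transition_time=0.1, gain=0.5, tau=0.5):
    """Toy closed-loop pressure control: corrections occur at a fixed frequency
    (1 / correction_interval), only when |error| exceeds the threshold, and
    each correction is ramped in over the transition time."""
    pressure = stim = stim_cmd = ramp_rate = 0.0
    next_check = 0.0
    trace = []
    t = 0.0
    while t < t_end:
        if t >= next_check:
            err = target - pressure
            if abs(err) > threshold:
                stim_cmd += gain * err               # incremental correction
                ramp_rate = (stim_cmd - stim) / transition_time
            next_check += correction_interval
        # ramp the stimulation level toward the commanded value
        if (ramp_rate > 0 and stim < stim_cmd) or (ramp_rate < 0 and stim > stim_cmd):
            stim += ramp_rate * dt
        # first-order plant: pressure follows stimulation with time constant tau
        pressure += (stim - pressure) * dt / tau
        trace.append(pressure)
        t += dt
    return trace
```

The threshold creates a dead band that avoids constant re-stimulation near the target, while the transition time smooths each correction; too long an interval or too high a gain makes the loop oscillate, which is why these parameters are worth optimizing.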

  8. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults

    PubMed Central

    Tait, Jamie L.; Duckham, Rachel L.; Milte, Catherine M.; Main, Luana C.; Daly, Robin M.

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people. PMID:29163146

  9. 14 day sequential therapy versus 10 day bismuth quadruple therapy containing high-dose esomeprazole in the first-line and second-line treatment of Helicobacter pylori: a multicentre, non-inferiority, randomized trial.

    PubMed

    Liou, Jyh-Ming; Chen, Chieh-Chang; Fang, Yu-Jen; Chen, Po-Yueh; Chang, Chi-Yang; Chou, Chu-Kuang; Chen, Mei-Jyh; Tseng, Cheng-Hao; Lee, Ji-Yuh; Yang, Tsung-Hua; Chiu, Min-Chin; Yu, Jian-Jyun; Kuo, Chia-Chi; Luo, Jiing-Chyuan; Hsu, Wen-Feng; Hu, Wen-Hao; Tsai, Min-Horn; Lin, Jaw-Town; Shun, Chia-Tung; Twu, Gary; Lee, Yi-Chia; Bair, Ming-Jong; Wu, Ming-Shiang

    2018-05-29

Whether extending the treatment length and the use of high-dose esomeprazole may optimize the efficacy of Helicobacter pylori eradication remains unknown. To compare the efficacy and tolerability of optimized 14 day sequential therapy and 10 day bismuth quadruple therapy containing high-dose esomeprazole in first-line therapy, we recruited 620 adult patients (≥20 years of age) with H. pylori infection naive to treatment in this multicentre, open-label, randomized trial. Patients were randomly assigned to receive 14 day sequential therapy or 10 day bismuth quadruple therapy, both containing esomeprazole 40 mg twice daily. Those who failed after 14 day sequential therapy received rescue therapy with 10 day bismuth quadruple therapy and vice versa. Our primary outcome was the eradication rate in the first-line therapy. Antibiotic susceptibility was determined. ClinicalTrials.gov: NCT03156855. The eradication rates of 14 day sequential therapy and 10 day bismuth quadruple therapy were 91.3% (283 of 310, 95% CI 87.4%-94.1%) and 91.6% (284 of 310, 95% CI 87.8%-94.3%) in the ITT analysis, respectively (difference -0.3%, 95% CI -4.7% to 4.4%, P = 0.886). However, the frequencies of adverse effects were significantly higher in patients treated with 10 day bismuth quadruple therapy than in those treated with 14 day sequential therapy (74.4% versus 36.7%, P < 0.0001). The eradication rate of 14 day sequential therapy in strains with and without the 23S ribosomal RNA mutation was 80% (24 of 30) and 99% (193 of 195), respectively (P < 0.0001). Optimized 14 day sequential therapy was non-inferior to, but better tolerated than, 10 day bismuth quadruple therapy, and both may be used in first-line treatment in populations with low to intermediate clarithromycin resistance.

  10. A versatile semi-permanent sequential bilayer/diblock polymer coating for capillary isoelectric focusing.

    PubMed

    Bahnasy, Mahmoud F; Lucy, Charles A

    2012-12-07

A sequential surfactant bilayer/diblock copolymer coating was previously developed for the separation of proteins. The coating is formed by flushing the capillary with the cationic surfactant dioctadecyldimethylammonium bromide (DODAB) followed by the neutral polymer poly-oxyethylene (POE) stearate. Herein we show the method development and optimization for capillary isoelectric focusing (cIEF) separations based on the developed sequential coating. Electroosmotic flow can be tuned by varying the POE chain length, which allows optimization of resolution and analysis time. DODAB/POE 40 stearate can be used to perform single-step cIEF, while both DODAB/POE 40 and DODAB/POE 100 stearate allow performing two-step cIEF methodologies. A set of peptide markers is used to assess the coating performance. The sequential coating has been applied successfully to cIEF separations using different capillary lengths and inner diameters. A linear pH gradient is established only in the two-step cIEF methodology, using a 2.5% (v/v) pH 3-10 carrier ampholyte. Hemoglobin A(0) and S variants are successfully resolved on DODAB/POE 40 stearate sequentially coated capillaries. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  12. Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.

    PubMed

    Barrera, Roberto

    2009-07-01

    Pupal surveys of Aedes aegypti (L.) are useful indicators of risk for dengue transmission, although sample sizes for reliable estimations can be large. This study explores two methods for making pupal surveys more practical yet reliable, using data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person for each sampling followed a negative binomial distribution, thus showing aggregation. One method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae/person was above a vector density threshold for dengue transmission. A second approach used the finding that the mean number of pupae/person is correlated with the proportion of pupa-infested households and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
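    The sequential sampling rule sketched in this abstract — classify the mean pupae/person against a density threshold using a common negative binomial aggregation parameter k — can be illustrated as a Wald sequential probability ratio test. This is a hedged reconstruction, not the authors' program; the function name and the values of k and the two hypothesized means below are hypothetical.

```python
import math

def sprt_negbin(observations, k, m0, m1, alpha=0.1, beta=0.1):
    """Wald SPRT for a negative binomial mean with shared dispersion k.

    Classifies the mean pupae/person as 'below' m0 or 'above' m1, stopping
    as soon as the accumulated evidence crosses either boundary. Returns
    (decision, samples_used); decision is 'continue' if data run out first.
    """
    upper = math.log((1.0 - beta) / alpha)   # boundary for deciding 'above'
    lower = math.log(beta / (1.0 - alpha))   # boundary for deciding 'below'
    # NB(mean m, dispersion k): p(x) ∝ (m/(k+m))**x * (k/(k+m))**k, so the
    # binomial coefficient cancels in the per-observation log-likelihood ratio.
    slope = math.log((m1 / (k + m1)) / (m0 / (k + m0)))
    intercept = k * math.log((k + m0) / (k + m1))
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += slope * x + intercept
        if llr >= upper:
            return ("above", n)
        if llr <= lower:
            return ("below", n)
    return ("continue", len(observations))
```

    With k = 1, m0 = 0.5, and m1 = 2, household counts of [3, 2, 4] already trigger an "above" decision after three samples, which is the appeal of the sequential design: few samples suffice when the infestation is clearly on one side of the threshold.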

  13. Optimal mode transformations for linear-optical cluster-state generation

    DOE PAGES

    Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...

    2015-06-15

    In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate with a success rate of 1/9 per operation. The question of the optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient and show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselected entangling of n photonic qubits or m Bell pairs into a cluster is (1/2)^(n-1) and (1/4)^(m-1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.
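    The quoted rates can be checked numerically. The small sketch below contrasts the optimal rate (1/2)^(n-1) with a naive chain of n-1 stochastic CZ gates at 1/9 each, making the exponential advantage explicit; the function names are illustrative, not from the paper.

```python
def optimal_rate(n):
    """Maximal postselected success rate for fusing n photonic qubits into a
    cluster with no ancilla photons, per the abstract: (1/2)**(n-1)."""
    return 0.5 ** (n - 1)

def cz_chain_rate(n):
    """Success rate of building the same n-qubit linear cluster with n-1
    stochastic CZ gates, each succeeding with probability 1/9."""
    return (1.0 / 9.0) ** (n - 1)
```

    For n = 10 qubits the optimal scheme is already more than five orders of magnitude more likely to succeed than the CZ chain, since the ratio grows as (9/2)^(n-1).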

  14. [Mumps vaccine virus transmission].

    PubMed

    Otrashevskaia, E V; Kulak, M V; Otrashevskaia, A V; Karpov, I A; Fisenko, E G; Ignat'ev, G M

    2013-01-01

    In this work we report the mumps vaccine virus shedding based on the laboratory confirmed cases of the mumps virus (MuV) infection. The likely epidemiological sources of the transmitted mumps virus were children who were recently vaccinated with the mumps vaccine containing Leningrad-Zagreb or Leningrad-3 MuV. The etiology of the described cases of the horizontal transmission of both mumps vaccine viruses was confirmed by PCR with the sequential restriction analysis.

  15. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks.

    PubMed

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-04-19

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.

  16. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks

    PubMed Central

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-01-01

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062

  17. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides for a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank ordered design objectives or goals. For simulation purposes the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included demonstrating the design tool's capabilities as it applies to this design problem.
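    The sequential-unconstrained-minimization mode described above can be illustrated with a minimal exterior-penalty loop: the constraint is folded into a composite objective and the penalty weight is increased between unconstrained solves. This 1-D sketch using ternary search is an assumption-laden illustration of the general technique, not the report's actual tool (which used an extended interior penalty function and Powell's conjugate directions method).

```python
def penalty_minimize(f, g, x_lo, x_hi, penalties=(1.0, 10.0, 100.0, 1000.0)):
    """Sequential unconstrained minimization: fold the constraint g(x) <= 0
    into the objective with an exterior quadratic penalty, then re-minimize
    as the penalty weight r increases (1-D minimization by ternary search)."""
    x = None
    for r in penalties:
        def phi(x):
            # Composite objective: original f plus quadratic constraint penalty
            return f(x) + r * max(0.0, g(x)) ** 2
        lo, hi = x_lo, x_hi
        for _ in range(200):          # ternary search on the convex penalty fn
            a = lo + (hi - lo) / 3.0
            b = hi - (hi - lo) / 3.0
            if phi(a) < phi(b):
                hi = b
            else:
                lo = a
        x = 0.5 * (lo + hi)           # warm estimate for the next, stiffer r
    return x
```

    Minimizing f(x) = x² subject to x ≥ 1 (g(x) = 1 - x), the iterates approach the constrained optimum x* = 1 from the infeasible side, landing at r/(1+r) for each penalty weight r — the characteristic behavior of exterior penalty methods.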

  19. A Sequential Linear Quadratic Approach for Constrained Nonlinear Optimal Control with Adaptive Time Discretization and Application to Higher Elevation Mars Landing Problem

    NASA Astrophysics Data System (ADS)

    Sandhu, Amit

    A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm builds on the approach proposed in [1], with the objective of eliminating the need for a high number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without requiring many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher elevation Mars landing.
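    A minimal sketch of the adaptive time discretization idea, under the simplest possible rule: refine only those intervals across which the control changes sharply, so resolution is spent where the profile is steep. The function and the refinement criterion are hypothetical illustrations, not the thesis's algorithm.

```python
def refine_grid(times, controls, tol):
    """One adaptive-refinement pass: insert a midpoint into every interval
    across which the control value jumps by more than tol, leaving smooth
    regions on the original coarse grid."""
    new_times = [times[0]]
    for i in range(len(times) - 1):
        if abs(controls[i + 1] - controls[i]) > tol:
            new_times.append(0.5 * (times[i] + times[i + 1]))
        new_times.append(times[i + 1])
    return new_times
```

    In practice such a pass would alternate with re-solving the optimal control problem on the refined grid, stopping once no interval exceeds the tolerance.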

  20. Cost analysis of an electricity supply chain using modification of price based dynamic economic dispatch in wheeling transaction scheme

    NASA Astrophysics Data System (ADS)

    Wahyuda; Santosa, Budi; Rusdiansyah, Ahmad

    2018-04-01

    Deregulation of the electricity market requires coordination between parties to synchronize the optimization on the production side (power station) and the transport side (transmission). The electricity supply chain presented in this article is designed to facilitate this coordination. Generally, the production side is optimized with a price based dynamic economic dispatch (PBDED) model, while the transmission side is optimized with a multi-echelon distribution model, and the two optimizations are performed separately. This article proposes a joint model of PBDED and multi-echelon distribution for the combined optimization of production and transmission. This combined optimization is important because changes in electricity demand on the customer side will cause changes on the production side that in turn alter the transmission path. Transmission gives rise to two cost components: first, the cost of losses; second, the cost of using the transmission network (wheeling transaction). Costs due to losses are calculated from ohmic losses, while the cost of using transmission lines is calculated with the MW-mile method. As a result, the proposed method provides a best-allocation analysis for electricity transactions, together with analyses of generation emission levels and costs. As for the calculation of transmission costs, the Reverse MW-mile method produces a cheaper cost than the Absolute MW-mile method.
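    The wheeling-charge component can be illustrated with a toy MW-mile calculation. Conventions for the Reverse variant differ across the literature; the sketch below assumes the Absolute method charges |flow| on every line while the Reverse method credits counter-flows, which is consistent with the abstract's finding that Reverse yields the cheaper cost. All names and numbers are illustrative.

```python
def mw_mile_cost(flows_mw, lengths_miles, rate_per_mw_mile, method="absolute"):
    """Transmission-use charge for one wheeling transaction.

    flows_mw[l] is the incremental flow the transaction causes on line l
    (negative = counter-flow). 'absolute' charges |flow| on every line;
    'reverse' counts counter-flows as credits, so its total never exceeds
    the absolute charge.
    """
    total = 0.0
    for flow, miles in zip(flows_mw, lengths_miles):
        use = abs(flow) if method == "absolute" else flow
        total += rate_per_mw_mile * miles * use
    return total
```

    For a transaction adding 10 MW on a 5-mile line and relieving 4 MW on a 3-mile line at 2 $/MW-mile, the Absolute charge is 124 while the Reverse charge is 76 — the counter-flow credit is what makes the Reverse method cheaper.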

  1. Estimating the optimal dynamic antipsychotic treatment regime: Evidence from the sequential multiple assignment randomized CATIE Schizophrenia Study

    PubMed Central

    Shortreed, Susan M.; Moodie, Erica E. M.

    2012-01-01

    Treatment of schizophrenia is notoriously difficult and typically requires personalized adaption of treatment due to lack of efficacy of treatment, poor adherence, or intolerable side effects. The Clinical Antipsychotic Trials in Intervention Effectiveness (CATIE) Schizophrenia Study is a sequential multiple assignment randomized trial comparing the typical antipsychotic medication, perphenazine, to several newer atypical antipsychotics. This paper describes the marginal structural modeling method for estimating optimal dynamic treatment regimes and applies the approach to the CATIE Schizophrenia Study. Missing data and valid estimation of confidence intervals are also addressed. PMID:23087488

  2. Disease and disaster: Optimal deployment of epidemic control facilities in a spatially heterogeneous population with changing behaviour.

    PubMed

    Gaythorpe, Katy; Adams, Ben

    2016-05-21

    Epidemics of water-borne infections often follow natural disasters and extreme weather events that disrupt water management processes. The impact of such epidemics may be reduced by deployment of transmission control facilities such as clinics or decontamination plants. Here we use a relatively simple mathematical model to examine how demographic and environmental heterogeneities, population behaviour, and behavioural change in response to the provision of facilities combine to determine the optimal configurations of limited numbers of facilities to reduce epidemic size and endemic prevalence. We show that, if the presence of control facilities does not affect behaviour, a good general rule for responsive deployment to minimise epidemic size is to place them in exactly the locations where they will directly benefit the most people. However, if infected people change their behaviour to seek out treatment, then the deployment of facilities offering treatment can lead to complex effects that are difficult to foresee, so careful mathematical analysis is the only way to determine the optimal deployment. Behavioural changes in response to control facilities can also lead to critical facility numbers at which there is a radical change in the optimal configuration. So sequential improvement of a control strategy by adding facilities to an existing optimal configuration does not always produce another optimal configuration. We also show that the pre-emptive deployment of control facilities has conflicting effects. The configurations that minimise endemic prevalence are very different to those that minimise epidemic size. So cost-benefit analysis of strategies to manage endemic prevalence must factor in the frequency of extreme weather events and natural disasters. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  4. A transmission power optimization with a minimum node degree for energy-efficient wireless sensor networks with full-reachability.

    PubMed

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-03-20

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and full reachability wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is proposed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under the conditions of various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real-world deployments.
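    The dependence of transmission range (and hence power) on expected node degree and node density can be sketched under textbook assumptions: uniform random node placement, so the expected degree is n = density × π × r², and a power-law path-loss model for the transmit power. This is a generic illustration of the relationship, not the paper's adjustment model.

```python
import math

def transmission_range(node_density, target_degree):
    """Range r giving an expected node degree n under uniform (Poisson)
    node placement: n = density * pi * r**2, so r = sqrt(n / (density*pi))."""
    return math.sqrt(target_degree / (node_density * math.pi))

def transmission_power(r, p_ref=1.0, alpha=3.0):
    """Relative transmit power needed to reach range r under a power-law
    path-loss model P = p_ref * r**alpha (alpha typically 2-4)."""
    return p_ref * r ** alpha
```

    The cubic (or worse) growth of power with range is why over-provisioned transmission power is so costly: doubling the range to capture more neighbors multiplies energy use roughly eightfold at alpha = 3.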

  5. A Transmission Power Optimization with a Minimum Node Degree for Energy-Efficient Wireless Sensor Networks with Full-Reachability

    PubMed Central

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-01-01

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and full reachability wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is proposed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under the conditions of various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real-world deployments. PMID:23519351

  6. Sequential and parallel image restoration: neural network implementations.

    PubMed

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.

  7. Sequential quadratic programming-based fast path planning algorithm subject to no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang

    2016-08-01

    Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
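    For the circular no-fly zone case, the geometric condition reduces to a segment-to-point distance check, which, as the abstract notes, can be readily programmed. A minimal sketch (the function name is hypothetical, not from the article):

```python
def segment_hits_circle(p1, p2, center, radius):
    """True if the line segment p1-p2 comes within 'radius' of 'center',
    i.e. a straight flight leg would clip a circular no-fly zone."""
    (x1, y1), (x2, y2), (cx, cy) = p1, p2, center
    dx, dy = x2 - x1, y2 - y1
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                      # degenerate segment: a point
        t = 0.0
    else:                                    # project center onto the segment,
        t = ((cx - x1) * dx + (cy - y1) * dy) / seg_len2
        t = max(0.0, min(1.0, t))            # clamped to the endpoints
    nx, ny = x1 + t * dx - cx, y1 + t * dy - cy
    return nx * nx + ny * ny <= radius * radius
```

    Evaluated for each leg against each zone, such a predicate supplies the inequality constraints (violation when the closest distance falls below the radius) that an SQP solver then enforces on the waypoints.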

  8. ADS: A FORTRAN program for automated design synthesis: Version 1.10

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1985-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.

  9. Fast-responding liquid crystal light-valve technology for color-sequential display applications

    NASA Astrophysics Data System (ADS)

    Janssen, Peter J.; Konovalov, Victor A.; Muravski, Anatoli A.; Yakovenko, Sergei Y.

    1996-04-01

    A color sequential projection system has some distinct advantages over conventional systems which make it uniquely suitable for consumer TV as well as high performance professional applications such as computer monitors and electronic cinema. A fast-responding light-valve is clearly essential for a well-performing system. The response speed of transmissive LC light-valves has thus far been marginal for good color rendition. Recently, Sevchenko Institute has made some very fast reflective LC cells, which were evaluated at Philips Labs. These devices showed sub-millisecond large-signal response times, even at room temperature, and produced good color in a projector emulation testbed. In our presentation we describe our highly efficient color sequential projector and demonstrate its operation on video tape. Next we discuss light-valve requirements and reflective light-valve test results.

  10. Optimal topologies for maximizing network transmission capacity

    NASA Astrophysics Data System (ADS)

    Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.

    2018-04-01

    It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply them on the Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
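    The link between structure and transmission capacity invoked above is commonly quantified through node betweenness: under shortest-path routing, congestion first appears at the most central node, giving the scaling estimate R_c ~ (N-1)/B_max. The sketch below computes this on a toy graph; it is a generic estimate under assumed unit per-node processing rates (and exact prefactors vary between traffic models), not the paper's optimization procedure.

```python
from collections import deque
from itertools import combinations

def betweenness(adj):
    """Shortest-path betweenness of each node in an unweighted, undirected
    graph given as {node: set(neighbours)}; endpoints are not credited."""
    bc = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        dist, sigma = {s: 0}, {s: 1.0}       # BFS distances and path counts
        preds = {v: [] for v in adj}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:   # w extends a shortest path via v
                    sigma[w] = sigma.get(w, 0.0) + sigma[v]
                    preds[w].append(v)
        if t not in dist:
            continue                         # s and t are disconnected
        # Walk back from t, crediting each intermediate node with the
        # fraction of s-t shortest paths that pass through it.
        frac = {t: 1.0}
        for v in sorted(dist, key=dist.get, reverse=True):
            if v not in frac:
                continue
            if v not in (s, t):
                bc[v] += frac[v]
            for p in preds[v]:
                frac[p] = frac.get(p, 0.0) + frac[v] * sigma[p] / sigma[v]
    return bc

def capacity_estimate(adj):
    """Rough congestion-onset load R_c ~ (N-1)/B_max for shortest-path
    routing with one packet processed per node per step."""
    b_max = max(betweenness(adj).values())
    n = len(adj)
    return float("inf") if b_max == 0.0 else (n - 1) / b_max
```

    On a 5-node star, all leaf-to-leaf traffic funnels through the hub (betweenness 6), so the estimated capacity is (5-1)/6 — the kind of hub bottleneck that link rewiring toward a core-periphery structure is meant to relieve.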

  11. Optimization and Development of a Human Scent Collection Method

    DTIC Science & Technology

    2007-06-04

    19. Schoon, G. A. A., Scent Identification Lineups by Dogs (Canis familiaris): Experimental Design and Forensic Application. Applied Animal...
    Parker, Lloyd R., Morgan, Stephen L., Deming, Stanley N., Sequential Simplex Optimization. Chemometrics Series, ed. S.D. Brown. 1991, Boca Raton

  12. Topology-optimized broadband surface relief transmission grating

    NASA Astrophysics Data System (ADS)

    Andkjær, Jacob; Ryder, Christian P.; Nielsen, Peter C.; Rasmussen, Thomas; Buchwald, Kristian; Sigmund, Ole

    2014-03-01

    We propose a design methodology for systematic design of surface relief transmission gratings with optimized diffraction efficiency. The methodology is based on a gradient-based topology optimization formulation along with 2D frequency domain finite element simulations for TE and TM polarized plane waves. The goal of the optimization is to find a grating design that maximizes diffraction efficiency for the -1st transmission order when illuminated by unpolarized plane waves. Results indicate that a surface relief transmission grating can be designed with a diffraction efficiency of more than 40% in a broadband range going from the ultraviolet region, through the visible region and into the near-infrared region.

  13. Infrared Avionics Signal Distribution Using WDM

    NASA Technical Reports Server (NTRS)

    Atiquzzaman, Mohammed; Sluss, James J., Jr.

    2004-01-01

    Supporting analog RF signal transmission over optical fibers, this project demonstrates a successful application of wavelength division multiplexing (WDM) to the avionics environment. We characterize the simultaneous transmission of four RF signals (channels) over a single optical fiber. At different points along a fiber optic backbone, these four analog channels are sequentially multiplexed and demultiplexed to more closely emulate the conditions in existing onboard aircraft. We present data from measurements of optical power, transmission response (loss and gain), reflection response, group delay that defines phase distortion, signal-to-noise ratio (SNR), and dynamic range that defines nonlinear distortion. The data indicate that WDM is very suitable for avionics applications.

  14. The Application of Fiber Optic Wavelength Division Multiplexing in RF Avionics

    NASA Technical Reports Server (NTRS)

    Ngo, Duc; Nguyen, Hung; Atiquzzaman, Mohammed; Sluss, James J., Jr.; Refai, Hakki H.

    2004-01-01

    This paper demonstrates a successful application of wavelength division multiplexing (WDM) to the avionics environment to support analog RF signal transmission. We investigate the simultaneous transmission of four RF signals (channels) over a single optical fiber. These four analog channels are sequentially multiplexed and demultiplexed at different points along a fiber optic backbone to more closely emulate the conditions found onboard aircraft. We present data from measurements of signal-to-noise ratio (SNR), transmission response (loss and gain), group delay that defines phase distortion, and dynamic range that defines nonlinear distortion. The data indicate that WDM is well-suited for avionics applications.

  15. Analysis of Optimal Sequential State Discrimination for Linearly Independent Pure Quantum States.

    PubMed

    Namkung, Min; Kwon, Younghun

    2018-04-25

    Recently, J. A. Bergou et al. proposed sequential state discrimination as a new quantum state discrimination scheme. In this scheme, by successfully discriminating a qubit state sequentially, the receivers Bob and Charlie can share the information of the qubit prepared by the sender, Alice. A merit of the scheme is that a quantum channel is established between Bob and Charlie even though no classical communication is allowed. In this report, we present a method for extending the original sequential state discrimination of two qubit states to a scheme for N linearly independent pure quantum states. Specifically, we obtain the conditions for the sequential state discrimination of N = 3 pure quantum states, and we provide these conditions analytically when there is a special symmetry among the N = 3 linearly independent pure states. Additionally, we show that the scenario proposed in this study can be applied to quantum key distribution. Furthermore, we show that the sequential state discrimination of three qutrit states performs better than the strategy of probabilistic quantum cloning.

  16. Optimization, formulation, and characterization of multiflavonoids-loaded flavanosome by bulk or sequential technique

    PubMed Central

    Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida

    2016-01-01

    This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as “flavonosome”. Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis – bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of −39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. 
These results suggest that sequential loading technique may be a promising nanodrug delivery system for loading multiflavonoids in a single entity with sustained activity as an antioxidant, hepatoprotective, and hepatosupplement candidate. PMID:27555765

  17. Optimization, formulation, and characterization of multiflavonoids-loaded flavanosome by bulk or sequential technique.

    PubMed

    Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida

    2016-01-01

    This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as "flavonosome". Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA-phosphatidylcholine) through four different methods of synthesis - bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug-carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA-phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of -39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. 
These results suggest that sequential loading technique may be a promising nanodrug delivery system for loading multiflavonoids in a single entity with sustained activity as an antioxidant, hepatoprotective, and hepatosupplement candidate.

  18. Optimal decision making on the basis of evidence represented in spike trains.

    PubMed

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves the integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values drawn from Gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
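
    As an illustration of the optimal procedure named above, a minimal sketch of Wald's sequential probability ratio test applied to Poisson bin counts follows; the rates, bin width, and error targets are illustrative and not taken from the paper:

```python
import math

def sprt_poisson(counts, rate0, rate1, dt, alpha=0.05, beta=0.05):
    """Sequential probability ratio test on Poisson spike counts.

    Accumulates the log-likelihood ratio of per-bin counts under
    rate1 vs rate0 (spikes/s, bins of width dt) and stops at Wald's
    thresholds. Returns (decision, bins_used); decision is None if
    the data run out before either threshold is crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept rate1
    lower = math.log(beta / (1 - alpha))   # accept rate0
    llr = 0.0
    for i, n in enumerate(counts, start=1):
        # Poisson log-likelihood ratio contribution of one bin
        llr += n * math.log(rate1 / rate0) - (rate1 - rate0) * dt
        if llr >= upper:
            return "rate1", i
        if llr <= lower:
            return "rate0", i
    return None, len(counts)
```

    With rate0 = 5, rate1 = 10, and 100 ms bins, a steady count of 2 spikes per bin drives the test to "rate1" after four bins, while empty bins drive it to "rate0" after six.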

  19. The impact of uncertainty on optimal emission policies

    NASA Astrophysics Data System (ADS)

    Botta, Nicola; Jansson, Patrik; Ionescu, Cezar

    2018-05-01

    We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainty on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies or about the implications of crossing critical cumulative emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions until effective technologies become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulative emission thresholds tend to make early emission reductions less rewarding.
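
    A stylized sequential emission problem of this kind can be sketched by backward induction; the payoffs, success probability, and threshold below are invented for illustration and are not the authors' framework:

```python
def optimal_emission_policy(T=3, p=0.7, threshold=4, damage=10.0):
    """Backward induction for a toy sequential emission problem.

    Each step: 'high' emits 2 units (benefit 2.0); 'reduce' attempts
    to emit only 1 unit (benefit 1.0) but is implemented only with
    probability p (implementability uncertainty), otherwise 2 units
    are emitted anyway. Exceeding `threshold` cumulative units at the
    horizon costs `damage`. Returns (value, first_action) from the
    zero-emissions start state. All numbers are illustrative.
    """
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def value(t, e):
        if t == T:
            return -damage if e > threshold else 0.0
        v_high = 2.0 + value(t + 1, e + 2)
        v_reduce = (p * (1.0 + value(t + 1, e + 1))
                    + (1 - p) * (2.0 + value(t + 1, e + 2)))
        return max(v_high, v_reduce)

    v0 = value(0, 0)
    v_high_root = 2.0 + value(1, 2)
    act = "high" if v_high_root >= v0 else "reduce"
    return v0, act
```

    With these numbers the optimal first action is "reduce": even though reduction may fail to be implemented, early precaution keeps the cumulative threshold reachable.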

  20. On the effect of response transformations in sequential parameter optimization.

    PubMed

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and Box-Cox transformations are able to improve the properties of the resulting distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
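
    The rank transformation discussed above is simple to implement; this is a generic averaged-ranks sketch (ties get the mean of their ranks), not the SPO toolbox code:

```python
def rank_transform(responses):
    """Replace responses by their ranks (1 = smallest), averaging ties.

    Rank-transforming noisy responses before surrogate modeling makes
    the modeled distribution insensitive to the scale and skew of the
    raw response values.
    """
    order = sorted(range(len(responses)), key=lambda i: responses[i])
    ranks = [0.0] * len(responses)
    i = 0
    while i < len(order):
        # find the extent of a group of tied values
        j = i
        while j + 1 < len(order) and responses[order[j + 1]] == responses[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks
```

    For example, `rank_transform([3.2, 1.1, 5.0])` gives `[2.0, 1.0, 3.0]`, and tied values such as `[2.0, 2.0, 1.0]` both receive the averaged rank 2.5.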

  1. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV-penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails first finding the solution to the unconstrained problem, and then applying a thresholding (clipping) to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1; if the interval constraints furthermore contain zero, it also solves problem 2. Here, uniform interval constraints refer to all unknowns being constrained to the same interval. A typical application is image denoising in x-ray CT, where the image intensities are non-negative because they physically represent linear attenuation coefficients in the patient body. Our results are simple yet, to our knowledge, not previously reported; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
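
    The clip-after-denoising result can be checked numerically on a toy two-sample problem; in this sketch the brute-force grid solver and the specific numbers are illustrative, and the check is simply that clipping the unconstrained minimizer to the interval reproduces the constrained minimizer:

```python
def tv_objective(x, y, lam):
    """Anisotropic TV denoising objective: 0.5*||x - y||^2 + lam*TV(x)."""
    fit = sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / 2.0
    tv = sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))
    return fit + lam * tv

def grid_argmin(y, lam, lo, hi, step, box=None):
    """Brute-force minimizer on a grid; box=(a, b) adds interval constraints."""
    n = round((hi - lo) / step)
    grid = [round(lo + k * step, 9) for k in range(n + 1)]
    if box is not None:
        grid = [g for g in grid if box[0] <= g <= box[1]]
    best_v, best_x = float("inf"), None
    for x0 in grid:
        for x1 in grid:
            v = tv_objective((x0, x1), y, lam)
            if v < best_v:
                best_v, best_x = v, (x0, x1)
    return best_x

y, lam = (0.2, 1.4), 0.3
unconstrained = grid_argmin(y, lam, -0.5, 2.0, 0.01)
# clip the unconstrained solution to the interval [0, 1] ...
clipped = tuple(min(max(u, 0.0), 1.0) for u in unconstrained)
# ... and compare with the directly constrained solution
constrained = grid_argmin(y, lam, -0.5, 2.0, 0.01, box=(0.0, 1.0))
```

    Here the unconstrained minimizer is (0.5, 1.1); clipping its second component to 1.0 matches the constrained minimizer (0.5, 1.0), as the KKT argument predicts for uniform interval constraints.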

  2. Economic analysis of transmission line engineering based on industrial engineering

    NASA Astrophysics Data System (ADS)

    Li, Yixuan

    2017-05-01

    Modern industrial engineering methods are applied to the technical and cost analysis of power transmission and transformation engineering, where they can effectively reduce investment costs. First, the power transmission project is analyzed economically: based on a feasibility study of investment in power transmission and transformation projects, proposals for the company's cost management system are put forward through an economic analysis of the system's effects, and the cost management system is optimized. Then, cost analysis of power transmission and transformation projects reveals new cost drivers in construction, which is of guiding significance for further improving project cost management. Finally, given the current state of transmission project cost management, concrete measures to reduce the cost of power transmission projects are presented from the two aspects of system optimization and technology optimization.

  3. Optimization of Multiple Related Negotiation through Multi-Negotiation Network

    NASA Astrophysics Data System (ADS)

    Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi

    In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not execute MRN optimally in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use a MNN to handle MRN concurrently so as to maximize the expected utility of MRN. First, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on the MNN. Second, by employing the MNID, an agent's possible decision on each related negotiation is reflected in an expected utility value. Last, by comparing expected utilities across all possible policies for conducting MRN, an optimal policy is generated that optimizes the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful-end scenario and avoid unnecessary losses in an unsuccessful-end scenario.

  4. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thereby accomplished, resulting in a simple optimization problem subject only to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. Combining the reduced SQP method with the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.

  5. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tao; Li, Cheng; Huang, Can

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
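
    The curve-fitting step above can be illustrated with a generic quadratic least-squares fit via the normal equations; the sample points below are hypothetical stand-ins for sampled slave-network costs versus a boundary variable:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a*x^2 + b*x + c via the 3x3 normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]              # S[k] = sum of x^k
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], S[0]]]
    rhs = [T[2], T[1], T[0]]
    # Gaussian elimination (no pivoting; fine for this well-conditioned demo)
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [ajc - f * aic for ajc, aic in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]
    # back-substitution
    w = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        w[i] = (rhs[i] - sum(A[i][j] * w[j] for j in range(i + 1, 3))) / A[i][i]
    return w                                                      # [a, b, c]

# hypothetical samples generated from y = 2x^2 + 3x + 1
coeffs = fit_quadratic([0, 1, 2, 3, 4], [1, 6, 15, 28, 45])
```

    The fit recovers the generating coefficients (2, 3, 1); in the hierarchical scheme, such a fitted curve would serve as the slave model's cost function passed up to the master optimization.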

  6. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE PAGES

    Ding, Tao; Li, Cheng; Huang, Can; ...

    2017-01-09

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  7. Latency and Persistence of 'Candidatus Liberibacter asiaticus' in Its Psyllid Vector, Diaphorina citri (Hemiptera: Liviidae).

    PubMed

    Canale, Maria Cristina; Tomaseto, Arthur Fernando; Haddad, Marineia de Lara; Della Coletta-Filho, Helvécio; Lopes, João Roberto Spotti

    2017-03-01

    Although 'Candidatus Liberibacter asiaticus' (Las) is a major pathogen associated with citrus huanglongbing (HLB), some characteristics of its transmission by the psyllid vector Diaphorina citri are not fully understood. We examined the latent period and persistence of transmission of Las by D. citri in a series of experiments at 25°C, in which third-instar psyllid nymphs and 1-week-old adults were confined on infected citrus for an acquisition access period (AAP) and submitted to sequential inoculation access periods (IAPs) on healthy citrus seedlings. The median latent period (LP50, i.e., the acquisition time after which 50% of the individuals can inoculate) of 16.8 and 17.8 days for psyllids that acquired Las as nymphs and adults, respectively, was determined by transferring single individuals in 48-h IAPs. Inoculation events were intermittent and randomly distributed over the IAPs, but were more frequent after acquisition by nymphs. A minimum latent period of 7 to 10 days was observed by transferring groups of 10 psyllids in 48-h IAPs after a 96-h AAP by nymphs. Psyllids transmitted for up to 5 weeks when submitted to sequential 1-week IAPs after a 14-day AAP as nymphs. The long latent period and persistence of transmission are indirect evidence of circulative propagation of Las in D. citri.

  8. Multi-Target Tracking via Mixed Integer Optimization

    DTIC Science & Technology

    2016-05-13

    solving these two problems separately, however few algorithms attempt to solve these simultaneously and even fewer utilize optimization. In this paper we...introduce a new mixed integer optimization (MIO) model which solves the data association and trajectory estimation problems simultaneously by minimizing...Kalman filter [5], which updates the trajectory estimates before the algorithm progresses forward to the next scan. This process repeats sequentially

  9. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in the model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems, because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For cost-effectiveness analyses, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
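
    The idea of a policy acceptability curve can be sketched on a toy two-state treatment MDP with closed-form values, sampling one uncertain parameter to estimate a single acceptability point; the model and all numbers are invented for illustration and are unrelated to the paper's case study:

```python
import random

def sick_state_values(p_cure, cost=0.3, p_spont=0.05, gamma=0.9):
    """Closed-form action values of the 'sick' state in a toy 2-state MDP.

    'healthy' is absorbing with reward 1 per step (value 1/(1-gamma)).
    'treat' costs `cost` and cures with probability p_cure;
    'wait' is free and cures spontaneously with probability p_spont.
    """
    v_healthy = 1.0 / (1.0 - gamma)
    v_treat = (-cost + gamma * p_cure * v_healthy) / (1.0 - gamma * (1.0 - p_cure))
    v_wait = (gamma * p_spont * v_healthy) / (1.0 - gamma * (1.0 - p_spont))
    return v_treat, v_wait

def policy_acceptability(base_p=0.6, n=1000, seed=1):
    """Fraction of parameter samples for which the base-case optimal
    action stays optimal: one point on a policy acceptability curve."""
    vt, vw = sick_state_values(base_p)
    base_action = "treat" if vt > vw else "wait"
    rng = random.Random(seed)
    agree = 0
    for _ in range(n):
        vt, vw = sick_state_values(rng.random())   # p_cure ~ Uniform(0, 1)
        agree += ("treat" if vt > vw else "wait") == base_action
    return base_action, agree / n
```

    With these numbers the base-case action is "treat", and it remains optimal for roughly 90% of sampled cure probabilities, so the recommended policy would be accepted with high (but not full) confidence.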

  10. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach, called risk allocation, is to decompose a joint chance constraint into a set of individual chance constraints and distribute risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
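
    The dualization idea can be sketched on a tiny planning problem with independent per-stage failure probabilities; here the joint chance constraint is bounded by the sum of stage risks (Boole's inequality) and a scalar dual weight is swept until the bound holds. All actions, costs, and probabilities are invented for illustration:

```python
def risk_allocated_plan(stage_risk, safe_risk=0.001,
                        cost_fast=1.0, cost_safe=2.0, risk_bound=0.05):
    """Chance-constrained planning via Lagrangian dualization.

    Each stage independently picks 'fast' (cheap, failure probability
    stage_risk[i]) or 'safe' (expensive, failure probability safe_risk).
    The joint chance constraint sum(p_i) <= risk_bound (Boole's bound)
    is dualized: for a penalty weight lam, each stage minimizes
    cost + lam * p, and lam is swept upward until the bound holds.
    """
    for step in range(2001):
        lam = step * 0.1                       # sweep the dual variable
        plan, risk, cost = [], 0.0, 0.0
        for p in stage_risk:
            if cost_fast + lam * p <= cost_safe + lam * safe_risk:
                plan.append("fast")
                risk, cost = risk + p, cost + cost_fast
            else:
                plan.append("safe")
                risk, cost = risk + safe_risk, cost + cost_safe
        if risk <= risk_bound:                 # first feasible lam = cheapest
            return plan, risk, cost
    raise ValueError("no feasible penalty weight found")

plan, risk, cost = risk_allocated_plan([0.05, 0.02, 0.04])
```

    The sweep allocates most of the risk budget to the stage with the smallest failure probability: the riskiest stages are made safe while the low-risk stage stays fast, instead of an all-or-nothing penalty on failure.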

  11. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: a fermentor, a cell-retention system (tangential microfiltration), and a vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one, the process was represented by a deterministic model with kinetic parameters determined experimentally; in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. Although both strategies produced very similar solutions, the deterministic model suffered from lack of convergence and high computational time. This makes the optimization strategy with the statistical model, which proved robust and fast, more suitable for the flash fermentation process, and it is recommended for real-time applications coupling optimization and control.

  12. Multitrace/singletrace formulations and Domain Decomposition Methods for the solution of Helmholtz transmission problems for bounded composite scatterers

    NASA Astrophysics Data System (ADS)

    Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin

    2017-12-01

    We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square-root Fourier multiplier approximations of Dirichlet-to-Neumann operators. While the multitrace/singletrace formulations, as well as the DDM that use classical Robin transmission conditions, are not particularly well suited for Krylov subspace iterative solution of high-contrast, high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.

  13. Sequential design of discrete linear quadratic regulators via optimal root-locus techniques

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar

    1989-01-01

    A sequential method employing classical root-locus techniques has been developed to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity-rank state-weighting matrix that contains some invariant eigenvectors of the open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
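
    For reference, the regulator such a design produces can also be computed directly: for a scalar system, fixed-point iteration of the discrete algebraic Riccati equation (a standard alternative to the root-locus construction above; the plant and weights below are illustrative) gives the optimal gain:

```python
def dlqr_scalar(a, b, q, r, iters=200):
    """Discrete-time LQR for the scalar system x[k+1] = a*x[k] + b*u[k]
    with cost sum(q*x^2 + r*u^2), via fixed-point iteration of the
    discrete algebraic Riccati equation."""
    p = q
    for _ in range(iters):
        # Riccati recursion: P <- q + a^2 P - (a b P)^2 / (r + b^2 P)
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k = a * b * p / (r + b * b * p)   # state-feedback gain, u = -k*x
    return k, p

k, p = dlqr_scalar(1.2, 1.0, 1.0, 1.0)
```

    For a = 1.2, b = q = r = 1, the iteration converges to the positive root of P^2 - 1.44P - 1 = 0, and the closed-loop pole a - b*k ≈ 0.41 lies inside the unit circle, i.e., the regulator is stabilizing.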

  14. Classical Civilization (Greece-Hellenistic-Rome). Teacher's Manual. 1968 Edition.

    ERIC Educational Resources Information Center

    Leppert, Ella C.; Smith, Rozella B.

    This secondary teachers guide builds upon a previous sequential course described in SO 003 173, and consists of three sections on the classical civilizations--Greek, Hellenistic, and Rome. Major emphasis is upon students gaining an understanding of cultural development and transmission. Using an analytic method, students learn to examine primary…

  15. Optimal temperature for malaria transmission is dramatically lower than previously predicted

    USGS Publications Warehouse

    Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.

    2013-01-01

    The ecology of mosquito vectors and malaria parasites affects the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.
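
    The kind of nonlinear thermal response used in such models can be sketched with a generic Brière curve; the coefficients below are illustrative, not the fitted malaria-trait values, but they show the key behavior: the optimum sits well below the upper thermal limit rather than at the midpoint a linear model would suggest:

```python
import math

def briere(T, c, T0, Tm):
    """Brière thermal response: zero outside (T0, Tm), unimodal inside.

    A common nonlinear form for insect life-history traits:
    c * T * (T - T0) * sqrt(Tm - T).
    """
    if T <= T0 or T >= Tm:
        return 0.0
    return c * T * (T - T0) * math.sqrt(Tm - T)

def thermal_optimum(c=2e-4, T0=10.0, Tm=35.0, step=0.01):
    """Grid-search the temperature maximizing the Brière response.
    Parameters are illustrative, not the fitted malaria values."""
    best_T, best_v = T0, 0.0
    T = T0
    while T <= Tm:
        v = briere(T, c, T0, Tm)
        if v > best_v:
            best_T, best_v = T, v
        T += step
    return best_T
```

    With thermal limits of 10 °C and 35 °C, the response peaks near 29.2 °C and then falls steeply toward the upper limit, the asymmetry that drives the sharp decline in predicted transmission at high temperatures.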

  16. Optimal temperature for malaria transmission is dramatically lower than previously predicted

    USGS Publications Warehouse

    Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.

    2013-01-01

    The ecology of mosquito vectors and malaria parasites affects the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.

  17. Gate-Controlled Transmission of Quantum Hall Edge States in Bilayer Graphene.

    PubMed

    Li, Jing; Wen, Hua; Watanabe, Kenji; Taniguchi, Takashi; Zhu, Jun

    2018-02-02

    The edge states of the quantum Hall and fractional quantum Hall effect of a two-dimensional electron gas carry key information of the bulk excitations. Here we demonstrate gate-controlled transmission of edge states in bilayer graphene through a potential barrier with tunable height. The backscattering rate is continuously varied from 0 to close to 1, with fractional quantized values corresponding to the sequential complete backscattering of individual modes. Our experiments demonstrate the feasibility to controllably manipulate edge states in bilayer graphene, thus opening the door to more complex experiments.

  18. Gate-Controlled Transmission of Quantum Hall Edge States in Bilayer Graphene

    NASA Astrophysics Data System (ADS)

    Li, Jing; Wen, Hua; Watanabe, Kenji; Taniguchi, Takashi; Zhu, Jun

    2018-02-01

    The edge states of the quantum Hall and fractional quantum Hall effect of a two-dimensional electron gas carry key information of the bulk excitations. Here we demonstrate gate-controlled transmission of edge states in bilayer graphene through a potential barrier with tunable height. The backscattering rate is continuously varied from 0 to close to 1, with fractional quantized values corresponding to the sequential complete backscattering of individual modes. Our experiments demonstrate the feasibility to controllably manipulate edge states in bilayer graphene, thus opening the door to more complex experiments.

  19. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
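    As a toy illustration of the objective-oriented, multi-point infill idea (exploit the metamodel's predicted minimum while rewarding poorly sampled regions, several points per iteration), the following 1D sketch uses a piecewise-linear metamodel and nearest-sample distance as an uncertainty proxy. The test function, exploration weight and spacing rule are all hypothetical; the paper's method uses proper metamodels and robustness measures not reproduced here.

    ```python
    import numpy as np

    def f(x):                         # stand-in for an expensive simulation (hypothetical)
        return (x - 0.7) ** 2

    def propose_batch(xs, ys, cand, w=0.3, batch=2, spacing=0.05):
        """Pick several infill points per iteration: exploit the metamodel
        minimum while rewarding distance from existing samples."""
        order = np.argsort(xs)
        surr = np.interp(cand, xs[order], ys[order])                 # cheap metamodel
        dist = np.min(np.abs(cand[:, None] - xs[None, :]), axis=1)   # uncertainty proxy
        acq = -surr + w * dist                                       # exploit + explore
        picked = []
        for _ in range(batch):
            i = int(np.argmax(acq))
            picked.append(float(cand[i]))
            acq[np.abs(cand - cand[i]) < spacing] = -np.inf          # keep batch points apart
        return picked

    xs = np.array([0.0, 0.5, 1.0]); ys = f(xs)
    cand = np.linspace(0.0, 1.0, 201)
    for _ in range(5):                     # five iterations, two new samples each
        for x_new in propose_batch(xs, ys, cand):
            xs = np.append(xs, x_new); ys = np.append(ys, f(x_new))
    best_x = float(xs[np.argmin(ys)])
    ```

    The spacing mask is the essential multi-point ingredient: without it, both points of a batch would land on the same acquisition peak and the extra evaluation would be wasted.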

  20. Sequential use of simulation and optimization in analysis and planning

    Treesearch

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  1. Aerostructural Shape and Topology Optimization of Aircraft Wings

    NASA Astrophysics Data System (ADS)

    James, Kai

    A series of novel algorithms for performing aerostructural shape and topology optimization are introduced and applied to the design of aircraft wings. An isoparametric level set method is developed for performing topology optimization of wings and other non-rectangular structures that must be modeled using a non-uniform, body-fitted mesh. The shape sensitivities are mapped to computational space using the transformation defined by the Jacobian of the isoparametric finite elements. The mapped sensitivities are then passed to the Hamilton-Jacobi equation, which is solved on a uniform Cartesian grid. The method is derived for several objective functions including mass, compliance, and global von Mises stress. The results are compared with SIMP results for several two-dimensional benchmark problems. The method is also demonstrated on a three-dimensional wingbox structure subject to fixed loading. It is shown that the isoparametric level set method is competitive with the SIMP method in terms of the final objective value as well as computation time. In a separate problem, the SIMP formulation is used to optimize the structural topology of a wingbox as part of a larger MDO framework. Here, topology optimization is combined with aerodynamic shape optimization, using a monolithic MDO architecture that includes aerostructural coupling. The aerodynamic loads are modeled using a three-dimensional panel method, and the structural analysis makes use of linear, isoparametric, hexahedral elements. The aerodynamic shape is parameterized via a set of twist variables representing the jig twist angle at equally spaced locations along the span of the wing. The sensitivities are determined analytically using a coupled adjoint method. The wing is optimized for minimum drag subject to a compliance constraint taken from a 2 g maneuver condition. 
The results from the MDO algorithm are compared with those of a sequential optimization procedure in order to quantify the benefits of the MDO approach. While the sequentially optimized wing exhibits a nearly-elliptical lift distribution, the MDO design seeks to push a greater portion of the load toward the root, thus reducing the structural deflection, and allowing for a lighter structure. By exploiting this trade-off, the MDO design achieves a 42% lower drag than the sequential result.

  2. Performance evaluation of an asynchronous multisensor track fusion filter

    NASA Astrophysics Data System (ADS)

    Alouani, Ali T.; Gray, John E.; McCabe, D. H.

    2003-08-01

    Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communication delays between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the sequential extended Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.
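    The "optimal sequential processing" baseline against which the filter is compared handles measurements one at a time in arrival order. A minimal scalar sketch of that idea (random-walk motion model, two sensors with different noise variances; all numbers are illustrative, and the paper's filter is multivariate with latency handling, none of which is shown):

    ```python
    def kf_predict(x, P, Q=0.01):
        """Random-walk motion model: state unchanged, uncertainty grows."""
        return x, P + Q

    def kf_update(x, P, z, R):
        """Scalar Kalman measurement update."""
        K = P / (P + R)                      # Kalman gain
        return x + K * (z - x), (1.0 - K) * P

    x, P = 0.0, 10.0                         # diffuse prior
    # (arrival time, measurement, sensor noise variance) from two sensors
    meas = [(0.10, 4.8, 1.0), (0.15, 5.3, 2.0), (0.30, 5.1, 1.0)]
    for t, z, R in sorted(meas):             # strict time order: the sequential baseline
        x, P = kf_predict(x, P)
        x, P = kf_update(x, P, z, R)
    ```

    After fusing all three measurements the posterior variance P drops well below either sensor's noise variance, which is the payoff of multisensor fusion regardless of whether the processing is sequential or batch.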

  3. Optimization of output power and transmission efficiency of magnetically coupled resonance wireless power transfer system

    NASA Astrophysics Data System (ADS)

    Yan, Rongge; Guo, Xiaoting; Cao, Shaoqing; Zhang, Changgeng

    2018-05-01

    Magnetically coupled resonance (MCR) wireless power transfer (WPT) is a promising technology for electric energy transmission. However, if the system parameters are designed poorly, output power and transmission efficiency will be low; optimized parameter design for MCR WPT therefore has important research value. In an MCR WPT system with a designated coil structure, the main parameters affecting output power and transmission efficiency are the distance between the coils, the resonance frequency and the load resistance. Based on the established mathematical model and the differential evolution algorithm, the variation of output power and transmission efficiency with these parameters can be simulated. The simulation results show that the output power and transmission efficiency of both the two-coil and the four-coil MCR WPT systems with designated coil structures are improved, confirming the validity of the optimization method.
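    For the two-coil case, the trade-off the abstract describes can be seen in the standard series-series circuit model at resonance, where efficiency depends on the load resistance and the mutual inductance (which falls as coil distance grows). The sketch below uses that textbook model with hypothetical component values and a brute-force search in place of the paper's differential evolution; the closed-form optimum of the same model serves as a cross-check.

    ```python
    import math

    W = 2 * math.pi * 100e3   # operating angular frequency, rad/s (hypothetical)
    R1, R2 = 0.5, 0.5         # coil loss resistances, ohm (hypothetical)
    M = 5e-6                  # mutual inductance, H; decreases with coil distance

    def efficiency(RL):
        """Efficiency of a series-series two-coil MCR link at resonance
        (standard circuit model, not the paper's exact formulation)."""
        x = (W * M) ** 2
        return x * RL / ((R2 + RL) * (R1 * (R2 + RL) + x))

    # brute-force one-dimensional search over the load resistance
    RLs = [0.01 * i for i in range(1, 20001)]      # 0.01 .. 200 ohm
    RL_best = max(RLs, key=efficiency)

    # closed-form optimum of the same model, as a cross-check
    RL_theory = R2 * math.sqrt(1 + (W * M) ** 2 / (R1 * R2))
    ```

    The search and the closed form agree; in a full design study the same sweep would be repeated over coil distance and frequency, which is where a population-based optimizer such as differential evolution earns its keep.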

  4. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    PubMed

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards, under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  5. Unusual behavior in magnesium-copper cluster matter produced by helium droplet mediated deposition.

    PubMed

    Emery, S B; Xin, Y; Ridge, C J; Buszek, R J; Boatz, J A; Boyle, J M; Little, B K; Lindsay, C M

    2015-02-28

    We demonstrate the ability to produce core-shell nanoclusters of materials that typically undergo intermetallic reactions using helium droplet mediated deposition. Composite structures of magnesium and copper were produced by sequential condensation of metal vapors inside the 0.4 K helium droplet baths and then gently deposited onto a substrate for analysis. Upon deposition, the individual clusters, with diameters ∼5 nm, form a cluster material which was subsequently characterized using scanning and transmission electron microscopies. Results of this analysis reveal the following about the deposited cluster material: it is in the un-alloyed chemical state, it maintains a stable core-shell 5 nm structure at sub-monolayer quantities, and it aggregates into unreacted structures of ∼75 nm during further deposition. Surprisingly, high angle annular dark field scanning transmission electron microscopy images revealed that the copper appears to displace the magnesium at the core of the composite cluster despite magnesium being the initially condensed species within the droplet. This phenomenon was studied further using preliminary density functional theory which revealed that copper atoms, when added sequentially to magnesium clusters, penetrate into the magnesium cores.

  6. Cross-layer Energy Optimization Under Image Quality Constraints for Wireless Image Transmissions.

    PubMed

    Yang, Na; Demirkol, Ilker; Heinzelman, Wendi

    2012-01-01

    Wireless image transmission is critical in many applications, such as surveillance and environment monitoring. In order to make the best use of the limited energy of the battery-operated cameras, while satisfying the application-level image quality constraints, cross-layer design is critical. In this paper, we develop an image transmission model that allows the application layer (e.g., the user) to specify an image quality constraint, and optimizes the lower layer parameters of transmit power and packet length, to minimize the energy dissipation in image transmission over a given distance. The effectiveness of this approach is evaluated by applying the proposed energy optimization to a reference ZigBee system and a WiFi system, and also by comparing to an energy optimization study that does not consider any image quality constraint. Evaluations show that our scheme outperforms the default settings of the investigated commercial devices and saves a significant amount of energy at middle-to-large transmission distances.
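    A minimal sketch of the cross-layer idea: choose transmit power and payload length jointly to minimize the expected energy per successfully delivered bit under retransmissions. The bit-error model and every constant below are hypothetical stand-ins, not the channel and radio models used in the paper.

    ```python
    import math

    def energy_per_bit(Ptx, L, H=48, Pc=0.1, Rb=250e3, k=40.0):
        """Expected energy (J) per successfully delivered payload bit under
        stop-and-wait ARQ. The bit-error model p = 0.5*exp(-k*Ptx) and all
        constants (header H bits, circuit power Pc, bit rate Rb) are
        hypothetical stand-ins."""
        p = 0.5 * math.exp(-k * Ptx)              # bit error rate vs transmit power
        succ = (1.0 - p) ** (H + L)               # packet success probability
        return (Ptx + Pc) * (H + L) / (Rb * L * succ)

    # joint grid search over transmit power and payload length
    best = min((energy_per_bit(Ptx, L), Ptx, L)
               for Ptx in (0.001 * i for i in range(100, 301))    # 0.10 .. 0.30 W
               for L in (8 * j for j in range(8, 257)))           # 64 .. 2048 bits
    E_opt, Ptx_opt, L_opt = best
    ```

    The optimum is interior in both variables: too little power wastes energy on retransmissions, too much wastes it on the amplifier; short packets waste header overhead, long packets fail too often.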

  7. Enhancing sound absorption and transmission through flexible multi-layer micro-perforated structures.

    PubMed

    Bravo, Teresa; Maury, Cédric; Pinhède, Cédric

    2013-11-01

    Theoretical and experimental results are presented into the sound absorption and transmission properties of multi-layer structures made up of thin micro-perforated panels (ML-MPPs). The objective is to improve both the absorption and insulation performances of ML-MPPs through impedance boundary optimization. A fully coupled modal formulation is introduced that predicts the effect of the structural resonances onto the normal incidence absorption coefficient and transmission loss of ML-MPPs. This model is assessed against standing wave tube measurements and simulations based on impedance translation method for two double-layer MPP configurations of relevance in building acoustics and aeronautics. Optimal impedance relationships are proposed that ensure simultaneous maximization of both the absorption and the transmission loss under normal incidence. Exhaustive optimization of the double-layer MPPs is performed to assess the absorption and/or transmission performances with respect to the impedance criterion. It is investigated how the panel volumetric resonances modify the excess dissipation that can be achieved from non-modal optimization of ML-MPPs.

  8. Aircraft Trajectories Computation-Prediction-Control. Volume 1 (La Trajectoire de l’Avion Calcul-Prediction-Controle)

    DTIC Science & Technology

    1990-03-01

    knowledge covering problems of this type is called calculus of variations or optimal control theory (Refs. 1-8). As stated before, applications occur...to the optimality conditions and the feasibility equations of Problem (GP), respectively. Clearly, after the transformation (26) is applied, the...trajectories, the primal sequential gradient-restoration algorithm (PSGRA) is applied to compute optimal trajectories for aeroassisted orbital transfer

  9. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  10. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  11. A Sequential Shifting Algorithm for Variable Rotor Speed Control

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Edwards, Jason M.; DeCastro, Jonathan A.

    2007-01-01

    A proof of concept of a continuously variable rotor speed control methodology for rotorcraft is described. Variable rotor speed is desirable for several reasons including improved maneuverability, agility, and noise reduction. However, it has been difficult to implement because turboshaft engines are designed to operate within a narrow speed band, and a reliable drive train that can provide continuous power over a wide speed range does not exist. The new methodology proposed here is a sequential shifting control for twin-engine rotorcraft that coordinates the disengagement and engagement of the two turboshaft engines in such a way that the rotor speed may vary over a wide range, but the engines remain within their prescribed speed bands and provide continuous torque to the rotor; two multi-speed gearboxes facilitate the wide rotor speed variation. The shifting process begins when one engine slows down and disengages from the transmission by way of a standard freewheeling clutch mechanism; the other engine continues to apply torque to the rotor. Once one engine disengages, its gear shifts, the multi-speed gearbox output shaft speed resynchronizes and it re-engages. This process is then repeated with the other engine. By tailoring the sequential shifting, the rotor may perform large, rapid speed changes smoothly, as demonstrated in several examples. The emphasis of this effort is on the coordination and control aspects for proof of concept. The engines, rotor, and transmission are all simplified linear models, integrated to capture the basic dynamics of the problem.

  12. Progressive data transmission for anatomical landmark detection in a cloud.

    PubMed

    Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D

    2012-01-01

    In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
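    The coarse-to-fine, region-only transmission scheme can be illustrated in 2D: detect a candidate on a heavily downsampled image, then request only a small full-resolution patch around it. This toy uses a simple image argmax in place of the paper's learned detectors, works on a synthetic 2D image rather than a 3D volume, and omits the JPEG 2000 compression stage.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # synthetic 256x256 image with one bright Gaussian "landmark"
    true_y, true_x = 181, 67
    yy, xx = np.mgrid[0:256, 0:256]
    img = rng.normal(0.0, 0.02, (256, 256))
    img += np.exp(-(((yy - true_y) ** 2 + (xx - true_x) ** 2) / (2 * 4.0 ** 2)))

    transmitted = 0
    # stage 1: fetch only a coarse, 8x-downsampled image from the "data center"
    coarse = img[::8, ::8]
    transmitted += coarse.size
    cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)

    # stage 2: fetch a full-resolution 32x32 patch around the coarse candidate
    y0 = min(max(8 * cy - 16, 0), 256 - 32)
    x0 = min(max(8 * cx - 16, 0), 256 - 32)
    patch = img[y0:y0 + 32, x0:x0 + 32]
    transmitted += patch.size
    py, px = np.unravel_index(np.argmax(patch), patch.shape)
    found = (y0 + py, x0 + px)

    fraction = transmitted / img.size     # share of pixels actually transmitted
    ```

    Here only about 3% of the pixels cross the network while the landmark is still localized to within a few pixels; in the paper's setting the same effect compounds across resolution levels of a 3D volume.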

  13. Optimal fiber design for large capacity long haul coherent transmission [Invited].

    PubMed

    Hasegawa, Takemi; Yamamoto, Yoshinori; Hirano, Masaaki

    2017-01-23

    Fiber figure of merit (FOM), derived from the GN-model theory and validated by several experiments, can predict improvement in OSNR or transmission distance using advanced fibers. We review the FOM theory and present design results of optimal fiber for large capacity long haul transmission, showing variation in design results according to system configuration.

  14. Auctions with Dynamic Populations: Efficiency and Revenue Maximization

    NASA Astrophysics Data System (ADS)

    Said, Maher

    We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.

  15. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations for both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. The experience in the mathematical community has shown that more effective algorithms can be obtained by the specific inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The presence of explicit constraints in the local problem bounded by the trust region, however, may admit no feasible solution. In order to remedy this problem, the mathematical community has developed different versions of a composite-steps approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the two normal-tangential-step algorithms.
    In this paper, the similarities are described and the two-step algorithm is extended to the case of approximations.
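    The trust-region model management described above (a ratio test between actual and predicted reduction, with radius expansion and contraction) can be sketched in its simplest unconstrained 1D form. The test objective and all tuning constants are hypothetical; the paper's actual contribution, handling infeasible constrained subproblems via an interior-point homotopy, is not reproduced here.

    ```python
    import math

    def f(x):                      # smooth 1D test objective (illustrative)
        return (x - 2.0) ** 2 + math.exp(-x)

    def d1(g, x, h=1e-5):          # central finite-difference first derivative
        return (g(x + h) - g(x - h)) / (2 * h)

    def d2(g, x, h=1e-4):          # central finite-difference second derivative
        return (g(x + h) - 2 * g(x) + g(x - h)) / h ** 2

    x, radius = 0.0, 0.5
    for _ in range(100):
        g, H = d1(f, x), d2(f, x)
        # minimizer of the local quadratic model, clipped to the trust region
        step = -g / H if H > 0 else -math.copysign(radius, g)
        step = max(-radius, min(radius, step))
        pred = -(g * step + 0.5 * H * step * step)   # predicted reduction
        actual = f(x) - f(x + step)
        rho = actual / pred if pred > 0 else -1.0    # model-quality ratio
        if rho > 0.75 and abs(step) > 0.9 * radius:
            radius *= 2.0                            # model trusted: expand
        elif rho < 0.25:
            radius *= 0.25                           # poor model: shrink
        if rho > 0.1:
            x += step                                # accept the step
    ```

    The ratio rho is the model management in miniature: the surrogate (here a finite-difference quadratic) is trusted only as far as it keeps predicting the true objective's behavior.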

  16. Improvement of attenuation correction in time-of-flight PET/MR imaging with a positron-emitting source.

    PubMed

    Mollet, Pieter; Keereman, Vincent; Bini, Jason; Izquierdo-Garcia, David; Fayad, Zahi A; Vandenberghe, Stefaan

    2014-02-01

    Quantitative PET imaging relies on accurate attenuation correction. Recently, there has been growing interest in combining state-of-the-art PET systems with MR imaging in a sequential or fully integrated setup. As CT becomes unavailable for these systems, an alternative approach to the CT-based reconstruction of attenuation coefficients (μ values) at 511 keV must be found. Deriving μ values directly from MR images is difficult because MR signals are related to the proton density and relaxation properties of tissue. Therefore, most research groups focus on segmentation or atlas registration techniques. Although studies have shown that these methods provide viable solutions in particular applications, some major drawbacks limit their use in whole-body PET/MR. Previously, we used an annulus-shaped PET transmission source inside the field of view of a PET scanner to measure attenuation coefficients at 511 keV. In this work, we describe the use of this method in studies of patients with the sequential time-of-flight (TOF) PET/MR scanner installed at the Icahn School of Medicine at Mount Sinai, New York, NY. Five human PET/MR and CT datasets were acquired. The transmission-based attenuation correction method was compared with conventional CT-based attenuation correction and the 3-segment, MR-based attenuation correction available on the TOF PET/MR imaging scanner. The transmission-based method overcame most problems related to the MR-based technique, such as truncation artifacts of the arms, segmentation artifacts in the lungs, and imaging of cortical bone. Additionally, the TOF capabilities of the PET detectors allowed the simultaneous acquisition of transmission and emission data. Compared with the MR-based approach, the transmission-based method provided average improvements in PET quantification of 6.4%, 2.4%, and 18.7% in volumes of interest inside the lung, soft tissue, and bone tissue, respectively. 
In conclusion, a transmission-based technique with an annulus-shaped transmission source will be more accurate than a conventional MR-based technique for measuring attenuation coefficients at 511 keV in future whole-body PET/MR studies.

  17. Sequential Combination of Electro-Fenton and Electrochemical Chlorination Processes for the Treatment of Anaerobically-Digested Food Wastewater.

    PubMed

    Shin, Yong-Uk; Yoo, Ha-Young; Kim, Seonghun; Chung, Kyung-Mi; Park, Yong-Gyun; Hwang, Kwang-Hyun; Hong, Seok Won; Park, Hyunwoong; Cho, Kangwoo; Lee, Jaesang

    2017-09-19

    A two-stage sequential electro-Fenton (E-Fenton) oxidation followed by electrochemical chlorination (EC) was demonstrated to concomitantly treat high concentrations of organic carbon and ammonium nitrogen (NH4+-N) in real anaerobically digested food wastewater (ADFW). The anodic Fenton process caused the rapid mineralization of phenol as a model substrate through the production of hydroxyl radical as the main oxidant. The electrochemical oxidation of NH4+ by a dimensionally stable anode (DSA) resulted in temporal concentration profiles of combined and free chlorine species that were analogous to those during the conventional breakpoint chlorination of NH4+. Together with the minimal production of nitrate, this confirmed that the conversion of NH4+ to nitrogen gas was electrochemically achievable. The monitoring of treatment performance with varying key parameters (e.g., current density, H2O2 feeding rate, pH, NaCl loading, and DSA type) led to the optimization of the two component systems. The comparative evaluation of the two sequentially combined systems (i.e., the E-Fenton-EC system versus the EC-E-Fenton system) using the mixture of phenol and NH4+ under the predetermined optimal conditions suggested the superiority of the E-Fenton-EC system in terms of treatment efficiency and energy consumption. Finally, the sequential E-Fenton-EC process effectively mineralized organic carbon and decomposed NH4+-N in the real ADFW without external supply of NaCl.

  18. Subwavelength elastic joints connecting torsional waveguides to maximize the power transmission coefficient

    NASA Astrophysics Data System (ADS)

    Lee, Joong Seok; Lee, Il Kyu; Seung, Hong Min; Lee, Jun Kyu; Kim, Yoon Young

    2017-03-01

    Joints with slowly varying tapered shapes, such as linear or exponential profiles, are known to transmit incident wave power efficiently between two waveguides with dissimilar impedances. This statement is valid only when the considered joint length is longer than the wavelengths of the incident waves. When the joint length is shorter than the wavelengths, however, appropriate shapes of such subwavelength joints for efficient power transmission have not been explored much. In this work, considering one-dimensional torsional wave motion in a cylindrical elastic waveguide system, optimal shapes or radial profiles of a subwavelength joint maximizing the power transmission coefficient are designed by a gradient-based optimization formulation. The joint is divided into a number of thin disk elements using the transfer matrix approach and optimal radii of the disks are determined by iterative shape optimization processes for several single or bands of wavenumbers. Due to the subwavelength constraint, the optimized joint profiles were found to be considerably different from the slowly varying tapered shapes. Specifically, for bands of wavenumbers, peculiar gourd-like shapes were obtained as optimal shapes to maximize the power transmission coefficient. Numerical results from the proposed optimization formulation were also experimentally realized to verify the validity of the present designs.
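    The transfer-matrix treatment mentioned above multiplies one 2x2 matrix per thin element and reads the power transmission off the resulting input impedance. A minimal lossless 1D sketch (nondimensional impedances, an abrupt junction versus a single quarter-wave matching element, instead of the paper's optimized many-disk profiles):

    ```python
    import numpy as np

    def segment_matrix(Z, k, d):
        """2x2 transfer matrix of one uniform segment of impedance Z,
        wavenumber k and length d (standard lossless transmission-line form)."""
        return np.array([[np.cos(k * d),          1j * Z * np.sin(k * d)],
                         [1j * np.sin(k * d) / Z, np.cos(k * d)]])

    def power_transmission(segments, Z_in, Z_out):
        """Power transmission coefficient through a chain of segments joining
        semi-infinite waveguides of impedance Z_in and Z_out."""
        M = np.eye(2, dtype=complex)
        for Z, k, d in segments:
            M = M @ segment_matrix(Z, k, d)
        Z_eff = (M[0, 0] * Z_out + M[0, 1]) / (M[1, 0] * Z_out + M[1, 1])
        r = (Z_eff - Z_in) / (Z_eff + Z_in)      # reflection at the input
        return 1.0 - abs(r) ** 2                 # lossless: T = 1 - |r|^2

    Z1, Z2 = 1.0, 9.0
    k = 2 * np.pi                                 # wavelength normalized to 1
    T_direct = power_transmission([], Z1, Z2)                 # abrupt junction
    quarter = [(np.sqrt(Z1 * Z2), k, 0.25)]                   # quarter-wave joint
    T_matched = power_transmission(quarter, Z1, Z2)
    ```

    The quarter-wave element (Z = sqrt(Z1*Z2), kd = pi/2) transmits essentially all the power, while the abrupt junction transmits only 4*Z1*Z2/(Z1+Z2)^2 = 0.36 of it; a shape optimizer of the kind the abstract describes would adjust each disk's radius (hence its impedance) to maximize T over a band of wavenumbers under the subwavelength length constraint.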

  19. Turning EGFR mutation-positive non-small-cell lung cancer into a chronic disease: optimal sequential therapy with EGFR tyrosine kinase inhibitors

    PubMed Central

    Hirsh, Vera

    2018-01-01

    Four epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKIs), erlotinib, gefitinib, afatinib and osimertinib, are currently available for the management of EGFR mutation-positive non-small-cell lung cancer (NSCLC), with others in development. Although tumors are exquisitely sensitive to these agents, acquired resistance is inevitable. Furthermore, emerging data indicate that first- (erlotinib and gefitinib), second- (afatinib) and third-generation (osimertinib) EGFR TKIs differ in terms of efficacy and tolerability profiles. Therefore, there is a strong imperative to optimize the sequence of TKIs in order to maximize their clinical benefit. Osimertinib has demonstrated striking efficacy as a second-line treatment option in patients with T790M-positive tumors, and also confers efficacy and tolerability advantages over first-generation TKIs in the first-line setting. However, while accrual of T790M is the most predominant mechanism of resistance to erlotinib, gefitinib and afatinib, resistance mechanisms to osimertinib have not been clearly elucidated, meaning that possible therapy options after osimertinib failure are not clear. At present, few data comparing sequential regimens in patients with EGFR mutation-positive NSCLC are available and prospective clinical trials are required. This article reviews the similarities and differences between EGFR TKIs, and discusses key considerations when assessing optimal sequential therapy with these agents for the treatment of EGFR mutation-positive NSCLC. PMID:29383041

  20. Optimal design of a main driving mechanism for servo punch press based on performance atlases

    NASA Astrophysics Data System (ADS)

    Zhou, Yanhua; Xie, Fugui; Liu, Xinjun

    2013-09-01

    The servomotor-driven turret punch press is attracting increasing attention and intensive development due to its advantages of high speed, high accuracy, high flexibility, high productivity, low noise, cleanliness and energy saving. To effectively improve performance and lower cost, it is necessary to develop new mechanisms and establish a corresponding optimal design method with uniform performance indices. A new patented main driving mechanism and a new optimal design method are proposed. In the optimal design, the performance indices are defined: the local motion/force transmission indices ITI and OTI, the good transmission workspace (GTW), and the global transmission indices (GTIs). The non-dimensional normalization method is used to obtain all feasible solutions in dimensional synthesis. Thereafter, performance atlases, which can present all possible design solutions, are depicted. As a result, a feasible solution of the mechanism with good motion/force transmission performance is obtained, and the solution can be flexibly adjusted by the designer according to practical design requirements. The proposed mechanism is original, and the presented design method provides a feasible approach to the optimal design of the main driving mechanism for a servo punch press.

  1. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.

The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.

  2. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, C. Lindsay; Zéphyr, Luckny; Liu, Jialin

The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.

  3. Smart house-based optimal operation of thermal unit commitment for a smart grid considering transmission constraints

    NASA Astrophysics Data System (ADS)

    Howlader, Harun Or Rashid; Matayoshi, Hidehito; Noorzad, Ahmad Samim; Muarapaz, Cirio Celestino; Senjyu, Tomonobu

    2018-05-01

This paper presents a smart house-based power system for a thermal unit commitment programme. The proposed power system consists of smart houses, renewable energy plants and conventional thermal units, with transmission constraints taken into account. Because the power generated by a large-capacity renewable energy plant can violate transmission constraints in the thermal unit commitment programme, these constraints must be considered. The paper focuses on the optimal operation of the thermal units incorporating controllable loads, such as the electric vehicles and heat pump water heaters of the smart houses. The proposed method is compared with thermal unit operation without controllable loads and with optimal operation that ignores the transmission constraints. Simulation results validate the proposed method.
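    The interaction described above can be sketched in a toy model (all numbers, the single-line network, and the merit-order dispatch rule are illustrative assumptions, not the paper's formulation): renewable injection is capped by a transmission limit, a flexible smart-house load is shifted into the hour of largest renewable surplus, and thermal units cover the remainder in merit order.

```python
# Toy sketch, not the paper's model: a single transmission line limits how
# much renewable power can be injected; flexible smart-house load is shifted
# toward renewable surplus; thermal units fill the residual demand.

def dispatch(demand, renewable, line_limit, unit_caps):
    """Return (curtailed_renewable, thermal_outputs) for one hour."""
    usable = min(renewable, line_limit)   # line limit caps renewable import
    curtailed = renewable - usable
    residual = max(0.0, demand - usable)
    outputs = []
    for cap in unit_caps:                 # merit order: cheapest unit first
        p = min(cap, residual)
        outputs.append(p)
        residual -= p
    return curtailed, outputs

def shift_load(profile, renewable, line_limit, flexible=2.0):
    """Move a block of flexible load (e.g. EV charging) into the hour
    with the largest usable renewable surplus."""
    surplus = [min(r, line_limit) - d for d, r in zip(profile, renewable)]
    best = max(range(len(profile)), key=lambda h: surplus[h])
    shifted = list(profile)
    shifted[best] += flexible
    return shifted
```

    A full unit commitment would add start-up costs, minimum up/down times and a network model; this sketch only shows why the line limit and the controllable load interact.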

  4. Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox

    NASA Astrophysics Data System (ADS)

    Li, R. N.; Liu, X.; Liu, S. J.

    2013-12-01

In order to ensure high efficiency of the whole flexible drive train of the front-end speed-adjusting wind turbine, the working principle of the main part of the drive train is analyzed. The rotating speed ratios of three planetary gear trains are selected as the critical parameters for study. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on torque balance and energy balance, and with reference to the hydraulic mechanical transmission characteristics, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established from the drive train transmission efficiency and the torque converter rotating speed ratio range, respectively, and the optimization is carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide a program for exactly matching the wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter and synchronous generator, ensure that the drive train works with high efficiency, and give a reference for the selection of the torque converter and hydraulic mechanical transmission.
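    The genetic-algorithm step can be sketched in a few lines. The fitness function below is a made-up stand-in for the paper's drive-train efficiency model (it simply peaks at invented "ideal" speed ratios); only the GA mechanics (tournament selection, blend crossover, Gaussian mutation, elitism) correspond to what a toolbox like MATLAB's performs.

```python
import random

def fitness(x):
    # Toy "efficiency" surrogate: peaks at ratios (0.5, 0.3, 0.8).
    # These targets are purely illustrative, not from the paper.
    targets = (0.5, 0.3, 0.8)
    return -sum((xi - ti) ** 2 for xi, ti in zip(x, targets))

def evolve(pop_size=40, gens=60, bounds=(0.1, 1.0), seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        children = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # Blend crossover plus Gaussian mutation, clipped to bounds.
            child = [min(hi, max(lo, (a + b) / 2 + rng.gauss(0, 0.05)))
                     for a, b in zip(p1, p2)]
            children.append(child)
        # Elitist survivor selection keeps the best individuals.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)
```

    Swapping in the real transmission-efficiency expression and the torque-converter ratio constraints would reproduce the paper's setup; the GA loop itself is unchanged.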

  5. A Method for Optimizing Lightweight-Gypsum Design Based on Sequential Measurements of Physical Parameters

    NASA Astrophysics Data System (ADS)

    Vimmrová, Alena; Kočí, Václav; Krejsová, Jitka; Černý, Robert

    2016-06-01

A method for lightweight-gypsum material design using waste stone dust as the foaming agent is described. The main objective is to reach several physical properties that are to some extent inversely related, so a linear optimization method is applied to handle the task systematically. The optimization process is based on sequential measurement of physical properties; the results are point-awarded according to a complex point criterion, and a new composition is proposed. After 17 trials the final mixture is obtained, having a bulk density of (586 ± 19) kg/m3 and a compressive strength of (1.10 ± 0.07) MPa. According to a detailed comparative analysis with reference gypsum, the newly developed material can serve as an excellent thermally insulating interior plaster with a thermal conductivity of (0.082 ± 0.005) W/(m·K). In addition, its practical application can bring substantial economic and environmental benefits, as the material contains 25 % waste stone dust.
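    The point-award idea can be sketched as follows. The scoring scheme here (weighted ratios to per-property goals, capped at full points) is my assumption, not the authors' exact criterion; the trial values loosely echo the properties quoted above.

```python
# Sketch of sequential point-award scoring for mixture design. Each measured
# trial is scored against property goals; the best-scoring trial seeds the
# next proposed composition. Goals, weights and the cap rule are assumptions.

GOALS = {                     # property: (goal, weight, lower_is_better)
    "bulk_density":  (600.0, 1.0, True),    # kg/m3
    "strength":      (1.0,   2.0, False),   # MPa
    "conductivity":  (0.09,  1.5, True),    # W/(m.K)
}

def score(measured):
    pts = 0.0
    for prop, (goal, weight, lower) in GOALS.items():
        value = measured[prop]
        ratio = goal / value if lower else value / goal
        # Meeting the goal earns full points; exceeding it earns no extra.
        pts += weight * min(ratio, 1.0)
    return pts

trials = [
    {"bulk_density": 900.0, "strength": 2.1, "conductivity": 0.20},
    {"bulk_density": 586.0, "strength": 1.1, "conductivity": 0.082},
]
best = max(trials, key=score)
```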

  6. Applications of colored petri net and genetic algorithms to cluster tool scheduling

    NASA Astrophysics Data System (ADS)

    Liu, Tung-Kuan; Kuo, Chih-Jen; Hsiao, Yung-Chin; Tsai, Jinn-Tsong; Chou, Jyh-Horng

    2005-12-01

In this paper, we propose a method that uses Coloured Petri Nets (CPN) and a genetic algorithm (GA) to obtain an optimal deadlock-free schedule and to solve the re-entrant problem for the flexible process of a cluster tool. The process of a cluster tool for producing a wafer can usually be classified into three types: 1) sequential process, 2) parallel process, and 3) sequential-parallel process. However, these processes are not economical enough for producing a variety of wafers in small volumes. This paper therefore proposes the flexible process, in which the operations for fabricating wafers are arranged freely to achieve the best utilization of the cluster tool. The flexible process, however, may suffer deadlock and re-entrant problems, which can be detected by CPN. GAs, on the other hand, have been applied to find optimal schedules for many types of manufacturing processes. We therefore integrate CPN and GAs to obtain an optimal schedule that resolves the deadlock and re-entrant problems for the flexible process of the cluster tool.

  7. Research on optimal investment path of transmission corridor under the global energy Internet

    NASA Astrophysics Data System (ADS)

    Huang, Yuehui; Li, Pai; Wang, Qi; Liu, Jichun; Gao, Han

    2018-02-01

Against the background of the global energy Internet, this article studies the investment planning of a transmission corridor from Xinjiang to Germany, passing through four countries: Kazakhstan, Russia, Belarus and Poland. Taking the specific situation of each country into account, including transmission line length, unit construction cost, completion time, transmission price, state tariff, inflation rate and so on, the paper constructs a power transmission investment model. Finally, the dynamic programming method is applied to a numerical example, and the optimal strategies under different objective functions are obtained.
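    The staged dynamic-programming idea can be illustrated with a toy corridor model. The option costs and revenues below are invented placeholders, not data from the study; only the country sequence comes from the abstract.

```python
# Illustrative backward-induction sketch for a staged corridor investment:
# one construction option is chosen per country to maximize net benefit over
# the planning horizon. All costs/revenues are invented example numbers.

COUNTRIES = ["Kazakhstan", "Russia", "Belarus", "Poland"]
# (construction cost, annual transmission revenue) per option, per country.
OPTIONS = {
    "Kazakhstan": [(8.0, 1.2), (11.0, 1.9)],
    "Russia":     [(14.0, 2.5), (18.0, 3.4)],
    "Belarus":    [(4.0, 0.7),  (6.0, 1.1)],
    "Poland":     [(5.0, 0.9),  (7.0, 1.4)],
}

def best_plan(horizon_years=20):
    """Backward induction over stages (countries). With stage-independent
    rewards the recursion reduces to a per-stage maximum, but the loop is
    written in the general backward DP form."""
    value = 0.0
    plan = {}
    for country in reversed(COUNTRIES):
        best = max(OPTIONS[country],
                   key=lambda o: o[1] * horizon_years - o[0])
        plan[country] = best
        value += best[1] * horizon_years - best[0]
    return value, plan
```

    A realistic version would make the stage value depend on upstream decisions (completion times, tariffs, inflation), which is where the DP recursion stops being separable.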

  8. Antibacterial serine protease from Wrightia tinctoria: Purification and characterization.

    PubMed

    Muthu, Sakthivel; Gopal, Venkatesh Babu; Soundararajan, Selvakumar; Nattarayan, Karthikeyan; S Narayan, Karthik; Lakshmikanthan, Mythileeswari; Malairaj, Sathuvan; Perumal, Palani

    2017-03-01

A serine protease was purified from the leaves of Wrightia tinctoria by a sequential flow-through method comprising screening, optimization, ammonium sulfate precipitation, gel filtration and ion-exchange column chromatography. The yield and purification fold obtained were 11.58% and 9.56, respectively. A single band of serine protease was visualized on SDS-PAGE, and 2-D gel electrophoretic analyses revealed a molecular mass of 38.5 kDa. The serine protease had an optimum pH of 8.0 and was stable at 45°C with high relative protease activity. The addition of metal ions such as Mg2+ and Mn2+ resulted in high relative activity. The serine protease had potent antibacterial activity against both Gram-positive and Gram-negative bacteria: at 10 μg/ml it produced zones of inhibition of 21, 20, 18 and 17 mm against S. aureus, M. luteus, P. aeruginosa and K. pneumoniae, respectively. The serine protease from W. tinctoria degrades the peptidoglycan layer of bacteria, as visualized by transmission electron microscopic analysis. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  9. Polarimetric Multispectral Imaging Technology

    NASA Technical Reports Server (NTRS)

    Cheng, L.-J.; Chao, T.-H.; Dowdy, M.; Mahoney, C.; Reyes, G.

    1993-01-01

    The Jet Propulsion Laboratory is developing a remote sensing technology on which a new generation of compact, lightweight, high-resolution, low-power, reliable, versatile, programmable scientific polarimetric multispectral imaging instruments can be built to meet the challenge of future planetary exploration missions. The instrument is based on the fast programmable acousto-optic tunable filter (AOTF) of tellurium dioxide (TeO2) that operates in the wavelength range of 0.4-5 microns. Basically, the AOTF multispectral imaging instrument measures incoming light intensity as a function of spatial coordinates, wavelength, and polarization. Its operation can be in either sequential, random access, or multiwavelength mode as required. This provides observation flexibility, allowing real-time alternation among desired observations, collecting needed data only, minimizing data transmission, and permitting implementation of new experiments. These will result in optimization of the mission performance with minimal resources. Recently we completed a polarimetric multispectral imaging prototype instrument and performed outdoor field experiments for evaluating application potentials of the technology. We also investigated potential improvements on AOTF performance to strengthen technology readiness for applications. This paper will give a status report on the technology and a prospect toward future planetary exploration.

  10. Sequential acquisition of Potato virus Y strains by Myzus persicae favors the transmission of the emerging recombinant strains

    USDA-ARS?s Scientific Manuscript database

    In the past decade recombinant strains of potato virus Y (PVY) have overtaken the ordinary strain, PVYO, as the predominant viruses affecting the US seed potato crop. Aphids may be a contributing factor in the emergence of the recombinant strains, but studies indicate that differences in transmissio...

  11. Dopamine reward prediction-error signalling: a two-component response

    PubMed Central

    Schultz, Wolfram

    2017-01-01

    Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020

  12. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive material detection. Compared with commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially with complex radionuclide components. In this paper, a simulation experiment platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator based on Monte Carlo sampling theory, in order to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance of this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
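    A minimal version of the sequential-update idea can be written down directly. This is a simplified sketch in the spirit of Candy's approach, not his event-mode processor: after each counting interval the posterior probability that a source is present is updated from Poisson counts, with background rate b and source-plus-background rate b + s (both rates here are invented).

```python
import math

def poisson_pmf(k, lam):
    # Probability of observing k counts given mean rate lam.
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sequential_posterior(counts, b=1.0, s=3.0, prior=0.5):
    """Sequential Bayesian update of P(source present) from interval counts.
    b: background rate per interval; s: additional source rate (assumed)."""
    p = prior
    history = []
    for k in counts:
        like_src = poisson_pmf(k, b + s)   # likelihood if source present
        like_bkg = poisson_pmf(k, b)       # likelihood if background only
        p = like_src * p / (like_src * p + like_bkg * (1 - p))
        history.append(p)
    return history
```

    The sequential character is what shortens verification time: the decision can be declared as soon as the posterior crosses a confidence threshold, rather than after a fixed counting period.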

  13. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
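    The first stage (bit allocation) can be sketched with the standard high-rate quantizer model D_i(b_i) = var_i · 2^(−2·b_i). For this convex model, greedily giving each next bit to the sub-band with the largest marginal distortion drop coincides with the Lagrangian (KKT) solution the paper describes; the sub-band variances below are invented example values.

```python
import heapq

# Greedy marginal-gain bit allocation across wavelet sub-bands under the
# model D_i(b) = var_i * 2**(-2b) = var_i * 4**(-b). For this convex model
# the greedy rule matches the Lagrange-multiplier/KKT optimum.

def allocate_bits(variances, total_bits):
    bits = [0] * len(variances)
    # Max-heap keyed by the distortion drop from adding one more bit:
    # drop = var*4**(-b) - var*4**(-(b+1)).
    heap = [(-(v - v / 4.0), i) for i, v in enumerate(variances)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        d_now = variances[i] * 4.0 ** (-bits[i])
        heapq.heappush(heap, (-(d_now - d_now / 4.0), i))
    return bits
```

    Note how the result mirrors log-variance waterfilling: higher-variance sub-bands receive proportionally more bits. The second stage (power allocation across bit layers) has the same Lagrangian structure with error sensitivities in place of variances.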

  14. The expression of virulence during double infections by different parasites with conflicting host exploitation and transmission strategies.

    PubMed

    Ben-Ami, F; Rigaud, T; Ebert, D

    2011-06-01

In many natural populations, hosts are found to be infected by more than one parasite species. When these parasites have different host exploitation strategies and transmission modes, a conflict among them may arise. Such a conflict may reduce the success of both parasites, but could work to the benefit of the host. For example, the less-virulent parasite may protect the host against the more-virulent competitor. We examine this conflict using the waterflea Daphnia magna and two of its sympatric parasites: the blood-infecting bacterium Pasteuria ramosa that transmits horizontally and the intracellular microsporidium Octosporea bayeri that can concurrently transmit horizontally and vertically after infecting ovaries and fat tissues of the host. We quantified host and parasite fitness after exposing Daphnia to one or both parasites, both simultaneously and sequentially. Under conditions of strict horizontal transmission, Pasteuria competitively excluded Octosporea in both simultaneous and sequential double infections, regardless of the order of exposure. Host lifespan, host reproduction and parasite spore production in double infections resembled those of single infection by Pasteuria. When hosts first became vertically (transovarially) infected with O. bayeri, Octosporea was able to withstand competition with P. ramosa to some degree, but both parasites produced fewer transmission stages than they did in single infections. At the same time, the host suffered from reduced fecundity and longevity. Our study demonstrates that even when competing parasite species utilize different host tissues to proliferate, double infections lead to the expression of higher virulence and ultimately may select for higher virulence. Furthermore, we found no evidence that the less-virulent and vertically transmitting O. bayeri protects its host against the highly virulent P. ramosa. © 2011 The Authors. Journal of Evolutionary Biology © 2011 European Society For Evolutionary Biology.

  15. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  16. Optimal startup control of a jacketed tubular reactor.

    NASA Technical Reports Server (NTRS)

    Hahn, D. R.; Fan, L. T.; Hwang, C. L.

    1971-01-01

    The optimal startup policy of a jacketed tubular reactor, in which a first-order, reversible, exothermic reaction takes place, is presented. A distributed maximum principle is presented for determining weak necessary conditions for optimality of a diffusional distributed parameter system. A numerical technique is developed for practical implementation of the distributed maximum principle. This involves the sequential solution of the state and adjoint equations, in conjunction with a functional gradient technique for iteratively improving the control function.

  17. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by its degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme for sparse degrees of LT codes is introduced; then the probability distribution over the selected degrees is optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder; the proposed algorithm is therefore designed for the image transmission setting. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm clearly improves the final quality of recovered images at the same overhead.
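    The baseline the paper compares against, Luby's robust soliton distribution, is compact enough to write out. This follows the standard construction (ideal soliton ρ plus the spike correction τ, normalized by β); k is the number of source symbols and c, delta are the usual tuning parameters.

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Return the robust soliton degree distribution mu[0..k]
    (mu[d] = probability of degree d; mu[0] = 0 by convention)."""
    R = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d = 2..k.
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Spike correction tau, concentrated below and at d = round(k/R).
    tau = [0.0] * (k + 1)
    pivot = int(round(k / R))
    for d in range(1, pivot):
        tau[d] = R / (d * k)
    tau[pivot] = R * math.log(R / delta) / k
    beta = sum(rho) + sum(tau)             # normalization constant
    return [(r + t) / beta for r, t in zip(rho, tau)]
```

    Degree 2 dominates the distribution, which is exactly what the optimization in this paper revisits for short (finite-k) codes.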

  18. Decision Aids for Naval Air ASW

    DTIC Science & Technology

    1980-03-15

Algorithm for Zone Optimization Investigation) NADC: developing sonobuoy patterns for air ASW search. DAISY (Decision Aiding Information System), Wharton ...sion making behavior. • Artificial intelligence sequential pattern recognition algorithm for reconstructing the decision maker's utility functions. • ... display presenting the uncertainty area of the target. 3.1.5 Algorithm for Zone Optimization Investigation (AZOI) -- Naval Air Development Center • A

  19. A technique for sequential segmental neuromuscular stimulation with closed loop feedback control.

    PubMed

    Zonnevijlle, Erik D H; Abadia, Gustavo Perez; Somia, Naveen N; Kon, Moshe; Barker, John H; Koenig, Steven; Ewert, D L; Stremel, Richard W

    2002-01-01

    In dynamic myoplasty, dysfunctional muscle is assisted or replaced with skeletal muscle from a donor site. Electrical stimulation is commonly used to train and animate the skeletal muscle to perform its new task. Due to simultaneous tetanic contractions of the entire myoplasty, muscles are deprived of perfusion and fatigue rapidly, causing long-term problems such as excessive scarring and muscle ischemia. Sequential stimulation contracts part of the muscle while other parts rest, thus significantly improving blood perfusion. However, the muscle still fatigues. In this article, we report a test of the feasibility of using closed-loop control to economize the contractions of the sequentially stimulated myoplasty. A simple stimulation algorithm was developed and tested on a sequentially stimulated neo-sphincter designed from a canine gracilis muscle. Pressure generated in the lumen of the myoplasty neo-sphincter was used as feedback to regulate the stimulation signal via three control parameters, thereby optimizing the performance of the myoplasty. Additionally, we investigated and compared the efficiency of amplitude and frequency modulation techniques. Closed-loop feedback enabled us to maintain target pressures within 10% deviation using amplitude modulation and optimized control parameters (correction frequency = 4 Hz, correction threshold = 4%, and transition time = 0.3 s). The large-scale stimulation/feedback setup was unfit for chronic experimentation, but can be used as a blueprint for a small-scale version to unveil the theoretical benefits of closed-loop control in chronic experimentation.
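    The closed-loop amplitude-modulation idea can be sketched with a deliberately oversimplified plant (pressure proportional to amplitude, which is an assumption of this sketch, not the physiology): each correction cycle, the amplitude is adjusted proportionally whenever the luminal pressure strays beyond a correction threshold from target.

```python
# Simplified closed-loop amplitude modulation sketch: one loop iteration per
# correction cycle. The linear plant model and all gains are invented; the
# 4% correction threshold echoes the optimized parameter in the abstract.

def run_controller(target=50.0, steps=40, correction_threshold=0.04,
                   gain=0.5, plant_gain=10.0):
    amplitude = 2.0
    pressures = []
    for _ in range(steps):
        pressure = plant_gain * amplitude      # toy plant response
        error = (target - pressure) / target   # relative error
        if abs(error) > correction_threshold:
            # Proportional correction, scaled by the current amplitude.
            amplitude += gain * amplitude * error
        pressures.append(pressure)
    return pressures
```

    With these settings the pressure settles inside the threshold band, which corresponds to the sub-10% deviation the authors report for amplitude modulation.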

  20. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  1. Unusual behavior in magnesium-copper cluster matter produced by helium droplet mediated deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emery, S. B., E-mail: samuel.emery@navy.mil; Little, B. K.; Air Force Research Laboratory, Munitions Directorate, 2306 Perimeter Rd., Eglin AFB, Florida 32542

    2015-02-28

We demonstrate the ability to produce core-shell nanoclusters of materials that typically undergo intermetallic reactions using helium droplet mediated deposition. Composite structures of magnesium and copper were produced by sequential condensation of metal vapors inside the 0.4 K helium droplet baths and then gently deposited onto a substrate for analysis. Upon deposition, the individual clusters, with diameters ∼5 nm, form a cluster material which was subsequently characterized using scanning and transmission electron microscopies. Results of this analysis reveal the following about the deposited cluster material: it is in the un-alloyed chemical state, it maintains a stable core-shell 5 nm structure at sub-monolayer quantities, and it aggregates into unreacted structures of ∼75 nm during further deposition. Surprisingly, high angle annular dark field scanning transmission electron microscopy images revealed that the copper appears to displace the magnesium at the core of the composite cluster despite magnesium being the initially condensed species within the droplet. This phenomenon was studied further using preliminary density functional theory, which revealed that copper atoms, when added sequentially to magnesium clusters, penetrate into the magnesium cores.

  2. Visualizing non-equilibrium lithiation of spinel oxide via in situ transmission electron microscopy

    DOE PAGES

    He, Kai; Zhang, Sen; Li, Jing; ...

    2016-05-09

In this study, spinel transition metal oxides are an important class of materials that are being considered as electrodes for lithium-ion batteries, due to their low cost and high theoretical capacity. The lithiation of these compounds is known to undergo a two-step reaction, whereby intercalation and conversion occur in a sequential fashion. These two reactions are known to have distinct reaction dynamics, but it is unclear how the kinetics of these processes affect the overall electrochemical response. Here, we explore the lithiation of nanosized magnetite (Fe3O4) by employing a new strain-sensitive, bright-field scanning transmission electron microscopy approach.

  3. Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio

    2011-12-01

    This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, in-exact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).

  4. Synthesizing epidemiological and economic optima for control of immunizing infections.

    PubMed

    Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T

    2011-08-23

    Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
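    The epidemiological threshold the abstract starts from is the classic critical coverage for a perfectly immunizing vaccine, p_c = 1 − 1/R0, which depends only on the basic reproductive number. A one-line computation makes the contrast with the cost-driven economic optimum concrete (the measles-like R0 = 15 below is a textbook illustration, not a value from this paper):

```python
def critical_coverage(r0):
    """Vaccination coverage needed to interrupt local transmission of a
    perfectly immunizing infection: p_c = 1 - 1/R0."""
    if r0 <= 1.0:
        return 0.0   # infection cannot sustain itself without vaccination
    return 1.0 - 1.0 / r0
```

    For R0 = 15 this gives about 93% coverage. The paper's point is that once relative costs of infection and control enter, the economically optimal coverage can sit below this elimination threshold, especially in coupled populations with asymmetric costs.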

  5. A Power-Optimized Cooperative MAC Protocol for Lifetime Extension in Wireless Sensor Networks.

    PubMed

    Liu, Kai; Wu, Shan; Huang, Bo; Liu, Feng; Xu, Zhen

    2016-10-01

In wireless sensor networks, in order to satisfy the requirement of long working times for energy-limited nodes, an energy-efficient, lifetime-extending medium access control (MAC) protocol is needed. In this paper, a node cooperation mechanism, in which one or more nodes with higher channel gain and sufficient residual energy help a sender relay its data packets to its recipient, is employed to achieve this objective. We first propose a transmission power optimization algorithm that prolongs network lifetime by optimizing the transmission powers of the sender and its cooperative nodes so as to maximize their minimum residual energy after their data packet transmissions. Based on it, we propose a corresponding power-optimized cooperative MAC protocol. A cooperative node contention mechanism is designed to ensure that the sender can effectively select a group of cooperative nodes with the lowest energy consumption and the best channel quality for cooperative transmissions, further improving energy efficiency. Simulation results show that, compared with a typical MAC protocol using direct transmissions and with an energy-efficient cooperative MAC protocol, the proposed cooperative MAC protocol improves energy efficiency and extends network lifetime.
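    The max-min objective described above can be sketched in miniature. All energies, powers and the airtime value are invented, and channel gains are assumed to be folded into the required transmit powers; the only element taken from the abstract is the selection rule itself: pick the cooperative option whose worst-off node keeps the most residual energy.

```python
# Toy relay selection under the max-min residual-energy objective. Each
# candidate is (name, relay_energy, required_sender_power,
# required_relay_power), with channel gains already absorbed into the
# required powers. Numbers are illustrative.

def residual_after_tx(energy, power, airtime):
    return energy - power * airtime

def pick_relay(sender_energy, candidates, airtime=0.01):
    """Return the candidate maximizing min(sender, relay) residual energy."""
    def min_residual(c):
        _, e_relay, p_sender, p_relay = c
        return min(residual_after_tx(sender_energy, p_sender, airtime),
                   residual_after_tx(e_relay, p_relay, airtime))
    return max(candidates, key=min_residual)
```

    The real protocol optimizes the power split continuously and resolves contention among helpers distributedly; this sketch only shows why max-min residual energy, rather than total energy, is the lifetime-extending criterion.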

  6. A Power-Optimized Cooperative MAC Protocol for Lifetime Extension in Wireless Sensor Networks

    PubMed Central

    Liu, Kai; Wu, Shan; Huang, Bo; Liu, Feng; Xu, Zhen

    2016-01-01

In wireless sensor networks, in order to satisfy the requirement of long working times for energy-limited nodes, an energy-efficient, lifetime-extending medium access control (MAC) protocol is needed. In this paper, a node cooperation mechanism, in which one or more nodes with higher channel gain and sufficient residual energy help a sender relay its data packets to its recipient, is employed to achieve this objective. We first propose a transmission power optimization algorithm that prolongs network lifetime by optimizing the transmission powers of the sender and its cooperative nodes so as to maximize their minimum residual energy after their data packet transmissions. Based on it, we propose a corresponding power-optimized cooperative MAC protocol. A cooperative node contention mechanism is designed to ensure that the sender can effectively select a group of cooperative nodes with the lowest energy consumption and the best channel quality for cooperative transmissions, further improving energy efficiency. Simulation results show that, compared with a typical MAC protocol using direct transmissions and with an energy-efficient cooperative MAC protocol, the proposed cooperative MAC protocol improves energy efficiency and extends network lifetime. PMID:27706079

  7. Ultra-low-loss tapered optical fibers with minimal lengths

    NASA Astrophysics Data System (ADS)

    Nagai, Ryutaro; Aoki, Takao

    2014-11-01

    We design and fabricate ultra-low-loss tapered optical fibers (TOFs) with minimal lengths. We first optimize variations of the torch scan length in the flame-brush method to fabricate TOFs with taper angles that satisfy the adiabaticity criteria. We then fabricate TOFs with optimal shapes and compare their transmission with that of TOFs with a constant taper angle and TOFs with an exponential shape. The highest transmission measured for TOFs with an optimal shape is in excess of 99.7% for a total TOF length of only 23 mm, whereas TOFs with a constant taper angle of 2 mrad reach 99.6% transmission for a 63 mm TOF length.

  8. Sequential inference as a mode of cognition and its correlates in fronto-parietal and hippocampal brain regions

    PubMed Central

    Friston, Karl J.; Dolan, Raymond J.

    2017-01-01

    Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task we show that subjects’ choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning and correlated with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates. PMID:28486504

  9. Short-Range Temporal Interactions in Sleep; Hippocampal Spike Avalanches Support a Large Milieu of Sequential Activity Including Replay

    PubMed Central

    Mahoney, J. Matthew; Titiz, Ali S.; Hernan, Amanda E.; Scott, Rod C.

    2016-01-01

    Hippocampal neural systems consolidate multiple complex behaviors into memory. However, the temporal structure of neural firing supporting complex memory consolidation is unknown. Replay of hippocampal place cells during sleep supports the view that a simple repetitive behavior modifies sleep firing dynamics, but does not explain how multiple episodes could be integrated into associative networks for recollection during future cognition. Here we decode sequential firing structure within spike avalanches of all pyramidal cells recorded in sleeping rats after running in a circular track. We find that short sequences that combine into multiple long sequences capture the majority of the sequential structure during sleep, including replay of hippocampal place cells. The ensemble, however, is not optimized for maximally producing the behavior-enriched episode. Thus, behavioral programming of sequential correlations occurs at the level of short-range interactions, not whole behavioral sequences, and these short sequences are assembled into a large and complex milieu that could support complex memory consolidation. PMID:26866597

  10. Hybrid Model Predictive Control for Sequential Decision Policies in Adaptive Behavioral Interventions.

    PubMed

    Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S

    2014-06-01

    Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.

  11. Does high optimism protect against the inter-generational transmission of high BMI? The Cardiovascular Risk in Young Finns Study.

    PubMed

    Serlachius, Anna; Pulkki-Råback, Laura; Juonala, Markus; Sabin, Matthew; Lehtimäki, Terho; Raitakari, Olli; Elovainio, Marko

    2017-09-01

    The transmission of overweight from one generation to the next is well established; however, little is known about which psychosocial factors may protect against this familial risk. The aim of this study was to examine whether optimism plays a role in the intergenerational transmission of obesity. Our sample included 1043 participants from the prospective Cardiovascular Risk in Young Finns Study. Optimism was measured in early adulthood (2001) when the cohort was aged 24-39 years. BMI was measured in 2001 (baseline) and 2012 when they were aged 35-50 years. Parental BMI was measured in 1980. Hierarchical linear regression and logistic regression were used to examine the association between optimism and future BMI/obesity, and whether an interaction existed between optimism and parental BMI when predicting BMI/obesity 11 years later. High optimism in young adulthood demonstrated a negative relationship with high BMI in mid-adulthood, but only in women (β=-0.127, p=0.001). The optimism × maternal BMI interaction term was a significant predictor of future BMI in women (β=-0.588, p=0.036). The logistic regression results confirmed that high optimism predicted reduced obesity in women (OR=0.68, 95% CI, 0.55-0.86); however, the optimism × maternal obesity interaction term was not a significant predictor (OR=0.50, 95% CI, 0.10-2.48). Our findings supported our hypothesis that high optimism mitigated the intergenerational transmission of high BMI, but only in women. These findings also provided evidence that positive psychosocial factors such as optimism are associated with long-term protective effects on BMI in women. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Optimized OFDM Transmission of Encrypted Image Over Fading Channel

    NASA Astrophysics Data System (ADS)

    Eldin, Salwa M. Serag

    2014-11-01

    This paper compares the quality of diffusion-based and permutation-based encrypted image transmission using orthogonal frequency division multiplexing (OFDM) over a wireless fading channel. Sensitivity to carrier frequency offsets (CFOs), one of the limitations of OFDM transmission, is compensated for here. Different OFDM diffusion schemes are investigated to optimize encrypted image transmission. The peak signal-to-noise ratio between the original image and the decrypted image is used to evaluate received image quality. Chaotic image encryption combined with CFO-compensated FFT-OFDM was found to outperform the other encryption and modulation techniques.

  13. Transmission congestion management in the electricity market

    NASA Astrophysics Data System (ADS)

    Chen, Yue

    2018-04-01

    In this paper we discuss how to optimize generation dispatch so as to decrease the loss on each line, in a safe and economical manner, when transmission congestion occurs on the generation side of the system. We formulate an adjustment model, which computes the best dispatch scheme when the congestion can be eliminated, and a safety-margin model, a multi-objective planning problem, for when it cannot. We solve the two models with Lingo for load power demands of 982.4 MW and 1052.8 MW and obtain the optimal management scheme.

  14. Transient photocurrent in molecular junctions: singlet switching on and triplet blocking.

    PubMed

    Petrov, E G; Leonov, V O; Snitsarev, V

    2013-05-14

    The kinetic approach adapted to describe charge transmission in molecular junctions is used to analyze the photocurrent of a photochromic molecule under conditions of moderate light intensity. In the framework of the HOMO-LUMO model for the single-electron molecular states, analytic expressions describing the temporal behavior of the transient and steady-state sequential (hopping) and direct (tunnel) current components have been derived. The conditions at which the current components achieve their maximal values are indicated. It is shown that if the rates of charge transmission in the unbiased molecular diode are much lower than the intramolecular singlet-singlet excitation/de-excitation rate, and the threefold degenerate triplet excited state of the molecule behaves like a trap blocking charge transmission, then a large peak-like transient switch-on photocurrent can arise.

  15. Optimization of the High-speed On-off Valve of an Automatic Transmission

    NASA Astrophysics Data System (ADS)

    Li-mei, ZHAO; Huai-chao, WU; Lei, ZHAO; Yun-xiang, LONG; Guo-qiao, LI; Shi-hao, TANG

    2018-03-01

    The response time of the high-speed on-off solenoid valve has a great influence on the performance of the automatic transmission. To reduce this response time, a simulation model of the valve was built using the AMESim and Ansoft Maxwell software packages. An objective function based on the ITAE criterion was constructed, and a genetic algorithm was used to optimize five parameters, including the number of coil turns and the working air gap. The agreement between experiment and simulation validates the model. After optimization, the response time of the valve is reduced by 38.16%, so that the valve meets the demands of the automatic transmission well. The results can provide a theoretical reference for improving automatic transmission performance.
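
    The ITAE-plus-genetic-algorithm pairing in this record can be sketched in one dimension: a first-order step response stands in for the solenoid current rise, the ITAE integral penalizes slow settling, and a simple real-coded GA searches the time constant. All constants are invented for illustration; the paper optimizes five solenoid parameters inside AMESim/Maxwell co-simulation, not this toy.

```python
import random

def itae(tau, t_end=1.0, dt=1e-3):
    """ITAE = integral of t*|e(t)| dt for a first-order unit-step response
    y' = (1 - y)/tau, integrated with forward Euler."""
    y, J, t = 0.0, 0.0, 0.0
    while t < t_end:
        J += t * abs(1.0 - y) * dt
        y += dt * (1.0 - y) / tau
        t += dt
    return J

def ga_minimize(obj, lo, hi, pop=30, gens=40, seed=1):
    """Minimal real-coded GA: elitist truncation selection, blend crossover,
    Gaussian mutation, values clipped to [lo, hi]."""
    rng = random.Random(seed)
    P = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=obj)
        elite = P[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            c = 0.5 * (a + b) + rng.gauss(0.0, 0.05 * (hi - lo))
            children.append(min(hi, max(lo, c)))
        P = elite + children
    return min(P, key=obj)
```

    Since ITAE here is monotone in the time constant, the GA should drive the result toward the lower bound, which is a convenient sanity check.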

  16. Optimal integer resolution for attitude determination using global positioning system signals

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn

    1998-01-01

    In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. The result of this transformation leads to the conversion of the integer ambiguities to vectorized biases. This essentially converts the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. Also, the formulation in this paper is re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a-priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.
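
    The sequential estimator this record describes can be illustrated with a generic recursive least-squares update for a constant bias vector: the covariance it carries is exactly the kind of "covariance-type expression" usable as an integrity or stopping check during motion. The linear measurement model below is a generic stand-in, not the paper's phase-difference geometry.

```python
def rls_update(x, P, h, z, R=1.0):
    """One recursive least-squares step for the model z = h . x + noise.
    x: current estimate (list), P: covariance (list of lists),
    h: measurement row vector, z: scalar measurement, R: noise variance."""
    hx = sum(hi * xi for hi, xi in zip(h, x))                      # predicted z
    Ph = [sum(Pij * hj for Pij, hj in zip(Pi, h)) for Pi in P]     # P . h
    S = sum(hi * phi for hi, phi in zip(h, Ph)) + R                # innovation var
    K = [phi / S for phi in Ph]                                    # gain
    x = [xi + Ki * (z - hx) for xi, Ki in zip(x, K)]
    P = [[Pij - K[i] * Ph[j] for j, Pij in enumerate(Pi)]
         for i, Pi in enumerate(P)]
    return x, P
```

    A stopping rule would simply watch the diagonal of P shrink below a threshold as baseline directions accumulate.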

  17. Revisiting Bevacizumab + Cytotoxics Scheduling Using Mathematical Modeling: Proof of Concept Study in Experimental Non-Small Cell Lung Carcinoma.

    PubMed

    Imbs, Diane-Charlotte; El Cheikh, Raouf; Boyer, Arnaud; Ciccolini, Joseph; Mascaux, Céline; Lacarelle, Bruno; Barlesi, Fabrice; Barbolosi, Dominique; Benzekry, Sébastien

    2018-01-01

    Concomitant administration of bevacizumab and pemetrexed-cisplatin is a common treatment for advanced nonsquamous non-small cell lung cancer (NSCLC). Vascular normalization following bevacizumab administration may transiently enhance drug delivery, suggesting improved efficacy with sequential administration. To investigate optimal scheduling, we conducted a study in NSCLC-bearing mice. First, experiments demonstrated improved efficacy when using sequential vs. concomitant scheduling of bevacizumab and chemotherapy. Combining this data with a mathematical model of tumor growth under therapy accounting for the normalization effect, we predicted an optimal delay of 2.8 days between bevacizumab and chemotherapy. This prediction was confirmed experimentally, with reduced tumor growth of 38% as compared to concomitant scheduling, and prolonged survival (74 vs. 70 days). Alternate sequencing of 8 days failed in achieving a similar increase in efficacy, thus emphasizing the utility of modeling support to identify optimal scheduling. The model could also be a useful tool in the clinic to personally tailor regimen sequences. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  18. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    NASA Astrophysics Data System (ADS)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    It is very important, from the point of view of saving energy, to design electrical machinery with high efficiency. Therefore, topology optimization (TO) is sometimes used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a design with a much higher degree of structural freedom, it can derive novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding model, which has many local minima, is first employed as a benchmark for evaluating performance among several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
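
    The move-limit SLP loop can be caricatured in a few lines: linearize the objective at the current point, minimize the linearization over the box trust region |Δx_i| ≤ move (whose minimizer is just a signed step of size "move"), and adapt the move limit, relaxing it on success and shrinking it on failure. A smooth toy objective stands in for the FEM-based magnetic energy; nothing below reflects the paper's actual formulation.

```python
def slp_minimize(f, grad, x0, move=0.5, shrink=0.5, grow=1.1, iters=60):
    """Sequential linear programming over box move limits.
    The LP 'min g . dx s.t. |dx_i| <= move' has the closed-form
    solution dx_i = -move * sign(g_i); the move limit adapts."""
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        g = grad(x)
        cand = [xi - move * (1 if gi > 0 else -1 if gi < 0 else 0)
                for xi, gi in zip(x, g)]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            move *= grow      # adaptive relaxation on success
        else:
            move *= shrink    # tighten the move limit on failure
    return x, fx
```
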

  19. Optimal pacing modes after cardiac transplantation: is synchronisation of recipient and donor atria beneficial?

    PubMed Central

    Parry, Gareth; Malbut, Katie; Dark, John H; Bexton, Rodney S

    1992-01-01

    Objective—To investigate the response of the transplanted heart to different pacing modes and to synchronisation of the recipient and donor atria in terms of cardiac output at rest. Design—Doppler derived cardiac output measurements at three pacing rates (90/min, 110/min and 130/min) in five pacing modes: right ventricular pacing, donor atrial pacing, recipient-donor synchronous pacing, donor atrial-ventricular sequential pacing, and synchronous recipient-donor atrial-ventricular sequential pacing. Patients—11 healthy cardiac transplant recipients with three pairs of epicardial leads inserted at transplantation. Results—Donor atrial pacing (+11% overall) and donor atrial-ventricular sequential pacing (+8% overall) were significantly better than right ventricular pacing (p < 0·001) at all pacing rates. Synchronised pacing of recipient and donor atrial segments did not confer additional benefit in either atrial or atrial-ventricular sequential modes of pacing in terms of cardiac output at rest at these fixed rates. Conclusions—Atrial pacing or atrial-ventricular sequential pacing appear to be appropriate modes in cardiac transplant recipients. Synchronisation of recipient and donor atrial segments in this study produced no additional benefit. Chronotropic competence in these patients may, however, result in improved exercise capacity and deserves further investigation. PMID:1389737

  20. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample size. Simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Factorial Based Response Surface Modeling with Confidence Intervals for Optimizing Thermal Optical Transmission Analysis of Atmospheric Black Carbon

    EPA Science Inventory

    We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...

  2. Downlink Cooperative Broadcast Transmission Based on Superposition Coding in a Relaying System for Future Wireless Sensor Networks.

    PubMed

    Liu, Yang; Han, Guangjie; Shi, Sulong; Li, Zhengquan

    2018-06-20

    This study investigates the superiority of cooperative broadcast transmission over traditional orthogonal schemes when applied in a downlink relaying broadcast channel (RBC). Two proposed cooperative broadcast transmission protocols, one with an amplify-and-forward (AF) relay, and the other with a repetition-based decode-and-forward (DF) relay, are investigated. By utilizing superposition coding (SupC), the source and the relay transmit the private user messages simultaneously instead of sequentially as in traditional orthogonal schemes, which means the channel resources are reused and an increased channel degree of freedom is available to each user, hence the half-duplex penalty of relaying is alleviated. To facilitate a performance evaluation, theoretical outage probability expressions of the two broadcast transmission schemes are developed, based on which, we investigate the minimum total power consumption of each scheme for a given traffic requirement by numerical simulation. The results provide details on the overall system performance and fruitful insights on the essential characteristics of cooperative broadcast transmission in RBCs. It is observed that better overall outage performances and considerable power gains can be obtained by utilizing cooperative broadcast transmissions compared to traditional orthogonal schemes.

  3. SeGRAm - A practical and versatile tool for spacecraft trajectory optimization

    NASA Technical Reports Server (NTRS)

    Rishikof, Brian H.; Mccormick, Bernell R.; Pritchard, Robert E.; Sponaugle, Steven J.

    1991-01-01

    An implementation of the Sequential Gradient/Restoration Algorithm, SeGRAm, is presented along with selected examples. This spacecraft trajectory optimization and simulation program uses variational calculus to solve problems of spacecraft flying under the influence of one or more gravitational bodies. It produces a series of feasible solutions to problems involving a wide range of vehicles, environments and optimization functions, until an optimal solution is found. The examples included highlight the various capabilities of the program and emphasize in particular its versatility over a wide spectrum of applications from ascent to interplanetary trajectories.

  4. Procedures for shape optimization of gas turbine disks

    NASA Technical Reports Server (NTRS)

    Cheu, Tsu-Chien

    1989-01-01

    Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of selected points on the disk contours are used as the design variables. Structural weight, stress, and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.

  5. Host Adaptation of Soybean Dwarf Virus Following Serial Passages on Pea (Pisum sativum) and Soybean (Glycine max)

    PubMed Central

    Tian, Bin; Gildow, Frederick E.; Stone, Andrew L.; Sherman, Diana J.; Damsteegt, Vernon D.; Schneider, William L.

    2017-01-01

    Soybean Dwarf Virus (SbDV) is an important plant pathogen, causing economic losses in soybean. In North America, indigenous strains of SbDV mainly infect clover, with occasional outbreaks in soybean. To evaluate the risk of a US clover strain of SbDV adapting to other plant hosts, the clover isolate SbDV-MD6 was serially transmitted to pea and soybean by aphid vectors. Sequence analysis of SbDV-MD6 from pea and soybean passages identified 11 non-synonymous mutations in soybean, and six mutations in pea. Increasing virus titers with each sequential transmission indicated that SbDV-MD6 was able to adapt to the plant host. However, aphid transmission efficiency on soybean decreased until the virus was no longer transmissible. Our results clearly demonstrated that the clover strain of SbDV-MD6 is able to adapt to soybean crops. However, mutations that improve replication and/or movement may have trade-off effects resulting in decreased vector transmission. PMID:28635666

  6. Sequential Optimization Methods for Augmentation of Marine Enzymes Production in Solid-State Fermentation: l-Glutaminase Production a Case Study.

    PubMed

    Sathish, T; Uppuluri, K B; Veera Bramha Chari, P; Kezia, D

    There is an increasing worldwide market for l-glutaminase owing to its industrial applications. Salt-tolerant l-glutaminases play a vital role in enhancing the flavor of foods such as soya sauce and tofu. This chapter presents economically viable l-glutaminase production in solid-state fermentation (SSF) by Aspergillus flavus MTCC 9972 as a case study. Enzyme production was improved through a three-step optimization process. Initially, a mixture design (MD) (augmented simplex-lattice design) was employed to optimize the solid substrate mixture; a 59:41 mixture of wheat bran and Bengal gram husk gave higher amounts of l-glutaminase. Glucose and l-glutamine were screened as the best additional carbon and nitrogen sources for l-glutaminase production with the help of a Plackett-Burman design (PBD). l-Glutamine also acts as an inducer of l-glutaminase secretion by A. flavus MTCC 9972, in addition to serving as a nitrogen source. In the final step of optimization, environmental and nutritive parameters such as pH, temperature, moisture content, inoculum concentration, and glucose and l-glutamine levels were optimized using hybrid feed-forward neural networks (FFNNs) and a genetic algorithm (GA). Through the sequential optimization methods MD-PBD-FFNN-GA, l-glutaminase production in SSF was improved 2.7-fold (453 to 1690 U/g). © 2016 Elsevier Inc. All rights reserved.
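
    The mixture-design step can be illustrated by generating the candidate blends of a {q, m} simplex lattice: every component proportion is a multiple of 1/m and the proportions sum to 1. The chapter uses an augmented lattice; the augmentation (interior check) points are omitted in this sketch.

```python
def simplex_lattice(q, m):
    """All points of a {q, m} simplex-lattice mixture design:
    q components, proportions in multiples of 1/m, summing to 1.
    The number of points is C(q + m - 1, m)."""
    pts = []
    def rec(prefix, remaining, slots):
        if slots == 1:
            pts.append(prefix + [remaining / m])   # last component takes the rest
            return
        for k in range(remaining + 1):
            rec(prefix + [k / m], remaining - k, slots - 1)
    rec([], m, q)
    return pts
```

    For a two-component substrate blend ({2, m} lattice) these are simply the ratios 0:m, 1:m-1, ..., m:0, among which a blend like the chapter's 59:41 optimum would be located by the fitted mixture model rather than by the raw lattice.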

  7. Privatization and subsidization in a leadership duopoly

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.

    2017-07-01

    In this paper, we consider a competition in both mixed and privatized markets, in which the firms set prices in a sequential way. We study the effects of optimal production subsidies in both mixed and privatized duopoly.

  8. Sequential estimation and satellite data assimilation in meteorology and oceanography

    NASA Technical Reports Server (NTRS)

    Ghil, M.

    1986-01-01

    The central theme of this review article is the role that dynamics plays in estimating the state of the atmosphere and of the ocean from incomplete and noisy data. Objective analysis and inverse methods represent an attempt at relying mostly on the data and minimizing the role of dynamics in the estimation. Four-dimensional data assimilation tries to balance properly the roles of dynamical and observational information. Sequential estimation is presented as the proper framework for understanding this balance, and the Kalman filter as the ideal, optimal procedure for data assimilation. The optimal filter computes forecast error covariances of a given atmospheric or oceanic model exactly, and hence data assimilation should be closely connected with predictability studies. This connection is described, and consequences drawn for currently active areas of the atmospheric and oceanic sciences, namely, mesoscale meteorology, medium and long-range forecasting, and upper-ocean dynamics.
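
    In the scalar case, the sequential-estimation framework the review describes reduces to the familiar Kalman recursion: a forecast step, which propagates the forecast error covariance (the quantity tying assimilation to predictability), followed by an analysis step that blends forecast and observation. A minimal scalar sketch with invented noise levels:

```python
def kalman_filter(zs, x0=0.0, P0=1.0, F=1.0, H=1.0, Q=1e-4, R=0.1):
    """Scalar Kalman filter: sequential minimum-variance estimate of a
    slowly varying state from noisy observations zs."""
    x, P, out = x0, P0, []
    for z in zs:
        # forecast step: propagate state and forecast error covariance
        x, P = F * x, F * P * F + Q
        # analysis step: weight forecast vs. observation by the Kalman gain
        K = P * H / (H * P * H + R)
        x = x + K * (z - H * x)
        P = (1 - K * H) * P
        out.append(x)
    return out
```

    Fed alternating observations around a constant truth, the estimate settles near the truth while the gain (and hence the jitter) shrinks.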

  9. Optimal medication dosing from suboptimal clinical examples: a deep reinforcement learning approach.

    PubMed

    Nemati, Shamim; Ghassemi, Mohammad M; Clifford, Gari D

    2016-08-01

    Misdosing medications with sensitive therapeutic windows, such as heparin, can place patients at unnecessary risk, increase length of hospital stay, and lead to wasted hospital resources. In this work, we present a clinician-in-the-loop sequential decision making framework, which provides an individualized dosing policy adapted to each patient's evolving clinical phenotype. We employed retrospective data from the publicly available MIMIC II intensive care unit database, and developed a deep reinforcement learning algorithm that learns an optimal heparin dosing policy from sample dosing trials and their associated outcomes in large electronic medical records. Using separate training and testing datasets, our model was observed to be effective in proposing heparin doses that resulted in better expected outcomes than the clinical guidelines. Our results demonstrate that a sequential modeling approach, learned from retrospective data, could potentially be used at the bedside to derive individualized patient dosing policies.
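
    The sequential dosing framework can be caricatured as tabular Q-learning on a three-state toy: coagulation status low/therapeutic/high, dose actions decrease/hold/increase, and reward for landing in the therapeutic band. This is an invented stand-in, not the paper's deep RL model or anything derived from the MIMIC II data.

```python
import random

def learn_dosing_policy(episodes=400, steps=15, alpha=0.5, gamma=0.9,
                        eps=0.2, seed=0):
    """Tabular Q-learning on states {0: low, 1: therapeutic, 2: high}
    and actions {0: decrease dose, 1: hold, 2: increase dose}."""
    rng = random.Random(seed)
    Q = [[0.0] * 3 for _ in range(3)]
    for _ in range(episodes):
        s = rng.randrange(3)                       # random patient start state
        for _ in range(steps):
            if rng.random() < eps:                 # epsilon-greedy exploration
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda a_: Q[s][a_])
            s2 = max(0, min(2, s + (a - 1)))       # dose change shifts the state
            r = 1.0 if s2 == 1 else 0.0            # reward: in therapeutic band
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

    The learned greedy policy should be the obvious one: raise the dose when low, hold when therapeutic, lower when high.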

  10. Optimal trajectories for aeroassisted orbital transfer

    NASA Technical Reports Server (NTRS)

    Miele, A.; Venkataraman, P.

    1983-01-01

    Consideration is given to classical and minimax problems involved in aeroassisted transfer from high earth orbit (HEO) to low earth orbit (LEO). The transfer is restricted to coplanar operation, with trajectory control effected by means of lift modulation. The performance of the maneuver is indexed to the energy expenditure or, alternatively, the time integral of the heating rate. First-order optimality conditions are defined for the classical approach, as are a sequential gradient-restoration algorithm and a combined gradient-restoration algorithm. Minimization techniques are presented for the aeroassisted transfer energy consumption and the time integral of the heating rate, as well as for minimization of the pressure. It is shown that the eigenvalues of the Jacobian matrix indicate that the differential system is both stiff and unstable, implying that the sequential gradient-restoration algorithm in its present version is unsuitable. A new method, involving a multipoint approach to the two-point boundary value problem, is recommended.

  11. Sequential vs. simultaneous photokilling by mitochondrial and lysosomal photodamage

    NASA Astrophysics Data System (ADS)

    Kessel, David

    2017-02-01

    We previously reported that a low level of lysosomal photodamage can markedly promote the subsequent efficacy of PDT directed at mitochondria. This involves release of Ca2+ from photodamaged lysosomes, cleavage of the autophagy-associated protein ATG5 after activation of calpain, and an interaction between the ATG5 fragment and mitochondria resulting in enhanced apoptosis. Inhibition of calpain activity abolished this effect. We examined permissible irradiation sequences. Lysosomal photodamage must occur first, with the 'enhancement' effect showing a short half-life (~15 min), presumably reflecting the survival of the ATG5 fragment. Simultaneous photodamage to both loci was found to be as effective as the sequential protocol. Since Photofrin can target both lysosomes and mitochondria for photodamage, this broad spectrum of photodamage may explain the efficacy of this photosensitizing agent in spite of a sub-optimal absorbance profile at a sub-optimal wavelength for tissue transparency.

  12. Genetic algorithm optimization of transcutaneous energy transmission systems for implantable ventricular assist devices.

    PubMed

    Byron, Kelly; Bluvshtein, Vlad; Lucke, Lori

    2013-01-01

    Transcutaneous energy transmission systems (TETS) wirelessly transmit power through the skin. TETS is particularly desirable for ventricular assist devices (VAD), which currently require cables through the skin to power the implanted pump. Optimizing the inductive link of the TET system is a multi-parameter problem. Most current techniques to optimize the design simplify the problem by combining parameters leading to sub-optimal solutions. In this paper we present an optimization method using a genetic algorithm to handle a larger set of parameters, which leads to a more optimal design. Using this approach, we were able to increase efficiency while also reducing power variability in a prototype, compared to a traditional manual design method.

  13. Optimal Control of Malaria Transmission using Insecticide Treated Nets and Spraying

    NASA Astrophysics Data System (ADS)

    Athina, D.; Bakhtiar, T.; Jaharuddin

    2017-03-01

    In this paper, we consider a model of malaria transmission developed by Silva and Torres, equipped with two control variables: the use of insecticide-treated nets (ITN) to reduce the number of humans infected, and spraying to reduce the number of mosquitoes. The Pontryagin maximum principle was applied to derive the system of differential equations constituting the optimality conditions that the optimal control variables must satisfy. The Mangasarian sufficiency theorem shows that, for this optimization problem, the Pontryagin maximum principle provides conditions that are sufficient as well as necessary. The fourth-order Runge-Kutta method was then used to solve the differential equation system. The numerical results show that applying both controls at once reduces the number of infected individuals as well as the number of mosquitoes, thereby reducing the impact of malaria transmission.

  14. Optimizing your options: Extracting the full economic value of transmission when planning under uncertainty

    DOE PAGES

    Munoz, Francisco D.; Watson, Jean -Paul; Hobbs, Benjamin F.

    2015-06-04

    The anticipated magnitude of needed investments in new transmission infrastructure in the U.S. requires that they be allocated in a way that maximizes the likelihood of achieving society's goals for power system operation. State-of-the-art optimization tools can identify cost-effective investment alternatives, extract more benefits from transmission expansion portfolios, and account for the huge economic, technology, and policy uncertainties that the power sector faces over the next several decades.

  15. Sequential capillary electrophoresis analysis using optically gated sample injection and UV/vis detection.

    PubMed

    Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li

    2015-10-01

    We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, combining online derivatization, optically gated (OG) injection, and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/Vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated, with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (asparagine) and 2.0 μM (aspartic acid) were obtained. Applying OG-UV/Vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring substrate consumption and product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants obtained for the reaction were in good agreement with the results of traditional off-line enzyme assays. The study demonstrates the feasibility and reliability of integrating OG injection with UV/Vis detection for sequential online CE analysis, which could be of value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
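    The Michaelis constants mentioned above are conventionally extracted from initial-rate data. As a minimal sketch (with synthetic substrate/rate pairs, not the paper's measurements), a Lineweaver-Burk linearization recovers Km and Vmax by least squares:

```python
# Estimate Michaelis-Menten constants from initial-rate data by fitting the
# linearized form 1/v = (Km/Vmax)(1/S) + 1/Vmax with ordinary least squares.
def lineweaver_burk(s_values, v_values):
    xs = [1.0 / s for s in s_values]
    ys = [1.0 / v for v in v_values]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

# Synthetic data generated from Km = 2.0 mM, Vmax = 10.0 (arbitrary units).
substrates = [0.5, 1.0, 2.0, 4.0, 8.0]
rates = [10.0 * s / (2.0 + s) for s in substrates]
km_est, vmax_est = lineweaver_burk(substrates, rates)
```

    On noise-free data the double-reciprocal plot is exactly linear, so the fit returns the generating constants; with real assay noise a direct nonlinear fit of v = Vmax·S/(Km + S) is usually preferred.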

  16. Helium and deuterium irradiation effects in W-Ta composites produced by pulse plasma compaction

    NASA Astrophysics Data System (ADS)

    Dias, M.; Catarino, N.; Nunes, D.; Fortunato, E.; Nogueira, I.; Rosinki, M.; Correia, J. B.; Carvalho, P. A.; Alves, E.

    2017-08-01

    Tungsten-tantalum composites have been envisaged for first-wall components of nuclear fusion reactors; however, changes in their microstructure are expected from severe irradiation with helium and hydrogenic plasma species. In this study, composites were produced from ball milled W powder mixed with 10 at.% Ta fibers through consolidation by pulse plasma compaction. Implantation was carried out at room temperature with He+ (30 keV) or D+ (15 keV) or sequentially with He+ and D+ using ion beams with fluences of 5 × 10²¹ at/m². Microstructural changes and deuterium retention in the implanted composites were investigated by scanning electron microscopy, coupled with focused ion beam and energy dispersive X-ray spectroscopy, transmission electron microscopy, X-ray diffraction, Rutherford backscattering spectrometry and nuclear reaction analysis. The composite materials consisted of Ta fibers dispersed in a nanostructured W matrix, with Ta2O5 layers at the interfacial regions. The Ta and Ta2O5 surfaces exhibited blisters after He+ implantation and subsequent D+ implantation worsened the blistering behavior of Ta2O5. Swelling was also pronounced in Ta2O5 where large blisters exhibited an internal nanometer-sized fuzz structure. Transmission electron microscopy revealed an extensive presence of dislocations in the metallic phases after the sequential implantation, while a relatively low density of defects was detected in Ta2O5. This behavior may be partially justified by a shielding effect from the blisters and fuzz structure developed progressively during implantation. The tungsten peaks in the X-ray diffractograms were markedly shifted after He+ implantation, and even more so after the sequential implantation, which is in agreement with the increased D retention inferred from nuclear reaction analysis.

  17. Optimization of freeform lightpipes for light-emitting-diode projectors.

    PubMed

    Fournier, Florian; Rolland, Jannick

    2008-03-01

    Standard nonimaging components used to collect and integrate light in light-emitting-diode-based projector light engines such as tapered rods and compound parabolic concentrators are compared to optimized freeform shapes in terms of transmission efficiency and spatial uniformity. We show that the simultaneous optimization of the output surface and the profile shape yields transmission efficiency within the étendue limit up to 90% and spatial uniformity higher than 95%, even for compact sizes. The optimization process involves a manual study of the trends for different shapes and the use of an optimization algorithm to further improve the performance of the freeform lightpipe.

  18. Optimization of freeform lightpipes for light-emitting-diode projectors

    NASA Astrophysics Data System (ADS)

    Fournier, Florian; Rolland, Jannick

    2008-03-01

    Standard nonimaging components used to collect and integrate light in light-emitting-diode-based projector light engines such as tapered rods and compound parabolic concentrators are compared to optimized freeform shapes in terms of transmission efficiency and spatial uniformity. We show that the simultaneous optimization of the output surface and the profile shape yields transmission efficiency within the étendue limit up to 90% and spatial uniformity higher than 95%, even for compact sizes. The optimization process involves a manual study of the trends for different shapes and the use of an optimization algorithm to further improve the performance of the freeform lightpipe.

  19. Sequentially Integrated Optimization of the Conditions to Obtain a High-Protein and Low-Antinutritional Factors Protein Isolate from Edible Jatropha curcas Seed Cake.

    PubMed

    León-López, Liliana; Dávila-Ortiz, Gloria; Jiménez-Martínez, Cristian; Hernández-Sánchez, Humberto

    2013-01-01

    Jatropha curcas seed cake is a protein-rich byproduct of oil extraction which could be used to produce protein isolates. The purpose of this study was the optimization of the protein isolation process from the seed cake of an edible provenance of J. curcas, by alkaline extraction followed by isoelectric precipitation, via a sequentially integrated optimization approach. The influence of four different factors (solubilization pH, extraction temperature, NaCl addition, and precipitation pH) on the protein and antinutritional compound content of the isolate was evaluated. The estimated optimal conditions were an extraction temperature of 20°C, a precipitation pH of 4, and an NaCl concentration in the extraction solution of 0.6 M, for a predicted protein content of 93.3%. Under these conditions, it was possible to obtain experimentally a protein isolate with 93.21% protein, 316.5 mg 100 g⁻¹ of total phenolics, 2891.84 mg 100 g⁻¹ of phytates and 168 mg 100 g⁻¹ of saponins. The protein content of this isolate was higher than the contents reported by other authors.

  20. Sequentially Integrated Optimization of the Conditions to Obtain a High-Protein and Low-Antinutritional Factors Protein Isolate from Edible Jatropha curcas Seed Cake

    PubMed Central

    León-López, Liliana; Dávila-Ortiz, Gloria; Jiménez-Martínez, Cristian; Hernández-Sánchez, Humberto

    2013-01-01

    Jatropha curcas seed cake is a protein-rich byproduct of oil extraction which could be used to produce protein isolates. The purpose of this study was the optimization of the protein isolation process from the seed cake of an edible provenance of J. curcas, by alkaline extraction followed by isoelectric precipitation, via a sequentially integrated optimization approach. The influence of four different factors (solubilization pH, extraction temperature, NaCl addition, and precipitation pH) on the protein and antinutritional compound content of the isolate was evaluated. The estimated optimal conditions were an extraction temperature of 20°C, a precipitation pH of 4, and an NaCl concentration in the extraction solution of 0.6 M, for a predicted protein content of 93.3%. Under these conditions, it was possible to obtain experimentally a protein isolate with 93.21% protein, 316.5 mg 100 g−1 of total phenolics, 2891.84 mg 100 g−1 of phytates and 168 mg 100 g−1 of saponins. The protein content of this isolate was higher than the contents reported by other authors. PMID:25937971

  1. Removal of arsenic and cadmium with sequential soil washing techniques using Na2EDTA, oxalic and phosphoric acid: Optimization conditions, removal effectiveness and ecological risks.

    PubMed

    Wei, Meng; Chen, Jiajun; Wang, Xingwei

    2016-08-01

    Sequential soil washing was tested in triplicate using a typical chelating agent (Na2EDTA), an organic acid (oxalic acid), and an inorganic weak acid (phosphoric acid) to remediate soil contaminated by heavy metals close to a mining area. The aim of the testing was to improve removal efficiency and reduce the mobility of the heavy metals. The sequential extraction procedure and further speciation analysis demonstrated that the primary components of arsenic and cadmium in the soil were the residual fraction (O-As) and the exchangeable fraction, which accounted for 60% of total arsenic and 70% of total cadmium, respectively. The washing agents and their washing order proved critical to the removal efficiencies of metal fractions and to metal bioavailability and potential mobility, owing to different levels of dissolution of residual fractions and inter-transformation of metal fractions. The optimal soil washing option for arsenic and cadmium was the phosphoric-oxalic acid-Na2EDTA sequence (POE), based on its high removal efficiency (41.9% for arsenic and 89.6% for cadmium) and the minimal harmful effects from the mobility and bioavailability of the remaining heavy metals. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Determination of dasatinib in the tablet dosage form by ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis.

    PubMed

    Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel

    2017-01-01

    Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. OPTIMIZING NIST SEQUENTIAL EXTRACTION METHOD FOR LAKE SEDIMENT (SRM4354)

    EPA Science Inventory

    Traditionally, measurements of radionuclides in the environment have focused on the determination of total concentration. It is clear, however, that total concentration does not describe the bioavailability of contaminating radionuclides. The environmental behavior depends on spe...

  4. Ion transfer from an atmospheric pressure ion funnel into a mass spectrometer with different interface options: Simulation-based optimization of ion transmission efficiency.

    PubMed

    Mayer, Thomas; Borsdorf, Helko

    2016-02-15

    We optimized an atmospheric pressure ion funnel (APIF) including different interface options (pinhole, capillary, and nozzle) with regard to maximal ion transmission. Previous computer simulations consider the ion funnel itself and do not include the geometry of the following components, which can considerably influence ion transmission into the vacuum stage. Initially, a three-dimensional computer-aided design (CAD) model of our setup was created using Autodesk Inventor. This model was imported into the Autodesk Simulation CFD program, where the computational fluid dynamics (CFD) were calculated. The flow field was transferred to SIMION 8.1. Investigations of ion trajectories were carried out using the SDS (statistical diffusion simulation) tool of SIMION, which allowed us to evaluate the flow regime, pressure, and temperature values that we obtained. The simulation-based optimization of different interfaces between an atmospheric pressure ion funnel and the first vacuum stage of a mass spectrometer requires the consideration of fluid dynamics. The use of a Venturi nozzle ensures the highest transmission efficiency in comparison to capillaries or pinholes. However, the application of radiofrequency (RF) voltage and an appropriate direct current (DC) field leads to process optimization and maximum ion transfer. The nozzle does not hinder the transfer of small ions. Our high-resolution SIMION model (0.01 mm grid unit⁻¹), under consideration of fluid dynamics, is generally suitable for predicting ion transmission through an atmospheric-vacuum system for mass spectrometry and enables the optimization of operational parameters. A Venturi nozzle inserted between the ion funnel and the mass spectrometer permits maximal ion transmission. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Spatiotemporally and Sequentially-Controlled Drug Release from Polymer Gatekeeper-Hollow Silica Nanoparticles

    NASA Astrophysics Data System (ADS)

    Palanikumar, L.; Jeena, M. T.; Kim, Kibeom; Yong Oh, Jun; Kim, Chaekyu; Park, Myoung-Hwan; Ryu, Ja-Hyoung

    2017-04-01

    Combination chemotherapy has become the primary strategy against cancer multidrug resistance; however, accomplishing optimal pharmacokinetic delivery of multiple drugs is still challenging. Herein, we report a sequential combination drug delivery strategy exploiting a pH-triggerable and redox-responsive switch to release cargos from hollow silica nanoparticles in a spatiotemporal manner. This versatile system further enables a high loading efficiency for both hydrophobic and hydrophilic drugs inside the nanoparticles, followed by self-crosslinking with disulfide- and diisopropylamine-functionalized polymers. In acidic tumour environments, the positive charge generated by protonation of the diisopropylamine moiety facilitates cellular uptake of the particles. Upon internalization, the acidic endosomal pH and intracellular glutathione regulate the sequential release of the drugs in a time-dependent manner, providing a promising therapeutic approach to overcoming drug resistance during cancer treatment.

  6. Sequential state discrimination and requirement of quantum dissonance

    NASA Astrophysics Data System (ADS)

    Pang, Chao-Qian; Zhang, Fu-Lin; Xu, Li-Fang; Liang, Mai-Lin; Chen, Jing-Ling

    2013-11-01

    We study the procedure for sequential unambiguous state discrimination. A qubit is prepared in one of two possible states and measured by two observers, Bob and Charlie, sequentially. A necessary condition for the state to be unambiguously discriminated by Charlie is the absence of entanglement between the principal qubit, prepared by Alice, and Bob's auxiliary system. In general, the procedure for both Bob and Charlie to conclusively distinguish between the two nonorthogonal states relies on the availability of quantum discord, which is precisely the quantum dissonance when entanglement is absent. In Bob's measurement, the left discord is positively correlated with the information extracted by Bob, and the right discord enhances the information left to Charlie. When their product achieves its maximum, the probability for both Bob and Charlie to identify the state achieves its optimal value.

  7. Light emitting ceramic device

    DOEpatents

    Valentine, Paul; Edwards, Doreen D.; Walker, Jr., William John; Slack, Lyle H.; Brown, Wayne Douglas; Osborne, Cathy; Norton, Michael; Begley, Richard

    2010-05-18

    A light-emitting ceramic based panel, hereafter termed an "electroceramescent" panel, is herein claimed. The electroceramescent panel is formed on a substrate providing mechanical support as well as serving as the base electrode for the device. One or more semiconductive ceramic layers directly overlay the substrate, and electrical conductivity and ionic diffusion are controlled. Light emitting regions overlay the semiconductive ceramic layers; said regions consist sequentially of a ceramic insulation layer and an electroluminescent layer comprised of doped phosphors or the equivalent. One or more conductive top electrode layers having optically transmissive areas overlay the light emitting regions, and a multi-layered top barrier cover comprising one or more optically transmissive non-combustible insulation layers overlays said top electrode regions.

  8. Simplex-method based transmission performance optimization for 100G PDM-QPSK systems with non-identical spans

    NASA Astrophysics Data System (ADS)

    Li, Yuanyuan; Gao, Guanjun; Zhang, Jie; Zhang, Kai; Chen, Sai; Yu, Xiaosong; Gu, Wanyi

    2015-06-01

    A simplex-method based optimization (SMO) strategy is proposed to improve the transmission performance of dispersion-uncompensated (DU) coherent optical systems with non-identical spans. Through an analytical expression for the quality of transmission (QoT), this strategy improves the Q factors effectively while minimizing the number of erbium-doped fiber amplifiers (EDFAs) that need to be optimized. Numerical simulations are performed for 100 Gb/s polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) channels over a 10-span standard single mode fiber (SSMF) link with randomly distributed span lengths. Compared to EDFA configurations with complete span-loss compensation, the Q factor of the SMO strategy is improved by approximately 1 dB at the optimal transmitter launch power. Moreover, instead of adjusting the gains of all the EDFAs to their optimal values, the number of EDFAs that need to be adjusted for SMO is reduced from 8 to 2, showing much lower tuning cost and almost negligible performance degradation.
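    As a rough sketch of a simplex-style search over a small number of amplifier gains, the following minimal Nelder-Mead routine minimizes a toy quadratic Q-penalty surface over two gains; the surrogate objective and gain values are illustrative assumptions, not the paper's analytical QoT model.

```python
# Toy surrogate: Q penalty is minimized at gains (17 dB, 19 dB); purely
# illustrative, not the paper's quality-of-transmission expression.
def neg_q(g):
    g1, g2 = g
    return (g1 - 17.0) ** 2 + 0.5 * (g2 - 19.0) ** 2

def nelder_mead(f, x0, step=1.0, iters=200):
    # Minimal Nelder-Mead: reflection, expansion, inside contraction, shrink.
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):
            exp = [c + 2 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink every vertex toward the best one
                simplex = [best] + [[(b + pi) / 2 for b, pi in zip(best, p)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

gains = nelder_mead(neg_q, [15.0, 15.0])
```

    The derivative-free nature of the simplex search is what makes it attractive here: each trial point costs only one QoT evaluation, and only the few gains actually being tuned enter the search space.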

  9. IMPROVED ALGORITHMS FOR RADAR-BASED RECONSTRUCTION OF ASTEROID SHAPES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, Adam H.; Margot, Jean-Luc

    We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.

  10. Preliminary Analysis of Optimal Round Trip Lunar Missions

    NASA Astrophysics Data System (ADS)

    Gagg Filho, L. A.; da Silva Fernandes, S.

    2015-10-01

    A study of optimal bi-impulsive trajectories for round trip lunar missions is presented in this paper. The optimization criterion is the total velocity increment. The dynamical model utilized to describe the motion of the space vehicle is a full lunar patched-conic approximation, which comprises the lunar patched-conic of the outgoing trip and the lunar patched-conic of the return mission. Each of these parts is considered separately to solve an optimization problem of two degrees of freedom. The Sequential Gradient Restoration Algorithm (SGRA) is employed to obtain the optimal solutions, which show good agreement with those reported in the literature and prove consistent with the image trajectories theorem.

  11. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.
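    The two starting distributions compared above, random draw and error diffusion, can be illustrated for a uniform target transmission. The Floyd-Steinberg kernel used below is a standard error-diffusion choice and an assumption on our part; the abstract does not specify which kernel the authors used.

```python
import random

# Random-draw binarization: each pixel is opened with probability equal to
# the local target transmission (high-noise starting distribution).
def random_draw(target):
    random.seed(1)
    return [[1 if random.random() < t else 0 for t in row] for row in target]

# Floyd-Steinberg error diffusion: threshold each pixel and push the
# quantization error onto unvisited neighbors (low-noise distribution).
def error_diffusion(target):
    h, w = len(target), len(target[0])
    work = [row[:] for row in target]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                work[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1][x - 1] += err * 3 / 16
                work[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1][x + 1] += err * 1 / 16
    return out

# 16x16 uniform 50% transmission target: both methods should open about half
# of the pixels, but error diffusion spreads them far more evenly.
target = [[0.5] * 16 for _ in range(16)]
rd = random_draw(target)
ed = error_diffusion(target)
```

    Both maps hit the target mean transmission; the difference the abstract trades on is in the spatial noise spectrum, with error diffusion pushing quantization noise to high spatial frequencies where the far-field filter removes it.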

  12. Three-dimensional textures and defects of soft material layering revealed by thermal sublimation.

    PubMed

    Yoon, Dong Ki; Kim, Yun Ho; Kim, Dae Seok; Oh, Seong Dae; Smalyukh, Ivan I; Clark, Noel A; Jung, Hee-Tae

    2013-11-26

    Layering is found and exploited in a variety of soft material systems, ranging from complex macromolecular self-assemblies to block copolymer and small-molecule liquid crystals. Because the control of layer structure is required for applications and characterization, and because defects reveal key features of the symmetries of layered phases, a variety of techniques have been developed for the study of soft-layer structure and defects, including X-ray diffraction and visualization using optical transmission and fluorescence confocal polarizing microscopy, atomic force microscopy, and SEM and transmission electron microscopy, including freeze-fracture transmission electron microscopy. Here, it is shown that thermal sublimation can be usefully combined with such techniques to enable visualization of the 3D structure of soft materials. Sequential sublimation removes material in a stepwise fashion, leaving a remnant layer structure largely unchanged and viewable using SEM, as demonstrated here using a lamellar smectic liquid crystal.

  13. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code expanding optimizations have strong and non-intuitive implications for instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance of small caches but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  14. Transmission models and management of lymphatic filariasis elimination.

    PubMed

    Michael, Edwin; Gambhir, Manoj

    2010-01-01

    The planning and evaluation of parasitic control programmes are complicated by the many interacting population dynamic and programmatic factors that determine infection trends under different control options. A key need is quantification of the status of the parasite system at any given time point and of the dynamic change brought upon that state as an intervention program proceeds. Here, we focus on the control and elimination of the vector-borne disease, lymphatic filariasis, to show how mathematical models of parasite transmission can provide a quantitative framework for aiding the design of parasite elimination and monitoring programs by their ability to support (1) conducting rational analysis and definition of endpoints for different programmatic aims or objectives, including transmission endpoints for disease elimination, (2) undertaking strategic analysis to aid the optimal design of intervention programs to meet set endpoints under different endemic settings, and (3) providing support for performing informed evaluations of ongoing programs, including aiding the formation of timely adaptive management strategies to correct for any observed deficiencies in program effectiveness. The results also highlight how the use of a model-based framework will be critical to addressing the impacts of ecological complexities, heterogeneities and uncertainties on effective parasite management and thereby guiding the development of strategies to resolve and overcome such real-world complexities. In particular, we underscore how this approach can provide a link between ecological science and policy by revealing novel tools and measures to appraise and enhance the biological controllability or eradicability of parasitic diseases.
We conclude by emphasizing an urgent need to develop and apply flexible adaptive management frameworks informed by mathematical models that are based on learning and reducing uncertainty using monitoring data, apply phased or sequential decision-making to address extant uncertainty and focus on developing ecologically resilient management strategies, in ongoing efforts to control or eliminate filariasis and other parasitic diseases in resource-poor communities.

  15. First-Episode Psychosis and the Criminal Justice System: Using a Sequential Intercept Framework to Highlight Risks and Opportunities.

    PubMed

    Wasser, Tobias; Pollard, Jessica; Fisk, Deborah; Srihari, Vinod

    2017-10-01

    In first-episode psychosis there is a heightened risk of aggression and subsequent criminal justice involvement. This column reviews the evidence pointing to these heightened risks and highlights opportunities, using a sequential intercept model, for collaboration between mental health services and existing diversionary programs, particularly for patients whose behavior has already brought them to the attention of the criminal justice system. Coordinating efforts in these areas across criminal justice and clinical spheres can decrease the caseload burden on the criminal justice system and optimize clinical and legal outcomes for this population.

  16. Cell transmission model of dynamic assignment for urban rail transit networks.

    PubMed

    Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian

    2017-01-01

    For an urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing space-time resource allocation. To obtain the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, a cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. An efficient method is then designed to solve the shortest path for an urban rail network, which decreases the computing cost of solving the cell transmission model. The instantaneous dynamic user-optimal state can be reached with the method of successive averages. Many passenger-flow evaluation indexes can be generated, providing effective support for the optimization of train schedules and the capacity evaluation of urban rail transit networks. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
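    The core cell-transmission update, in which the flow between adjacent cells is the minimum of what the upstream cell can send and what the downstream cell can receive, can be sketched for a single corridor. Cell capacities and the initial loading below are illustrative, not the paper's calibration.

```python
# One time step of a cell transmission model on a single line of cells.
# n[i] is the occupancy of cell i; q_max caps the per-step flow and n_max
# caps the cell occupancy (congestion backs up when a cell fills).
def ctm_step(n, q_max, n_max, w_over_v=1.0):
    flows = []
    for i in range(len(n) - 1):
        sending = min(n[i], q_max)                             # upstream supply
        receiving = min(q_max, w_over_v * (n_max - n[i + 1]))  # downstream room
        flows.append(max(0.0, min(sending, receiving)))
    new_n = n[:]
    for i, f in enumerate(flows):
        new_n[i] -= f
        new_n[i + 1] += f
    return new_n

# Five cells, demand loaded into cell 0; with no outflow from the last cell
# it fills to n_max and congestion propagates backwards.
state = [60.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(20):
    state = ctm_step(state, q_max=10.0, n_max=40.0)
```

    The min(sending, receiving) rule is what encodes both the capacity constraint and the congestion effect in one expression; totals are conserved because every unit leaving a cell enters its neighbor.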

  17. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.
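    The protection-assignment idea above, spending a limited parity budget where it buys the largest expected distortion reduction, can be sketched with a toy greedy allocator. The decoding-probability model and element values are assumptions for illustration, not the paper's PET-based optimization algorithm.

```python
# Toy unequal-error-protection allocator: each extra parity unit raises an
# element's decoding probability; spend a fixed budget greedily on the
# largest marginal expected gain (value * increase in decoding probability).
def decode_prob(parity_units, loss=0.3):
    # Illustrative model: one more parity unit survives one more erasure.
    return 1.0 - loss ** (parity_units + 1)

def greedy_protect(values, budget, loss=0.3):
    alloc = [0] * len(values)
    for _ in range(budget):
        def gain(i):
            return values[i] * (decode_prob(alloc[i] + 1, loss)
                                - decode_prob(alloc[i], loss))
        # Marginal gains are recomputed each round from the updated allocation.
        i = max(range(len(values)), key=gain)
        alloc[i] += 1
    return alloc

# A scalable stream: the base layer is worth far more than enhancement layers,
# so it should end up with the most protection.
alloc = greedy_protect([10.0, 3.0, 1.0], budget=6)
```

    The diminishing marginal return of extra parity is what makes a greedy pass reasonable here; the paper's algorithm additionally balances first transmissions against retransmissions across two slots using hypotheses about future behavior.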

  18. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network

    PubMed Central

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission in neural networks. To understand this issue more deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost and the total amount of information transmission. We observed that there exists an optimal E/I synaptic current ratio in the network at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic current. Although background noise degrades information transmission and imposes an additional energy cost, we found an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain part of the energy cost to be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results reveal that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. 
    The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I cell ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. Summary: We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of the network achieves a global maximum. These results reflect general mechanisms of sensory coding processes, which may give insight into the energy efficiency of neural communication and coding. PMID:29773979

  19. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network.

    PubMed

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence both the energy consumption and the information transmission efficiency of neural networks. To examine this issue, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost and the total amount of information transmission. We observed that there exists an optimal E/I synaptic current ratio in the network at which information transmission reaches its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we found an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. Maximizing energy efficiency also requires a certain portion of the energy cost to be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate with a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I ratio for high energy efficiency and information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which both the information transmission and the energy efficiency of the network achieve a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding.

  20. Strategic disruption of nuclear pores structure, integrity and barrier for nuclear apoptosis.

    PubMed

    Shahin, Victor

    2017-08-01

    Apoptosis is a form of programmed cell death that plays key roles in the physiology and pathophysiology of multicellular organisms. Its nuclear manifestation requires transmission of the death signals across the nuclear pore complexes (NPCs). In strategic sequential steps, apoptotic factors disrupt NPC structure, integrity and barrier function, ultimately leading to nuclear breakdown. The present review reflects on these steps. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Design and experimentally measure a high performance metamaterial filter

    NASA Astrophysics Data System (ADS)

    Xu, Ya-wen; Xu, Jing-cheng

    2018-03-01

    Metamaterial filters are a promising class of optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) structure metamaterial filter is simulated and measured. Simulated results indicate that the perfect impedance matching condition between the metamaterial filter and free space produces the transmission band. Measured results show that the proposed metamaterial filter achieves high-performance transmission for both TM and TE polarization directions. Moreover, a high transmission rate is maintained even when the incident angle reaches 45°. Further measurements show that the transmission band can be expanded, and its central frequency adjusted, by optimizing the structural parameters. The physical mechanism behind the central frequency shift is explained by establishing an equivalent resonant circuit model.

  2. The optimization of wireless power transmission: design and realization.

    PubMed

    Jia, Zhiwei; Yan, Guozheng; Liu, Hua; Wang, Zhiwu; Jiang, Pingping; Shi, Yu

    2012-09-01

    A wireless power transmission system is regarded as a practical way of solving power-shortage problems in multifunctional active capsule endoscopes. The uniformity of magnetic flux density, frequency stability and orientation stability are used to evaluate power transmission stability, taking into consideration size and safety constraints. Magnetic field safety and temperature rise are also considered. Test benches are designed to measure the relevant parameters. Finally, a mathematical programming model in which these constraints are considered is proposed to improve transmission efficiency. To verify the feasibility of the proposed method, various systems for a wireless active capsule endoscope are designed and evaluated. The optimal power transmission system is able to supply at least 500 mW of power continuously, with a transmission efficiency of 4.08%. The example validates the feasibility of the proposed method, and the introduction of novel designs enables its further improvement. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Literature Survey on Operational Voltage Control and Reactive Power Management on Transmission and Sub-Transmission Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elizondo, Marcelo A.; Samaan, Nader A.; Makarov, Yuri V.

    Voltage and reactive power system control is generally performed following usual patterns of loads, based on off-line studies for daily and seasonal operations. This practice is currently challenged by the inclusion of distributed renewable generation, such as solar. There has been focus on resolving this problem at the distribution level; however, the transmission and sub-transmission levels have received less attention. This paper provides a literature review of proposed methods and solution approaches to coordinate and optimize voltage control and reactive power management, with an emphasis on applications at the transmission and sub-transmission level. The conclusion drawn from the survey is that additional research is needed in the areas of optimizing switched shunt actions and coordinating all available resources to deal with uncertain patterns from increasing distributed renewable generation in the operational time frame. These topics are not deeply explored in the literature.

  4. The optimal design of the bed structure of bedstand based on ABAQUS

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Dong, Yu; Ge, Qingkuan; Wang, Song

    2017-12-01

    The hydraulic transmission bedstand is one of the test stands most commonly used by engineering machinery companies, and the bed structure is its most important part. Starting from the original hydraulic transmission bedstand bed structure and using CAE technology, the original bed structure is improved. The optimized bed greatly reduces the material needed to produce the bed and improves its seismic performance. Finally, the performance of the optimized bed is compared with that of the original bed.

  5. Optimization for minimum sensitivity to uncertain parameters

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw

    1994-01-01

    A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
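
    The central quantity in such a nested procedure is the derivative of the optimal objective value with respect to a fixed parameter. A minimal sketch of that idea, with a grid-search inner optimization and central differences standing in for the paper's finite-element model and SQP/Lagrange-multiplier machinery:

```python
def optimum_sensitivity(objective, xs, p, dp=1e-3):
    """Central-difference sensitivity of the optimal objective value f*(p)
    with respect to a fixed parameter p; the inner minimization over the
    design variable x is a simple grid search over xs."""
    def f_star(q):
        return min(objective(x, q) for x in xs)
    return (f_star(p + dp) - f_star(p - dp)) / (2 * dp)

# Toy problem (hypothetical): f(x, p) = (x - p)**2 + p, so the optimum value
# is f*(p) = p and the sensitivity df*/dp should come out as 1.
xs = [i / 100.0 for i in range(-200, 201)]
```

A design with a small |df*/dp| is insensitive to uncertainty in p, which is exactly what the outer optimization loop in the paper seeks to minimize.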

  6. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on a sequential hypotheses testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.

  7. Analysis of filter tuning techniques for sequential orbit determination

    NASA Technical Reports Server (NTRS)

    Lee, T.; Yee, C.; Oza, D.

    1995-01-01

    This paper examines filter tuning techniques for a sequential orbit determination (OD) covariance analysis. Recently, there has been a renewed interest in sequential OD, primarily due to the successful flight qualification of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) using Doppler data extracted onboard the Extreme Ultraviolet Explorer (EUVE) spacecraft. TONS computes highly accurate orbit solutions onboard the spacecraft in real time using a sequential filter. As a result of the successful TONS-EUVE flight qualification experiment, the Earth Observing System (EOS) AM-1 Project has selected TONS as the prime navigation system. In addition, sequential OD methods can be used successfully for ground OD. Whether data are processed onboard or on the ground, a sequential OD procedure is generally favored over a batch technique when a real-time automated OD system is desired. Recently, OD covariance analyses were performed for the TONS-EUVE and TONS-EOS missions using the sequential processing options of the Orbit Determination Error Analysis System (ODEAS). ODEAS is the primary covariance analysis system used by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The results of these analyses revealed a high sensitivity of the OD solutions to the state process noise filter tuning parameters. The covariance analysis results show that the state estimate error contributions from measurement-related error sources, especially those due to the random noise and satellite-to-satellite ionospheric refraction correction errors, increase rapidly as the state process noise increases. These results prompted an in-depth investigation of the role of the filter tuning parameters in sequential OD covariance analysis. This paper analyzes how the spacecraft state estimate errors due to dynamic and measurement-related error sources are affected by the process noise level used.
This information is then used to establish guidelines for determining optimal filter tuning parameters in a given sequential OD scenario for both covariance analysis and actual OD. Comparisons are also made with corresponding definitive OD results available from the TONS-EUVE analysis.
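
    The tuning trade-off described above can be reproduced with a toy scalar sequential filter: the process-noise level q sets the steady-state gain, so a larger q makes the filter trust new measurements more (amplifying measurement noise) while a smaller q smooths more but responds sluggishly to real dynamics. A minimal sketch (a generic scalar Kalman filter, not the ODEAS formulation):

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant state: q is process-noise
    variance, r is measurement-noise variance. Returns the estimate after
    each measurement in zs."""
    x, p, estimates = x0, p0, []
    for z in zs:
        p = p + q                  # predict: inflate uncertainty by q
        k = p / (p + r)            # Kalman gain: larger q keeps k high
        x = x + k * (z - x)        # update toward the measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Sweeping q against simulated truth and measurement errors, as the covariance analyses above do at much higher fidelity, exposes the process-noise level that balances the two error contributions.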

  8. Coagulation-flocculation sequential with Fenton or Photo-Fenton processes as an alternative for the industrial textile wastewater treatment.

    PubMed

    GilPavas, Edison; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Ángel

    2017-04-15

    In this study, industrial textile wastewater was treated using a chemical-based technique (coagulation-flocculation, C-F) sequentially with an advanced oxidation process (AOP: Fenton or Photo-Fenton). During the C-F, Al2(SO4)3 was used as the coagulant and its optimal dose was determined using the jar test. The following operational conditions of C-F, maximizing the organic matter removal, were determined: 700 mg/L of Al2(SO4)3 at pH = 9.96. Thus, the C-F removed 98% of turbidity and 48% of Chemical Oxygen Demand (COD), and increased the BOD5/COD ratio from 0.137 to 0.212. Subsequently, the C-F effluent was treated using each of the AOPs. Their performances were optimized by Response Surface Methodology (RSM) coupled with a Box-Behnken experimental design (BBD). The following optimal conditions were found for both the Fenton (Fe2+/H2O2) and Photo-Fenton (Fe2+/H2O2/UV) processes: Fe2+ concentration = 1 mM, H2O2 dose = 2 mL/L (19.6 mM), and pH = 3. The combination of C-F pre-treatment with the Fenton reagent, at optimized conditions, removed 74% of COD within 90 min of treatment. The C-F sequential with the Photo-Fenton process reached 87% COD removal in the same time. Moreover, the BOD5/COD ratio increased from 0.212 to 0.68 and from 0.212 to 0.74 using the Fenton and Photo-Fenton processes, respectively. Thus, the enhancement of biodegradability with the physico-chemical treatment was proved. The depletion of H2O2 was monitored during the kinetic study. Strategies for improving the reaction efficiency, based on the H2O2 evolution, were also tested. Copyright © 2017 Elsevier Ltd. All rights reserved.
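
    A Box-Behnken design of the kind coupled with RSM here places every pair of factors at their ±1 coded levels while the remaining factors sit at the center point, plus center replicates. A minimal sketch of generating such a design in coded units (generic construction, not the authors' software):

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Box-Behnken design for k factors in coded units: all (+/-1, +/-1)
    combinations on each pair of factors, remaining factors at 0, plus
    center-point replicates."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0] * k for _ in range(center_runs)])
    return runs
```

For the three factors here (Fe2+ concentration, H2O2 dose, pH), this yields 12 edge runs plus center replicates, enough to fit the quadratic response surface that RSM optimizes.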

  9. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  10. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. In addition, a response surface approach to robust design is used to develop a new penalty function approach, which is then compared with the existing penalty functions.
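
    The conversion described above can be sketched with a simple exterior (static) penalty: constraint violations are squared, weighted, and added to the objective, so the GA can minimize a single unconstrained function. The toy problem and penalty weight below are illustrative, not from the study:

```python
def penalized(f, constraints, r):
    """Exterior penalty: F(x) = f(x) + r * sum of squared constraint
    violations. Each constraint g must satisfy g(x) <= 0 when feasible,
    so infeasible points are penalized and feasible points are untouched."""
    def F(x):
        return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in constraints)
    return F

# Toy problem (hypothetical): minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
# The penalized minimizer is r / (1 + r), approaching the constrained optimum
# x = 1 as the penalty weight r grows -- the classic weakness of static penalties.
f = lambda x: x * x
g = lambda x: 1.0 - x
xs = [i / 1000.0 for i in range(-2000, 2001)]

def argmin(F):
    """Coarse grid search standing in for the GA's population search."""
    return min(xs, key=F)
```

This illustrates why penalty-weight choice matters: too small a weight yields an infeasible "optimum", while too large a weight distorts the landscape the GA must search.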

  11. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequency. A lumped parameter model of the MR engine mount in a single degree of freedom system is further developed, based on the bond graph method, to accurately predict the performance of the MR engine mount. The optimization mathematical model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this mathematical model, the lumped parameters are considered as design variables. The maximum force transmissibility and the corresponding frequency in the low frequency range, as well as the individual lumped parameters, are limited as constraints. A multiple interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the set feasible in engineering practice is defined and calculated. A set of real design parameters is thus obtained through the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved non-dominated sorting genetic algorithm (NSGA-II) is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the several frequency ranges addressed.
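
    Non-dominated sorting, the ranking step at the core of NSGA-II, groups candidate designs into successive Pareto fronts. A minimal sketch (a naive O(n^2)-per-front version for minimization, not the fast non-dominated sort used in the full algorithm):

```python
def dominates(a, b):
    """a dominates b (minimization) if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group objective vectors into successive Pareto fronts: front 0 is
    the non-dominated set, front 1 is non-dominated once front 0 is
    removed, and so on."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

In the mount design problem, each point would be a vector of force-transmissibility totals over the frequency ranges; NSGA-II additionally applies crowding-distance selection within fronts to spread solutions along the Pareto set.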

  12. Optimizing signal recycling for detecting a stochastic gravitational-wave background

    NASA Astrophysics Data System (ADS)

    Tao, Duo; Christensen, Nelson

    2018-06-01

    Signal recycling is applied in laser interferometers such as the Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) to increase their sensitivity to gravitational waves. In this study, signal recycling configurations for detecting a stochastic gravitational-wave background are optimized based on aLIGO parameters. The optimal transmission of the signal recycling mirror (SRM) and detuning phase of the signal recycling cavity under a fixed laser power and low-frequency cutoff are calculated. Based on the optimal configurations, the compatibility with a binary neutron star (BNS) search is discussed. Then, different laser powers and low-frequency cutoffs are considered. Two models for the dimensionless energy density of gravitational waves, the flat model and the model, are studied. For a stochastic background search, it is found that an interferometer using signal recycling has better sensitivity than an interferometer not using it. The optimal stochastic search configurations are typically found when both the SRM transmission and the signal recycling detuning phase are low. In this region, the BNS range mostly lies between 160 and 180 Mpc. When a lower laser power is used, the optimal signal recycling detuning phase increases, the optimal SRM transmission increases and the optimal sensitivity improves. A reduced low-frequency cutoff gives a better sensitivity limit. For both models, a typical optimal sensitivity limit on the order of 10^-10 is achieved at a reference frequency of Hz.

  13. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in lower rates of false positives and false negatives, as well as better accuracy and sensitivity in classifying SNPs, when compared with TDT. By using SPRT, data with small sample sizes become usable for accurate association analysis.
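
    Wald's SPRT, the test underlying this proposal, accumulates a log-likelihood ratio observation by observation and stops as soon as it crosses either decision boundary, otherwise asking for more data. A minimal sketch for Bernoulli transmission counts (the hypothesized transmission probabilities and error rates below are illustrative, not the paper's values):

```python
import math

def sprt(observations, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli data: test H0 (p = p0) against H1 (p = p1)
    with target type-I error alpha and type-II error beta. Returns 'H0',
    'H1', or 'continue' (the third group: keep sampling)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for x in observations:                 # x = 1: allele transmitted, 0: not
        p_h1 = p1 if x else 1 - p1
        p_h0 = p0 if x else 1 - p0
        llr += math.log(p_h1 / p_h0)       # accumulate log-likelihood ratio
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"
```

The 'continue' outcome is exactly the third SNP group described above: the data so far do not justify either classification.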

  14. Application of Sequential Quadratic Programming to Minimize Smart Active Flap Rotor Hub Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Leyland, Jane

    2014-01-01

    In an analytical study, SMART active flap rotor hub loads have been minimized using nonlinear programming constrained optimization methodology. The recently developed NLPQLP system (Schittkowski, 2010) that employs Sequential Quadratic Programming (SQP) as its core algorithm was embedded into a driver code (NLP10x10) specifically designed to minimize active flap rotor hub loads (Leyland, 2014). Three types of practical constraints on the flap deflections have been considered. To validate the current application, two other optimization methods have been used: i) the standard, linear unconstrained method, and ii) the nonlinear Generalized Reduced Gradient (GRG) method with constraints. The new software code NLP10x10 has been systematically checked out. It has been verified that NLP10x10 is functioning as desired. The following are briefly covered in this paper: relevant optimization theory; implementation of the capability of minimizing a metric of all, or a subset, of the hub loads as well as the capability of using all, or a subset, of the flap harmonics; and finally, solutions for the SMART rotor. The eventual goal is to implement NLP10x10 in a real-time wind tunnel environment.

  15. Critical role of bevacizumab scheduling in combination with pre-surgical chemo-radiotherapy in MRI-defined high-risk locally advanced rectal cancer: Results of the BRANCH trial.

    PubMed

    Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo

    2015-10-06

    We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant-schedule A) or 4 days prior to the first and second cycle of chemotherapy (sequential-schedule B). The primary end point was the pathological complete tumor regression (TRG1) rate. Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 out of 16 patients) was statistically inconsistent with the hypothesized activity (30%) to be tested. Conversely, the endpoint was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥ 3 toxicity with both schedules, but it was less pronounced with the sequential than the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up the probability of PFS and OS was 80% (95% CI, 66%-89%) and 85% (95% CI, 69%-93%), respectively, for the sequential schedule. These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC.

  16. SimWIND: A Geospatial Infrastructure Model for Wind Energy Production and Transmission

    NASA Astrophysics Data System (ADS)

    Middleton, R. S.; Phillips, B. R.; Bielicki, J. M.

    2009-12-01

    Wind is a clean, enduring energy resource with a capacity to satisfy 20% or more of the electricity needs in the United States. A chief obstacle to realizing this potential is the general paucity of electrical transmission lines between promising wind resources and primary load centers. Successful exploitation of this resource will therefore require carefully planned enhancements to the electric grid. To this end, we present the model SimWIND for self-consistent optimization of the geospatial arrangement and cost of wind energy production and transmission infrastructure. Given a set of wind farm sites that satisfy meteorological viability and stakeholder interest, our model simultaneously determines where and how much electricity to produce, where to build new transmission infrastructure and with what capacity, and where to use existing infrastructure in order to minimize the cost for delivering a given amount of electricity to key markets. Costs and routing of transmission line construction take into account geographic and social factors, as well as connection and delivery expenses (transformers, substations, etc.). We apply our model to Texas and consider how findings complement the 2008 Electric Reliability Council of Texas (ERCOT) Competitive Renewable Energy Zones (CREZ) Transmission Optimization Study. Results suggest that integrated optimization of wind energy infrastructure and cost using SimWIND could play a critical role in wind energy planning efforts.
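
    At its heart, the routing subproblem above, choosing the cheapest corridor for a new transmission line given per-segment construction costs, is a shortest-path computation over a weighted graph. A minimal sketch on a toy corridor graph (node names and costs are hypothetical stand-ins for SimWIND's terrain- and socially-weighted costs):

```python
import heapq

def cheapest_route(graph, source, target):
    """Dijkstra's algorithm over a corridor graph whose edge weights bundle
    line-construction costs. Returns (path, total_cost)."""
    dist, prev, seen = {source: 0.0}, {}, set()
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target          # walk predecessors back to the source
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Hypothetical corridor graph: weights are terrain-adjusted construction costs.
corridors = {
    "wind_farm": [("A", 1.0), ("B", 4.0)],
    "A": [("B", 1.0), ("load_center", 5.0)],
    "B": [("load_center", 1.0)],
}
```

The full model layers capacity sizing, transformer/substation costs, and reuse of existing lines on top of this routing core, and optimizes production and transmission jointly rather than route by route.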

  17. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.

  18. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
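
    Once posterior samples are in hand, an equal-tailed Bayesian credible interval reduces to a pair of empirical quantiles of the draws. A minimal sketch (nearest-rank quantiles; a generic construction, not the paper's implementation):

```python
def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from posterior draws: the
    (1-level)/2 and (1+level)/2 empirical quantiles (nearest-rank)."""
    s = sorted(samples)
    n = len(s)
    lo = s[round((1 - level) / 2 * (n - 1))]
    hi = s[round((1 + level) / 2 * (n - 1))]
    return lo, hi
```

In a sequential design loop, the width of such intervals at candidate points is one natural infill criterion: the next design point can be placed where predictive uncertainty is largest.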

  19. Optical performance of random anti-reflection structured surfaces (rARSS) on spherical lenses

    NASA Astrophysics Data System (ADS)

    Taylor, Courtney D.

    Random anti-reflection structured surfaces (rARSS) have been reported to improve the transmittance of optical-grade fused silica planar substrates to values greater than 99%. These textures are fabricated directly on the substrates using reactive-ion/inductively-coupled plasma etching (RIE/ICP) techniques, and often result in transmitted spectra with no measurable interference effects (fringes) over a wide range of wavelengths. The RIE/ICP processes used to etch the rARSS are anisotropic and thus well suited for planar components. The improvement in spectral transmission has been found to be independent of optical incidence angle for values from 0° to +/-30°. Qualifying and quantifying rARSS performance on curved substrates, such as convex lenses, is required to optimize the fabrication of the desired AR effect on optical-power elements. In this work, rARSS was fabricated on fused silica plano-convex (PCX) and plano-concave (PCV) lenses using a planar-substrate-optimized RIE process to maximize optical transmission in the range from 500 to 1100 nm. An additional set of lenses was etched in a non-optimized ICP process to provide additional comparisons. Results are presented from optical transmission and beam propagation tests (optimized lenses only) of rARSS lenses for both TE and TM incident polarizations at a wavelength of 633 nm and over a 70° full field of view in both singlet and doublet configurations. These results suggest optimization of the fabrication process is not required, mainly due to the wide angle-of-incidence AR tolerance of the rARSS lenses. Non-optimized-recipe lenses showed low transmission enhancement, confirming the need to optimize etch recipes prior to process transfer to PCX/PCV lenses. Beam propagation tests indicated no major beam degradation through the optimized lens elements. Scanning electron microscopy (SEM) images confirmed different structures between optimized and non-optimized samples. SEM images also indicated isotropically-oriented surface structures on both types of lenses.

  20. Scalable Heuristics for Planning, Placement and Sizing of Flexible AC Transmission System Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    Aiming to relieve transmission grid congestion and improve or extend the feasibility domain of operations, we build optimization heuristics, generalizing standard AC Optimal Power Flow (OPF), for placement and sizing of Flexible Alternating Current Transmission System (FACTS) devices of the Series Compensation (SC) and Static VAR Compensation (SVC) type. One use of these devices is to resolve the case in which no AC OPF solution exists because of congestion. Another application is developing a long-term investment strategy for placement and sizing of the SC and SVC devices to reduce operational cost and improve power system operation. SC and SVC devices are represented by modification of the transmission line inductances and by reactive power nodal corrections, respectively. We find one placement and sizing of FACTS devices for multiple scenarios, and optimal settings for each scenario, simultaneously. Our solution of the nonlinear and nonconvex generalized AC-OPF consists of building a convergent sequence of convex optimizations containing only linear constraints and shows good computational scaling to larger systems. The approach is illustrated on single- and multi-scenario examples of the Matpower case-30 model.

  1. The cost and cost-effectiveness of rapid testing strategies for yaws diagnosis and surveillance.

    PubMed

    Fitzpatrick, Christopher; Asiedu, Kingsley; Sands, Anita; Gonzalez Pena, Tita; Marks, Michael; Mitja, Oriol; Meheus, Filip; Van der Stuyft, Patrick

    2017-10-01

    Yaws is a non-venereal treponemal infection caused by Treponema pallidum subspecies pertenue. The disease is targeted by WHO for eradication by 2020. Rapid diagnostic tests (RDTs) are envisaged for confirmation of clinical cases during treatment campaigns and for certification of the interruption of transmission. Yaws testing requires both treponemal (trep) and non-treponemal (non-trep) assays for diagnosis of current infection. We evaluate a sequential testing strategy (using a treponemal RDT before a trep/non-trep RDT) in terms of cost and cost-effectiveness, relative to a single-assay combined testing strategy (using the trep/non-trep RDT alone), for two use cases: individual diagnosis and community surveillance. We use cohort decision analysis to examine the diagnostic and cost outcomes. We estimate cost and cost-effectiveness of the alternative testing strategies at different levels of prevalence of past/current infection and current infection under each use case. We take the perspective of the global yaws eradication programme. We calculate the total number of correct diagnoses for each strategy over a range of plausible prevalences. We employ probabilistic sensitivity analysis (PSA) to account for uncertainty and report 95% intervals. At current prices of the treponemal and trep/non-trep RDTs, the sequential strategy is cost-saving for individual diagnosis at prevalence of past/current infection less than 85% (81-90); it is cost-saving for surveillance at less than 100%. The threshold price of the trep/non-trep RDT (below which the sequential strategy would no longer be cost-saving) is US$ 1.08 (1.02-1.14) for individual diagnosis at high prevalence of past/current infection (51%) and US$ 0.54 (0.52-0.56) for community surveillance at low prevalence (15%). We find that the sequential strategy is cost-saving for both diagnosis and surveillance in most relevant settings. 
    In the absence of evidence assessing relative performance (sensitivity and specificity), cost-effectiveness is uncertain. However, the conditions under which the combined-test-only strategy might be more cost-effective than the sequential strategy are limited. A cheaper trep/non-trep RDT is needed, costing no more than US$ 0.50-1.00, depending on the use case. Our results will help enhance the cost-effectiveness of yaws programmes in the 13 countries known to be currently endemic. They will also inform efforts in the much larger group of 71 countries with a history of yaws, many of which will have to undertake surveillance to confirm the interruption of transmission.
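    The cohort decision logic behind such break-even prices can be sketched as an expected-cost comparison per person tested. The prices, sensitivity, and specificity below are hypothetical placeholders, not the study's PSA inputs:

```python
def trep_positive_rate(prev_past_or_current, sens=0.95, spec=0.95):
    # Probability the first-line treponemal RDT reads positive: true
    # positives among ever-infected people plus false positives among the rest.
    return prev_past_or_current * sens + (1 - prev_past_or_current) * (1 - spec)

def cost_sequential(prev, price_trep=0.35, price_combined=2.00):
    # Sequential strategy: everyone gets the cheap treponemal RDT; only
    # screen-positives go on to the trep/non-trep RDT.
    return price_trep + trep_positive_rate(prev) * price_combined

def cost_combined_only(prev, price_combined=2.00):
    # Combined strategy: a single trep/non-trep RDT for everyone.
    return price_combined
```

    With these illustrative numbers the sequential strategy costs 0.72 per person at 15% prevalence of past/current infection but 2.07 at 90%, mirroring the abstract's finding that its cost advantage disappears only at very high prevalence.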

  2. A Method to Analyze and Optimize the Load Sharing of Split Path Transmissions

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1996-01-01

    Split-path transmissions are promising alternatives to the common planetary transmissions for rotorcraft. Heretofore, split-path designs proposed for or used in rotorcraft have featured load-sharing devices that add undesirable weight and complexity to the designs. A method was developed to analyze and optimize the load sharing in split-path transmissions without load-sharing devices. The method uses the clocking angle as a design parameter to optimize for equal load sharing. In addition, the clocking angle tolerance necessary to maintain acceptable load sharing can be calculated. The method evaluates the effects of gear-shaft twisting and bending, tooth bending, Hertzian deformations within bearings, and movement of bearing supports on load sharing. It was used to study the NASA split-path test gearbox and the U.S. Army's Comanche helicopter main rotor gearbox. Acceptable load sharing was found to be achievable and maintainable by using proven manufacturing processes. The analytical results compare favorably to available experimental data.

  3. Layer-switching cost and optimality in information spreading on multiplex networks

    PubMed Central

    Min, Byungjoon; Gwak, Sang-Hwan; Lee, Nanoom; Goh, K. -I.

    2016-01-01

    We study a model of information spreading on multiplex networks, in which agents interact through multiple interaction channels (layers), say online vs. offline communication layers, subject to layer-switching cost for transmissions across different interaction layers. The model is characterized by the layer-wise path-dependent transmissibility over a contact, that is dynamically determined dependently on both incoming and outgoing transmission layers. We formulate an analytical framework to deal with such path-dependent transmissibility and demonstrate the nontrivial interplay between the multiplexity and spreading dynamics, including optimality. It is shown that the epidemic threshold and prevalence respond to the layer-switching cost non-monotonically and that the optimal conditions can change in abrupt non-analytic ways, depending also on the densities of network layers and the type of seed infections. Our results elucidate the essential role of multiplexity that its explicit consideration should be crucial for realistic modeling and prediction of spreading phenomena on multiplex social networks in an era of ever-diversifying social interaction layers. PMID:26887527

  4. Optimal decoding and information transmission in Hodgkin-Huxley neurons under metabolic cost constraints.

    PubMed

    Kostal, Lubomir; Kobayashi, Ryota

    2015-10-01

    Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affects the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Multidisciplinary optimization for engineering systems - Achievements and potential

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.

  6. Multidisciplinary optimization for engineering systems: Achievements and potential

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.

  7. A geostatistical methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S

    2013-04-01

    This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The space-time monitoring points are selected using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. The sequential optimization method selects, at each step, the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize spatiotemporal redundancy. The database for the geostatistical space-time analysis comprises information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that, of the 418 space-time monitoring points in the existing monitoring program, only 178 are non-redundant. The implied reduction of monitoring costs was possible because the proposed method successfully propagates information in space and time.
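    The selection loop pairs a static Kalman filter covariance update with a greedy variance criterion. A minimal sketch, assuming a small made-up space-time covariance matrix (in the study this comes from the fitted space-time variogram) and a single observation-noise variance:

```python
def kf_update(P, j, r):
    # Static Kalman filter covariance update after measuring point j
    # with observation-noise variance r (no dynamics, so no forecast step).
    n, d = len(P), P[j][j] + r
    return [[P[a][b] - P[a][j] * P[j][b] / d for b in range(n)] for a in range(n)]

def greedy_design(P, n_picks, r=0.1):
    # At each step, choose the candidate point whose measurement minimizes
    # the total posterior variance (trace of the updated covariance).
    chosen = []
    for _ in range(n_picks):
        n = len(P)
        best = min(range(n),
                   key=lambda j: sum(kf_update(P, j, r)[i][i] for i in range(n)))
        chosen.append(best)
        P = kf_update(P, best, r)
    return chosen, P
```

    On a toy 3-point covariance [[2,1,0],[1,2,1],[0,1,2]] the first pick is the middle point, which is correlated with both neighbours; points already measured become redundant because their residual variance is small.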

  8. Continuous performance measurement in flight systems. [sequential control model

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.; Sloan, N. A.; Zeskind, R. M.

    1975-01-01

    The desired response of many man machine control systems can be formulated as a solution to an optimal control synthesis problem where the cost index is given and the resulting optimal trajectories correspond to the desired trajectories of the man machine system. Optimal control synthesis provides the reference criteria and the significance of error information required for performance measurement. The synthesis procedure described provides a continuous performance measure (CPM) which is independent of the mechanism generating the control action. Therefore, the technique provides a meaningful method for online evaluation of man's control capability in terms of total man machine performance.

  9. Constrained Burn Optimization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.; Jones, Brandon A.

    2017-01-01

    In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; they do not employ gradient-based search techniques and are not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.

  10. ANAEROBIC/AEROBIC BIODEGRADATION OF PENTACHLOROPHENOL USING GAC FLUIDIZED BED REACTORS: OPTIMIZATION OF THE EMPTY BED CONTACT TIME

    EPA Science Inventory

    An integrated reactor system has been developed to remediate pentachlorophenol (PCP) containing wastes using sequential anaerobic and aerobic biodegradation. Anaerobically, PCP was degraded to approximately equimolar concentrations (>99%) of chlorophenol (CP) in a granular activa...

  11. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to yield a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for such models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
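    For a scalar linear model the KB filter's forecast/analysis cycle reduces to a few lines; the model coefficient and error variances below are arbitrary illustrative values, not meteorological ones:

```python
def kalman_step(x, P, z, a=0.9, q=0.05, r=0.2):
    # One assimilation cycle for the scalar linear model x_{k+1} = a*x_k + w,
    # with model-error variance q and observation-error variance r.
    x_f = a * x                  # forecast state
    P_f = a * P * a + q          # forecast error variance
    K = P_f / (P_f + r)          # Kalman gain
    x_a = x_f + K * (z - x_f)    # analysis: blend forecast with observation z
    P_a = (1 - K) * P_f          # analysis error variance (always < P_f)
    return x_a, P_a
```

    Iterating this cycle with incoming observations is the sequential-estimation view of data assimilation; the initialization problem adds the further constraint that the analyzed state excite only the slow modes.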

  12. Sequential lignin depolymerization by combination of biocatalytic and formic acid/formate treatment steps.

    PubMed

    Gasser, Christoph A; Čvančarová, Monika; Ammann, Erik M; Schäffer, Andreas; Shahgaldian, Patrick; Corvini, Philippe F-X

    2017-03-01

    Lignin, a complex three-dimensional amorphous polymer, is considered to be a potential natural renewable resource for the production of low-molecular-weight aromatic compounds. In the present study, a novel sequential lignin treatment method consisting of a biocatalytic oxidation step followed by a formic acid-induced lignin depolymerization step was developed and optimized using response surface methodology. The biocatalytic step employed a laccase mediator system using the redox mediator 1-hydroxybenzotriazole. Laccases were immobilized on superparamagnetic nanoparticles using a sorption-assisted surface conjugation method allowing easy separation and reuse of the biocatalysts after treatment. Under optimized conditions, as much as 45 wt% of lignin could be solubilized either in aqueous solution after the first treatment or in ethyl acetate after the second (chemical) treatment. The solubilized products were found to be mainly low-molecular-weight aromatic monomers and oligomers. The process might be used for the production of low-molecular-weight soluble aromatic products that can be purified and/or upgraded applying further downstream processes.

  13. Spectrophotometric determination of sulphate in automotive fuel ethanol by sequential injection analysis using dimethylsulphonazo(III) reaction.

    PubMed

    de Oliveira, Fabio Santos; Korn, Mauro

    2006-01-15

    A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. The method is based on the reaction of sulphate with barium-dimethylsulphonazo(III), which decreases the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were first combusted to avoid matrix effects in the sulphate determination. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. Optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential-flow procedure can determine up to 10.0 mg SO(4)(2-) l(-1) with R.S.D. < 2.5% and a limit of detection of 0.27 mg l(-1). The method was successfully applied to sulphate determination in automotive fuel alcohol, and the results agreed with the reference volumetric method. Under the optimized conditions, the SIA system processed 27 samples per hour.

  14. Application of identifying transmission spheres for spherical surface testing

    NASA Astrophysics Data System (ADS)

    Han, Christopher B.; Ye, Xin; Li, Xueyuan; Wang, Quanzhao; Tang, Shouhong; Han, Sen

    2017-06-01

    We developed a new application on Microsoft Foundation Classes (MFC) to identify correct transmission spheres (TS) for Spherical Surface Testing (SST). Spherical surfaces are important optical surfaces, and their wide application and high production rate necessitate an accurate and highly reliable measuring device. A Fizeau interferometer is an appropriate tool for SST due to its subnanometer accuracy. It measures the contour of a spherical surface using a common path, which is insensitive to the surrounding environment. The Fizeau interferometer transmits a wide laser beam, creating interference fringes from the light re-converging from the transmission sphere and the test surface. To make a successful measurement, the application calculates and determines the appropriate transmission sphere for the test surface. Three main inputs from the test surface are used to determine the optimal sizes and F-numbers of the transmission spheres: (1) the curvature (concave or convex), (2) the radius of curvature (ROC), and (3) the aperture size. The application first calculates the F-number (i.e., ROC divided by aperture) of the test surface, then determines the correct aperture size for a convex surface, verifies that the ROC of the test surface is shorter than the ROC of the transmission sphere's reference surface, and finally calculates the percentage of the test surface area that will be measured. When measuring large spherical surfaces, however, the selection should be optimized to avoid requiring a large number of interferometers and transmission spheres for each test surface. Current measuring practices involve tedious and potentially inaccurate manual calculations. This application eliminates human calculation errors, optimizes the selection of transmission spheres (including the least number required) and interferometer sizes, and increases efficiency.
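    The core selection rule can be sketched as follows. The list of stock transmission-sphere F-numbers is illustrative, not a vendor catalogue, and the concave/convex ROC check described above is omitted for brevity:

```python
# Hypothetical stock of transmission-sphere F-numbers (illustrative only).
STANDARD_TS = [0.65, 1.1, 1.5, 3.3, 7.1, 10.7]

def f_number(roc, aperture):
    # F-number of the test surface: radius of curvature over aperture.
    return abs(roc) / aperture

def pick_transmission_sphere(roc, aperture):
    # Prefer a TS at least as "fast" (F-number <= the test surface's), which
    # fills the full aperture; otherwise fall back to the fastest available
    # sphere and report the partial aperture coverage.
    test_f = f_number(roc, aperture)
    fast = [f for f in STANDARD_TS if f <= test_f]
    best = max(fast) if fast else min(STANDARD_TS)
    coverage = min(1.0, test_f / best)   # fraction of the aperture measured
    return best, coverage
```

    For a surface with ROC 100 mm and a 30 mm aperture (F/3.33) this picks the F/3.3 sphere with full coverage; a very fast F/0.5 surface can only be partially covered by the fastest sphere in this hypothetical stock.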

  15. The role of influenza in the severity and transmission of respiratory bacterial disease.

    PubMed

    Mina, Michael J; Klugman, Keith P

    2014-09-01

    Infections with influenza viruses and respiratory bacteria each contribute substantially to the global burden of morbidity and mortality. Simultaneous or sequential infection with these pathogens manifests in complex and difficult-to-treat disease processes that need extensive antimicrobial therapy and cause substantial excess mortality, particularly during annual influenza seasons and pandemics. At the host level, influenza viruses prime respiratory mucosal surfaces for excess bacterial acquisition and this supports increased carriage density and dissemination to the lower respiratory tract, while greatly constraining innate and adaptive antibacterial defences. Driven by virus-mediated structural modifications, aberrant immunological responses to sequential infection, and excessive immunopathological responses, co-infections are marked by short-term and long-term departures from immune homoeostasis, inhibition of appropriate pathogen recognition, loss of tolerance to tissue damage, and general increases in susceptibility to severe bacterial disease. At the population level, these effects translate into increased horizontal bacterial transmission and excess use of antimicrobial therapies. With increasing concerns about future possible influenza pandemics, the past decade has seen rapid advances in our understanding of these interactions. In this Review, we discuss the epidemiological and clinical importance of influenza and respiratory bacterial co-infections, including the foundational efforts that laid the groundwork for today's investigations, and detail the most important and current advances in our understanding of the structural and immunological mechanisms underlying the pathogenesis of co-infection.
We describe and interpret what is known in sequence, from transmission and phenotypic shifts in bacterial dynamics to the immunological, cellular, and molecular modifications that underlie these processes, and propose avenues of further research that might be most valuable for prevention and treatment strategies to best mitigate excess disease during future influenza pandemics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. The role of influenza in the severity and transmission of respiratory bacterial disease

    PubMed Central

    Mina, Michael J; Klugman, Keith P

    2016-01-01

    Infections with influenza viruses and respiratory bacteria each contribute substantially to the global burden of morbidity and mortality. Simultaneous or sequential infection with these pathogens manifests in complex and difficult-to-treat disease processes that need extensive antimicrobial therapy and cause substantial excess mortality, particularly during annual influenza seasons and pandemics. At the host level, influenza viruses prime respiratory mucosal surfaces for excess bacterial acquisition and this supports increased carriage density and dissemination to the lower respiratory tract, while greatly constraining innate and adaptive antibacterial defences. Driven by virus-mediated structural modifications, aberrant immunological responses to sequential infection, and excessive immunopathological responses, co-infections are marked by short-term and long-term departures from immune homoeostasis, inhibition of appropriate pathogen recognition, loss of tolerance to tissue damage, and general increases in susceptibility to severe bacterial disease. At the population level, these effects translate into increased horizontal bacterial transmission and excess use of antimicrobial therapies. With increasing concerns about future possible influenza pandemics, the past decade has seen rapid advances in our understanding of these interactions. In this Review, we discuss the epidemiological and clinical importance of influenza and respiratory bacterial co-infections, including the foundational efforts that laid the groundwork for today’s investigations, and detail the most important and current advances in our understanding of the structural and immunological mechanisms underlying the pathogenesis of co-infection.
We describe and interpret what is known in sequence, from transmission and phenotypic shifts in bacterial dynamics to the immunological, cellular, and molecular modifications that underlie these processes, and propose avenues of further research that might be most valuable for prevention and treatment strategies to best mitigate excess disease during future influenza pandemics. PMID:25131494

  17. The effects of multiple infections on the expression and evolution of virulence in a Daphnia-endoparasite system.

    PubMed

    Ben-Ami, Frida; Mouton, Laurence; Ebert, Dieter

    2008-07-01

    Multiple infections of a host by different strains of the same microparasite are common in nature. Although numerous models have been developed in an attempt to predict the evolutionary effects of intrahost competition, tests of the assumptions of these models are rare and the outcomes are diverse. In the present study we examined the outcome of mixed-isolate infections in individual hosts, using a single clone of the waterflea Daphnia magna and three isolates of its semelparous endoparasite Pasteuria ramosa. We exposed individual Daphnia to single- and mixed-isolate infection treatments, both simultaneously and sequentially. Virulence was assessed by monitoring host mortality and fecundity, and parasite spore production was used as a measure of parasite fitness. Consistent with most assumptions, in multiply infected hosts we found that the virulence of mixed infections resembled that of the more virulent competitor, both in simultaneous multiple infections and in sequential multiple infections in which the virulent isolate was the first to infect. The more virulent competitor also produced the vast majority of transmission stages. Only when the less virulent isolate was the first to infect did the intrahost contest resemble scramble competition, whereby both isolates suffered by producing fewer transmission stages. Surprisingly, mixed-isolate infections resulted in lower fecundity costs for the hosts, suggesting that parasite competition comes with an advantage for the host relative to single infections. Finally, spore production correlated positively with time to host death. Thus, early killing by more competitive isolates produces fewer transmission stages than do less virulent, inferior isolates. Our results are consistent with the idea that less virulent parasite lines may be replaced by more virulent strains under conditions with high rates of multiple infections.

  18. Empty tracks optimization based on Z-Map model

    NASA Astrophysics Data System (ADS)

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, machining involves many empty (non-cutting) tool tracks. If these tracks are not optimized, machining efficiency suffers. In this paper, the characteristics of the empty tracks are studied in detail. Combining existing optimization algorithms, a new track-optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and Z-Map model simulation is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is thereby transformed into a TSP with sequential (precedence) constraints, which is then solved with a genetic algorithm. This optimization method handles both simple and complex structural parts, effectively planning the empty tracks and greatly improving processing efficiency.
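    A toy version of the precedence-constrained ordering: the paper solves the resulting TSP with a genetic algorithm, but a greedy nearest-neighbour construction (a simplified stand-in, with made-up segment endpoints) shows how the Z-Map order constraints shape the tour:

```python
import math

def rapid_traverse_order(segments, precedes, start=(0.0, 0.0)):
    # Greedy stand-in for the paper's genetic algorithm: repeatedly jump to
    # the nearest segment whose Z-Map predecessors have all been machined.
    # segments: {name: (x, y)} endpoint of each unit processing segment;
    # precedes: list of (a, b) pairs meaning a must be machined before b.
    done, order, pos = set(), [], start
    while len(done) < len(segments):
        ready = [s for s in segments if s not in done
                 and all(a in done for a, b in precedes if b == s)]
        nxt = min(ready, key=lambda s: math.dist(pos, segments[s]))
        order.append(nxt)
        done.add(nxt)
        pos = segments[nxt]
    return order
```

    With segments A(0,1), B(5,0), C(0.5,1.5) and the constraint that B precede C, the tour is A, B, C: despite being close, C is skipped at first because its predecessor has not yet been machined.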

  19. Split-personality transmission: shifts like an automatic, saves fuel like a manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, D.

    1981-11-01

    The design, operation and performance of a British-invented automatic transmission which is claimed to deliver fuel economy values equal to those attained with manual shifts are described. Developed for both 4-speed and 6-speed transmissions, this transmission uses standard parts made for existing manual transmissions, rearranges the gear pairings, and relies on a microcomputer to pick the optimal shift points according to load requirements. (LCL)

  20. Optimizing the parameters of heat transmission in a small heat exchanger with spiral tapes cut as triangles and Aluminum oxide nanofluid using central composite design method

    NASA Astrophysics Data System (ADS)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-07-01

    The present study aims at optimizing heat transmission parameters such as the Nusselt number and friction factor in a small double-pipe heat exchanger equipped with rotating spiral tapes cut as triangles and filled with aluminum oxide nanofluid. The effects of the Reynolds number, twist ratio (y/w), rotating twisted tape and concentration (w%) on the Nusselt number and friction factor are also investigated. Central composite design and the response surface methodology are used for evaluating the responses necessary for optimization. According to the optimal curves, the most optimized values obtained for the Nusselt number and friction factor were 146.6675 and 0.06020, respectively. Finally, an appropriate correlation is provided to achieve the optimal model at minimum cost. Optimization results showed that the cost decreased in the best case.

  1. Optimizing the parameters of heat transmission in a small heat exchanger with spiral tapes cut as triangles and Aluminum oxide nanofluid using central composite design method

    NASA Astrophysics Data System (ADS)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-02-01

    The present study aims at optimizing heat transmission parameters such as the Nusselt number and friction factor in a small double-pipe heat exchanger equipped with rotating spiral tapes cut as triangles and filled with aluminum oxide nanofluid. The effects of the Reynolds number, twist ratio (y/w), rotating twisted tape and concentration (w%) on the Nusselt number and friction factor are also investigated. Central composite design and the response surface methodology are used for evaluating the responses necessary for optimization. According to the optimal curves, the most optimized values obtained for the Nusselt number and friction factor were 146.6675 and 0.06020, respectively. Finally, an appropriate correlation is provided to achieve the optimal model at minimum cost. Optimization results showed that the cost decreased in the best case.

  2. A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints

    NASA Astrophysics Data System (ADS)

    Hazarika, Durlav; Das, Ranjay

    2018-04-01

    This paper presents a method for economic generation scheduling of a multi-zone power system with inter-zonal operational constraints. Generator rescheduling for such a system is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for each zone with surplus or deficient generation, with proper spinning reserve, using the coordination equation. The power exchange required for the deficit zones and for zones with no generation is estimated from each zone's load demand and generation. Incremental transmission loss formulas are derived for the transmission lines participating in the power transfer among the zones. Using these incremental transmission loss expressions in the coordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
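    The coordination-equation step (equal incremental cost, with losses ignored for brevity) can be sketched with quadratic generator costs; the cost coefficients and limits below are made up:

```python
def dispatch(units, demand, lam_lo=0.0, lam_hi=100.0):
    # Coordination equation without losses: every unconstrained unit runs
    # where its incremental cost b + 2*c*P equals the system lambda.
    # units: list of (b, c, pmin, pmax); lambda is found by bisection.
    def output(lam):
        return [min(pmax, max(pmin, (lam - b) / (2 * c)))
                for b, c, pmin, pmax in units]
    for _ in range(60):
        lam = 0.5 * (lam_lo + lam_hi)
        if sum(output(lam)) < demand:
            lam_lo = lam
        else:
            lam_hi = lam
    return lam, output(lam)
```

    For two units with incremental costs 2 + 0.02P and 2 + 0.04P serving 300 MW, the cheaper-to-ramp unit takes 200 MW and the other 100 MW, at lambda = 6. Introducing the incremental transmission loss expressions, as the paper does, replaces the bare incremental cost with its loss-corrected form.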

  3. Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.

    Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare, after which risk-averse generators maximize profits. This model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.

  4. Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study

    DOE PAGES

    Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.; ...

    2017-04-07

    Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. Furthermore, this model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.
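The objective described above, a weighted average of expected cost and CVaR, can be sketched for a discrete scenario set. The scenario costs, probabilities and risk weight below are illustrative only, not data from the study.

```python
import numpy as np

# Sketch of the risk measure in the planning objective: a weighted average of
# expected scenario cost and its conditional value at risk (CVaR).

def cvar(costs, probs, alpha=0.9):
    """CVaR_alpha: expected cost in the worst (1 - alpha) probability tail."""
    order = np.argsort(costs)[::-1]                  # worst scenarios first
    c, p = np.asarray(costs)[order], np.asarray(probs)[order]
    tail = 1.0 - alpha
    # probability mass each scenario contributes to the tail
    used_before = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    w = np.minimum(p, np.maximum(tail - used_before, 0.0))
    return float(np.dot(c, w) / tail)

costs = np.array([100.0, 120.0, 200.0, 90.0])
probs = np.array([0.4, 0.3, 0.1, 0.2])
expected = float(np.dot(costs, probs))
risk_weight = 0.5                                    # 0 = risk neutral
objective = (1 - risk_weight) * expected + risk_weight * cvar(costs, probs)
```

Setting `risk_weight` to zero recovers the risk-neutral expected-cost planner; increasing it weights the tail of the cost distribution more heavily.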

  5. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    NASA Astrophysics Data System (ADS)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.
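A plain (non-chaotic) grey wolf optimizer can be sketched in a few lines; it is shown here minimizing a sphere test function rather than the article's multi-objective MG-clustering problem, and the chaotic maps and constraint handling of CGWO are omitted.

```python
import numpy as np

# Minimal grey wolf optimizer (GWO) sketch on a 5-D sphere function.
rng = np.random.default_rng(0)

def gwo(f, dim, bounds, n_wolves=20, n_iter=200):
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / n_iter)             # exploration parameter: 2 -> 0
        new = np.zeros_like(X)
        for leader in (alpha, beta, delta):    # pull toward the three leaders
            A = a * (2 * rng.random(X.shape) - 1)
            C = 2 * rng.random(X.shape)
            new += leader - A * np.abs(C * leader - X)
        X = np.clip(new / 3.0, lo, hi)         # average of the three pulls
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)], float(fitness.min())

best_x, best_f = gwo(lambda x: float(np.sum(x * x)), dim=5, bounds=(-5.0, 5.0))
```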

  6. The value of compressed air energy storage with wind in transmission-constrained electric power systems

    DOE PAGES

    Denholm, Paul; Sioshansi, Ramteen

    2009-05-05

    In this paper, we examine the potential advantages of co-locating wind and energy storage to increase transmission utilization and decrease transmission costs. Co-location of wind and storage decreases transmission requirements, but also decreases the economic value of energy storage compared to locating energy storage at the load. This represents a tradeoff which we examine to estimate the transmission costs required to justify moving storage from load-sited to wind-sited in three different locations in the United States. We examined compressed air energy storage (CAES) in three “wind by wire” scenarios with a variety of transmission and CAES sizes relative to amore » given amount of wind. In the sites and years evaluated, the optimal amount of transmission ranges from 60% to 100% of the wind farm rating, with the optimal amount of CAES equal to 0–35% of the wind farm rating, depending heavily on wind resource, value of electricity in the local market, and the cost of natural gas.« less

  7. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
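The Bayesian conditioning step has the standard linear-Gaussian form. The sketch below conditions a synthetic one-dimensional field on three point measurements; the exponential covariance model and the numbers are generic placeholders, not the recharge/transmissivity/head covariances derived in the paper.

```python
import numpy as np

# Linear-Gaussian conditioning: given prior mean mu and covariance Cxx of an
# unknown field x, and measurements y = H x + noise, the optimal estimate is
#   post_mean = mu + Cxy Cyy^{-1} (y_obs - H mu),  Cxy = Cxx H^T.

n = 10
idx = np.arange(n)
Cxx = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)  # smooth prior field
mu = np.zeros(n)

H = np.zeros((3, n))
H[0, 2] = H[1, 5] = H[2, 8] = 1.0                 # observe grid points 2, 5, 8
R = 0.01 * np.eye(3)                              # measurement-noise covariance

Cxy = Cxx @ H.T
Cyy = H @ Cxx @ H.T + R
y_obs = np.array([1.0, -0.5, 0.3])                # illustrative observations

gain = Cxy @ np.linalg.inv(Cyy)
post_mean = mu + gain @ (y_obs - H @ mu)
post_cov = Cxx - gain @ Cxy.T                     # uncertainty after conditioning
```

The posterior variance never exceeds the prior variance, and it drops sharply at the measured points, mirroring how head measurements inform the recharge and transmissivity estimates in the paper.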

  8. Adaptive optics full-field OCT: a resolution almost insensitive to aberrations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xiao, Peng; Fink, Mathias; Boccara, A. Claude

    2016-03-01

    A Full-Field OCT (FFOCT) setup coupled to a compact transmissive liquid crystal spatial light modulator (LCSLM) is used to induce or correct aberrations and simulate eye examinations. To reduce the system complexity, strict pupil conjugation was abandoned. During our work on quantifying the effect of geometrical aberrations on FFOCT images, we found that the image resolution is almost insensitive to aberrations. Indeed, if the object-channel PSF is distorted, its interference with the reference channel conserves the main features of an unperturbed PSF with only a reduction of the signal level. This unique behavior is specific to the use of spatially incoherent illumination. Based on this, the FFOCT image intensity was used as the metric for our wavefront sensorless correction. Aberration correction was first conducted on a USAF resolution target with the LCSLM as both aberration generator and corrector. A random aberration mask was induced, and the low-order Zernike modes were corrected sequentially according to the intensity metric function optimization. A Ficus leaf and a fixed mouse brain tissue slice were also imaged to demonstrate the correction of sample self-induced wavefront distortions. After optimization, more structured information appears in the leaf images, and the high-signal, fiber-like myelin structures in the mouse brain were resolved much more clearly after the whole correction process. Our experiment shows the potential of this compact AO-FFOCT system for aberration-corrected imaging. This preliminary approach, which simulates eye aberration correction, also opens the path to a simple implementation of FFOCT adaptive optics for retinal examinations.
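The sequential, metric-guided correction can be illustrated with a toy model: a hypothetical image-quality metric peaks when the corrector cancels an unknown aberration, and each low-order mode coefficient is scanned in turn, keeping the best value. The aberration values and metric are invented for illustration.

```python
import numpy as np

# Toy wavefront-sensorless correction: scan each mode coefficient sequentially
# and keep the value that maximizes an intensity-like quality metric.

true_ab = np.array([0.8, -0.5, 0.3])       # unknown mode coefficients (toy)

def metric(correction):
    residual = true_ab + correction        # aberration left after correction
    return float(np.exp(-np.sum(residual ** 2)))  # peaks at perfect correction

correction = np.zeros(3)
scan = np.linspace(-1.0, 1.0, 41)          # candidate coefficient values
for mode in range(3):                      # one mode at a time
    scores = []
    for v in scan:
        trial = correction.copy()
        trial[mode] = v
        scores.append(metric(trial))
    correction[mode] = scan[int(np.argmax(scores))]

residual_rms = float(np.sqrt(np.mean((true_ab + correction) ** 2)))
```

Because this toy metric is separable in the modes, the sequential scan recovers the full correction; in a real system the scan is repeated or refined because modes couple through the imaging process.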

  9. Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum

    USDA-ARS's Scientific Manuscript database

    We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...

  10. University of Iowa at TREC 2008 Legal and Relevance Feedback Tracks

    DTIC Science & Technology

    2008-11-01

    Fellbaum, C. [ed.]. WordNet: An Electronic Lexical Database. Cambridge: MIT Press, 1998. [3] Salton, G. (ed.) (1971), The SMART Retrieval System...learning tools and techniques. 2nd Edition. San Francisco: Morgan Kaufmann, 2005. [5] Platt, J. Machines using Sequential Minimal Optimization. [ed.] B

  11. On Maximizing the Throughput of Packet Transmission under Energy Constraints.

    PubMed

    Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng

    2018-06-23

    More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services over the recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the depleted energy and the achieved data throughput in wireless data transmission. By exploiting the rate-adaptive capacities of wireless devices, most existing works on energy-efficient data transmission try to design rate-adaptive transmission policies to maximize the amount of transmitted data bits under the energy constraints of devices. Such solutions, however, cannot be applied to scenarios where data packets have individual deadlines and only integrally transmitted data packets contribute. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their own deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the energy and maximize the weighted throughput. More challenging, but of practical significance, we also consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an optimal algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible as the problem undertaken is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimal properties derived for the optimal offline solution. Simulation results validate the efficiency of the proposed algorithm.
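The integral-packet constraint is what makes the problem knapsack-like. As a simplified illustration (fixed rate, one common deadline, integer energy costs; the paper's algorithm additionally handles per-packet deadlines, fading and rate adaptation), selecting which packets to send integrally under an energy budget is a 0/1 knapsack solvable by pseudo-polynomial dynamic programming:

```python
# 0/1 knapsack DP: maximize total packet value under an energy budget.
# Runs in O(n * budget) time, i.e. pseudo-polynomial in the budget.

def max_weighted_throughput(packets, energy_budget):
    """packets: list of (energy_cost, value) with integer energy costs."""
    best = [0] * (energy_budget + 1)
    for cost, value in packets:
        # iterate budgets in reverse so each packet is used at most once
        for e in range(energy_budget, cost - 1, -1):
            best[e] = max(best[e], best[e - cost] + value)
    return best[energy_budget]

packets = [(2, 3), (3, 4), (4, 5), (1, 2)]
total = max_weighted_throughput(packets, energy_budget=5)   # -> 7
```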

  12. Optimization Strategies for Bruch's Membrane Opening Minimum Rim Area Calculation: Sequential versus Simultaneous Minimization.

    PubMed

    Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M

    2017-10-24

    To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim area was calculated independently. Outcome parameters included BMO-globally optimized minimum rim area (BMO-gMRA) and sector-wise optimized BMO-minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm²; mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); mean difference was 0.04 mm² (P < 0.001). In all sectors, parameters differed by 3.0-4.2%. In receiver operating characteristics, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. Optimization strategies to calculate BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.

  13. Control of water distribution networks with dynamic DMA topology using strictly feasible sequential convex programming

    NASA Astrophysics Data System (ADS)

    Wright, Robert; Abraham, Edo; Parpas, Panos; Stoianov, Ivan

    2015-12-01

    The operation of water distribution networks (WDN) with a dynamic topology is a recently pioneered approach for the advanced management of District Metered Areas (DMAs) that integrates novel developments in hydraulic modeling, monitoring, optimization, and control. A common practice for leakage management is the sectorization of WDNs into small zones, called DMAs, by permanently closing isolation valves. This enables water companies to identify bursts and estimate leakage levels by measuring the inlet flow for each DMA. However, permanently closing valves creates a number of problems, including reduced resilience to failure and suboptimal pressure management. By introducing a dynamic topology to these zones, these disadvantages can be eliminated while still retaining the DMA structure for leakage monitoring. In this paper, a novel optimization method based on sequential convex programming (SCP) is outlined for the control of a dynamic topology with the objective of reducing average zone pressure (AZP). A key attribute for control optimization is reliable convergence. To achieve this, the SCP method we propose guarantees that each optimization step is strictly feasible, resulting in improved convergence properties. By using a null space algorithm for hydraulic analyses, the computations required are also significantly reduced. The optimized control is actuated on a real WDN operated with a dynamic topology. This unique experimental program incorporates a number of technologies set up with the objective of investigating pioneering developments in WDN management. Preliminary results indicate AZP reductions for a dynamic topology of up to 6.5% over optimally controlled fixed-topology DMAs. This article was corrected on 12 JAN 2016. See the end of the full text for details.
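The flavor of a sequential convex programming loop that only ever accepts improving steps can be sketched on a one-variable nonconvex function. This is a generic illustration of the SCP idea (convex local model plus trust region), not the paper's hydraulically constrained AZP problem:

```python
# SCP sketch: repeatedly minimize a convexified quadratic model of a nonconvex
# objective inside a trust region, accepting only steps that decrease f.

def f(x):   return x**4 - 3.0 * x**2 + x
def df(x):  return 4.0 * x**3 - 6.0 * x + 1.0
def d2f(x): return 12.0 * x**2 - 6.0

x, radius = 2.0, 1.0
for _ in range(50):
    curv = max(d2f(x), 1e-3)                 # convexify the local model
    step = -df(x) / curv                     # minimizer of the quadratic model
    step = max(-radius, min(radius, step))   # trust-region clip
    if f(x + step) < f(x):                   # accept only improving steps
        x += step
        radius = min(2.0 * radius, 1.0)
    else:
        radius *= 0.5                        # shrink the region and retry
```

The accept-or-shrink rule plays the role that strict feasibility of each step plays in the paper: the iterates never leave the region where progress is guaranteed, which is what gives the reliable convergence.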

  14. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah

    PubMed Central

    Tian, Le; Latré, Steven

    2017-01-01

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can significantly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks. PMID:28677617

  15. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.

    PubMed

    Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen

    2017-07-04

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can significantly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks.
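The slot-assignment step can be sketched as greedy load balancing over estimated per-station rates (heaviest station to the currently lightest slot). The rates and slot count below are invented, and TAROA's actual parameter selection is more involved:

```python
# Greedy (longest-processing-time style) assignment of stations to RAW slots,
# balancing the estimated per-slot traffic load.

def assign_raw_slots(rates, n_slots):
    """rates: {station_id: estimated packets per beacon interval}."""
    loads = [0.0] * n_slots
    slots = [[] for _ in range(n_slots)]
    for sta, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        k = loads.index(min(loads))        # lightest slot so far
        slots[k].append(sta)
        loads[k] += rate
    return slots, loads

rates = {1: 5.0, 2: 1.0, 3: 3.0, 4: 2.0, 5: 4.0, 6: 1.0}
slots, loads = assign_raw_slots(rates, n_slots=3)
```

Balancing the expected load per slot keeps contention roughly uniform across RAW groups, which is the intuition behind grouping by estimated transmission frequency.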

  16. A Holistic Approach to Networked Information Systems Design and Analysis

    DTIC Science & Technology

    2016-04-15

    attain quite substantial savings. 11. Optimal algorithms for energy harvesting in wireless networks. We use a Markov-decision-process (MDP) based...approach to obtain optimal policies for transmissions. The key advantage of our approach is that it holistically considers information and energy in a...Coding technique to minimize delays and the number of transmissions in Wireless Systems. As we approach an era of ubiquitous computing with information

  17. Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.

  18. System and method of vehicle operating condition management

    DOEpatents

    Sujan, Vivek A.; Vajapeyazula, Phani; Follen, Kenneth; Wu, An; Moffett, Barty L.

    2015-10-20

    A vehicle operating condition profile can be determined over a given route while also considering imposed constraints such as deviation from time targets, deviation from maximum governed speed limits, etc. Given current vehicle speed, engine state and transmission state, the present disclosure optimally manages the engine map and transmission to provide a recommended vehicle operating condition that optimizes fuel consumption in transitioning from one vehicle state to a target state. Exemplary embodiments provide for offline and online optimizations relative to fuel consumption. The benefit is increased freight efficiency in transporting cargo from source to destination by minimizing fuel consumption and maintaining drivability.

  19. Transmission loss optimization in acoustic sandwich panels

    NASA Astrophysics Data System (ADS)

    Makris, S. E.; Dym, C. L.; MacGregor Smith, J.

    1986-06-01

    Considering the sound transmission loss (TL) of a sandwich panel as the single objective, different optimization techniques are examined and a sophisticated computer program is used to find the optimum TL. Also, for one possible case study, core optimization, closed-form expressions are given relating TL to the core-design variables for different sets of skins. The significance of these functional relationships lies in the fact that the panel designer can bypass the necessity of using a sophisticated software package in order to assess explicitly the dependence of the TL on core thickness and density.

  20. Adaptive Interventions and SMART Designs: Application to Child Behavior Research in a Community Setting

    ERIC Educational Resources Information Center

    Kidwell, Kelley M.; Hyde, Luke W.

    2016-01-01

    Heterogeneity between and within people necessitates the need for sequential personalized interventions to optimize individual outcomes. Personalized or adaptive interventions (AIs) are relevant for diseases and maladaptive behavioral trajectories when one intervention is not curative and success of a subsequent intervention may depend on…

  1. A Sequential Quadratic Programming Algorithm Using an Incomplete Solution of the Subproblem

    DTIC Science & Technology

    1990-09-01

    Electrónica e Informática Industrial E.T.S. Ingenieros Industriales Universidad Politécnica, Madrid Technical Report SOL 90-12 September 1990 ...MURRAY* AND FRANCISCO J. PRIETO† *Systems Optimization Laboratory Department of Operations Research Stanford University †Dept. de Automática, Ingeniería

  2. An integrated error estimation and lag-aware data assimilation scheme for real-time flood forecasting

    USDA-ARS's Scientific Manuscript database

    The performance of conventional filtering methods can be degraded by ignoring the time lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data ass...

  3. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    PubMed Central

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of obtaining the transmission rate in an ad hoc network consists of adjusting the power of each node so that the signal-to-interference ratio (SIR) requirement and the energy required to transmit from one node to another are satisfied at the same time. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for the transmission rate election considering a perfect power control, and our proposition achieves a 10% improvement compared with the scheme that handles the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that solves the problem of power combining, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs 15% better than the CSMA-CDMA protocol without optimization. Therefore, we show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339
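A minimal real-coded genetic algorithm of the same general shape can be sketched with a toy fitness that rewards aggregate rate but penalizes violating an invented SIR-style constraint; none of the numbers model the paper's CSMA-CDMA system.

```python
import random

# Toy GA for "rate election": maximize total rate subject to a soft SIR floor.
random.seed(7)

def fitness(rates, power=1.0, sir_min=0.25):
    total = sum(rates)
    sir = power / (1e-9 + total)       # toy: more aggregate rate, more interference
    penalty = 0.0 if sir >= sir_min else 100.0 * (sir_min - sir)
    return total - penalty             # best possible value is total = 4.0

def ga(n_nodes=4, pop_size=30, n_gen=100, rate_max=2.0):
    pop = [[random.uniform(0, rate_max) for _ in range(n_nodes)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]   # elitism: keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # averaging crossover
            i = random.randrange(n_nodes)                   # one-gene mutation
            child[i] = min(rate_max, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

For this toy model the optimum is an aggregate rate of exactly 4.0 (where the SIR floor binds), so the GA's best fitness should approach 4 from below.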

  4. Selecting appropriate singular values of transmission matrix to improve precision of incident wavefront retrieval

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Du, Jinglei

    2018-06-01

    A method of selecting appropriate singular values of the transmission matrix to improve the precision of incident wavefront retrieval in focusing light through scattering media is proposed. The optimal singular values selected by this method effectively reduce the ill-conditioning of the transmission matrix, which indicates that the incident wavefront retrieved from the optimal set of singular values is more accurate than the incident wavefront retrieved from other sets of singular values. The validity of this method is verified by numerical simulation and actual measurements of the incident wavefront of coherent light through ground glass.
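The idea can be sketched with a truncated pseudoinverse: discarding the smallest singular values stops measurement noise from being amplified along ill-conditioned directions of the transmission matrix. The matrix, wavefront and noise below are synthetic, with the noise deliberately aligned with the weakest direction to show the worst case:

```python
import numpy as np

# Wavefront retrieval with a truncated SVD pseudoinverse of a synthetic,
# ill-conditioned transmission matrix.
rng = np.random.default_rng(3)
n = 6
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.array([1.0, 0.8, 0.5, 0.3, 0.1, 1e-7])   # one nearly singular direction
T = U @ np.diag(s) @ V.T                        # synthetic transmission matrix

x_true = rng.normal(size=n)                     # incident wavefront (toy)
noise = 1e-4 * U[:, 5]                          # noise along the weak direction
y = T @ x_true + noise                          # measured output field

def retrieve(T, y, keep):
    """Invert using only the `keep` largest singular values."""
    U_, s_, Vt_ = np.linalg.svd(T)
    s_inv = np.where(np.arange(len(s_)) < keep, 1.0 / s_, 0.0)
    return Vt_.T @ (s_inv * (U_.T @ y))

err_full = np.linalg.norm(retrieve(T, y, keep=6) - x_true)   # noise amplified ~1e3x
err_trunc = np.linalg.norm(retrieve(T, y, keep=5) - x_true)  # small bias instead
```

Dropping the weak singular value trades a bounded bias (the lost component of the true wavefront) for the removal of a huge noise amplification, which is the trade-off the selection method optimizes.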

  5. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, an optimization formulation using a so-called unified transfer matrix is newly proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization problems are solved to check the efficiency of the developed method.
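The transfer-matrix bookkeeping behind such layered-panel calculations can be sketched for normal incidence with simple fluid-like layers. The material data below are illustrative; the paper's unified matrix additionally interpolates between air, perforated-panel and solid-panel elements:

```python
import numpy as np

# Normal-incidence transfer-matrix method for a stack of fluid-like layers.
def layer_matrix(rho, c, d, freq):
    """2x2 transfer matrix: density rho, sound speed c, thickness d."""
    k = 2 * np.pi * freq / c
    Z = rho * c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission_loss(layers, freq, rho0=1.21, c0=343.0):
    T = np.eye(2, dtype=complex)
    for rho, c, d in layers:               # chain the layers in sequence
        T = T @ layer_matrix(rho, c, d, freq)
    Z0 = rho0 * c0                         # characteristic impedance of air
    t = 2.0 / (T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1])
    return -20.0 * np.log10(abs(t))        # TL in dB

tl_empty = transmission_loss([], 1000.0)                        # no layers: 0 dB
tl_panel = transmission_loss([(1200.0, 2300.0, 0.003)], 1000.0) # dense 3 mm sheet
```

Because layer order enters only through the matrix product, a sequencing optimizer can evaluate any candidate stack by one chain of 2x2 multiplications, which is what makes the unified-transfer-matrix formulation attractive.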

  6. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equations into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean-square sense and is formulated as a minimization problem. Optimization of parameters for the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices, calculated for a sufficiently large number of independent runs, to establish its significance.
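The residual-based fitness construction can be illustrated on a simpler linear boundary-value problem, y'' = y with y(0) = 1 and y(1) = e, instead of the singular Thomas-Fermi equation; a candidate grid solution is scored by its mean-square finite-difference residual, in the same spirit as the fitness the paper minimizes:

```python
import numpy as np

# Mean-square finite-difference residual as a fitness function for a BVP.
def fitness(y, h):
    """Residual of y'' - y = 0 at interior grid points, mean-squared."""
    resid = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2 - y[1:-1]
    return float(np.mean(resid ** 2))

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
exact = np.exp(x)        # satisfies the BVP, so its residual is only O(h^2)
bad = 1.0 + x            # wrong candidate: residual stays O(1)

f_exact = fitness(exact, h)
f_bad = fitness(bad, h)
```

A GA would evolve the interior grid values to drive this fitness toward zero, with SQP refining the best chromosome; the exact solution scoring orders of magnitude lower than a wrong candidate is what makes the fitness informative.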

  7. Effects of channel blocking on information transmission and energy efficiency in squid giant axons.

    PubMed

    Liu, Yujiang; Yue, Yuan; Yu, Yuguo; Liu, Liwei; Yu, Lianchun

    2018-04-01

    Action potentials are the information carriers of neural systems. The generation of action potentials involves the cooperative opening and closing of sodium and potassium channels. This process is metabolically expensive because the ions flowing through open channels need to be restored to maintain concentration gradients of these ions. Toxins like tetraethylammonium can block working ion channels, thus affecting the function and energy cost of neurons. In this paper, by computer simulation of the Hodgkin-Huxley neuron model, we studied the effects of channel blocking with toxins on the information transmission and energy efficiency in squid giant axons. We found that gradually blocking sodium channels will sequentially maximize the information transmission and energy efficiency of the axons, whereas moderate blocking of potassium channels will have little impact on the information transmission and will decrease the energy efficiency. Heavy blocking of potassium channels will cause self-sustained oscillation of membrane potentials. Simultaneously blocking sodium and potassium channels with the same ratio increases both information transmission and energy efficiency. Our results are in line with previous studies suggesting that information processing capacity and energy efficiency can be maximized by regulating the number of active ion channels, and this indicates a viable avenue for future experimentation.
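The effect of channel blocking can be reproduced qualitatively with the classic Hodgkin-Huxley equations, modeling block as a scaling of the maximal Na+ and K+ conductances (a common simplification). The parameters are the textbook squid-axon values, integrated with forward Euler:

```python
import numpy as np

# Hodgkin-Huxley sketch: block_na / block_k give the fraction of channels
# left unblocked; returns the peak membrane voltage under a current step.
def hh_peak_voltage(block_na=1.0, block_k=1.0, i_ext=10.0, t_ms=20.0, dt=0.01):
    g_na, g_k, g_l = 120.0 * block_na, 36.0 * block_k, 0.3   # mS/cm^2
    e_na, e_k, e_l, c_m = 50.0, -77.0, -54.4, 1.0
    v, m, h, n = -65.0, 0.05, 0.6, 0.32                      # resting state
    peak = v
    for _ in range(int(t_ms / dt)):
        am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        peak = max(peak, v)
    return peak

spike = hh_peak_voltage()                # unblocked axon fires (peak > 0 mV)
blocked = hh_peak_voltage(block_na=0.1)  # heavy Na+ block suppresses the spike
```

Scaling the conductances this way is the deterministic counterpart of removing a fraction of active channels, which is how the study probes information transmission and energy cost under toxin block.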

  8. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2002-01-01

    In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including circular-to-circular coplanar transfers, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.
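One of the direct methods compared, simulated annealing, can be sketched on a one-dimensional multimodal stand-in for a rendezvous cost; the real problem optimizes impulsive burn times and magnitudes subject to orbital dynamics, which is not modeled here:

```python
import math, random

# Simulated annealing on a toy multimodal cost with two local minima;
# the global minimum is near x = -1.3.
random.seed(2)

def cost(x):
    return x**4 - 3.0 * x**2 + x

def anneal(x0=2.0, t0=2.0, cooling=0.995, n_iter=4000):
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx                 # track best-so-far, not final state
    t = t0
    for _ in range(n_iter):
        cand = x + random.gauss(0.0, 1.0)
        fc = cost(cand)
        # accept downhill moves always, uphill with Boltzmann probability
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

best_x, best_f = anneal()
```

The uphill acceptances at high temperature are what let annealing escape the shallow local minimum that gradient-based methods like SQP can get trapped in, which is the trade-off the paper's comparison explores.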

  9. Optimal quantum observables

    NASA Astrophysics Data System (ADS)

    Haapasalo, Erkka; Pellonpää, Juha-Pekka

    2017-12-01

    Various forms of optimality for quantum observables described as normalized positive-operator-valued measures (POVMs) are studied in this paper. We give characterizations for observables that determine the values of the measured quantity with probabilistic certainty or a state of the system before or after the measurement. We investigate observables that are free from noise caused by classical post-processing, mixing, or pre-processing of quantum nature. Especially, a complete characterization of pre-processing and post-processing clean observables is given, and necessary and sufficient conditions are imposed on informationally complete POVMs within the set of pure states. We also discuss joint and sequential measurements of optimal quantum observables.
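The POVM formalism underlying these optimality notions is easy to check numerically. A qubit "trine" POVM, a standard textbook example, has positive semidefinite effects that sum to the identity and assigns outcome probabilities p_i = Tr(rho E_i):

```python
import numpy as np

# Three-outcome "trine" POVM for a qubit: effects (2/3)|v_k><v_k| with three
# equally spaced real unit vectors v_k.
def trine_effect(theta):
    v = np.array([np.cos(theta), np.sin(theta)])
    return (2.0 / 3.0) * np.outer(v, v)        # scaled rank-1 projector

effects = [trine_effect(2 * np.pi * k / 3) for k in range(3)]

# POVM conditions: every effect is PSD, and the effects sum to the identity.
total = sum(effects)
min_eig = min(float(np.linalg.eigvalsh(E).min()) for E in effects)

rho = np.array([[1.0, 0.0], [0.0, 0.0]])       # pure state |0><0|
probs = [float(np.trace(rho @ E)) for E in effects]
```

For the state |0><0| the probabilities come out as (2/3, 1/6, 1/6) and sum to one; verifying normalization and positivity like this is the starting point for the noise-freeness and informational-completeness conditions the paper characterizes.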

  10. Monolithic high voltage nonlinear transmission line fabrication process

    DOEpatents

    Cooper, Gregory A.

    1994-01-01

    A process for fabricating sequential inductors and varactor diodes of a monolithic, high voltage, nonlinear transmission line in GaAs is disclosed. An epitaxially grown laminate is produced by applying a low-doped active n-type GaAs layer to an n-plus type GaAs substrate. A heavily doped p-type GaAs layer is applied to the active n-type layer, and a heavily doped n-type GaAs layer is applied to the p-type layer. Ohmic contacts are applied to the heavily doped n-type layer where diodes are desired. Multiple layers are then either etched away or oxygen-ion implanted to isolate individual varactor diodes. An insulator is applied between the diodes, and a conductive/inductive layer is thereafter applied on top of the insulator layer to complete the process.

  11. Light emitting ceramic device and method for fabricating the same

    DOEpatents

    Valentine, Paul; Edwards, Doreen D.; Walker Jr., William John; Slack, Lyle H.; Brown, Wayne Douglas; Osborne, Cathy; Norton, Michael; Begley, Richard

    2004-11-30

    A light-emitting ceramic based panel, hereafter termed "electroceramescent" panel, and alternative methods of fabrication for the same are claimed. The electroceramescent panel is formed on a substrate providing mechanical support as well as serving as the base electrode for the device. One or more semiconductive ceramic layers directly overlay the substrate, and electrical conductivity and ionic diffusion are controlled. Light emitting regions overlay the semiconductive ceramic layers, and said regions consist sequentially of a ceramic insulation layer and an electroluminescent layer comprised of doped phosphors or the equivalent. One or more conductive top electrode layers having optically transmissive areas overlay the light emitting regions, and a multi-layered top barrier cover comprising one or more optically transmissive non-combustible insulation layers overlays said top electrode regions.

  12. Potential digitization/compression techniques for Shuttle video

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B. H.

    1978-01-01

    The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.

  13. Transmission Scheduling and Routing Algorithms for Delay Tolerant Networks

    NASA Technical Reports Server (NTRS)

    Dudukovich, Rachel; Raible, Daniel E.

    2016-01-01

    The challenges of data processing, transmission scheduling and routing within a space network present a multi-criteria optimization problem. Long delays, intermittent connectivity, asymmetric data rates and potentially high error rates make traditional networking approaches unsuitable. The delay tolerant networking architecture and protocols attempt to mitigate many of these issues, yet transmission scheduling is largely manually configured and routes are determined by a static contact routing graph. A high level of variability exists among the requirements and environmental characteristics of different missions, some of which may allow for the use of more opportunistic routing methods. In all cases, resource allocation and constraints must be balanced with the optimization of data throughput and quality of service. Much work has been done researching routing techniques for terrestrial-based challenged networks in an attempt to optimize contact opportunities and resource usage. This paper examines several popular methods to determine their potential applicability to space networks.

  14. Modeling and optimization of joint quality for laser transmission joint of thermoplastic using an artificial neural network and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Zhang, Cheng; Li, Pin; Wang, Kai; Hu, Yang; Zhang, Peng; Liu, Huixia

    2012-11-01

    A central composite rotatable experimental design (CCRD) is conducted to design experiments for laser transmission joining of a thermoplastic, polycarbonate (PC). An artificial neural network is used to establish the relationships between the laser transmission joining process parameters (laser power, velocity, clamp pressure, and scanning number) and the joint strength and joint seam width. The developed mathematical models are tested by the analysis of variance (ANOVA) method to check their adequacy, and the effects of the process parameters on the responses, as well as the interaction effects of key process parameters on joint quality, are analyzed and discussed. Finally, a desirability function coupled with a genetic algorithm is used to carry out the optimization of the joint strength and joint width. The results show that the predicted optima are in good agreement with the experimental results, so this study provides an effective method to enhance joint quality.

  15. Optimal design of studies of influenza transmission in households. I: case-ascertained studies.

    PubMed

    Klick, B; Leung, G M; Cowling, B J

    2012-01-01

    Case-ascertained household transmission studies, in which households including an 'index case' are recruited and followed up, are invaluable to understanding the epidemiology of influenza. We used a simulation approach parameterized with data from household transmission studies to evaluate alternative study designs. We compared studies that relied on self-reported illness in household contacts vs. studies that used home visits to collect swab specimens for virological confirmation of secondary infections, allowing for the trade-off between sample size and intensity of follow-up given a fixed budget. For studies estimating the secondary attack proportion, 2-3 follow-up visits with specimens collected from all members regardless of illness were optimal. However, for studies comparing secondary attack proportions between two or more groups, such as controlled intervention studies, designs with reactive home visits following illness reports in contacts were most powerful, while a design with one optimally timed home visit also performed well.

  16. Risk-aware multi-armed bandit problem with application to portfolio selection

    PubMed Central

    Huo, Xiaoguang

    2017-01-01

    Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct a portfolio. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return. PMID:29291122

  17. Risk-aware multi-armed bandit problem with application to portfolio selection.

    PubMed

    Huo, Xiaoguang; Fu, Feng

    2017-11-01

    Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct a portfolio. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return.
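    The classic (risk-neutral) bandit policy that this work builds on can be sketched with UCB1 on toy Bernoulli arms. The arm means, horizon, and seed below are illustrative assumptions, and the risk-measure component of the paper is not included.

```python
import math
import random

def ucb1(means, horizon=2000, seed=1):
    # UCB1 policy on Bernoulli arms with the given true means (toy setup).
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms        # pulls per arm
    values = [0.0] * n_arms      # empirical mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:          # pull each arm once to initialise
            arm = t - 1
        else:
            # exploration bonus shrinks as an arm is pulled more often
            arm = max(range(n_arms),
                      key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = ucb1([0.2, 0.5, 0.8])
```

    Over a long enough horizon, the policy concentrates pulls on the best arm (here, the one with mean 0.8) while still sampling the others logarithmically often.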

  18. Sequential desorption energy of hydrogen from nickel clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deepika; Kumar, Rakesh, E-mail: rakesh@iitrpr.ac.in; Kamal Raj, R.

    2015-06-24

    We report reversible hydrogen adsorption on nickel clusters, which act as a catalyst for solid-state storage of hydrogen on a substrate. A first-principles technique is employed to investigate the maximum number of chemically adsorbed hydrogen molecules on a nickel cluster. We observe a maximum of four hydrogen molecules adsorbed per nickel atom, but the average number of hydrogen molecules adsorbed per nickel atom decreases with cluster size. The dissociative chemisorption energy per hydrogen molecule and the sequential desorption energy per hydrogen atom on the nickel cluster are found to decrease with the number of adsorbed hydrogen molecules, which on optimization may help in the economical storage and regeneration of hydrogen as a clean energy carrier.

  19. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as the optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
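    For intuition, a single SQP iteration solves a Newton-KKT system built from the current gradient, constraint value, and Lagrangian Hessian. The sketch below runs that iteration on a toy equality-constrained problem; the objective and constraint are invented for illustration and are unrelated to the TD-RTE inverse problem.

```python
def solve_dense(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def sqp(x, lam=0.0, iters=20):
    # Toy equality-constrained problem (NOT the TD-RTE inverse problem):
    #   minimize x0^2 + x1^2   subject to   x0^2 + x1 - 1 = 0
    # Each step solves the Newton-KKT system for the step p and new multiplier.
    for _ in range(iters):
        g = [2 * x[0], 2 * x[1]]          # gradient of the objective
        c = x[0] ** 2 + x[1] - 1.0        # constraint value
        a = [2 * x[0], 1.0]               # constraint gradient
        # Hessian of the Lagrangian f + lam*c: 2*I + lam * hess(c)
        H = [[2.0 + 2.0 * lam, 0.0], [0.0, 2.0]]
        K = [[H[0][0], H[0][1], a[0]],
             [H[1][0], H[1][1], a[1]],
             [a[0],    a[1],    0.0]]
        p0, p1, lam = solve_dense(K, [-g[0], -g[1], -c])
        x = [x[0] + p0, x[1] + p1]
    return x

x = sqp([1.0, 1.0])
```

    From the start point (1, 1) the iterates converge quadratically to the constrained minimum at (1/sqrt(2), 1/2). In the paper's setting the gradient entering this system would come from the adjoint technique, and a GGMRF regularization term would be added to the objective.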

  20. A sequential quadratic programming algorithm using an incomplete solution of the subproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, W.; Prieto, F.J.

    1993-05-01

    We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined, or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.

  1. Synthesis and microwave absorbing characteristics of functionally graded carbonyl iron/polyurethane composites

    NASA Astrophysics Data System (ADS)

    Yang, R. B.; Liang, W. F.; Wu, C. H.; Chen, C. C.

    2016-05-01

    Radar absorbing materials (RAMs), also known as microwave absorbers, which can absorb and dissipate incident electromagnetic waves, are widely used in the fields of radar cross-section reduction, electromagnetic interference (EMI) reduction and human health protection. In this study, the synthesis of a functionally graded material (FGM) (CI/polyurethane composite), fabricated with semi-sequentially varied composition along the thickness, is combined with a genetic algorithm (GA) to optimize the microwave absorption efficiency and bandwidth of the FGM. For impedance matching and broadband design, the original 8-layered FGM was obtained by the GA method, which calculated the thickness of each layer for a sequential stacking of the FGM from 20, 30, 40, 50, 60, 65, 70 and 75 wt% of CI fillers. The reflection loss of the original 8-layered FGM below -10 dB is obtained in the frequency range of 5.12-18 GHz with a total thickness of 9.66 mm. Further optimization reduces the number of layers; the stacking sequence of the optimized 4-layered FGM is 20, 30, 65, 75 wt% with thicknesses of 0.8, 1.6, 0.6 and 1.0 mm, respectively. The synthesis and measurement of the optimized 4-layered FGM with a thickness of 4 mm reveal a minimum reflection loss of -25.2 dB at 6.64 GHz and a bandwidth below -10 dB larger than 12.8 GHz.
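    The GA-based layer optimization can be illustrated with a minimal real-coded genetic algorithm. The fitness function below is a toy stand-in for the reflection-loss computation, and all parameters (population size, mutation rate, thickness bounds, target profile) are assumptions for illustration only.

```python
import random

def ga_optimize(fitness, n_genes, pop_size=30, gens=60, seed=2):
    # Minimal real-coded genetic algorithm with elitism (toy sketch).
    rng = random.Random(seed)
    pop = [[rng.uniform(0.1, 2.0) for _ in range(n_genes)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        new_pop = [best[:]]                       # elitism: keep the best design
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            # uniform crossover + occasional gaussian mutation (clipped to bounds)
            child = [rng.choice(g) for g in zip(p1, p2)]
            if rng.random() < 0.3:
                i = rng.randrange(n_genes)
                child[i] = min(2.0, max(0.1, child[i] + rng.gauss(0, 0.1)))
            new_pop.append(child)
        pop = new_pop
        best = max(pop, key=fitness)
    return best

# Toy stand-in for absorption performance: reward layer thicknesses near a
# hypothetical target profile under a total-thickness budget (NOT an EM model).
target = [0.8, 1.6, 0.6, 1.0]
def fitness(layers):
    mismatch = sum((t - g) ** 2 for t, g in zip(target, layers))
    penalty = max(0.0, sum(layers) - 4.0)         # budget: 4 mm total
    return -(mismatch + penalty)

best = ga_optimize(fitness, n_genes=4)
```

    In the actual design problem, the fitness evaluation would be a full multilayer reflection-loss calculation over the frequency band rather than this toy mismatch score.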

  2. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods, one with a displacement formulation and one with a mixed formulation of displacements and momenta. These three methods broadly represent the two main approaches of trim analysis: adaptation of initial-value and of finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by the virtual elimination of the divergence problems observed earlier in trim analysis. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
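    The damped Newton iteration at the core of the trim analysis can be sketched on a scalar root-finding toy. The residual-based backtracking rule and the test equation below are illustrative assumptions, not the paper's rotor equations or its optimal damping-parameter selection.

```python
import math

def damped_newton(F, J, x0, tol=1e-12, max_iter=50):
    # Newton iteration with residual-based damping: the step is halved
    # until the residual magnitude decreases (a simple backtracking rule).
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        step = -fx / J(x)
        lam = 1.0
        while abs(F(x + lam * step)) >= abs(fx) and lam > 1e-6:
            lam *= 0.5
        x += lam * step
    return x

# Toy stand-in for a trim residual: solve x - cos(x) = 0 from a poor guess,
# where the undamped Newton step would initially overshoot badly.
root = damped_newton(lambda x: x - math.cos(x), lambda x: 1.0 + math.sin(x), 5.0)
```

    Damping rescues exactly the situation the paper highlights: far from the periodic solution the raw Newton step diverges, while the damped step still reduces the residual every iteration.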

  3. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  4. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size affects the metric performance of the observation system. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency lower than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, estimate the spot size itself. Monitoring and feedback of the far-field spot can therefore be implemented in laser focusing system applications, helping the system optimize its focusing performance.

  5. Sequential bearings-only-tracking initiation with particle filtering method.

    PubMed

    Liu, Bin; Hao, Chengpeng

    2013-01-01

    The tracking initiation problem is examined in the context of autonomous bearings-only tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurements (the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly within a solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. Posterior Cramér-Rao bounds are also used for performance evaluation.
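    A bootstrap particle filter of the kind used for such nonlinear/non-Gaussian problems can be sketched on a one-dimensional toy with an arctangent (bearing-like) measurement. The motion model, noise levels, and particle count are illustrative assumptions, and the paper's data-association and track-initiation logic is not included.

```python
import math
import random

rng = random.Random(3)

def simulate(T=50, q=0.2, r=0.05):
    # Generate a 1-D random-walk target and bearing-like arctan measurements.
    xs, ys = [], []
    x = 0.0
    for _ in range(T):
        x += rng.gauss(0, q)                           # process noise
        xs.append(x)
        ys.append(math.atan(x) + rng.gauss(0, r))      # nonlinear measurement
    return xs, ys

def particle_filter(ys, n=500, q=0.2, r=0.05):
    particles = [rng.gauss(0, 1) for _ in range(n)]
    estimates = []
    for y in ys:
        # propagate each particle through the motion model
        particles = [p + rng.gauss(0, q) for p in particles]
        # weight by the Gaussian measurement likelihood (log-space, max-
        # normalised to avoid underflow)
        logw = [-0.5 * ((y - math.atan(p)) / r) ** 2 for p in particles]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        s = sum(w)
        w = [wi / s for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # multinomial resampling
        particles = rng.choices(particles, weights=w, k=n)
    return estimates

xs, ys = simulate()
est = particle_filter(ys)
```

    The weighted-particle cloud is exactly the non-Gaussian posterior approximation that has no closed form here; the posterior mean serves as the point estimate.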

  6. Crystal Growth and Dissolution of Methylammonium Lead Iodide Perovskite in Sequential Deposition: Correlation between Morphology Evolution and Photovoltaic Performance.

    PubMed

    Hsieh, Tsung-Yu; Huang, Chi-Kai; Su, Tzu-Sen; Hong, Cheng-You; Wei, Tzu-Chien

    2017-03-15

    Crystal morphology and structure are important for improving the organic-inorganic lead halide perovskite semiconductor property in optoelectronic, electronic, and photovoltaic devices. In particular, crystal growth and dissolution are the two major phenomena determining the morphology of methylammonium lead iodide perovskite in the sequential deposition method for fabricating a perovskite solar cell. In this report, the effect of immersion time in the second step, i.e., methylammonium iodide immersion, on the morphological, structural, optical, and photovoltaic evolution is extensively investigated. Supported by experimental evidence, a five-staged, time-dependent evolution of the morphology of methylammonium lead iodide perovskite crystals is established and is well connected to the photovoltaic performance. This result is beneficial for engineering the optimal methylammonium iodide immersion time and converging the solar cell performance in the sequential deposition route. Meanwhile, our results suggest that large, well-faceted methylammonium lead iodide perovskite single crystals may be incubated by a solution process, which offers a low-cost route for synthesizing perovskite single crystals.

  7. Identifying protein complexes in PPI network using non-cooperative sequential game.

    PubMed

    Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta

    2017-08-21

    Identifying protein complexes from protein-protein interaction (PPI) networks is an important and challenging task in computational biology, as it helps in better understanding cellular mechanisms in various organisms. In this paper we propose a non-cooperative sequential game based model for protein complex detection from PPI networks. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed network property known as small world. The proposed multi-player game model translates the hypothesis into game strategies. The Nash equilibrium of the game corresponds to a network partition in which each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as biological significance of the network partitions.

  8. Saving lives: A meta-analysis of team training in healthcare.

    PubMed

    Hughes, Ashley M; Gregory, Megan E; Joseph, Dana L; Sonesh, Shirley C; Marlow, Shannon L; Lacerenza, Christina N; Benishek, Lauren E; King, Heidi B; Salas, Eduardo

    2016-09-01

    As the nature of work becomes more complex, teams have become necessary to ensure effective functioning within organizations. The healthcare industry is no exception. As such, the prevalence of training interventions designed to optimize teamwork in this industry has increased substantially over the last 10 years (Weaver, Dy, & Rosen, 2014). Using Kirkpatrick's (1956, 1996) training evaluation framework, we conducted a meta-analytic examination of healthcare team training to quantify its effectiveness and understand the conditions under which it is most successful. Results demonstrate that healthcare team training improves each of Kirkpatrick's criteria (reactions, learning, transfer, results; d = .37 to .89). Second, findings indicate that healthcare team training is largely robust to trainee composition, training strategy, and characteristics of the work environment, with the only exception being the reduced effectiveness of team training programs that involve feedback. As a tertiary goal, we proposed and found empirical support for a sequential model of healthcare team training in which team training affects learning, which leads to transfer, which in turn improves results. We find support for this sequential model in the healthcare industry (i.e., the current meta-analysis) and in training across all industries (i.e., using meta-analytic estimates from Arthur, Bennett, Edens, & Bell, 2003), suggesting the sequential benefits of training are not unique to medical teams. Ultimately, this meta-analysis supports the expanded use of team training and points toward recommendations for optimizing its effectiveness within healthcare settings. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Design and protocol of a randomized multiple behavior change trial: Make Better Choices 2 (MBC2).

    PubMed

    Pellegrini, Christine A; Steglitz, Jeremy; Johnston, Winter; Warnick, Jennifer; Adams, Tiara; McFadden, H G; Siddique, Juned; Hedeker, Donald; Spring, Bonnie

    2015-03-01

    Suboptimal diet and inactive lifestyle are among the most prevalent preventable causes of premature death. Interventions that target multiple behaviors are potentially efficient; however, the optimal way to initiate and maintain multiple health behavior changes is unknown. The Make Better Choices 2 (MBC2) trial aims to examine whether sustained healthful diet and activity change are best achieved by targeting diet and activity behaviors simultaneously or sequentially. Study design: approximately 250 inactive adults with poor-quality diet will be randomized to 3 conditions examining the best way to prescribe healthy diet and activity change. The 3 intervention conditions prescribe: 1) an increase in fruit and vegetable consumption (F/V+), decrease in sedentary leisure screen time (Sed-), and increase in physical activity (PA+) simultaneously (Simultaneous); 2) F/V+ and Sed- first, and then sequentially add PA+ (Sequential); or 3) Stress Management Control that addresses stress, relaxation, and sleep. All participants will receive a smartphone application to self-monitor behaviors and regular coaching calls to help facilitate behavior change during the 9-month intervention. Healthy lifestyle change in fruit/vegetable and saturated fat intakes, sedentary leisure screen time, and physical activity will be assessed at 3, 6, and 9 months. MBC2 is a randomized m-Health intervention examining methods to maximize initiation and maintenance of multiple healthful behavior changes. Results from this trial will provide insight about an optimal technology-supported approach to promote improvement in diet and physical activity. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Predicting optimal transmission investment in malaria parasites

    PubMed Central

    Greischar, Megan A.; Mideo, Nicole; Read, Andrew F.; Bjørnstad, Ottar N.

    2016-01-01

    In vertebrate hosts, malaria parasites face a tradeoff between replicating and the production of transmission stages that can be passed onto mosquitoes. This tradeoff is analogous to growth-reproduction tradeoffs in multicellular organisms. We use a mathematical model tailored to the life cycle and dynamics of malaria parasites to identify allocation strategies that maximize cumulative transmission potential to mosquitoes. We show that plastic strategies can substantially outperform fixed allocation because parasites can achieve greater fitness by investing in proliferation early and delaying the production of transmission stages. Parasites should further benefit from restraining transmission investment later in infection, because such a strategy can help maintain parasite numbers in the face of resource depletion. Early allocation decisions are predicted to have the greatest impact on parasite fitness. If the immune response saturates as parasite numbers increase, parasites should benefit from even longer delays prior to transmission investment. The presence of a competing strain selects for consistently lower levels of transmission investment and dramatically increased exploitation of the red blood cell resource. While we provide a detailed analysis of tradeoffs pertaining to malaria life history, our approach for identifying optimal plastic allocation strategies may be broadly applicable. PMID:27271841
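    The growth-versus-transmission tradeoff can be illustrated with a toy discrete-time within-host model in which a fixed fraction c of each replication cycle commits to transmission stages. The dynamics and all parameter values are invented for illustration and are not the paper's model, which optimizes time-varying (plastic) allocation.

```python
def cumulative_transmission(c, T=60):
    # Toy within-host model: P asexual parasites, R resource (red blood
    # cells); each step a fraction c of the burst commits to transmission
    # stages while asexuals deplete a slowly replenished resource.
    P, R, total = 1.0, 1000.0, 0.0
    for _ in range(T):
        burst = 4.0 * P * R / (R + 1000.0)    # replication limited by resource
        R = max(0.0, R - burst + 20.0)        # depletion + slow replenishment
        total += c * burst                    # new transmission stages
        P = (1 - c) * burst                   # new asexuals
    return total

# sweep fixed investment levels to find the best constant strategy
cs = [i / 100 for i in range(1, 100)]
best_c = max(cs, key=cumulative_transmission)
```

    Even in this crude model the optimum is interior: investing nothing yields no transmission stages, while investing everything extinguishes the asexual population that sustains production, echoing the tradeoff the paper analyzes with far richer dynamics.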

  11. Transmission Index Research of Parallel Manipulators Based on Matrix Orthogonal Degree

    NASA Astrophysics Data System (ADS)

    Shao, Zhu-Feng; Mo, Jiao; Tang, Xiao-Qiang; Wang, Li-Ping

    2017-11-01

    A performance index is the standard for performance evaluation and the foundation of both performance analysis and optimal design of a parallel manipulator. Seeking suitable kinematic indices has always been an important and challenging issue for parallel manipulators. So far, there have been extensive studies in this field, but few existing indices meet all the requirements of being simple, intuitive, and universal. To address this problem, the matrix orthogonal degree is adopted, and generalized transmission indices that can evaluate the motion/force transmissibility of fully parallel manipulators are proposed. Transmission performance analysis of typical branches, end effectors, and parallel manipulators is given to illustrate the proposed indices and analysis methodology. Simulation and analysis results reveal that the proposed transmission indices possess significant advantages: they are normalized and finite (ranging from 0 to 1), dimensionally homogeneous, frame-free, intuitive, and easy to calculate. Moreover, the proposed indices indicate the good-transmission region and its proximity to singularity with better resolution than the traditional local conditioning index, providing a novel tool for kinematic analysis and optimal design of fully parallel manipulators.

  12. Eco-friendly synthesis of gelatin-capped bimetallic Au-Ag nanoparticles for chemiluminescence detection of anticancer raloxifene hydrochloride.

    PubMed

    Alarfaj, Nawal A; El-Tohamy, Maha F

    2016-09-01

    This study described the utility of green analytical chemistry in the synthesis of gelatin-capped silver, gold and bimetallic gold-silver nanoparticles (NPs). The preparation of nanoparticles was based on the reaction of silver nitrate or chloroauric acid with a 1.0 wt% aqueous gelatin solution at 50°C. The gelatin-capped silver, gold and bimetallic NPs were characterized using transmission electron microscopy, UV-vis, X-ray diffraction and Fourier transform infrared spectroscopy, and were used to enhance a sensitive sequential injection chemiluminescence luminol-potassium ferricyanide system for determination of the anticancer drug raloxifene hydrochloride. The developed method is eco-friendly and sensitive for chemiluminescence detection of the selected drug in its bulk powder, pharmaceutical injections and biosamples. After optimizing the conditions, a linear relationship in the range of 1.0 × 10(-9) to 1.0 × 10(-1) mol/L was obtained with a limit of detection of 5.0 × 10(-10) mol/L and a limit of quantification of 1.0 × 10(-9) mol/L. Statistical treatment and method validation were performed based on ICH guidelines. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Grouper: A Compact, Streamable Triangle Mesh Data Structure.

    PubMed

    Luffel, Mark; Gurung, Topraj; Lindstrom, Peter; Rossignac, Jarek

    2013-05-08

    We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle, Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access.

  14. Creating Ruddlesden-Popper phases by hybrid molecular beam epitaxy

    NASA Astrophysics Data System (ADS)

    Haislmaier, Ryan C.; Stone, Greg; Alem, Nasim; Engel-Herbert, Roman

    2016-07-01

    The synthesis of a 50 unit cell thick n = 4 Srn+1TinO3n+1 (Sr5Ti4O13) Ruddlesden-Popper (RP) phase film is demonstrated by sequentially depositing SrO and TiO2 layers in an alternating fashion using hybrid molecular beam epitaxy (MBE), where Ti was supplied using titanium tetraisopropoxide (TTIP). A detailed calibration procedure is outlined for determining the shuttering times needed to deposit SrO and TiO2 layers with precise monolayer doses, using in-situ reflection high energy electron diffraction (RHEED) as feedback. Using optimized Sr and TTIP shuttering times, a fully automated growth of the n = 4 RP phase was carried out over a period of >4.5 h. Very stable RHEED intensity oscillations were observed over the entire growth period. Structural characterization by X-ray diffraction and high resolution transmission electron microscopy revealed that a constant periodicity of four SrTiO3 perovskite unit cell blocks separating the double SrO rocksalt layer was maintained throughout the entire film thickness, with very few planar faults oriented perpendicular to the growth-front direction. These results illustrate that hybrid MBE is capable of layer-by-layer growth with atomic-level precision and excellent flux stability.

  15. Improved Analytical Sensitivity of Lateral Flow Assay using Sponge for HBV Nucleic Acid Detection.

    PubMed

    Tang, Ruihua; Yang, Hui; Gong, Yan; Liu, Zhi; Li, XiuJun; Wen, Ting; Qu, ZhiGuo; Zhang, Sufeng; Mei, Qibing; Xu, Feng

    2017-05-02

    Hepatitis B virus (HBV) infection is a serious public health problem; the virus can be transmitted through various routes (e.g., blood donation) and cause hepatitis, liver cirrhosis and liver cancer. Hence, diagnostic screening of high-risk HBV patients along these transmission routes is necessary. Nowadays, protein-based technologies are used for HBV testing, but they involve the issues of large sample volume, antibody instability and poor specificity. Nucleic acid hybridization-based lateral flow assay (LFA) holds great potential to address these limitations due to its low-cost, rapid and simple features, but the poor analytical sensitivity of LFA restricts its application. In this study, we developed a low-cost, simple and easy-to-use method to improve analytical sensitivity by integrating a sponge shunt into the LFA to decrease the fluid flow rate. The thickness, length and hydrophobicity of the sponge shunt were sequentially optimized, achieving 10-fold signal enhancement in nucleic acid testing of HBV as compared to the unmodified LFA. The enhancement was further confirmed using HBV clinical samples, where we achieved a detection limit of 10(3) copies/ml as compared to 10(4) copies/ml for the unmodified LFA. The improved LFA holds great potential for disease diagnostics, food safety control and environmental monitoring at the point of care.

  16. Autonomous sample switcher for Mössbauer spectroscopy

    NASA Astrophysics Data System (ADS)

    López, J. H.; Restrepo, J.; Barrero, C. A.; Tobón, J. E.; Ramírez, L. F.; Jaramillo, J.

    2017-11-01

    In this work we show the design and implementation of an autonomous sample-switcher device to be used as part of the experimental setup in transmission Mössbauer spectroscopy, which can be extended to other spectroscopic techniques employing radioactive sources. The changer is intended to minimize radiation exposure times for users and technical staff and to optimize the use of radioactive sources without compromising the resolution of measurements or spectra. This proposal is motivated firstly by the potential hazards arising from the use of radioactive sources and secondly by the expensive costs involved and, in some cases, the short lifetimes, where a suitable and optimal use of the sources is crucial. The switcher system includes a PIC microcontroller for simple tasks involving sample displacement and positioning, in addition to a virtual instrument developed using LabView. The shuffling of the samples proceeds sequentially, based on the number of counts and the signal-to-noise ratio as selection criteria, whereas the virtual instrument allows remote monitoring of the status of the spectra from a PC via the Internet and supports control decisions. As an example, we show a case study involving a series of akaganeite samples. An efficiency and economic analysis is finally presented and discussed.

  17. [Design and optimization of wireless power and data transmission for visual prosthesis].

    PubMed

    Lei, Xuping; Wu, Kaijie; Zhao, Lei; Chai, Xinyu

    2013-11-01

    Boosting the spatial resolution of visual prostheses is an effective method to improve implant subjects' visual perception. However, the power consumption of visual implants rises greatly with the increasing number of implanted electrodes. In response to this trend, visual prostheses need high-efficiency wireless power transmission and high-speed data transmission. This paper reviews current research progress on wireless power and data transmission for visual prostheses, analyzes the relevant principles and requirements, and introduces design methods for power and data transmission.

  18. Complementary aspects of spatial resolution and signal-to-noise ratio in computational imaging

    NASA Astrophysics Data System (ADS)

    Gureyev, T. E.; Paganin, D. M.; Kozlov, A.; Nesterets, Ya. I.; Quiney, H. M.

    2018-05-01

    A generic computational imaging setup is considered which assumes sequential illumination of a semitransparent object by an arbitrary set of structured coherent illumination patterns. For each incident illumination pattern, all transmitted light is collected by a photon-counting bucket (single-pixel) detector. The transmission coefficients measured in this way are then used to reconstruct the spatial distribution of the object's projected transmission. It is demonstrated that the square of the spatial resolution of such a setup is usually equal to the ratio of the image area to the number of linearly independent illumination patterns. If the noise in the measured transmission coefficients is dominated by photon shot noise, then the ratio of the square of the mean signal to the noise variance is proportional to the ratio of the mean number of registered photons to the number of illumination patterns. The signal-to-noise ratio in a reconstructed transmission distribution is always lower if the illumination patterns are nonorthogonal, because of spatial correlations in the measured data. Examples of imaging methods relevant to the presented analysis include conventional imaging with a pixelated detector, computational ghost imaging, compressive sensing, super-resolution imaging, and computed tomography.
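
The bucket-detector reconstruction described above can be illustrated with orthogonal (Hadamard) illumination patterns: for linearly independent orthogonal patterns, the object's transmission is recovered by back-projecting the measured coefficients. This is a sketch, not the paper's derivation; the ±1 pattern entries are an idealization of differential illumination, and the 1-D "image" is illustrative.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def bucket_measure(pattern, obj):
    """Single-pixel measurement: total light transmitted under one pattern."""
    return sum(p * o for p, o in zip(pattern, obj))

def reconstruct(patterns, measurements):
    """With orthogonal patterns, back-projection recovers the object exactly."""
    n = len(patterns)
    return [sum(patterns[k][i] * measurements[k] for k in range(n)) / n
            for i in range(n)]

obj = [0.0, 0.25, 0.5, 1.0, 1.0, 0.5, 0.25, 0.0]  # projected transmission
H = hadamard(8)
meas = [bucket_measure(row, obj) for row in H]
rec = reconstruct(H, meas)
```

With 8 patterns and 8 pixels the system is fully determined, matching the abstract's statement that the squared resolution equals image area over the number of independent patterns; nonorthogonal patterns would instead require solving a correlated linear system.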

  19. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation, and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds application in image database browsing, teleconferencing, medicine, and other areas. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color-mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color-mapped PT scheme is developed to evaluate its performance. Results of simulation using several test images are presented.
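
A generic PT idea, bit-plane transmission of the index array, can be sketched as below. Note that this is not the thesis's scheme: numeric closeness of indices implies visual closeness only if the colormap is ordered, which is exactly the kind of unique problem the thesis addresses; the sketch only shows the coarse-to-fine mechanics.

```python
def encode_progressive(indices, bits=8):
    """Split colormap indices into bit-planes, most significant plane first."""
    return [[(v >> b) & 1 for v in indices] for b in range(bits - 1, -1, -1)]

def decode_progressive(planes, bits=8):
    """Rebuild an approximation from however many planes have arrived so far."""
    approx = [0] * len(planes[0])
    for k, plane in enumerate(planes):
        b = bits - 1 - k
        for i, bit in enumerate(plane):
            approx[i] |= bit << b
    missing = bits - len(planes)
    if missing > 0:  # fill unknown low bits with the midpoint to reduce bias
        approx = [v | (1 << (missing - 1)) for v in approx]
    return approx

img = [0, 37, 118, 200, 255]              # colormap indices of five pixels
planes = encode_progressive(img)
coarse = decode_progressive(planes[:3])   # user's view after 3 of 8 planes
exact = decode_progressive(planes)        # lossless once all planes arrive
```

Each received plane halves the worst-case index error, so the viewer sees a usable approximation after a fraction of the data has arrived.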

  20. Guided particle swarm optimization method to solve general nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Abdelhalim, Alyaa; Nakata, Kazuhide; El-Alem, Mahmoud; Eltawil, Amr

    2018-04-01

    The development of hybrid algorithms is becoming an important topic in the global optimization research area. This article proposes a new technique in hybridizing the particle swarm optimization (PSO) algorithm and the Nelder-Mead (NM) simplex search algorithm to solve general nonlinear unconstrained optimization problems. Unlike traditional hybrid methods, the proposed method hybridizes the NM algorithm inside the PSO to improve the velocities and positions of the particles iteratively. The new hybridization considers the PSO algorithm and NM algorithm as one heuristic, not in a sequential or hierarchical manner. The NM algorithm is applied to improve the initial random solution of the PSO algorithm and iteratively in every step to improve the overall performance of the method. The performance of the proposed method was tested over 20 optimization test functions with varying dimensions. Comprehensive comparisons with other methods in the literature indicate that the proposed solution method is promising and competitive.
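
The hybridization can be sketched as a PSO loop whose best personal-best positions seed a Nelder-Mead simplex step inside every iteration, rather than NM running after PSO finishes. This is a simplified illustration of the idea, not the authors' algorithm: the coefficients, the single NM step per iteration (reflection/expansion/contraction, no shrink), and the test objective are all assumptions.

```python
import random

def sphere(p):  # test objective: global minimum 0 at the origin
    return sum(x * x for x in p)

def nm_step(simplex, f):
    """One Nelder-Mead reflection/expansion/contraction step."""
    simplex = sorted(simplex, key=f)
    best, worst = simplex[0], simplex[-1]
    centroid = [sum(p[i] for p in simplex[:-1]) / (len(simplex) - 1)
                for i in range(len(best))]
    refl = [2 * c - w for c, w in zip(centroid, worst)]
    if f(refl) < f(best):
        exp = [3 * c - 2 * w for c, w in zip(centroid, worst)]
        simplex[-1] = exp if f(exp) < f(refl) else refl
    elif f(refl) < f(worst):
        simplex[-1] = refl
    else:
        simplex[-1] = [(c + w) / 2 for c, w in zip(centroid, worst)]
    return sorted(simplex, key=f)

def pso_nm(f, dim=2, swarm=20, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        # NM inside PSO: one simplex step on the (dim + 1) best personal bests
        simplex = nm_step([p[:] for p in sorted(pbest, key=f)[:dim + 1]], f)
        if f(simplex[0]) < f(gbest):
            gbest = simplex[0][:]
        best_pb = min(pbest, key=f)
        if f(best_pb) < f(gbest):
            gbest = best_pb[:]
    return gbest

best = pso_nm(sphere)
```

The design point this captures is the one the abstract emphasizes: the simplex search is embedded in every PSO iteration as one heuristic, not chained after it.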

  1. Optimal control of diarrhea transmission in a flood evacuation zone

    NASA Astrophysics Data System (ADS)

    Erwina, N.; Aldila, D.; Soewono, E.

    2014-03-01

    Evacuation of residents and diarrhea outbreaks in evacuation zones have become serious problems that frequently occur during flood periods. Limited clean water supply and infrastructure in the evacuation zone contribute to a critical spread of diarrhea. Transmission of diarrheal disease can be reduced by controlling the clean water supply and treating diarrhea patients properly. These treatments require a significant amount of budget, which may not be available in the field. In this paper, transmission of diarrheal disease in an evacuation zone, modeled with an SIRS model, is presented as an optimal control problem with clean water supply and the rate of treated patients as input controls. Existence and stability of equilibrium points and a sensitivity analysis are investigated analytically for constant input controls. The optimal clean water supply and rate of treatment are found using optimal control techniques. Optimal results for the transmission of diarrhea and the corresponding controls during the period of observation are simulated numerically. The optimal result shows that transmission of diarrheal disease can be controlled with a proper combination of water supply and rate of treatment within the allowable budget.
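
The effect of the two controls can be sketched on a forward-Euler SIRS run, with clean-water supply scaling down the transmission rate and treatment adding to the recovery rate. All parameter values below are illustrative assumptions, not the paper's calibrated values, and the constant controls stand in for the time-varying optimal controls.

```python
def simulate_sirs(beta, gamma, xi, u_water, u_treat, days=200, dt=0.1):
    """Forward-Euler SIRS run. u_water in [0,1) scales down transmission
    (clean water supply); u_treat adds to the recovery rate (treatment)."""
    S, I, R = 0.99, 0.01, 0.0
    peak = I
    for _ in range(int(days / dt)):
        infection = (1 - u_water) * beta * S * I
        recovery = (gamma + u_treat) * I
        waning = xi * R                 # recovered individuals lose immunity
        S += dt * (-infection + waning)
        I += dt * (infection - recovery)
        R += dt * (recovery - waning)
        peak = max(peak, I)
    return peak

peak_uncontrolled = simulate_sirs(0.5, 0.1, 0.05, 0.0, 0.0)
peak_controlled = simulate_sirs(0.5, 0.1, 0.05, 0.4, 0.1)
```

With these numbers the controls cut the effective reproduction number from 5 to 1.5, which is why the controlled epidemic peak is far lower; the paper's optimal control problem chooses u_water and u_treat over time to achieve such reductions within budget.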

  2. Estimating West Nile virus transmission period in Pennsylvania using an optimized degree-day model.

    PubMed

    Chen, Shi; Blanford, Justine I; Fleischer, Shelby J; Hutchinson, Michael; Saunders, Michael C; Thomas, Matthew B

    2013-07-01

    We provide calibrated degree-day models to predict potential West Nile virus (WNV) transmission periods in Pennsylvania. We begin by following the standard approach of treating the degree-days necessary for the virus to complete the extrinsic incubation period (EIP), and mosquito longevity, as constants. This approach failed to adequately explain virus transmission periods based on mosquito surveillance data from 4 locations (Harrisburg, Philadelphia, Pittsburgh, and Williamsport) in Pennsylvania from 2002 to 2008. Allowing the EIP and adult longevity to vary across time and space improved model fit substantially. The calibrated models increase the ability to successfully predict the WNV transmission period in Pennsylvania to 70-80%, compared to less than 30% for the uncalibrated model. Model validation showed the optimized models to be robust in 3 of the locations, although still showing errors for Philadelphia. These models and methods could provide useful tools to predict the WNV transmission period from surveillance datasets, assess potential WNV risk, and inform mosquito surveillance strategies.
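
The degree-day bookkeeping behind such models is straightforward: accumulate each day's temperature excess above a developmental threshold until the EIP requirement is met. The threshold (14.3 °C) and requirement (109 degree-days) used below are commonly cited values for WNV in Culex mosquitoes and serve only as illustration; the paper's point is precisely that treating them as fixed constants fits the data poorly.

```python
def degree_days(daily_mean_temps, base_temp):
    """Daily degree-days: temperature excess above the developmental base."""
    return [max(0.0, t - base_temp) for t in daily_mean_temps]

def eip_completion_day(daily_mean_temps, base_temp=14.3, required_dd=109.0):
    """Return the 0-indexed day on which cumulative degree-days first reach
    the EIP requirement, or None if it is never reached."""
    total = 0.0
    for day, dd in enumerate(degree_days(daily_mean_temps, base_temp)):
        total += dd
        if total >= required_dd:
            return day
    return None

# A constant 25 C spell accumulates 10.7 degree-days per day
warm_summer = [25.0] * 30
cool_spring = [10.0] * 30   # below the base temperature: EIP never completes
```

Calibration in the paper amounts to letting base_temp and required_dd (and adult longevity) vary by site and season instead of holding them fixed.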

  3. Research on UAV Intelligent Obstacle Avoidance Technology During Inspection of Transmission Line

    NASA Astrophysics Data System (ADS)

    Wei, Chuanhu; Zhang, Fei; Yin, Chaoyuan; Liu, Yue; Liu, Liang; Li, Zongyu; Wang, Wanguo

    Autonomous obstacle avoidance of unmanned aerial vehicles (hereinafter referred to as UAVs) during electric power line inspection is of great significance for the operational safety and economy of UAV-based intelligent inspection systems for transmission lines. In this paper, the principles of UAV inspection obstacle avoidance technology for transmission lines are introduced. A UAV inspection obstacle avoidance technique based on a particle swarm global optimization algorithm is proposed after common obstacle avoidance technologies are studied. A simulation comparison is carried out against a traditional UAV inspection obstacle avoidance technique that adopts the artificial potential field method. Results show that the UAV inspection strategy based on the particle swarm optimization algorithm, adopted in this paper, is markedly better than the strategy based on the artificial potential field method in terms of obstacle avoidance effectiveness and the ability to return to the preset inspection track after passing an obstacle. An effective method is thus provided for UAV inspection obstacle avoidance on transmission lines.

  4. Constrained simultaneous multi-state reconfigurable wing structure configuration optimization

    NASA Astrophysics Data System (ADS)

    Snyder, Matthew

    A reconfigurable aircraft is capable of in-flight shape change to increase mission performance or provide multi-mission capability. Reconfigurability has always been a consideration in aircraft design, from the Wright Flyer to the F-14, and most recently the Lockheed-Martin folding wing concept. The Wright Flyer used wing-warping for roll control, the F-14 had a variable-sweep wing to improve supersonic flight capabilities, and the Lockheed-Martin folding wing demonstrated radical in-flight shape change. This dissertation will examine two questions that aircraft reconfigurability raises, especially as reconfiguration increases in complexity. First, is there an efficient method to develop a light weight structure which supports all the loads generated by each configuration? Second, can this method include the capability to propose a sub-structure topology that weighs less than other considered designs? The first question requires a method that will design and optimize multiple configurations of a reconfigurable aerostructure. Three options exist; this dissertation will show that one is better than the others. Simultaneous optimization considers all configurations and their respective load cases and constraints at the same time. Another method is sequential optimization, which considers each configuration of the vehicle one after the other, with the optimum design variable values from the first configuration becoming the lower bounds for subsequent configurations. This process repeats for each considered configuration and the lower bounds update as necessary. The third approach is aggregate combination: this method keeps the thickness or area of each member for the most critical configuration, the configuration that requires the largest cross-section. This research will show that simultaneous optimization produces a lower weight and different topology for the considered structures when compared to the sequential and aggregate techniques. 
To answer the second question, the developed optimization algorithm combines simultaneous optimization with a new method for determining the optimum location of the structural members of the sub-structure. The method proposed here considers an over-populated structural model, one in which there are initially more members than necessary. Using a unique iterative process, the optimization algorithm removes members from the design if they do not carry enough load to justify their presence. The initial set of members includes ribs, spars and a series of cross-members that diagonally connect the ribs and spars. The final result is a different structure, which is lower weight than one developed from sequential optimization or aggregate combination, and suggests the primary load paths. Chapter 1 contains background information on reconfigurable aircraft and a description of the new reconfigurable air vehicle being considered by the Air Vehicles Directorate of the Air Force Research Laboratory. This vehicle serves as a platform to test the proposed optimization process. Chapters 2 and 3 overview the optimization method and Chapter 4 provides some background analysis which is unique to this particular reconfigurable air vehicle. Chapter 5 contains the results of the optimizations and demonstrates how changing constraints or initial configuration impacts the final weight and topology of the wing structure. The final chapter contains conclusions and comments on some future work which would further enhance the effectiveness of the simultaneous reconfigurable structural topology optimization process developed and used in this dissertation.

  5. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.

  6. Optimizing ITO for incorporation into multilayer thin film stacks for visible and NIR applications

    NASA Astrophysics Data System (ADS)

    Roschuk, Tyler; Taddeo, David; Levita, Zachary; Morrish, Alan; Brown, Douglas

    2017-05-01

    Indium tin oxide (ITO) is the industry standard for transparent conductive coatings. As such, the common metrics for characterizing ITO performance are its transmission and conductivity/resistivity (or sheet resistance). In spite of its recurrent use in a broad range of technological applications, the performance of ITO itself is highly variable, depending on the method of deposition and chamber conditions, and a single well-defined set of properties does not exist. This poses particular challenges for the incorporation of ITO in complex optical multilayer stacks while trying to maintain electronic performance. Complicating matters further, ITO suffers increased absorption losses in the NIR, making the ability to incorporate ITO into anti-reflective stacks crucial to optimizing overall optical performance when ITO is used in real-world applications. In this work, we discuss the use of ITO in multilayer thin film stacks for applications from the visible to the NIR. In the NIR, we discuss methods to analyze and fine-tune the film properties to account for, and minimize, losses due to absorption and to optimize the overall transmission of the multilayer stacks. The ability to obtain high transmission while maintaining good electrical properties, specifically low resistivity, is demonstrated. Trade-offs between transmission and conductivity with variation of process parameters are discussed in light of optimizing the performance of the final optical stack and not just the ITO film itself.

  7. An efficiency study of the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.; Wu, Zhiqi; Sobieski, Jaroslaw

    1995-01-01

    The efficiency of the Simultaneous Analysis and Design (SAND) approach to the minimum-weight optimization of structural systems subject to strength and displacement constraints, as well as size side constraints, is investigated. SAND allows the optimization to take place in one single operation, as opposed to the more traditional and sequential Nested Analysis and Design (NAND) method, where analyses and optimizations alternate. Thus, SAND has the advantage that the stiffness matrix is never factored during the optimization, retaining its original sparsity. One of SAND's disadvantages is the increase in the number of design variables and in the associated number of constraint gradient evaluations. If SAND is to be an acceptable player in the optimization field, it is essential to investigate the efficiency of the method and to present a possible cure for any inherent deficiencies.

  8. Backward bifurcation and optimal control of Plasmodium Knowlesi malaria

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Hasan, Yahya Abu; Abdullah, Farah Aini

    2014-07-01

    A deterministic model for the transmission dynamics of Plasmodium knowlesi malaria with direct transmission is developed. The model is analyzed using dynamical systems techniques, and it shows that backward bifurcation occurs for some ranges of parameters. The model is extended to assess the impact of time-dependent preventive measures (biological and chemical control) against the mosquitoes, vaccination for susceptible humans, and treatment for infected humans. The existence of an optimal control is established analytically by the use of optimal control theory. Numerical simulations of the problem suggest that applying the four control measures can effectively reduce, if not eliminate, the spread of Plasmodium knowlesi in a community.

  9. Rapid design and optimization of low-thrust rendezvous/interception trajectory for asteroid deflection missions

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Zhu, Yongsheng; Wang, Yukai

    2014-02-01

    Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered as one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and genetic algorithm (GA) are adopted to perform preliminary design, which provides a reasonable initial guess for subsequent accurate optimization. Secondly, Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.

  10. Optical design of system for a lightship

    NASA Astrophysics Data System (ADS)

    Chirkov, M. A.; Tsyganok, E. A.

    2017-06-01

    This article presents the optical design of an illuminating optical system for a lightship using a freeform surface. It describes an algorithm for designing a side-emitting lens for a point source using the Freeform Z surface in Zemax non-sequential mode, the optimization of the calculated results, and the testing of the optical system with a real diode.

  11. Sequential ultrasound and low-temperature thermal pretreatment: Process optimization and influence on sewage sludge solubilization, enzyme activity and anaerobic digestion.

    PubMed

    Neumann, Patricio; González, Zenón; Vidal, Gladys

    2017-06-01

    The influence of sequential ultrasound and low-temperature (55 °C) thermal pretreatment on sewage sludge solubilization, enzyme activity and anaerobic digestion was assessed. The pretreatment led to significant increases of 427-1030% and 230-674% in the soluble concentrations of carbohydrates and proteins, respectively, and 1.6-4.3 times higher enzymatic activities in the soluble phase of the sludge. Optimal conditions for chemical oxygen demand solubilization were determined at 59.3 kg/L total solids (TS) concentration, 30,500 kJ/kg TS specific energy and 13 h thermal treatment time using response surface methodology. The methane yield after pretreatment increased up to 50% compared with the raw sewage sludge, whereas the maximum methane production rate was 1.3-1.8 times higher. An energy assessment showed that the increased methane yield compensated for energy consumption only under conditions where 500 kJ/kg TS specific energy was used for ultrasound, with up to 24% higher electricity recovery.
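
The response-surface step can be illustrated in one factor: fit a quadratic through three design points and take its stationary point as the predicted optimum. The study used a multi-factor design; the one-factor reduction and the synthetic response peaking at 13 h are assumptions for illustration only.

```python
def quadratic_optimum(xs, ys):
    """Fit y = a*x^2 + b*x + c exactly through three design points and
    return the stationary point x* = -b / (2a)."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1 - y2 * (x0 + x1) / d2
    return -b / (2 * a)

# synthetic solubilization response with a maximum at 13 h treatment time
def response(t_hours):
    return 100.0 - (t_hours - 13.0) ** 2

t_design = [5.0, 12.0, 20.0]            # low / center / high design points
t_opt = quadratic_optimum(t_design, [response(t) for t in t_design])
```

Real response surface methodology fits the quadratic by least squares over a designed experiment and checks curvature before trusting the stationary point; this sketch shows only the interpolation-and-optimize core.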

  12. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to protect the streams effectively. A novel technique for the adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for the transmission of H.264/AVC streams.
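
The flavor of the dynamic-programming rate allocation can be sketched with a toy model: each slice group has an importance weight, and each extra parity unit assigned to it multiplies its residual loss probability by the packet-loss rate. This is a deliberate simplification; the actual scheme allocates Reed-Solomon code rates against expected distortion, which this model only gestures at.

```python
def optimal_uep(importance, loss_prob, budget):
    """DP over slice groups: for each total parity spend, keep the cheapest
    expected distortion sum(importance[g] * loss_prob**(parity[g] + 1)).
    fail(k) = loss_prob**(k+1) is a toy residual-loss model, not RS analysis."""
    dp = {0: (0.0, [])}                      # spent -> (distortion, allocation)
    for w in importance:
        new = {}
        for spent, (cost, alloc) in dp.items():
            for k in range(budget - spent + 1):
                s = spent + k
                c = cost + w * loss_prob ** (k + 1)
                if s not in new or c < new[s][0]:
                    new[s] = (c, alloc + [k])
        dp = new
    return min(dp.values(), key=lambda t: t[0])

# three slice groups of decreasing importance, 10% loss, 4 parity units
cost, alloc = optimal_uep([10.0, 3.0, 1.0], 0.1, 4)
```

As expected for unequal error protection, the DP concentrates parity on the most important slice group while still covering the others.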

  13. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size grows. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. 
This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
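
The dual-decomposition idea used above can be sketched on a toy two-flow problem sharing one capacity constraint: each subproblem minimizes its own delay cost plus a Lagrangian price on the shared resource, and a subgradient step updates the price. All numbers are illustrative; the dissertation's subproblems are integer programs per route, not small table lookups.

```python
def solve_subproblem(f, lam):
    """Each flow independently trades its delay cost f[x] against the
    price lam on x units of the shared en-route capacity."""
    return min(range(len(f)), key=lambda x: f[x] + lam * x)

def dual_decomposition(costs, capacity, steps=100):
    """Subgradient ascent on the dual of:
       min sum_i f_i(x_i)  subject to  sum_i x_i <= capacity."""
    lam, xs = 0.0, [0] * len(costs)
    for k in range(1, steps + 1):
        xs = [solve_subproblem(f, lam) for f in costs]
        # diminishing step keeps the integer subproblems from oscillating
        lam = max(0.0, lam + (0.5 / k) * (sum(xs) - capacity))
    return xs, lam

# two flows; using more capacity units lowers a flow's delay cost
delay_costs = [[10, 4, 1, 0], [6, 3, 1, 0]]
xs, lam = dual_decomposition(delay_costs, capacity=3)
```

The subproblems never see each other, only the price lam, which is what makes the decomposition parallelizable across threads or MapReduce workers as the text describes.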

  14. Critical role of bevacizumab scheduling in combination with pre-surgical chemo-radiotherapy in MRI-defined high-risk locally advanced rectal cancer: results of the branch trial

    PubMed Central

    Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R.; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo

    2015-01-01

    Background We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. Patients and methods This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycles of chemotherapy (sequential schedule B). The primary end point was the pathological complete tumor regression (TRG1) rate. Results Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 out of 16 patients) was statistically inconsistent with the hypothesized activity (30%) being tested. Conversely, the end point was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥3 toxicity with both schedules, but it was less pronounced with the sequential than the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up the probability of PFS and OS was 80% (95% CI 66%-89%) and 85% (95% CI 69%-93%), respectively, for the sequential schedule. Conclusions These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC. PMID:26320185

  15. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by the significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial increase in the complexity of the optimization problems that can be handled efficiently.

  16. Optimal packing for cascaded regenerative transmission based on phase sensitive amplifiers.

    PubMed

    Sorokina, Mariia; Sygletos, Stylianos; Ellis, Andrew D; Turitsyn, Sergei

    2013-12-16

    We investigate the transmission performance of advanced modulation formats in nonlinear regenerative channels based on cascaded phase sensitive amplifiers. We identify the impact of amplitude and phase noise dynamics along the transmission line and show that after a cascade of regenerators, densely packed single ring PSK constellations outperform multi-ring constellations. The results of this study will greatly simplify the design of future nonlinear regenerative channels for ultra-high capacity transmission.

  17. Optimizing Tactics for Use of the U.S. Antiviral Strategic National Stockpile for Pandemic Influenza

    PubMed Central

    Dimitrov, Nedialko B.; Goll, Sebastian; Hupert, Nathaniel; Pourbohloul, Babak; Meyers, Lauren Ancel

    2011-01-01

    In 2009, public health agencies across the globe worked to mitigate the impact of the swine-origin influenza A (pH1N1) virus. These efforts included intensified surveillance, social distancing, hygiene measures, and the targeted use of antiviral medications to prevent infection (prophylaxis). In addition, aggressive antiviral treatment was recommended for certain patient subgroups to reduce the severity and duration of symptoms. To assist States and other localities in meeting these needs, the U.S. Government distributed a quarter of the antiviral medications in the Strategic National Stockpile within weeks of the pandemic's start. However, there are no quantitative models guiding the geo-temporal distribution of the remainder of the Stockpile in relation to pandemic spread or severity. We present a tactical optimization model for distributing this stockpile for treatment of infected cases during the early stages of a pandemic like 2009 pH1N1, prior to the wide availability of a strain-specific vaccine. Our optimization method efficiently searches large sets of intervention strategies applied to a stochastic network model of pandemic influenza transmission within and among U.S. cities. The resulting optimized strategies depend on the transmissibility of the virus and postulated rates of antiviral uptake and wastage (through misallocation or loss). Our results suggest that an aggressive community-based antiviral treatment strategy involving early, widespread, pro-rata distribution of antivirals to States can contribute to slowing the transmission of mildly transmissible strains, like pH1N1. For more highly transmissible strains, outcomes of antiviral use are more heavily impacted by choice of distribution intervals, quantities per shipment, and timing of shipments in relation to pandemic spread. 
This study supports previous modeling results suggesting that appropriate antiviral treatment may be an effective mitigation strategy during the early stages of future influenza pandemics, increasing the need for systematic efforts to optimize distribution strategies and provide tactical guidance for public health policy-makers. PMID:21283514

  18. Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damiani, Rick; Wendt, Fabian; Musial, Walter

    The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  19. A Comparison of Right Unilateral and Sequential Bilateral Repetitive Transcranial Magnetic Stimulation for Major Depression: A Naturalistic Clinical Australian Study.

    PubMed

    Galletly, Cherrie A; Carnell, Benjamin L; Clarke, Patrick; Gill, Shane

    2017-03-01

    A great deal of research has established the efficacy of repetitive transcranial magnetic stimulation (rTMS) in the treatment of depression. However, questions remain about the optimal method to deliver treatment. One area requiring consideration is the difference in efficacy between bilateral and unilateral treatment protocols. This study aimed to compare the effectiveness of sequential bilateral rTMS and right unilateral rTMS. A total of 135 patients participated in the study, receiving either bilateral rTMS (N = 57) or right unilateral rTMS (N = 78). Treatment response was assessed using the Hamilton depression rating scale. Sequential bilateral rTMS had a higher response rate than right unilateral (43.9% vs 30.8%), but this difference was not statistically significant. This was also the case for remission rates (33.3% vs 21.8%, respectively). Controlling for pretreatment severity of depression, the results did not indicate a significant difference between the protocols with regard to posttreatment Hamilton depression rating scale scores. The current study found no statistically significant differences in response and remission rates between sequential bilateral rTMS and right unilateral rTMS. Given the shorter treatment time and the greater safety and tolerability of right unilateral rTMS, this may be a better choice than bilateral treatment in clinical settings.

  20. A Multi-Attribute Pheromone Ant Secure Routing Algorithm Based on Reputation Value for Sensor Networks

    PubMed Central

    Zhang, Lin; Yin, Na; Fu, Xiong; Lin, Qiaomin; Wang, Ruchuan

    2017-01-01

    With the development of wireless sensor networks, certain network problems have become more prominent, such as limited node resources, low data transmission security, and short network life cycles. To solve these problems effectively, it is important to design an efficient and trusted secure routing algorithm for wireless sensor networks. Traditional ant-colony optimization algorithms exhibit only local convergence, without considering the residual energy of the nodes and many other problems. This paper introduces a multi-attribute pheromone ant secure routing algorithm based on reputation value (MPASR). This algorithm can reduce the energy consumption of a network and improve the reliability of the nodes’ reputations by filtering nodes with higher coincidence rates and improving the method used to update the nodes’ communication behaviors. At the same time, the node reputation value, the residual node energy and the transmission delay are combined to formulate a synthetic pheromone that is used in the formula for calculating the random proportion rule in traditional ant-colony optimization to select the optimal data transmission path. Simulation results show that the improved algorithm can increase both the security of data transmission and the quality of routing service. PMID:28282894
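    The random-proportional path choice over a synthetic pheromone can be sketched as follows. The weighted-product form and the exponents are assumptions for illustration, not the exact MPASR formulas:

```python
# Illustrative sketch of synthetic-pheromone next-hop selection in the
# spirit of MPASR (functional form and weights are assumptions):
# reputation, residual energy and inverse delay are combined into one
# pheromone value, and an ant picks a hop by the random-proportional rule.
import random

def synthetic_pheromone(node, w_rep=0.4, w_energy=0.4, w_delay=0.2):
    # weighted product of reputation, residual energy, and inverse delay
    return (node["reputation"] ** w_rep
            * node["energy"] ** w_energy
            * (1.0 / node["delay"]) ** w_delay)

def choose_next_hop(candidates, rng=random):
    """ACO random-proportional rule over synthetic pheromone values."""
    weights = [synthetic_pheromone(c) for c in candidates]
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return cand
    return candidates[-1]

candidates = [
    {"name": "A", "reputation": 0.9, "energy": 0.9, "delay": 1.0},
    {"name": "B", "reputation": 0.3, "energy": 0.3, "delay": 5.0},
]
hop = choose_next_hop(candidates)
```

    With these illustrative numbers, node A (trusted, energetic, low-delay) receives roughly three times the selection weight of node B, so ants favor it without deterministically excluding B.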

  1. Design and control of a novel two-speed Uninterrupted Mechanical Transmission for electric vehicles

    NASA Astrophysics Data System (ADS)

    Fang, Shengnan; Song, Jian; Song, Haijun; Tai, Yuzhuo; Li, Fei; Sinh Nguyen, Truong

    2016-06-01

    Conventional all-electric vehicles (EVs) adopt a single-speed transmission because of its low cost and simple construction. However, with this type of driveline, the development of EV technology leads to growing performance requirements on the drive motor. Introducing a multi-speed or two-speed transmission to an EV offers the possibility of improving the efficiency of the whole powertrain. This paper presents an innovative two-speed Uninterrupted Mechanical Transmission (UMT), which consists of an epicyclic gearing system, a centrifugal clutch and a brake band, allowing seamless shifting between the two gears. In addition, the driver's intention is recognized by a control system based on a fuzzy logic controller (FLC) that uses the vehicle velocity and accelerator pedal position signals. The novel UMT shows better dynamic and comfort performance compared with an optimized AMT with the same gear ratios. A comparison between the control strategy with driver-intention recognition and the conventional two-parameter gear-shifting strategy is presented, and the simulation and analysis of the middle layer of the optimal gearshift control algorithm are detailed. The results indicate that the UMT adopting the FLC and optimal control method provides a significant improvement in energy efficiency, dynamic performance and shifting comfort for EVs.

  2. Effects of simultaneous and optimized sequential cardiac resynchronization therapy on myocardial oxidative metabolism and efficiency.

    PubMed

    Christenson, Stuart D; Chareonthaitawee, Panithaya; Burnes, John E; Hill, Michael R S; Kemp, Brad J; Khandheria, Bijoy K; Hayes, David L; Gibbons, Raymond J

    2008-02-01

    Cardiac resynchronization therapy (CRT) can improve left ventricular (LV) hemodynamics and function. Recent data suggest the energy cost of such improvement is favorable. The effects of sequential CRT on myocardial oxidative metabolism (MVO(2)) and efficiency have not been previously assessed. Eight patients with NYHA class III heart failure were studied 196 +/- 180 days after CRT implant. Dynamic [(11)C]acetate positron emission tomography (PET) and echocardiography were performed after 1 hour of: 1) AAI pacing, 2) simultaneous CRT, and 3) sequential CRT. MVO(2) was calculated using the monoexponential clearance rate of [(11)C]acetate (k(mono)). Myocardial efficiency was expressed in terms of the work metabolic index (WMI). P values represent overall significance from repeated measures analysis. Global LV and right ventricular (RV) MVO(2) were not significantly different between pacing modes, but the septal/lateral MVO(2) ratio differed significantly with the change in pacing mode (AAI pacing = 0.696 +/- 0.094 min(-1), simultaneous CRT = 0.975 +/- 0.143 min(-1), and sequential CRT = 0.938 +/- 0.189 min(-1); overall P = 0.001). Stroke volume index (SVI) (AAI pacing = 26.7 +/- 10.4 mL/m(2), simultaneous CRT = 30.6 +/- 11.2 mL/m(2), sequential CRT = 33.5 +/- 12.2 mL/m(2); overall P < 0.001) and WMI (AAI pacing = 3.29 +/- 1.34 mmHg*mL/m(2)*10(6), simultaneous CRT = 4.29 +/- 1.72 mmHg*mL/m(2)*10(6), sequential CRT = 4.79 +/- 1.92 mmHg*mL/m(2)*10(6); overall P = 0.002) also differed between pacing modes. Compared with simultaneous CRT, additional changes in septal/lateral MVO(2), SVI, and WMI with sequential CRT were not statistically significant on post hoc analysis. In this small selected population, CRT increases LV SVI without increasing MVO(2), resulting in improved myocardial efficiency. Additional improvements in LV work, oxidative metabolism, and efficiency from simultaneous to sequential CRT were not significant.

  3. Three-phase receiving coil of wireless power transmission system for gastrointestinal robot

    NASA Astrophysics Data System (ADS)

    Jia, Z. W.; Jiang, T.; Liu, Y.

    2017-11-01

    Power shortage is the bottleneck for the wide application of gastrointestinal (GI) robots. Owing to the limited volume and freely changing orientation of the receiving set in the GI tract, optimizing the receiving set is the key to improving the transmission efficiency of the wireless power transmission system. A new type of receiving set, similar to the winding of a three-phase asynchronous motor, is presented and compared with the original three-dimensional orthogonal coil. For a given volume and space utilization ratio, the three-phase coil and the three-dimensional orthogonal coil are optimized and compared. Both transmission efficiency and stability are analyzed and verified by in vitro experiments. Animal experiments show that the new receiving set can provide at least 420 mW of power in a volume of Φ11 × 13 mm with a uniformity of 78.3% for the GI robot.

  4. An inverse radiation model for optical determination of temperature and species concentration: Development and validation

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Modest, Michael F.; Fateev, Alexander; Clausen, Sønnik

    2015-01-01

    In this study, we present an inverse calculation model based on the Levenberg-Marquardt optimization method to reconstruct temperature and species concentration from measured line-of-sight spectral transmissivity data for homogeneous gaseous media. The high temperature gas property database HITEMP 2010 (Rothman et al. (2010) [1]), which contains line-by-line (LBL) information for several combustion gas species, such as CO2 and H2O, was used to predict gas spectral transmissivities. The model was validated by retrieving temperatures and species concentrations from experimental CO2 and H2O transmissivity measurements. Optimal wavenumber ranges for CO2 and H2O transmissivity measured across a wide range of temperatures and concentrations were determined according to the performance of inverse calculations. Results indicate that the inverse radiation model shows good feasibility for measurements of temperature and gas concentration.
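    The inverse procedure can be sketched with a toy absorption model in place of the HITEMP line-by-line data; the Gaussian band shape and its temperature-dependent width below are illustrative assumptions, not the paper's spectroscopy:

```python
# Sketch of the inverse method: recover (temperature, concentration)
# from line-of-sight spectral transmissivity via Levenberg-Marquardt.
# The Beer-Lambert-style band model is a toy stand-in for HITEMP data.
import numpy as np
from scipy.optimize import least_squares

wn = np.linspace(2200.0, 2400.0, 200)   # wavenumber grid, cm^-1 (assumed band)

def transmissivity(params, path_len=1.0):
    T, conc = params
    # hypothetical absorption band: depth set by concentration,
    # width broadening with temperature
    width = 40.0 * np.sqrt(T / 300.0)
    kappa = conc * np.exp(-((wn - 2300.0) / width) ** 2)
    return np.exp(-kappa * path_len)

measured = transmissivity((1200.0, 0.15))    # synthetic "measurement"

fit = least_squares(lambda p: transmissivity(p) - measured,
                    x0=(800.0, 0.05), method="lm")
```

    On noiseless synthetic data the fit recovers the generating temperature and concentration; with real measurements, the quality of the retrieval depends on choosing wavenumber ranges where the residuals are sensitive to both parameters, which is the selection problem the abstract addresses.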

  5. Optimal Capacity Proportion and Distribution Planning of Wind, Photovoltaic and Hydro Power in Bundled Transmission System

    NASA Astrophysics Data System (ADS)

    Ye, X.; Tang, Q.; Li, T.; Wang, Y. L.; Zhang, X.; Ye, S. Y.

    2017-05-01

    Bundled transmission systems combining wind, photovoltaic and hydro power are becoming common in Northwest and Southwest China. To make better use of the complementary characteristics of different power sources, the installed capacity proportions of wind, photovoltaic and hydro power, and their capacity distribution across integration nodes, are significant issues to be solved at the power system planning stage. An optimal capacity proportion and capacity distribution model for wind, photovoltaic and hydro power bundled transmission systems is proposed here, which considers the power output characteristics of power sources of different types and in different areas based on real operation data. The transmission capacity limit of the power grid is also considered. Simulation cases are tested on a real regional system in Southwest China for the planning level year 2020. The results verify the effectiveness of the proposed model.

  6. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
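    The conservative finite-volume update can be sketched in one dimension, with constant advection standing in for the pendulum vector field: the upwind flux conserves total probability exactly, and the CFL-limited time step keeps the density non-negative, as the abstract describes.

```python
# 1D finite-volume sketch of probability transport under a continuity
# equation (constant advection replaces the pendulum dynamics).
import numpy as np

n, L = 200, 2.0 * np.pi
dx = L / n
x = (np.arange(n) + 0.5) * dx
v = 1.0                      # constant advection velocity (assumed)
dt = 0.8 * dx / abs(v)       # CFL condition: |v| * dt / dx <= 1

p = np.exp(-((x - np.pi) / 0.3) ** 2)
p /= p.sum() * dx            # normalize to a probability density

for _ in range(100):
    flux = v * p             # upwind flux at the right face of each cell
    p = p - (dt / dx) * (flux - np.roll(flux, 1))   # periodic domain
```

    Because each cell's outflow is its neighbor's inflow, `p.sum() * dx` remains exactly 1, and with the CFL condition satisfied each updated cell value is a convex combination of non-negative values.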

  7. DE and NLP Based QPLS Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo

    As a novel evolutionary computing technique, Differential Evolution (DE) has been shown to be an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. Simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational cost.
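    Using DE as the NLP solver can be sketched with SciPy's implementation on a standard test function; the Rosenbrock function here is a generic stand-in, not the QPLS inner-weight objective from the paper:

```python
# Sketch of Differential Evolution as a global NLP solver, with the
# Rosenbrock test function standing in for the QPLS weight problem.
import numpy as np
from scipy.optimize import differential_evolution

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

result = differential_evolution(rosenbrock, bounds=[(-2.0, 2.0)] * 3,
                                seed=0, tol=1e-8, maxiter=2000)
# the global minimum of Rosenbrock is at x = (1, 1, 1) with value 0
```

    Unlike gradient-based SQP, DE only needs objective evaluations within bounds, which is what makes it attractive when the inner-relationship parameters yield a nonconvex landscape.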

  8. Integrated Controls-Structures Design Methodology for Flexible Spacecraft

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Joshi, S. M.; Price, D. B.

    1995-01-01

    This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform, and to a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to the conventional approach, wherein the structural design and control design are performed sequentially.

  9. Optimal landing of a helicopter in autorotation

    NASA Technical Reports Server (NTRS)

    Lee, A. Y. N.

    1985-01-01

    Gliding descent in autorotation is a maneuver used by helicopter pilots in case of engine failure. Here, the landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem for the OH-58A helicopter. The model includes the helicopter's vertical and horizontal velocities, vertical and horizontal displacements, and the rotor angular speed, with an empirical approximation for the induced velocity in the vortex-ring state. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resulting two-point boundary value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.

  10. Analysis and optimization of population annealing

    NASA Astrophysics Data System (ADS)

    Amey, Christopher; Machta, Jonathan

    2018-03-01

    Population annealing is an easily parallelizable sequential Monte Carlo algorithm that is well suited for simulating the equilibrium properties of systems with rough free-energy landscapes. In this work we seek to understand and improve the performance of population annealing. We derive several useful relations between quantities that describe the performance of population annealing and use these relations to suggest methods to optimize the algorithm. These optimization methods were tested by performing large-scale simulations of the three-dimensional (3D) Edwards-Anderson (Ising) spin glass and measuring several observables. The optimization methods were found to substantially decrease the amount of computational work necessary as compared to previously used, unoptimized versions of population annealing. We also obtain more accurate values of several important observables for the 3D Edwards-Anderson model.
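    The reweight/resample/equilibrate loop of population annealing can be sketched on a toy model. A 1D periodic Ising chain replaces the 3D Edwards-Anderson glass, and the schedule and sweep counts are illustrative choices, not the paper's optimized values:

```python
# Toy population-annealing sketch: anneal a population of replicas of
# a 1D periodic Ising chain from beta = 0 to beta = 1, reweighting and
# resampling at each temperature step, then applying Metropolis sweeps.
import numpy as np

rng = np.random.default_rng(0)
N, R = 16, 400                         # spins per replica, population size

def energy(s):
    return -np.sum(s * np.roll(s, 1, axis=-1), axis=-1)

pop = rng.choice([-1, 1], size=(R, N))        # beta = 0: random spins
betas = np.linspace(0.0, 1.0, 21)

for b0, b1 in zip(betas[:-1], betas[1:]):
    # reweight by exp(-(b1 - b0) * E) and resample the population
    w = np.exp(-(b1 - b0) * energy(pop))
    pop = pop[rng.choice(R, size=R, p=w / w.sum())]
    # equilibrate at the new inverse temperature with Metropolis sweeps
    for _ in range(5):
        for i in range(N):
            dE = 2 * pop[:, i] * (pop[:, (i - 1) % N] + pop[:, (i + 1) % N])
            flip = rng.random(R) < np.exp(-b1 * np.clip(dE, 0, None))
            pop[flip, i] *= -1

e_mean = energy(pop).mean() / N   # exact 1D Ising value at beta=1: -tanh(1)
```

    The performance questions the paper studies (population size, resampling variance, sweep counts per step) correspond directly to R, the resampling line, and the inner Metropolis loop in this sketch.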

  11. Optimal placement of FACTS devices using optimization techniques: A review

    NASA Astrophysics Data System (ADS)

    Gaur, Dipesh; Mathew, Lini

    2018-03-01

    Modern power systems face overloading problems, especially in transmission networks that operate at their maximum limits, and today's power system networks tend to become unstable and prone to collapse under disturbances. Flexible AC Transmission Systems (FACTS) provide solutions to problems such as line overloading, voltage stability, losses and power flow, and can play an important role in improving the static and dynamic performance of a power system. FACTS devices, however, require a high initial investment; therefore their location, type and rating are vital and should be optimized for maximum benefit to the network. In this paper, different optimization methods such as Particle Swarm Optimization (PSO) and Genetic Algorithms (GA) are discussed and compared for determining the optimal location, type and rating of devices. FACTS devices such as the Thyristor Controlled Series Compensator (TCSC), Static Var Compensator (SVC) and Static Synchronous Compensator (STATCOM) are considered. The effects of these FACTS controllers on different IEEE bus network parameters, such as generation cost, active power loss and voltage stability, have been analyzed and compared among the devices.

  12. Monolithic high voltage nonlinear transmission line fabrication process

    DOEpatents

    Cooper, G.A.

    1994-10-04

    A process for fabricating sequential inductors and varistor diodes of a monolithic, high voltage, nonlinear, transmission line in GaAs is disclosed. An epitaxially grown laminate is produced by applying a low doped active n-type GaAs layer to an n-plus type GaAs substrate. A heavily doped p-type GaAs layer is applied to the active n-type layer and a heavily doped n-type GaAs layer is applied to the p-type layer. Ohmic contacts are applied to the heavily doped n-type layer where diodes are desired. Multiple layers are then either etched away or Oxygen ion implanted to isolate individual varistor diodes. An insulator is applied between the diodes and a conductive/inductive layer is thereafter applied on top of the insulator layer to complete the process. 6 figs.

  13. PARASITES AND POVERTY: THE CASE OF SCHISTOSOMIASIS

    PubMed Central

    King, Charles H.

    2009-01-01

    Simultaneous and sequential transmission of multiple parasites, and their resultant overlapping chronic infections, are facts of life in many underdeveloped rural areas. These represent significant but often poorly-measured health and economic burdens for affected populations. For example, the chronic inflammatory process associated with long-term schistosomiasis contributes to anaemia and undernutrition, which, in turn, can lead to growth stunting, poor school performance, poor work productivity, and continued poverty. To date, most national and international programs aimed at parasite control have not considered the varied economic and ecological factors underlying multi-parasite transmission, but some are beginning to provide a coordinated approach to control. In addition, interest is emerging in new studies for the re-evaluation and recalibration of the health burden of helminthic parasite infection. Their results should highlight the strong potential of integrated parasite control in efforts for poverty reduction. PMID:19962954

  14. Patient-to-patient transmission of hepatitis C virus (HCV) during colonoscopy diagnosis

    PubMed Central

    2010-01-01

    Background No recognized risk factors can be identified in 10-40% of hepatitis C virus (HCV)-infected patients suggesting that the modes of transmission involved could be underestimated or unidentified. Invasive diagnostic procedures, such as endoscopy, have been considered as a potential HCV transmission route; although the actual extent of transmission in endoscopy procedures remains controversial. Most reported HCV outbreaks related to nosocomial acquisition have been attributed to unsafe injection practices and use of multi-dose vials. Only a few cases of likely patient-to-patient HCV transmission via a contaminated colonoscope have been reported to date. Nosocomial HCV infection may have important medical and legal implications and, therefore, possible transmission routes should be investigated. In this study, a case of nosocomial transmission of HCV from a common source to two patients who underwent colonoscopy in an endoscopy unit is reported. Results A retrospective epidemiological search after detection of index cases revealed several potentially infective procedures: sample blood collection, use of a peripheral catheter, anesthesia and colonoscopy procedures. The epidemiological investigation showed breaches in colonoscope reprocessing and deficiencies in the recording of valuable tracing data. Direct sequences from the NS5B region were obtained to determine the extent of the outbreak and cloned sequences from the E1-E2 region were used to establish the relationships among intrapatient viral populations. Phylogenetic analyses of individual sequences from viral populations infecting the three patients involved in the outbreak confirmed the patient pointed out by the epidemiological search as the source of the outbreak. Furthermore, the sequential order in which the patients underwent colonoscopy correlates with viral genetic variability estimates. 
Conclusions Patient-to-patient transmission of HCV could be demonstrated although the precise route of transmission remained unclear. Viral genetic variability is proposed as a useful tool for tracing HCV transmission, especially in recent transmissions. PMID:20825635

  15. Phylogenetic analysis accounting for age-dependent death and sampling with applications to epidemics.

    PubMed

    Lambert, Amaury; Alexander, Helen K; Stadler, Tanja

    2014-07-07

    The reconstruction of phylogenetic trees based on viral genetic sequence data sequentially sampled from an epidemic provides estimates of the past transmission dynamics, by fitting epidemiological models to these trees. To our knowledge, none of the epidemiological models currently used in phylogenetics can account for recovery rates and sampling rates dependent on the time elapsed since transmission, i.e. age of infection. Here we introduce an epidemiological model where infectives leave the epidemic, by either recovery or sampling, after some random time which may follow an arbitrary distribution. We derive an expression for the likelihood of the phylogenetic tree of sampled infectives under our general epidemiological model. The analytic concept developed in this paper will facilitate inference of past epidemiological dynamics and provide an analytical framework for performing very efficient simulations of phylogenetic trees under our model. The main idea of our analytic study is that the non-Markovian epidemiological model giving rise to phylogenetic trees growing vertically as time goes by can be represented by a Markovian "coalescent point process" growing horizontally by the sequential addition of pairs of coalescence and sampling times. As examples, we discuss two special cases of our general model, described in terms of influenza and HIV epidemics. Though phrased in epidemiological terms, our framework can also be used for instance to fit macroevolutionary models to phylogenies of extant and extinct species, accounting for general species lifetime distributions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Sequential hydrogen and methane coproduction from sugary wastewater treatment by "CSTRHyd-UASBMet" system

    NASA Astrophysics Data System (ADS)

    Hao, Ping

    2017-10-01

    The potential for sequential hydrogen bioproduction from sugary wastewater treatment was investigated using a continuous stirred tank reactor (CSTR) for various substrate COD concentrations and HRTs. At the optimum substrate concentration of 6 g COD/L, hydrogen could be efficiently produced from the CSTR with the highest production rate of 3.00 (±0.04) L/L reactor·d at an HRT of 6 h. An upflow anaerobic sludge bed (UASB) reactor was used for continuous methane bioproduction from the effluents of hydrogen bioproduction. At the optimal HRT of 12 h, methane could be produced at a rate of 2.27 (±0.08) L/L reactor·d, and the COD removal efficiency reached a maximum of 82.3%.

  17. Predicting optimal transmission investment in malaria parasites.

    PubMed

    Greischar, Megan A; Mideo, Nicole; Read, Andrew F; Bjørnstad, Ottar N

    2016-07-01

    In vertebrate hosts, malaria parasites face a tradeoff between replicating and the production of transmission stages that can be passed onto mosquitoes. This tradeoff is analogous to growth-reproduction tradeoffs in multicellular organisms. We use a mathematical model tailored to the life cycle and dynamics of malaria parasites to identify allocation strategies that maximize cumulative transmission potential to mosquitoes. We show that plastic strategies can substantially outperform fixed allocation because parasites can achieve greater fitness by investing in proliferation early and delaying the production of transmission stages. Parasites should further benefit from restraining transmission investment later in infection, because such a strategy can help maintain parasite numbers in the face of resource depletion. Early allocation decisions are predicted to have the greatest impact on parasite fitness. If the immune response saturates as parasite numbers increase, parasites should benefit from even longer delays prior to transmission investment. The presence of a competing strain selects for consistently lower levels of transmission investment and dramatically increased exploitation of the red blood cell resource. While we provide a detailed analysis of tradeoffs pertaining to malaria life history, our approach for identifying optimal plastic allocation strategies may be broadly applicable. © 2016 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
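    The replication-versus-transmission tradeoff can be caricatured with a two-compartment model. This is a deliberately simplified sketch with assumed rates and no immune response or resource depletion, not the authors' within-host model:

```python
# Two-compartment caricature of conversion-rate strategies: a fraction
# c(t) of parasite production goes to transmission stages (G), the rest
# to asexual replication (A). All rate constants below are assumptions.
import numpy as np

def cumulative_transmission(conversion, days=20.0, decay=0.5, dt=0.01):
    """Euler-integrate dA/dt = ((1-c)r - d)A, dG/dt = c*r*A - d*G."""
    r = np.log(6.0) / 2.0            # assumed asexual growth rate
    A, G, total = 1.0, 0.0, 0.0
    t = 0.0
    while t < days:
        c = conversion(t)
        A, G = (A + ((1.0 - c) * r - decay) * A * dt,
                G + (c * r * A - decay * G) * dt)
        total += G * dt              # cumulative transmission potential
        t += dt
    return total

fixed = cumulative_transmission(lambda t: 0.3)     # constant investment
plastic = cumulative_transmission(lambda t: 0.0 if t < 8.0 else 0.6)
```

    Under these assumed rates, the plastic strategy that proliferates first and invests later accumulates a larger cumulative transmission potential than the fixed strategy, echoing the abstract's point that plastic allocation can substantially outperform fixed allocation.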

  18. Optimal control of HIV/AIDS dynamic: Education and treatment

    NASA Astrophysics Data System (ADS)

    Sule, Amiru; Abdullah, Farah Aini

    2014-07-01

    A mathematical model describing the transmission dynamics of HIV/AIDS is developed, and optimal controls representing education and treatment are explored. The existence of an optimal control is established analytically by the use of optimal control theory. Numerical simulations suggest that education and treatment of the infected have a positive impact on HIV/AIDS control.

  19. Optical Analysis of Transparent Polymeric Material Exposed to Simulated Space Environment

    NASA Technical Reports Server (NTRS)

    Edwards, David L.; Finckenor, Miria M.

    1999-01-01

    Transparent polymeric materials are being designed and utilized as solar concentrating lenses for spacecraft power and propulsion systems. These polymeric lenses concentrate solar energy onto energy conversion devices such as solar cells and thermal energy systems, and the conversion efficiency is directly related to the transmissivity of the polymeric lens. The Environmental Effects Group of the Marshall Space Flight Center's Materials, Processes, and Manufacturing Department exposed a variety of materials to a simulated space environment and evaluated them for any change in optical transmission. These materials include Lexan(TM), polyethylene terephthalate (PET), several formulations of Tefzel(TM) and Teflon(TM), and silicone DC 93-500. Samples were exposed to a minimum of 1000 Equivalent Sun Hours (ESH) of near-UV radiation (250-400 nm wavelength). Data will be presented on materials exposed to charged particle radiation equivalent to a five-year dose in geosynchronous orbit. These exposures were performed in MSFC's Combined Environmental Effects Test Chamber, a unique facility with the capability to expose materials simultaneously or sequentially to protons, low-energy electrons, high-energy electrons, near-UV radiation and vacuum-UV radiation. Prolonged exposure to the space environment decreases the polymer film's transmission and thus reduces the conversion efficiency. A method was developed to normalize the transmission loss and thereby rank the materials according to their tolerance of space environmental exposure. Spectral results and the material ranking according to transmission loss are presented.

  20. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

    A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.

  1. Sequential ultrasound-microwave assisted acid extraction (UMAE) of pectin from pomelo peels.

    PubMed

    Liew, Shan Qin; Ngoh, Gek Cheng; Yusoff, Rozita; Teoh, Wen Hui

    2016-12-01

    This study aims to optimize sequential ultrasound-microwave assisted extraction (UMAE) of pectin from pomelo peel using citric acid. The effects of pH, sonication time, microwave power and irradiation time on the yield and the degree of esterification (DE) of pectin were investigated. Under the optimized conditions of pH 1.80 and 27.52 min sonication followed by 6.40 min microwave irradiation at 643.44 W, the yield and the DE value of the pectin obtained were 38.00% and 56.88%, respectively. Based upon the optimized UMAE conditions, pectin from microwave-ultrasound assisted extraction (MUAE), ultrasound assisted extraction (UAE) and microwave assisted extraction (MAE) was also studied. The pectin yield from UMAE was higher than from all other techniques, in the order UMAE>MUAE>MAE>UAE. The galacturonic acid content of pectin obtained from the combined extraction techniques was higher than that obtained from a single extraction technique, and the pectin gels produced by the various techniques exhibited pseudoplastic behaviour. The morphological structures of pectin extracted by MUAE and MAE closely resemble each other, whereas the pectin from UMAE, with a smaller and more regular surface, differs greatly from that of UAE. This substantiated the highest pectin yield of 36.33% from UMAE and further signified the compatibility and potential of combined techniques in pectin extraction. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
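
    The joint source/channel allocation described above can be sketched as a brute-force search over discrete rate pairs under a bitrate budget; the rate sets, residual-loss probabilities, and the toy rate-distortion curve below are invented placeholders, not the paper's codec or channel model.

```python
from itertools import product

# Exhaustive search over discrete (source rate, channel code rate) pairs
# under a total bitrate budget; hypothetical illustrative numbers throughout.

SOURCE_RATES = [256, 384, 512]                        # source kbps options
CHANNEL_CODES = {0.5: 0.001, 2/3: 0.01, 0.75: 0.05}   # code rate -> loss prob.
BUDGET = 768                                          # channel kbps available

def distortion(kbps):
    return 1000.0 / kbps          # toy monotone rate-distortion curve

best = None
for s, (r, loss) in product(SOURCE_RATES, CHANNEL_CODES.items()):
    if s / r > BUDGET:            # channel bits needed exceed the budget
        continue
    # Expected distortion: decoded quality if the stream survives, fallback
    # (lowest-rate) quality if it is lost.
    expected = (1 - loss) * distortion(s) + loss * distortion(SOURCE_RATES[0])
    if best is None or expected < best[0]:
        best = (expected, s, r)
```

    In this toy instance the search trades a weaker channel code (rate 2/3) for a higher source rate; real systems evaluate operational distortion-rate points per layer rather than a closed-form curve.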

  3. Optimizing microwave photodetection: input-output theory

    NASA Astrophysics Data System (ADS)

    Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.

    2018-04-01

    High fidelity microwave photon counting is an important tool for various areas, from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. From this we derive a general matching condition on the different system rates under which the measurement process is optimal.

  4. Mammographic x-ray unit kilovoltage test tool based on k-edge absorption effect.

    PubMed

    Napolitano, Mary E; Trueblood, Jon H; Hertel, Nolan E; David, George

    2002-09-01

    A simple tool to determine the peak kilovoltage (kVp) of a mammographic x-ray unit has been designed. Tool design is based on comparing the effect of k-edge discontinuity of the attenuation coefficient for a series of element filters. Compatibility with the mammography accreditation phantom (MAP) to obtain a single quality control film is a second design objective. When the attenuation of a series of sequential elements is studied simultaneously, differences in the absorption characteristics due to the k-edge discontinuities are more evident. Specifically, when the incident photon energy is higher than the k-edge energy of a number of the elements and lower than the remainder, an inflection may be seen in the resulting attenuation data. The maximum energy of the incident photon spectra may be determined based on this inflection point for a series of element filters. Monte Carlo photon transport analysis was used to estimate the photon transmission probabilities for each of the sequential k-edge filter elements. The photon transmission corresponds directly to optical density recorded on mammographic x-ray film. To observe the inflection, the element filters chosen must have k-edge energies that span a range greater than the expected range of the end point energies to be determined. For the design, incident x-ray spectra ranging from 25 to 40 kVp were assumed to be from a molybdenum target. Over this range, the k-edge energy changes by approximately 1.5 keV between sequential elements. For this design 21 elements spanning an energy range from 20 to 50 keV were chosen. Optimum filter element thicknesses were calculated to maximize attenuation differences at the k-edge while maintaining optical densities between 0.10 and 3.00. Calculated relative transmission data show that the kVp could be determined to within +/-1 kV. To obtain experimental data, a phantom was constructed containing 21 different elements placed in an acrylic holder. 
MAP images were used to determine appropriate exposure techniques for a series of end point energies from 25 to 35 kVp. The average difference between the kVp determination and the calibrated dial setting was 0.8 and 1.0 kV for a Senographe 600 T and a Senographe DMR, respectively. Since the k-edge absorption energies of the filter materials are well known, independent calibration or a series of calibration curves is not required.
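
    The k-edge principle above can be caricatured with a Beer-Lambert toy model: a filter is comparatively transparent to the beam's hardest photons until its k-edge falls below the end-point energy, so scanning the filter series for the transmission step localizes the kVp. The attenuation values are invented for illustration and stand in for the Monte Carlo transport results.

```python
import math

# Toy k-edge kVp estimator over a series of element filters whose k-edge
# energies are ~1.5 keV apart (as in the design above). The attenuation
# values mu*t are invented: small below the k-edge, large above it.

K_EDGES = [20.0 + 1.5 * i for i in range(21)]     # keV, 21 filter elements

def transmission(e_max_kev, k_edge):
    mu_t = 0.2 if e_max_kev < k_edge else 2.0     # step at the k-edge
    return math.exp(-mu_t)

def estimate_kvp(e_max_kev):
    # The first filter that stays "bright" marks the inflection point.
    for k in K_EDGES:
        if transmission(e_max_kev, k) > 0.5:
            return k
    return K_EDGES[-1]
```

    On this grid the estimate lands on the first k-edge above the true end-point energy, i.e. within the ~1.5 keV filter spacing, consistent with the +/-1 kV accuracy quoted above.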

  5. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
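
    The conditional-versus-marginal point above can be seen in a small simulation of a hypothetical two-look design (stop at n1 if the interim mean exceeds a threshold, otherwise continue to n2); all constants are arbitrary illustrations. Sample averages from trials that stopped early look strongly biased, while the average over all trials stays close to the true mean.

```python
import random

# Two-look group sequential toy design; constants are arbitrary illustrations.
random.seed(42)
TRUE_MU, N1, N2, THRESH = 0.0, 20, 40, 0.1

def one_trial():
    xs = [random.gauss(TRUE_MU, 1.0) for _ in range(N1)]
    if sum(xs) / N1 > THRESH:                     # stop early
        return sum(xs) / N1, N1
    xs += [random.gauss(TRUE_MU, 1.0) for _ in range(N2 - N1)]
    return sum(xs) / N2, N2

results = [one_trial() for _ in range(20000)]
marginal = sum(m for m, _ in results) / len(results)
early = [m for m, n in results if n == N1]
cond_early = sum(early) / len(early)              # conditional on stopping
```

    Conditioning on N = n1 selects trials with high interim means, so cond_early sits well above TRUE_MU even though the marginal average is nearly centered, the "false impression of bias" discussed above.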

  6. Cache Locality Optimization for Recursive Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Krishnamoorthy, Sriram

    We present an approach to optimize the cache locality for recursive programs by dynamically splicing--recursively interleaving--the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.

  7. Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation

    PubMed Central

    Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk

    2015-01-01

    A circuit breaker is widely used to protect electric power systems from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important for preventing current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which changes the pivot points of bearings to shorten the breaking time. To validate the optimization results, computational results were compared to physical tests with a high-speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, which was confirmed by experiments. The switch mechanism design process, including the contact-latch system, can be improved by using this process. PMID:25918740

  8. Research on design method of the full form ship with minimum thrust deduction factor

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin

    2015-04-01

    In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with the minimum thrust deduction factor has been developed, which combines potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) was applied with the minimum thrust deduction factor as the objective function. An appropriate displacement is the basic constraint condition, and avoidance of boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50,000 DWT product oil tanker is provided, which indicates that the propulsion efficiency was improved distinctly by this optimal design method.
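
    The SUMT interior point idea used above can be sketched on a one-dimensional toy problem: minimize x^2 subject to x >= 1 by solving a sequence of unconstrained log-barrier problems with a shrinking barrier parameter. The barrier form, Newton inner solver, and all constants are illustrative choices, not the authors' implementation.

```python
# SUMT / log-barrier sketch: minimize x^2 subject to x >= 1.
# Each outer step minimizes x^2 - mu*log(x - 1) by Newton's method from the
# strictly feasible interior, then shrinks mu; iterates approach x* = 1.

def barrier_min(mu, x):
    for _ in range(50):
        g = 2 * x - mu / (x - 1)          # gradient of the barrier objective
        h = 2 + mu / (x - 1) ** 2         # second derivative (always positive)
        step = g / h
        while x - step <= 1:              # damp the step to stay feasible
            step *= 0.5
        x -= step
        if abs(g) < 1e-12:
            break
    return x

x, mu = 2.0, 1.0                          # feasible start, initial barrier
for _ in range(30):                       # outer SUMT loop
    x = barrier_min(mu, x)
    mu *= 0.5                             # relax the barrier
```

    After the outer iterations, x sits within roughly mu/2 of the constrained optimum, the standard barrier-method gap.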

  9. Divergent pathways to influence: Cognition and behavior differentially mediate the effects of optimism on physical and mental quality of life in Chinese university students.

    PubMed

    Ramsay, Jonathan E; Yang, Fang; Pang, Joyce S; Lai, Ching-Man; Ho, Roger Cm; Mak, Kwok-Kei

    2015-07-01

    Previous research has indicated that both cognitive and behavioral variables mediate the positive effect of optimism on quality of life; yet few attempts have been made to accommodate these constructs into a single explanatory framework. Adopting Fredrickson's broaden-and-build perspective, we examined the relationships between optimism, self-rated health, resilience, exercise, and quality of life in 365 Chinese university students using path analysis. For physical quality of life, a two-stage model, in which the effects of optimism were sequentially mediated by cognitive and behavioral variables, provided the best fit. A one-stage model, with full mediation by cognitive variables, provided the best fit for mental quality of life. This suggests that optimism influences physical and mental quality of life via different pathways. © The Author(s) 2013.

  10. Application of optimal control strategies to HIV-malaria co-infection dynamics

    NASA Astrophysics Data System (ADS)

    Fatmawati; Windarto; Hanif, Lathifah

    2018-03-01

    This paper presents a mathematical model of HIV and malaria co-infection transmission dynamics. Optimal control strategies such as malaria prevention, anti-malaria and antiretroviral (ARV) treatments are incorporated into the model to reduce the co-infection. First, we studied the existence and stability of equilibria of the presented model without control variables. The model has four equilibria, namely the disease-free equilibrium, the HIV endemic equilibrium, the malaria endemic equilibrium, and the co-infection equilibrium. We also obtain two basic reproduction ratios corresponding to the diseases. It was found that the disease-free equilibrium is locally asymptotically stable whenever the respective basic reproduction numbers are less than one. We also conducted a sensitivity analysis to determine the dominant factor controlling the transmission. Then, the optimal control theory for the model was derived analytically by using the Pontryagin Maximum Principle. Numerical simulations of the optimal control strategies are also performed to illustrate the results. From the numerical results, we conclude that the best strategy is to combine the malaria prevention and ARV treatments in order to reduce the malaria and HIV co-infection populations.

  11. Tri-state delta modulation system for Space Shuttle digital TV downlink

    NASA Technical Reports Server (NTRS)

    Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.

    1981-01-01

    Future requirements for Shuttle Orbiter downlink communication may include transmission of digital video which, in addition to black and white, may also be either field-sequential or NTSC color format. The use of digitized video could provide for picture privacy at the expense of additional onboard hardware, together with an increased bandwidth due to the digitization process. A general objective for the Space Shuttle application is to develop a digitization technique that is compatible with data rates in the 20-30 Mbps range but still provides good quality pictures. This paper describes a tri-state delta modulation/demodulation (TSDM) technique which is a good compromise between implementation complexity and performance. The unique feature of TSDM is that it provides for efficient run-length encoding of constant-intensity segments of a TV picture. Axiomatix has developed a hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV and field-sequential color. The hardware complexity of this TSDM implementation is summarized in the paper.
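
    The run-length feature the abstract credits for TSDM's efficiency can be sketched as follows: each sample is coded as +step, -step, or "hold" against a running estimate, and runs of holds (constant-intensity segments) are run-length encoded. The step size, threshold, and token format are invented for illustration, not Axiomatix's hardware design.

```python
# Toy tri-state delta modulator with run-length coding of "hold" runs.
# Step size, threshold, and token format are illustrative assumptions.

def tsdm_encode(samples, step=4, thresh=2):
    est, symbols = 0, []
    for s in samples:
        err = s - est
        if err > thresh:
            symbols.append(+1)
            est += step
        elif err < -thresh:
            symbols.append(-1)
            est -= step
        else:
            symbols.append(0)             # hold: estimate unchanged
    rle, i = [], 0                        # run-length encode the hold runs
    while i < len(symbols):
        if symbols[i] == 0:
            j = i
            while j < len(symbols) and symbols[j] == 0:
                j += 1
            rle.append(("hold", j - i))
            i = j
        else:
            rle.append(("delta", symbols[i]))
            i += 1
    return rle

codes = tsdm_encode([0, 0, 0, 0, 10, 10, 10, 10])
```

    A flat stretch of signal collapses to a single ("hold", n) token, which is where the bandwidth saving for constant-intensity picture segments comes from.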

  12. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  13. A protein-dependent side-chain rotamer library.

    PubMed

    Bhuyan, Md Shariful Islam; Gao, Xin

    2011-12-14

    The protein side-chain packing problem remains one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function and a search algorithm. Rotamer libraries summarize the existing knowledge of the experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent rotamer libraries and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since in the side-chain prediction problem the backbone structure is given, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call it a protein-dependent rotamer library. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by inference algorithms, without doing global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of side-chain prediction accuracy and rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to that of global-search-based side-chain prediction methods.
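
    The inference-then-re-rank idea can be sketched on a two-residue toy: model neighboring rotamer choices as a tiny Markov random field, compute exact marginals by enumeration (no global search), and re-rank each residue's rotamers by marginal probability. All energies below are invented for illustration.

```python
import math
from itertools import product

# Two residues A, B with two rotamers each; prior (library) energies plus a
# pairwise clash term form a tiny MRF. Energies are invented placeholders.

PRIOR = {"A": {0: 0.0, 1: 0.4}, "B": {0: 0.2, 1: 0.0}}
PAIR = {(0, 0): 0.0, (0, 1): 1.5, (1, 0): 0.1, (1, 1): 0.0}

def marginals_A():
    weights, z = {}, 0.0
    for ra, rb in product(PRIOR["A"], PRIOR["B"]):
        e = PRIOR["A"][ra] + PRIOR["B"][rb] + PAIR[(ra, rb)]
        w = math.exp(-e)                  # Boltzmann weight of the joint state
        weights[(ra, rb)] = w
        z += w
    return {r: sum(w for (ra, _), w in weights.items() if ra == r) / z
            for r in PRIOR["A"]}

m = marginals_A()
```

    The library prior alone ranks rotamer 0 of residue A first (lower energy), but once the neighbor context enters through the pairwise term, rotamer 1 gets the higher marginal, the kind of re-ranking described above; real networks require proper inference algorithms rather than enumeration.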

  14. Sequential Injection/Electrochemical Immunoassay for Quantifying the Pesticide Metabolite 3,5,6-Trichloro-2-Pyridinol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Riechers, Shawn L.; Timchalk, Chuck

    2005-12-04

    An automated and sensitive sequential injection electrochemical immunoassay was developed to monitor a potential insecticide biomarker, 3,5,6-trichloro-2-pyridinol (TCP). The method involved a sequential injection analysis (SIA) system equipped with a thin-layer electrochemical flow cell and a permanent magnet, which was used to fix TCP-antibody-coated magnetic beads (TCP-Ab-MBs) in the reaction zone. After competitive immunoreactions among TCP-Ab-MBs, the TCP analyte, and horseradish peroxidase (HRP)-labeled TCP, a 3,3′,5,5′-tetramethylbenzidine dihydrochloride and hydrogen peroxide (TMB-H2O2) substrate solution was injected to produce an electroactive enzymatic product. The activity of the HRP tracers was monitored by square-wave voltammetric scanning of the electroactive enzymatic product in the thin-layer flow cell. The voltammetric characteristics of the substrate and the enzymatic product were investigated under batch conditions, and the parameters of the immunoassay were optimized in the SIA system. Under the optimal conditions, the system was used to measure as little as 6 ng/L (ppt) TCP, which is around 50-fold lower than the value indicated by the manufacturer of the TCP RaPID Assay kit (0.25 μg/L, colorimetric detection). The performance of the developed immunoassay system was successfully evaluated on tap water and river water samples spiked with TCP. This technique could be readily used for detecting other environmental contaminants by developing specific antibodies against the contaminants, and is expected to open new opportunities for environmental and biological monitoring.

  15. Effect of Compression Devices on Preventing Deep Vein Thrombosis Among Adult Trauma Patients: A Systematic Review.

    PubMed

    Ibrahim, Mona; Ahmed, Azza; Mohamed, Warda Yousef; El-Sayed Abu Abduo, Somaya

    2015-01-01

    Trauma is the leading cause of death in Americans up to 44 years old each year. Deep vein thrombosis (DVT) is a significant condition occurring in trauma, and prophylaxis is essential to the appropriate management of trauma patients. The incidence of DVT varies in trauma patients, depending on patients' risk factors, modality of prophylaxis, and methods of detection. Compression devices and arteriovenous (A-V) foot pump prophylaxis are recommended in trauma patients, but their efficacy and optimal use are not well documented in the literature. The aim of this study was to review the literature on the effect of compression devices in preventing DVT among adult trauma patients. We searched PubMed, CINAHL, and the Cochrane Central Register of Controlled Trials for eligible studies published from 1990 until June 2014. Reviewers identified all randomized controlled trials that satisfied the study criteria, and the quality of included studies was assessed by the Cochrane risk-of-bias tool. Five randomized controlled trials were included, with a total of 1072 patients. Sequential compression devices significantly reduced the incidence of DVT in trauma patients, and foot pumps were more effective than sequential compression devices in reducing the incidence of DVT. However, the evidence is limited by small sample sizes and does not take into account other confounding variables that may affect the incidence of DVT in trauma patients. Future randomized controlled trials with larger probability samples are needed to investigate the optimal use of mechanical prophylaxis in trauma patients.

  16. Sequential enzymatic derivatization coupled with online microdialysis sampling for simultaneous profiling of mouse tumor extracellular hydrogen peroxide, lactate, and glucose.

    PubMed

    Su, Cheng-Kuan; Tseng, Po-Jen; Chiu, Hsien-Ting; Del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang

    2017-03-01

    Probing tumor extracellular metabolites is a vitally important issue in current cancer biology. In this study an analytical system was constructed for the in vivo monitoring of mouse tumor extracellular hydrogen peroxide (H2O2), lactate, and glucose by means of microdialysis (MD) sampling and fluorescence determination, in conjunction with a sequential enzymatic derivatization scheme (a loading sequence of fluorogenic reagent/horseradish peroxidase, microdialysate, lactate oxidase, pyruvate, and glucose oxidase) for step-by-step determination of the sampled H2O2, lactate, and glucose in mouse tumor microdialysate. After optimization of the overall experimental parameters, the system's detection limit reached as low as 0.002 mM for H2O2, 0.058 mM for lactate, and 0.055 mM for glucose, based on 3 μL of microdialysate, suggesting great potential for determining tumor extracellular concentrations of lactate and glucose. Spike analyses of offline-collected mouse tumor microdialysate and monitoring of the basal concentrations of mouse tumor extracellular H2O2, lactate, and glucose, as well as those after imparting metabolic disturbance through intra-tumor administration of a glucose solution through a prior-implanted cannula, were conducted to demonstrate the system's applicability. Our results indicate that hyphenation of an MD sampling device with an optimized sequential enzymatic derivatization scheme and a fluorescence spectrometer can be used successfully for multi-analyte monitoring of tumor extracellular metabolites in living animals. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Utilization of group theory in studies of molecular clusters

    NASA Astrophysics Data System (ADS)

    Ocak, Mahir E.

    The structure of the molecular symmetry group of molecular clusters was analyzed, and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure already known for direct product groups was extended to the case of semidirect product groups. By using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, was developed. In the MBR method, calculations start with a single monomer, with the purpose of obtaining an optimized basis for that monomer as a linear combination of some primitive basis functions. Then, an optimized basis for each identical monomer is generated from the optimized basis of this monomer. By using the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained by using this basis. Since an optimized basis is used for each monomer, which has a much smaller size than the primitive basis from which the optimized bases are generated, the MBR method leads to an exponential optimization in the size of the basis that is required for the calculations. Application of the MBR method has been illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers.
Thus, MBR method can be used for studying the many-body terms and for deriving accurate potential surfaces.

  18. Optimization problems in natural gas transportation systems. A state-of-the-art review

    DOE PAGES

    Ríos-Mercado, Roger Z.; Borraz-Sánchez, Conrado

    2015-03-24

    This paper provides a review of the most relevant research conducted to solve natural gas transportation problems via pipeline systems. The literature reveals three major groups of gas pipeline systems, namely gathering, transmission, and distribution systems. In this work, we aim at presenting a detailed discussion of the efforts made in optimizing natural gas transmission lines. There is certainly a vast amount of research done over the past few years on many decision-making problems in the natural gas industry and, specifically, in pipeline network optimization. In this work, we present a state-of-the-art survey focusing on specific categories that include short-term basis storage (line-packing problems), gas quality satisfaction (pooling problems), and compressor station modeling (fuel cost minimization problems). We also discuss both steady-state and transient optimization models, highlighting the modeling aspects and the most relevant solution approaches known to date. Although the literature on natural gas transmission system problems is quite extensive, this is, to the best of our knowledge, the first comprehensive review or survey covering this specific research area on natural gas transmission from an operations research perspective. Furthermore, this paper includes a discussion of the most important and promising research areas in this field. Hence, our paper can serve as a useful tool to gain insight into the evolution of the many real-life applications and most recent advances in solution methodologies arising from this exciting and challenging research area of decision-making problems.

  19. Optimization problems in natural gas transportation systems. A state-of-the-art review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ríos-Mercado, Roger Z.; Borraz-Sánchez, Conrado

    Our paper provides a review of the most relevant research conducted on solving natural gas transportation problems via pipeline systems. The literature reveals three major groups of gas pipeline systems, namely gathering, transmission, and distribution systems. In this work, we aim at presenting a detailed discussion of the efforts made in optimizing natural gas transmission lines. There is certainly a vast amount of research done over the past few years on many decision-making problems in the natural gas industry and, specifically, in pipeline network optimization. In this work, we present a state-of-the-art survey focusing on specific categories that include short-term basis storage (line-packing problems), gas quality satisfaction (pooling problems), and compressor station modeling (fuel cost minimization problems). We also discuss both steady-state and transient optimization models, highlighting the modeling aspects and the most relevant solution approaches known to date. Although the literature on natural gas transmission system problems is quite extensive, this is, to the best of our knowledge, the first comprehensive review or survey covering this specific research area on natural gas transmission from an operations research perspective. Furthermore, this paper includes a discussion of the most important and promising research areas in this field. Hence, our paper can serve as a useful tool to gain insight into the evolution of the many real-life applications and most recent advances in solution methodologies arising from this exciting and challenging research area of decision-making problems.

  20. WE-AB-209-10: Optimizing the Delivery of Sequential Fluence Maps for Efficient VMAT Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, D; Balvert, M

    2016-06-15

    Purpose: To develop an optimization model and solution approach for computing MLC leaf trajectories and dose rates that closely match a set of optimized fluence maps delivered sequentially around a patient in a VMAT treatment. Methods: We formulate the fluence map matching problem as a nonlinear optimization problem in which time is discretized but dose rates and leaf positions are continuous variables. For a given allotted time, which is allocated across the fluence maps based on the complexity of each map, the optimization searches for the leaf trajectories and dose rates that most closely recreate the original fluence maps. Constraints include maximum leaf speed, maximum dose rate, and leaf collision avoidance, as well as the constraint that the ending leaf positions for one map are the starting leaf positions for the next map. The resulting model is non-convex but smooth, and we therefore solve it by local searches from a variety of starting positions. We improve solution time with a custom decomposition approach that decouples the rows of the fluence maps and solves each leaf pair individually; this decomposition also makes the problem easily parallelized. Results: We demonstrate the method on a prostate case and a head-and-neck case and show that fluence maps can be recreated to a high degree of fidelity in modest total delivery time (minutes). Conclusion: We present a VMAT sequencing method that reproduces optimal fluence maps by searching over a vast number of possible leaf trajectories. By varying the total allotted time, this approach is the first of its kind to let users produce VMAT solutions spanning the range from wide-field coarse VMAT deliveries to narrow-field high-MU sliding-window-like approaches.
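
    The row-by-row decoupling exploited above can be illustrated with the classic unidirectional sliding-window ("sweep") decomposition of a single fluence row into leaf arrival times. This is a simplified textbook scheme, not the paper's nonlinear optimization model:

```python
def sweep_times(row):
    """Unidirectional sweep for one fluence row (one MLC leaf pair).

    r[i] is the time the right leaf uncovers column i; l[i] is the time
    the left leaf covers it again. Delivered fluence at column i is then
    l[i] - r[i], and both trajectories are monotone (leaves move one way).
    """
    r = [0.0] * len(row)
    l = [0.0] * len(row)
    l[0] = row[0]
    for i in range(1, len(row)):
        inc = row[i] - row[i - 1]
        if inc >= 0:
            # Fluence rises: delay the left leaf, keep the right leaf.
            r[i] = r[i - 1]
            l[i] = l[i - 1] + inc
        else:
            # Fluence falls: delay the right leaf instead.
            r[i] = r[i - 1] - inc
            l[i] = l[i - 1]
    return r, l
```

Because each row is handled independently, rows can be distributed across workers, which is the same structural property the paper's decomposition uses to parallelize its local searches.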

  1. Transmission performance of a wavelength and NRZ-to-RZ format conversion with pulsewidth tunability by combination of SOA- and fiber-based switches.

    PubMed

    Tan, Hung Nguyen; Matsuura, Motoharu; Kishi, Naoto

    2008-11-10

    An all-optical signal processing scheme that couples wavelength conversion and NRZ-to-RZ data format conversion with pulsewidth tunability, by combining SOA- and fiber-based switches, is experimentally demonstrated, and its transmission performance is investigated. A 1558 nm NRZ data signal is converted to RZ format at 1546 nm with a widely tunable pulsewidth from 20% to 80% duty cycle at a bit rate of 10 Gb/s. The transmission performance of the converted RZ signals at each pulsewidth is investigated over various standard single-mode fiber (SSMF) links up to 65 km long without dispersion compensation. The results clarify a significant improvement in the transmission performance of the converted signal compared with the conventional NRZ signal through tunable pulsewidth management, and show the existence of an optimal pulsewidth for the RZ data format at each transmission distance with a particular cumulative dispersion. The optimal pulsewidths of the converted RZ signal and its corresponding power penalties against the NRZ signal are also investigated in different SSMF links.

  2. EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.

    PubMed

    Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah

    2017-12-01

    Developing subject-specific classifiers that recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, the method balances the decision time of each class, and we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy of 83.4% and an average decision time of 2.77 s for the proposed method, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
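
    The two-threshold test underlying BTSPRT can be sketched with the plain Wald SPRT on Bernoulli evidence. This is a minimal illustration with made-up likelihood parameters; the paper's balanced thresholds and wavelet-based evidence model are not reproduced:

```python
import math

def sprt(samples, p1=0.8, p0=0.2, alpha=0.05, beta=0.05):
    """Classic two-threshold SPRT on a stream of Bernoulli observations.

    Accumulates the log-likelihood ratio of H1 (success prob p1) vs H0
    (success prob p0) and stops as soon as either Wald threshold is crossed.
    Returns (decision, number_of_samples_used).
    """
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        # Per-sample log-likelihood ratio increment.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(samples)
```

The key property the abstract exploits is that the thresholds directly encode the error rates (alpha, beta), which gives the explicit stopping-time/error relationship mentioned above.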

  3. Sequential fuzzy diagnosis method for motor roller bearing in variable operating conditions based on vibration analysis.

    PubMed

    Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi

    2013-06-21

    A novel intelligent fault diagnosis method is proposed for motor roller bearings that operate under unsteady rotating speed and load. The pseudo Wigner-Ville distribution (PWVD) and relative crossing information (RCI) methods are used to extract feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous and not correlated with the rotation speed and load. Using the ant colony optimization (ACO) clustering algorithm, synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameters (SP), and that the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially.

  4. Sequential Fuzzy Diagnosis Method for Motor Roller Bearing in Variable Operating Conditions Based on Vibration Analysis

    PubMed Central

    Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi

    2013-01-01

    A novel intelligent fault diagnosis method is proposed for motor roller bearings that operate under unsteady rotating speed and load. The pseudo Wigner-Ville distribution (PWVD) and relative crossing information (RCI) methods are used to extract feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous and not correlated with the rotation speed and load. Using the ant colony optimization (ACO) clustering algorithm, synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameters (SP), and that the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially. PMID:23793021

  5. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: efficacy of parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized version of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
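
    The kind of performance model described can be sketched as an Amdahl-style speedup estimate extended with an explicit parallelization-overhead term. The linear overhead model below is a hypothetical simplification, not the paper's actual characterization:

```python
def predicted_speedup(serial_fraction, n_procs, overhead_per_proc=0.0):
    """Amdahl-style speedup with a linear parallelization-overhead term.

    Normalized sequential time is 1. The serial fraction runs unchanged,
    the parallel fraction divides across n_procs, and overhead (e.g. data
    locality and synchronization costs) grows with processor count.
    """
    t_parallel = (serial_fraction
                  + (1.0 - serial_fraction) / n_procs
                  + overhead_per_proc * n_procs)
    return 1.0 / t_parallel
```

Such a model captures the paper's main observation: directive-based parallelization is easy, but realized speedup is capped by architecture-specific overheads, so adding processors can eventually hurt.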

  6. Data transmission system and method

    NASA Technical Reports Server (NTRS)

    Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)

    2010-01-01

    A method of transmitting data packets in which randomness is added to the schedule is described. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.

  7. Subsonic Aircraft With Regression and Neural-Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2004-01-01

    At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. 
In an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), regression training required a fraction of a CPU second, whereas neural network training took between 1 and 9 min. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min; it was reduced to 2 sec with the regression method and to 4 min with the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.

  8. Who acquires infection from whom and how? Disentangling multi-host and multi-mode transmission dynamics in the ‘elimination’ era

    PubMed Central

    Borlase, Anna; Rudge, James W.

    2017-01-01

    Multi-host infectious agents challenge our abilities to understand, predict and manage disease dynamics. Within this, many infectious agents are also able to use, simultaneously or sequentially, multiple modes of transmission. Furthermore, the relative importance of different host species and modes can itself be dynamic, with potential for switches and shifts in host range and/or transmission mode in response to changing selective pressures, such as those imposed by disease control interventions. The epidemiology of such multi-host, multi-mode infectious agents thereby can involve a multi-faceted community of definitive and intermediate/secondary hosts or vectors, often together with infectious stages in the environment, all of which may represent potential targets, as well as specific challenges, particularly where disease elimination is proposed. Here, we explore, focusing on examples from both human and animal pathogen systems, why and how we should aim to disentangle and quantify the relative importance of multi-host multi-mode infectious agent transmission dynamics under contrasting conditions, and ultimately, how this can be used to help achieve efficient and effective disease control. This article is part of the themed issue ‘Opening the black box: re-examining the ecology and evolution of parasite transmission’. PMID:28289259

  9. Upconverting nanoparticles for optimizing scintillator based detection systems

    DOEpatents

    Kross, Brian; McKisson, John E; McKisson, John; Weisenberger, Andrew; Xi, Wenze; Zom, Carl

    2013-09-17

    An upconverting device for a scintillation detection system is provided. The detection system comprises a scintillator material, a sensor, a light transmission path between the scintillator material and the sensor, and a plurality of upconverting nanoparticles positioned in the light transmission path.

  10. Self-optimization and auto-stabilization of receiver in DPSK transmission system.

    PubMed

    Jang, Y S

    2008-03-17

    We propose a self-optimization and auto-stabilization method for a 1-bit DMZI in DPSK transmission. Using the characteristics of eye patterns, the optical frequency transmittance of a 1-bit DMZI is thermally controlled to maximize the power difference between the constructive and destructive output ports. Unlike other techniques, this control method can be realized without additional components, making it simple and cost effective. Experimental results show that error-free performance is maintained when the carrier optical frequency variation is approximately 10% of the data rate.

  11. Investigation of evanescent coupling between tapered fiber and a multimode slab waveguide.

    PubMed

    Dong, Shaofei; Ding, Hui; Liu, Yiying; Qi, Xiaofeng

    2012-04-01

    A tapered fiber-slab waveguide coupler (TFSC) is proposed in this paper. Both the numerical analysis based on the beam propagation method and experiments are used for investigating the dependencies of TFSC transmission features on their geometric parameters. From the simulations and experimental results, the rules for fabricating a TFSC with low transmission loss and sharp resonant spectra by optimizing the configuration parameters are presented. The conclusions derived from our work may provide helpful references for optimally designing and fabricating TFSC-based devices, such as sensors, wavelength filters, and intensity modulators.

  12. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
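
    The "intrinsic parallelism of bitwise operators" mentioned above can be illustrated by packing one fitness case per bit, so a single machine word evaluates many cases at once. This is a minimal sub-machine-code sketch under assumed packing conventions, not the paper's cellular or genetic programming systems:

```python
def bitwise_fitness(expr, inputs, targets, n_cases):
    """Evaluate a boolean expression on n_cases packed fitness cases at once.

    Each input is an integer whose bit i holds the value of that variable
    in fitness case i; `expr` is built from bitwise operators so one call
    evaluates all cases in parallel. Returns the number of cases where the
    prediction matches the packed target bits.
    """
    mask = (1 << n_cases) - 1
    out = expr(*inputs) & mask
    # Bits where prediction and target agree are 1 in ~(out ^ targets).
    return bin(~(out ^ targets) & mask).count("1")

# XOR problem over 4 fitness cases, one case per bit:
a = 0b0011   # variable a in cases 0..3
b = 0b0101   # variable b in cases 0..3
xor_target = 0b0110
```

On a 64-bit word this evaluates 64 fitness cases per expression pass, which is the runtime optimization the genetic programming variant relies on.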

  13. Interfacing Computer Aided Parallelization and Performance Analysis

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)

    2003-01-01

    When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example for how the interface helps within the program development cycle of a benchmark code.

  14. Optimal control of parametric oscillations of compressed flexible bars

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    In this paper, the problem of damping the oscillations of linear systems with piecewise-constant control is solved. The motion of the bar construction is reduced to the form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are presented.
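
    For orientation, Hill's equation in its Mathieu form, x'' + (a - 2q cos 2t) x = 0, can be integrated with a few lines of semi-implicit Euler. This is an illustrative integrator only, not the paper's Bubnov-Galerkin reduction or sequential-linear-programming control machinery:

```python
import math

def integrate_hill(a, q, x0, v0, t_end, dt=1e-3):
    """Semi-implicit (symplectic) Euler for x'' + (a - 2 q cos 2t) x = 0.

    Updates velocity with the current position, then position with the new
    velocity; for q = 0 this reduces to simple harmonic motion.
    """
    x, v, t = x0, v0, 0.0
    while t < t_end:
        v += -(a - 2.0 * q * math.cos(2.0 * t)) * x * dt
        x += v * dt
        t += dt
    return x, v
```

With a = 1 and q = 0 the solution from x(0) = 1, x'(0) = 0 is cos t, which gives a quick sanity check; nonzero q produces the parametric (stable or unstable) behavior the paper's control is designed to damp.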

  15. A new strategy to improve the cost-effectiveness of human immunodeficiency virus, hepatitis B virus, hepatitis C virus, and syphilis testing of blood donations in sub-Saharan Africa: a pilot study in Burkina Faso.

    PubMed

    Kania, Dramane; Sangaré, Lassana; Sakandé, Jean; Koanda, Abdoulaye; Nébié, Yacouba Kompingnin; Zerbo, Oumarou; Combasséré, Alain Wilfried; Guissou, Innocent Pierre; Rouet, François

    2009-10-01

    In Africa, where blood-borne agents are highly prevalent, cheaper and feasible alternative strategies for testing blood donations are specifically required. From May to August 2002, 500 blood donations from Burkina Faso were tested for hepatitis B surface antigen (HBsAg), human immunodeficiency virus (HIV), syphilis, and hepatitis C virus (HCV) according to two distinct strategies. The first strategy was conventional simultaneous screening of these four blood-borne infectious agents on each blood donation using single-marker assays. The second strategy was sequential screening starting with HBsAg. HBsAg-nonreactive blood donations were then tested for HIV; if nonreactive, they were tested for syphilis; and if still nonreactive, they were finally assessed for HCV antibodies. The accuracy and cost-effectiveness of the two strategies were compared. Using the simultaneous strategy, the seroprevalences of HBsAg, HIV, syphilis, and HCV among blood donors in Ouagadougou were estimated to be 19.2, 9.8, 1.6, and 5.2%. No significant difference in HIV, syphilis, and HCV prevalence rates was observed with the sequential strategy (9.2, 1.9, and 4.7%, respectively). Whichever strategy was used, 157 blood donations (31.4%) were found to be reactive for at least one transfusion-transmissible agent and were thus discarded. The sequential strategy allowed a cost decrease of euro 908.6 compared with the simultaneous strategy. Given that there are approximately 50,000 blood donations annually in Burkina Faso, the potential savings reach euro 90,860. In resource-limited settings, the implementation of a sequential strategy appears to be a pragmatic solution to promote a safe blood supply and ensure the sustainability of the system.
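
    The cost advantage of stopping at the first reactive assay can be sketched from the reported prevalences. Unit assay costs are assumed purely for illustration; the study's actual per-assay prices are not reproduced:

```python
def sequential_test_cost(n_donations, prevalences, costs):
    """Expected assay cost when tests run in a fixed order and a donation
    leaves the chain at its first reactive result."""
    total = 0.0
    remaining = float(n_donations)
    for prev, cost in zip(prevalences, costs):
        total += remaining * cost      # everyone still in the chain is tested
        remaining *= (1.0 - prev)      # reactive donations drop out here
    return total

def simultaneous_test_cost(n_donations, costs):
    """Every donation gets every assay."""
    return n_donations * sum(costs)

# Prevalences in the order tested: HBsAg, HIV, syphilis, HCV (from the study).
prevalences = [0.192, 0.092, 0.019, 0.047]
unit_costs = [1.0, 1.0, 1.0, 1.0]      # hypothetical equal unit costs
```

With equal unit costs, sequencing the highest-prevalence marker (HBsAg) first removes the most donations from the later, otherwise-wasted assays, which is exactly why the sequential strategy is cheaper.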

  16. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  17. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
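
    The EQI/EQIE criteria build on expected-improvement-style acquisition. Shown here is the standard single-fidelity EI formula for minimization under a Gaussian posterior, as a point of reference; the paper's multi-fidelity extensions are not reproduced:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement over the incumbent f_best (minimization).

    Under a Gaussian posterior N(mu, sigma^2) at a candidate point:
        EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma
    where phi/Phi are the standard normal pdf/cdf.
    """
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)   # deterministic prediction
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # N(0,1) pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # N(0,1) cdf
    return (f_best - mu) * Phi + sigma * phi
```

EI trades off exploitation (low predicted mean) against exploration (high predictive uncertainty); the paper's EQIE additionally scores which fidelity level to run, balancing information gain against simulation cost.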

  18. Automated optimization of an aspheric light-emitting diode lens for uniform illumination.

    PubMed

    Luo, Xiaoxia; Liu, Hua; Lu, Zhenwu; Wang, Yao

    2011-07-10

    In this paper, an automated optimization method in the sequential mode of ZEMAX is proposed for the design of an aspheric lens with uniform illuminance for an LED source. A feedback modification is introduced in the design for the LED extended source. The user-defined merit function is written using ZEMAX Programming Language macros and, as an example, optimum parameters of an aspheric lens are obtained by running the optimization. The optical simulation results show that the illumination efficiency and uniformity can reach 83% and 90%, respectively, on a target surface of 40 mm diameter located 60 mm away, for a 1×1 mm LED source. © 2011 Optical Society of America

  19. Optimal vaccination strategies and rational behaviour in seasonal epidemics.

    PubMed

    Doutor, Paulo; Rodrigues, Paula; Soares, Maria do Céu; Chalub, Fabio A C C

    2016-12-01

    We consider a SIRS model with time dependent transmission rate. We assume time dependent vaccination which confers the same immunity as natural infection. We study two types of vaccination strategies: (i) optimal vaccination, in the sense that it minimizes the effort of vaccination in the set of vaccination strategies for which, for any sufficiently small perturbation of the disease free state, the number of infectious individuals is monotonically decreasing; (ii) Nash-equilibria strategies where all individuals simultaneously minimize the joint risk of vaccination versus the risk of the disease. The former case corresponds to an optimal solution for mandatory vaccinations, while the second corresponds to the equilibrium to be expected if vaccination is fully voluntary. We are able to show the existence of both optimal and Nash strategies in a general setting. In general, these strategies will not be functions but Radon measures. For specific forms of the transmission rate, we provide explicit formulas for the optimal and the Nash vaccination strategies.
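
    A constant-rate special case of such a model can be sketched with forward Euler: the paper's time-dependent transmission rate and measure-valued vaccination strategies are replaced by constants purely for illustration, with vaccination moving susceptibles directly to the immune class:

```python
def simulate_sirs(beta, gamma, delta, v, s0, i0, r0, days, dt=0.1):
    """Forward-Euler SIRS with constant vaccination rate v.

    S' = -beta*S*I - v*S + delta*R     (infection, vaccination, waning)
    I' =  beta*S*I - gamma*I           (infection, recovery)
    R' =  gamma*I + v*S - delta*R      (recovery, vaccination, waning)
    Fractions of a closed population, so S + I + R is conserved.
    """
    s, i, r = s0, i0, r0
    for _ in range(int(days / dt)):
        ds = -beta * s * i - v * s + delta * r
        di = beta * s * i - gamma * i
        dr = gamma * i + v * s - delta * r
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r
```

The parameter values in the test below are hypothetical; the model merely illustrates the state dynamics over which the paper's optimal and Nash vaccination strategies are defined.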

  20. Optimal control of vaccination rate in an epidemiological model of Clostridium difficile transmission.

    PubMed

    Stephenson, Brittany; Lanzas, Cristina; Lenhart, Suzanne; Day, Judy

    2017-12-01

    The spore-forming, gram-positive bacterium Clostridium difficile can cause severe intestinal illness. A striking increase in the number of cases of C. difficile infection (CDI) among hospitals has highlighted the need to better understand how to prevent the spread of CDI. In our paper, we modify and update a compartmental model of nosocomial C. difficile transmission to include vaccination. We then apply optimal control theory to determine the time-varying optimal vaccination rate that minimizes a combination of disease prevalence and spread in the hospital population, as well as the cost, in terms of time and money, associated with vaccination. Various hospital scenarios are considered, such as times of increased antibiotic prescription rate and times of outbreak, to see how such scenarios modify the optimal vaccination rate. By comparing the values of the objective functional under constant vaccination rates with those under time-varying optimal vaccination rates, we illustrate the benefits of time-varying controls.

  1. Control of browning and microbial growth on fresh-cut apples by sequential treatment of sanitizers and calcium ascorbate.

    PubMed

    Wang, Hua; Feng, Hao; Luo, Yaguang

    2007-01-01

    This study investigated the efficacy of different sanitizers, including acidic electrolyzed water (AEW), peroxyacetic acid (POAA), and chlorine, in inactivating Escherichia coli O157:H7 on fresh-cut apples. The effects of the sanitizers and of sequential treatments of AEW or POAA followed by calcium ascorbate (CaA) on browning inhibition and organoleptic qualities of fresh-cut apples stored under different package atmospheres at 4 degrees C were also evaluated. Changes in package atmosphere composition, product color, firmness, total aerobic bacterial counts, yeast and mold counts, and sensory qualities were examined at 0, 4, 8, 11, and 21 d. Among all sanitizer treatments, POAA and AEW achieved the highest reductions in E. coli O157:H7 populations. The sequential treatment of AEW followed by CaA (AEW-CaA) achieved the best overall dual control of browning and bacterial growth on fresh-cut apple wedges. Package atmospheres changed significantly over time and among package materials. Packages prepared with films having an oxygen transmission rate (OTR) of 158 had significantly lower O2 and higher CO2 partial pressures than those prepared with 225 OTR films and the Ziploc bags. The effect of package atmospheres on the browning of apples was more pronounced on AEW-, POAA-, and POAA-CaA-treated apple wedges than on AEW-CaA-treated samples.

  2. Mining local climate data to assess spatiotemporal dengue fever epidemic patterns in French Guiana

    PubMed Central

    Flamand, Claude; Fabregue, Mickael; Bringay, Sandra; Ardillon, Vanessa; Quénel, Philippe; Desenclos, Jean-Claude; Teisseire, Maguelonne

    2014-01-01

    Objective To identify local meteorological drivers of dengue fever in French Guiana, we applied an original data mining method to the available epidemiological and climatic data. Through this work, we also assessed the contribution of the data mining method to the understanding of factors associated with the dissemination of infectious diseases and their spatiotemporal spread. Methods We applied contextual sequential pattern extraction techniques to epidemiological and meteorological data to identify the most significant climatic factors for dengue fever, and we investigated the relevance of the extracted patterns for the early warning of dengue outbreaks in French Guiana. Results The maximum temperature, minimum relative humidity, global brilliance, and cumulative rainfall were identified as determinants of dengue outbreaks, and the precise intervals of their values and variations were quantified according to the epidemiologic context. The strongest significant correlations were observed between dengue incidence and meteorological drivers after a 4–6-week lag. Discussion We demonstrated the use of contextual sequential patterns to better understand the determinants of the spatiotemporal spread of dengue fever in French Guiana. Future work should integrate additional variables and explore the notion of neighborhood for extracting sequential patterns. Conclusions Dengue fever remains a major public health issue in French Guiana. The development of new methods to identify such specific characteristics becomes crucial in order to better understand and control spatiotemporal transmission. PMID:24549761

  3. A behavioural and neural evaluation of prospective decision-making under risk

    PubMed Central

    Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.

    2010-01-01

    Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single-choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal pre-determined strategy, irrespective of the particular order in which options are presented. An alternative model involves continuously re-evaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and used these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of re-evaluating decision utilities, where available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously-acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes. PMID:20980595

  4. Dysfunction of bulbar central pattern generator in ALS patients with dysphagia during sequential deglutition.

    PubMed

    Aydogdu, Ibrahim; Tanriverdi, Zeynep; Ertekin, Cumhur

    2011-06-01

    The aim of this study is to investigate a probable dysfunction of the central pattern generator (CPG) in dysphagic patients with ALS. We investigated 58 patients with ALS, 23 patients with PD, and 33 normal subjects. The laryngeal movements and EMG of the submental muscles were recorded during sequential water swallowing (SWS) of 100 ml of water. The coordination of SWS and respiration was also studied in some normal cases and ALS patients. Normal subjects could complete the SWS optimally within 10 s using 7 swallows, while in dysphagic ALS patients, the total duration and the number of swallows were significantly increased. The novel finding was that the regularity and rhythmicity of the swallowing pattern during SWS became disorganized into an irregular and arrhythmic pattern in 43% of the ALS patients. The duration and speed of swallowing were the most sensitive parameters for the disturbed oropharyngeal motility during SWS. The corticobulbar control of swallowing is insufficient in ALS, and the swallowing CPG cannot work very well to produce segmental muscle activation and sequential swallowing. CPG dysfunction can result in irregular and arrhythmic sequential swallowing in ALS patients with bulbar plus pseudobulbar types. The arrhythmic SWS pattern can be considered a form of CPG dysfunction in human ALS cases with dysphagia. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  5. Alternating sequential chemotherapy with high-dose ifosfamide and doxorubicin/cyclophosphamide for adult non-small round cell soft tissue sarcomas.

    PubMed

    Kawai, Akira; Umeda, Toru; Wada, Takuro; Ihara, Koichiro; Isu, Kazuo; Abe, Satoshi; Ishii, Takeshi; Sugiura, Hideshi; Araki, Nobuhito; Ozaki, Toshifumi; Yabe, Hiroo; Hasegawa, Tadashi; Tsugane, Shoichiro; Beppu, Yasuo

    2005-05-01

    Doxorubicin and ifosfamide are the two most active agents used to treat soft tissue sarcomas. However, because of their overlapping side effects, concurrent administration to achieve optimal doses of each agent is difficult. We therefore conducted a Phase II trial to investigate the efficacy and feasibility of a novel alternating sequential chemotherapy regimen consisting of high-dose ifosfamide and doxorubicin/cyclophosphamide in advanced adult non-small round cell soft tissue sarcomas. Adult patients with non-small round cell soft tissue sarcomas were enrolled. The treatment consisted of four sequential courses of chemotherapy planned every 3 weeks. Cycles 1 and 3 consisted of ifosfamide (14 g/m²), and cycles 2 and 4 consisted of doxorubicin (60 mg/m²) and cyclophosphamide (1200 mg/m²). Forty-two patients (median age 47 years) were enrolled. Of the 36 assessable patients, 1 complete response and 16 partial responses were observed, for a response rate of 47.2%. Responses were observed in 57% of patients who had received no previous chemotherapy and 13% of those who had previously undergone chemotherapy. Grade 3-4 neutropenia was observed during 70% of all cycles. Sequential administration of high-dose ifosfamide and doxorubicin/cyclophosphamide has promising activity with manageable side effects in patients with advanced adult non-small round cell soft tissue sarcomas.

  6. A behavioral and neural evaluation of prospective decision-making under risk.

    PubMed

    Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J

    2010-10-27

    Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single-choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal predetermined strategy, regardless of the particular order in which options are presented. An alternative model involves continuously reevaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and used these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of reevaluating decision utilities, in which available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance, and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes.
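    The three summary statistics the abstract says are tracked at each decision point (expected mean, variance and skewness of possible payoffs) can be computed for any discrete gamble. The gamble below is hypothetical, not taken from the study:

```python
def payoff_moments(outcomes, probs):
    # Mean, variance and skewness of a discrete payoff distribution --
    # the three statistics the neural signal is reported to track.
    mean = sum(p * x for x, p in zip(outcomes, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(outcomes, probs))
    third = sum(p * (x - mean) ** 3 for x, p in zip(outcomes, probs))
    skew = third / var ** 1.5 if var else 0.0
    return mean, var, skew

# A right-skewed hypothetical gamble: modest loss likely, large win rare.
mean, var, skew = payoff_moments([-1, 0, 10], [0.5, 0.4, 0.1])
print(round(mean, 3), round(var, 3), round(skew, 3))
```

    A positive skew here signals a lottery-like gamble; a model that evaluates only the mean would treat it the same as a sure payoff of 0.5.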

  7. Sequential administration of pemetrexed with icotinib/erlotinib in lung adenocarcinoma cell lines in vitro

    PubMed Central

    Feng, Xiuli; Zhang, Yan; Li, Tao; Li, Yu

    2017-01-01

    The combination of chemotherapy and epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) has been shown to be a potent strategy for the treatment of tumors. However, survival time was not extended for patients with lung adenocarcinoma (AdC) compared with first-line chemotherapy. In the present study, we attempted to assess the optimal schedule for the combined administration of pemetrexed and icotinib/erlotinib in AdC cell lines. Human lung AdC cell lines with wild-type EGFR (A549), the EGFR T790M mutation (H1975) and an activating EGFR mutation (HCC827) were used in vitro to assess the differential efficacy of various sequential regimens on cell viability, apoptosis and cell cycle distribution. The results suggested that the antiproliferative effect of pemetrexed followed by icotinib/erlotinib was greater than that of icotinib/erlotinib followed by pemetrexed. Additionally, a reduction in G1 phase and an increase in S phase, promoting apoptosis, were observed for the sequence of pemetrexed followed by icotinib/erlotinib. Thus, the sequential administration of pemetrexed followed by icotinib/erlotinib exerted a synergistic effect on the HCC827 and H1975 cell lines compared with the reverse sequence. The sequential treatment of pemetrexed followed by icotinib/erlotinib has demonstrated promising results. This treatment strategy warrants further confirmation in patients with advanced lung AdC. PMID:29371987

  8. Sequential administration of pemetrexed with icotinib/erlotinib in lung adenocarcinoma cell lines in vitro.

    PubMed

    Feng, Xiuli; Zhang, Yan; Li, Tao; Li, Yu

    2017-12-26

    The combination of chemotherapy and epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) has been shown to be a potent strategy for the treatment of tumors. However, survival time was not extended for patients with lung adenocarcinoma (AdC) compared with first-line chemotherapy. In the present study, we attempted to assess the optimal schedule for the combined administration of pemetrexed and icotinib/erlotinib in AdC cell lines. Human lung AdC cell lines with wild-type EGFR (A549), the EGFR T790M mutation (H1975) and an activating EGFR mutation (HCC827) were used in vitro to assess the differential efficacy of various sequential regimens on cell viability, apoptosis and cell cycle distribution. The results suggested that the antiproliferative effect of pemetrexed followed by icotinib/erlotinib was greater than that of icotinib/erlotinib followed by pemetrexed. Additionally, a reduction in G1 phase and an increase in S phase, promoting apoptosis, were observed for the sequence of pemetrexed followed by icotinib/erlotinib. Thus, the sequential administration of pemetrexed followed by icotinib/erlotinib exerted a synergistic effect on the HCC827 and H1975 cell lines compared with the reverse sequence. The sequential treatment of pemetrexed followed by icotinib/erlotinib has demonstrated promising results. This treatment strategy warrants further confirmation in patients with advanced lung AdC.

  9. Robust Electrical Transfer System (RETS) for Solar Array Drive Mechanism SlipRing Assembly

    NASA Astrophysics Data System (ADS)

    Bommottet, Daniel; Bossoney, Luc; Schnyder, Ralph; Howling, Alan; Hollenstein, Christoph

    2013-09-01

    Demands for robust and reliable power transmission systems for sliprings for SADMs (Solar Array Drive Mechanisms) are increasing steadily. As a consequence, their performance regarding the voltage breakdown limit must be known. An understanding of the overall shape of the breakdown voltage versus pressure curve is established, based on experimental measurements of DC (direct current) gas breakdown in complex geometries compared with a numerical simulation model. In addition, a detailed study was made of the functional behaviour of an entire satellite wing in a like-operational mode, comprising the solar cells, the power transmission lines, the SRA (SlipRing Assembly), the power S3R (Sequential Serial/shunt Switching Regulator) and the satellite load to simulate the electrical power consumption. A test bench able to automatically measure (a) the breakdown voltage versus pressure curve and (b) the functional switching performance was developed and validated.

  10. Lattice modification in KTiOPO4 by sequential hydrogen and helium implantation at submicrometer depth

    NASA Astrophysics Data System (ADS)

    Ma, Changdong; Lu, Fei; Xu, Bo; Fan, Ranran

    2016-05-01

    We investigated lattice modification and its physical mechanism in H and He co-implanted, z-cut potassium titanyl phosphate (KTiOPO4). The samples were implanted with 110 keV H and 190 keV He, both to a fluence of 4 × 10¹⁶ cm⁻², at room temperature. Rutherford backscattering/channeling, high-resolution x-ray diffraction, and transmission electron microscopy were used to examine the implantation-induced structural changes and strain. Experimental and simulated x-ray diffraction results show that the strain in the implanted KTiOPO4 crystal is caused by interstitial atoms. The strain and stress are anisotropic and depend on the crystal's orientation. Transmission electron microscopy studies indicate that ion implantation produces many dislocations in the as-implanted samples. Annealing can induce ion aggregation to form nanobubbles, but plastic deformation and ion out-diffusion prevent the KTiOPO4 surface from blistering.

  11. Parasites and poverty: the case of schistosomiasis.

    PubMed

    King, Charles H

    2010-02-01

    Simultaneous and sequential transmission of multiple parasites, and their resultant overlapping chronic infections, are facts of life in many underdeveloped rural areas. These represent significant but often poorly measured health and economic burdens for affected populations. For example, the chronic inflammatory process associated with long-term schistosomiasis contributes to anaemia and undernutrition, which, in turn, can lead to growth stunting, poor school performance, poor work productivity, and continued poverty. To date, most national and international programs aimed at parasite control have not considered the varied economic and ecological factors underlying multi-parasite transmission, but some are beginning to provide a coordinated approach to control. In addition, interest is emerging in new studies for the re-evaluation and recalibration of the health burden of helminthic parasite infection. Their results should highlight the strong potential of integrated parasite control in efforts for poverty reduction. Copyright 2009 Elsevier B.V. All rights reserved.

  12. Relay selection in energy harvesting cooperative networks with rateless codes

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei

    2018-04-01

    This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information after collecting a number of coded bits that marginally surpasses the entropy of the original information. To improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected. The optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that our proposed relay selection scheme outperforms other strategies.
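    A minimal sketch of the relay-selection idea, assuming a simplified two-hop decode-and-forward model: the harvesting efficiency `eta`, unit bandwidth, the channel gains, and the Shannon-capacity rate expression are illustrative stand-ins, not the paper's formulation:

```python
import math

def harvested_power(eta, p_source, channel_gain):
    # Power the relay harvests from the source's RF signal (linear scale).
    return eta * p_source * channel_gain

def achievable_rate(p_source, h_sr, h_rd, eta, noise=1.0):
    # Two-hop decode-and-forward: the end-to-end rate is limited by the
    # weaker of the source->relay and relay->destination links.
    p_relay = harvested_power(eta, p_source, h_sr)
    rate_sr = math.log2(1 + p_source * h_sr / noise)
    rate_rd = math.log2(1 + p_relay * h_rd / noise)
    return min(rate_sr, rate_rd)

def select_relay(p_source, channels, eta=0.6):
    # channels: list of (h_sr, h_rd) gain pairs, one per candidate relay.
    rates = [achievable_rate(p_source, h_sr, h_rd, eta)
             for h_sr, h_rd in channels]
    best = max(range(len(rates)), key=rates.__getitem__)
    return best, rates[best]

best, rate = select_relay(10.0, [(0.2, 0.9), (0.8, 0.7), (0.5, 0.1)])
print(best, round(rate, 3))
```

    Note that the best relay is not simply the one with the strongest source link: a relay with a weak second hop (the third candidate) wastes its harvested power.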

  13. Optimal position of the transmitter coil for wireless power transfer to the implantable device.

    PubMed

    Jinghui Jian; Stanaćević, Milutin

    2014-01-01

    The maximum deliverable power through an inductive link to an implantable device is limited by tissue exposure to electromagnetic field radiation. By moving the transmitter coil away from the body, the maximum deliverable power is increased, as the magnitude of the electrical field at the interface with the body is kept constant. We demonstrate that the optimal distance between the transmitter coil and the body is on the order of 1 cm when the current of the transmitter coil is limited to 1 A. We also confirm that the conditions on the optimal frequency of the power transmission and the topology of the transmission coil remain the same as if the coil were directly adjacent to the body.

  14. How precise can atoms of a nanocluster be located in 3D using a tilt series of scanning transmission electron microscopy images?

    PubMed

    Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S

    2017-10-01

    In this paper, we investigate how precisely the atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Can Simple Transmission Chains Foster Collective Intelligence in Binary-Choice Tasks?

    PubMed

    Moussaïd, Mehdi; Seyed Yahosseini, Kyanoush

    2016-01-01

    In many social systems, groups of individuals can find remarkably efficient solutions to complex cognitive problems, sometimes even outperforming a single expert. The success of the group, however, crucially depends on how the judgments of the group members are aggregated to produce the collective answer. A large variety of such aggregation methods have been described in the literature, such as averaging the independent judgments, relying on the majority or setting up a group discussion. In the present work, we introduce a novel approach for aggregating judgments-the transmission chain-which has not yet been consistently evaluated in the context of collective intelligence. In a transmission chain, all group members have access to a unique collective solution and can improve it sequentially. Over repeated improvements, the collective solution that emerges reflects the judgments of every group member. We address the question of whether such a transmission chain can foster collective intelligence for binary-choice problems. In a series of numerical simulations, we explore the impact of various factors on the performance of the transmission chain, such as the group size, the model parameters, and the structure of the population. The performance of this method is compared to those of the majority rule and the confidence-weighted majority. Finally, we rely on two existing datasets of individuals performing a series of binary decisions to evaluate the expected performances of the three methods empirically. We find that the parameter space where the transmission chain has the best performance rarely appears in real datasets. We conclude that the transmission chain is best suited for other types of problems, such as those that have cumulative properties.
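    A minimal Monte Carlo sketch of why a naive transmission chain can underperform majority voting on binary-choice problems. The overwrite model below (each member replaces the current answer with fixed probability `q`, ignoring confidence) is a deliberate simplification and not the paper's mechanism:

```python
import random

random.seed(0)

def majority(judgments):
    # Majority vote over independent binary judgments (1 = correct).
    return int(sum(judgments) > len(judgments) / 2)

def chain(judgments, q=0.5):
    # Naive transmission chain: members inspect the current collective
    # answer in sequence and overwrite it with their own judgment with
    # probability q (confidence is ignored in this simplification).
    answer = judgments[0]
    for j in judgments[1:]:
        if random.random() < q:
            answer = j
    return answer

def accuracy(method, p=0.6, n=9, trials=20000):
    # Fraction of trials in which the collective answer is correct,
    # given n members who each judge correctly with probability p.
    correct = 0
    for _ in range(trials):
        judgments = [int(random.random() < p) for _ in range(n)]
        correct += method(judgments)
    return correct / trials

acc_majority, acc_chain = accuracy(majority), accuracy(chain)
print(round(acc_majority, 3), round(acc_chain, 3))
```

    In this crude model the chain's final answer is always some single member's judgment, so its accuracy stays near the individual accuracy `p`, while the majority vote benefits from error cancellation, consistent with the paper's finding that the chain's favorable parameter region is narrow.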

  16. Can Simple Transmission Chains Foster Collective Intelligence in Binary-Choice Tasks?

    PubMed Central

    Moussaïd, Mehdi; Seyed Yahosseini, Kyanoush

    2016-01-01

    In many social systems, groups of individuals can find remarkably efficient solutions to complex cognitive problems, sometimes even outperforming a single expert. The success of the group, however, crucially depends on how the judgments of the group members are aggregated to produce the collective answer. A large variety of such aggregation methods have been described in the literature, such as averaging the independent judgments, relying on the majority or setting up a group discussion. In the present work, we introduce a novel approach for aggregating judgments—the transmission chain—which has not yet been consistently evaluated in the context of collective intelligence. In a transmission chain, all group members have access to a unique collective solution and can improve it sequentially. Over repeated improvements, the collective solution that emerges reflects the judgments of every group member. We address the question of whether such a transmission chain can foster collective intelligence for binary-choice problems. In a series of numerical simulations, we explore the impact of various factors on the performance of the transmission chain, such as the group size, the model parameters, and the structure of the population. The performance of this method is compared to those of the majority rule and the confidence-weighted majority. Finally, we rely on two existing datasets of individuals performing a series of binary decisions to evaluate the expected performances of the three methods empirically. We find that the parameter space where the transmission chain has the best performance rarely appears in real datasets. We conclude that the transmission chain is best suited for other types of problems, such as those that have cumulative properties. PMID:27880825

  17. Designing Robust and Resilient Tactical MANETs

    DTIC Science & Technology

    2014-09-25

    Bounds on the Throughput Efficiency of Greedy Maximal Scheduling in Wireless Networks , IEEE/ACM Transactions on Networking , (06 2011): 0. doi: N... Wireless Sensor Networks and Effects of Long Range Dependant Data, Special IWSM Issue of Sequential Analysis, (11 2012): 0. doi: A. D. Dominguez...Bushnell, R. Poovendran. A Convex Optimization Approach for Clone Detection in Wireless Sensor Networks , Pervasive and Mobile Computing, (01 2012

  18. Optimal Achievable Encoding for Brain Machine Interface

    DTIC Science & Technology

    2017-12-22

    dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that… …networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy

  19. Sequential decision making in computational sustainability via adaptive submodularity

    USGS Publications Warehouse

    Krause, Andreas; Golovin, Daniel; Converse, Sarah J.

    2015-01-01

    Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are generally notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: First, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios. Second, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
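    The myopic greedy policy referred to above can be sketched for the simplest (non-adaptive) submodular case, maximum coverage. The sets and budget below are illustrative; the adaptive setting additionally conditions each pick on observations:

```python
def coverage(selected, sets):
    # Number of distinct elements covered by the selected sets --
    # a classic monotone submodular objective.
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy(sets, budget):
    # Myopic greedy policy: repeatedly pick the set with the largest
    # marginal gain. For submodular objectives this simple policy is
    # provably near-optimal (the classic 1 - 1/e guarantee).
    selected = []
    for _ in range(budget):
        gains = [(coverage(selected + [i], sets) - coverage(selected, sets), i)
                 for i in range(len(sets)) if i not in selected]
        gain, best = max(gains)
        if gain == 0:
            break
        selected.append(best)
    return selected

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
picked = greedy(sets, budget=2)
print(picked, coverage(picked, sets))
```

    The diminishing-returns property is visible in the run: the first pick gains 4 elements, the second only 3, and later picks would gain still less.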

  20. Sequential Modelling of Building Rooftops by Integrating Airborne LIDAR Data and Optical Imagery: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.

    2013-05-01

    This paper presents a sequential rooftop modelling method that refines initial rooftop models derived from airborne LiDAR data by integrating them with linear cues retrieved from single imagery. Cue integration between the two datasets is facilitated by creating new topological features connecting the initial model and image lines, from which new model hypotheses (variants of the initial model) are produced. We adopt the Minimum Description Length (MDL) principle to compare the competing model candidates and select the optimal model by considering the balanced trade-off between model closeness and model complexity. Our preliminary results with the Vaihingen data provided by ISPRS WGIII/4 demonstrate that the image-driven modelling cues can compensate for the limitations posed by LiDAR data in rooftop modelling.

  1. Systolic array processing of the sequential decoding algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
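    The role of the priority queue in stack decoding can be sketched in software, with a binary heap standing in for the systolic priority queue. The toy bit-agreement metric below replaces the Fano metric used in practice, and no convolutional code is modeled:

```python
import heapq

def stack_decode(received, depth):
    # Best-first (stack) decoding: the stack is kept ordered by path
    # metric, so the top entry is always the best partial path -- the
    # sorting role played by the systolic priority queue in hardware.
    # Toy metric: +1 when a hypothesised bit matches the received bit,
    # -1 otherwise (heapq is a min-heap, so metrics are negated).
    stack = [(0, ())]                      # (negative metric, path)
    while True:
        neg_metric, path = heapq.heappop(stack)
        if len(path) == depth:
            return list(path)              # best path reached full depth
        for bit in (0, 1):
            gain = 1 if bit == received[len(path)] else -1
            heapq.heappush(stack, (neg_metric - gain, path + (bit,)))

print(stack_decode([1, 0, 1, 1], 4))
```

    Because the decoder always extends the best-metric path, the number of extensions stays small when the received sequence is clean, which is the constant-speed advantage over the approximate stack-bucket sorting the abstract mentions.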

  2. Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir

    PubMed Central

    Montaner, Luis J.

    2017-01-01

    Abstract Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. PMID:28520969

  3. Organic nanoparticle systems for spatiotemporal control of multimodal chemotherapy

    PubMed Central

    Meng, Fanfei; Han, Ning; Yeo, Yoon

    2017-01-01

    Introduction Chemotherapeutic drugs are used in combination to target multiple mechanisms involved in cancer cell survival and proliferation. Carriers are developed to deliver drug combinations to common target tissues in optimal ratios and desirable sequences. Nanoparticles (NP) have been a popular choice for this purpose due to their ability to increase the circulation half-life and tumor accumulation of a drug. Areas covered We review organic NP carriers based on polymers, proteins, peptides, and lipids for simultaneous delivery of multiple anticancer drugs, drug/sensitizer combinations, drug/photodynamic- or photothermal therapy combinations, and drug/gene therapeutics with examples in the past three years. Sequential delivery of drug combinations, based on either sequential administration or built-in release control, is introduced with an emphasis on the mechanistic understanding of such control. Expert opinion Recent studies demonstrate how a drug carrier can contribute to co-localizing drug combinations in optimal ratios and dosing sequences to maximize the synergistic effects. We identify several areas for improvement in future research, including the choice of drug combinations, circulation stability of carriers, spatiotemporal control of drug release, and the evaluation and clinical translation of combination delivery. PMID:27476442

  4. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.

    PubMed

    Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar

    2017-03-01

    We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data revealed to it. On the other hand, the aim of the multiagent system is to train the SLFN at each agent to perform as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for the applications involving big data.
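    A minimal single-agent sketch of the SLFN/extreme-learning-machine idea (random, fixed hidden layer; only the output weights are fitted). The distributed subgradient exchange between agents is not modeled here, and the target function and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=50):
    # Extreme learning machine: input weights and biases are drawn at
    # random and never trained; the output weights are the batch
    # least-squares solution over the random hidden features.
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2            # smooth toy target
model = elm_train(X, y)
err = float(np.mean((elm_predict(model, X) - y) ** 2))
print(round(err, 5))
```

    Because only the linear output layer is learned, each online/distributed update in the paper's setting reduces to a convex problem, which is what makes the subgradient analysis and its guarantees tractable.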

  5. Gene expression profiling gut microbiota in different races of humans

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Yu-Hang; Huang, Tao; Cai, Yu-Dong

    2016-03-01

    The gut microbiome is shaped and modified by the polymorphisms of microorganisms in the intestinal tract. Its composition shows strong individual specificity and may play a crucial role in the human digestive system and metabolism. Several factors can affect the composition of the gut microbiome, such as eating habits, living environment, and antibiotic usage. Thus, various races are characterized by different gut microbiome characteristics. In the present study, we studied the gut microbiomes of three different races, including individuals of Asian, European and American races. The gut microbiome and the expression levels of gut microbiome genes were analyzed in these individuals. Advanced feature selection methods (minimum redundancy maximum relevance and incremental feature selection) and four machine-learning algorithms (random forest, nearest neighbor algorithm, sequential minimal optimization, Dagging) were employed to capture key differentially expressed genes. As a result, sequential minimal optimization was found to yield the best performance using the 454 genes, which could effectively distinguish the gut microbiomes of different races. Our analyses of extracted genes support the widely accepted hypotheses that eating habits, living environments and metabolic levels in different races can influence the characteristics of the gut microbiome.
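    The minimum-redundancy maximum-relevance (mRMR) step the abstract describes can be sketched for discrete features: each greedy pick maximizes relevance to the labels minus mean redundancy with the features already chosen. The toy features, labels, and the `mrmr`/`mutual_info` helpers are invented for illustration:

```python
from collections import Counter
import math

def mutual_info(x, y):
    # Mutual information between two discrete sequences (in nats).
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * math.log((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def mrmr(features, labels, k):
    # Greedy mRMR: at each step add the feature maximising
    # MI(f, labels) minus the mean MI between f and the features
    # already selected (relevance minus redundancy).
    selected = []
    while len(selected) < k:
        def score(i):
            rel = mutual_info(features[i], labels)
            red = (sum(mutual_info(features[i], features[j]) for j in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        candidates = [i for i in range(len(features)) if i not in selected]
        selected.append(max(candidates, key=score))
    return selected

labels   = [0, 0, 1, 1, 0, 1]
features = [[0, 0, 1, 1, 0, 1],   # perfectly relevant
            [0, 0, 1, 1, 0, 0],   # partly relevant, redundant with f0
            [1, 0, 1, 0, 1, 0]]   # barely relevant
picked = mrmr(features, labels, 2)
print(picked)
```

    In the paper's pipeline, incremental feature selection then evaluates classifier performance on growing prefixes of this ranked list to choose the final gene set (the 454 genes).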

  6. The potential application of the blackboard model of problem solving to multidisciplinary design

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1989-01-01

    The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential to find a solution closer to the true optimum for the multidisciplinary design problems found in today's aerospace industries.

  7. IFSM fractal image compression with entropy and sparsity constraints: A sequential quadratic programming approach

    NASA Astrophysics Data System (ADS)

    Kunze, Herb; La Torre, Davide; Lin, Jianyi

    2017-01-01

    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the 1-norm instead of the 0-norm. In fact, optimization problems involving the 0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the 1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria, i.e., collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
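    The scalarization step, which folds the conflicting criteria into a single weighted objective, can be sketched with SciPy's SLSQP (an SQP implementation). The operator A, target b, and trade-off weights below are toy stand-ins, and the 1-norm term plays the role of the sparsity surrogate discussed above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8))                           # stand-in for the collage operator
b = A @ (rng.normal(size=8) * (rng.random(8) > 0.5))   # target built from a sparse vector

w_collage, w_sparsity = 1.0, 0.1   # trade-off weights of the scalarization

def objective(x):
    collage_error = np.sum((A @ x - b) ** 2)
    sparsity = np.sum(np.abs(x))   # 1-norm surrogate for the 0-norm
    return w_collage * collage_error + w_sparsity * sparsity

res = minimize(objective, x0=np.zeros(8), method="SLSQP")
print(round(objective(res.x), 4))
```

    Sweeping the weights traces out different trade-offs between fidelity and sparsity, which is exactly what the scalarization technique buys over a single fixed objective.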

  8. Gene expression profiling gut microbiota in different races of humans

    PubMed Central

    Chen, Lei; Zhang, Yu-Hang; Huang, Tao; Cai, Yu-Dong

    2016-01-01

    The gut microbiome is shaped and modified by the polymorphisms of microorganisms in the intestinal tract. Its composition shows strong individual specificity and may play a crucial role in the human digestive system and metabolism. Several factors can affect the composition of the gut microbiome, such as eating habits, living environment, and antibiotic usage. Thus, different races are characterized by distinct gut microbiomes. In the present study, we studied the gut microbiomes of three races: Asian, European and American. The gut microbiome and the expression levels of gut microbiome genes were analyzed in these individuals. Advanced feature selection methods (minimum redundancy maximum relevance and incremental feature selection) and four machine-learning algorithms (random forest, nearest neighbor algorithm, sequential minimal optimization, Dagging) were employed to capture key differentially expressed genes. As a result, sequential minimal optimization was found to yield the best performance using the 454 selected genes, which could effectively distinguish the gut microbiomes of different races. Our analyses of the extracted genes support the widely accepted hypotheses that eating habits, living environments and metabolic levels in different races can influence the characteristics of the gut microbiome. PMID:26975620
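    A minimal sketch of the relevance-ranking plus incremental feature selection (IFS) pipeline described above, on synthetic data. A simple absolute-correlation score stands in for mRMR, and a leave-one-out nearest-centroid classifier stands in for sequential minimal optimization; both substitutions are for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 60, 10
y = rng.integers(0, 2, size=n)                 # two hypothetical class labels
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * y                             # features 0 and 1 are informative
X[:, 1] -= 1.5 * y

# Step 1: rank features by relevance (|correlation with the label|),
# a simple stand-in for the mRMR criterion.
relevance = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(d)]
ranking = np.argsort(relevance)[::-1]

def loo_accuracy(cols):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        c0 = X[mask][y[mask] == 0][:, cols].mean(axis=0)
        c1 = X[mask][y[mask] == 1][:, cols].mean(axis=0)
        xi = X[i, cols]
        pred = 0 if np.sum((xi - c0) ** 2) <= np.sum((xi - c1) ** 2) else 1
        hits += (pred == y[i])
    return hits / n

# Step 2: incremental feature selection -- grow the feature set along the
# ranking and keep the prefix with the best accuracy.
best_k, best_acc = 1, 0.0
for k in range(1, d + 1):
    acc = loo_accuracy(ranking[:k])
    if acc > best_acc:
        best_k, best_acc = k, acc
print(best_k, round(best_acc, 3))
```

    The same two-stage structure (rank once, then sweep prefix size) is what keeps IFS tractable compared with searching all feature subsets.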

  9. Games With Estimation of Non-Damage Objectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canavan, G.H.

    1998-09-14

    Games against nature illustrate the role of non-damage objectives in producing conflict with uncertain rewards and the role of probing and estimation in reducing that uncertainty and restoring optimal strategies. This note discusses two essential elements of the analysis of crisis stability omitted from current treatments based on first strike stability: the role of an objective that motivates conflicts sufficiently serious to lead to conflicts, and the process of sequential interactions that could cause those conflicts to deepen. Games against nature illustrate the role of objectives and uncertainty that are at the core of detailed treatments of crisis stability. These models can also illustrate how these game processes can generate and deepen crises and the optimal strategies that might be used to end them. This note discusses two essential elements of the analysis of crisis stability that are omitted from current treatments based on first strike stability: a non-damage objective that motivates conflicts sufficiently serious to lead to conflicts, and the process of sequential tests that could cause those conflicts to deepen. The model used is a game against nature, simplified sufficiently to make the role of each of those elements obvious.

  10. Inferring HIV-1 Transmission Dynamics in Germany From Recently Transmitted Viruses.

    PubMed

    Pouran Yousef, Kaveh; Meixenberger, Karolin; Smith, Maureen R; Somogyi, Sybille; Gromöller, Silvana; Schmidt, Daniel; Gunsenheimer-Bartmeyer, Barbara; Hamouda, Osamah; Kücherer, Claudia; von Kleist, Max

    2016-11-01

    Although HIV continues to spread globally, novel intervention strategies such as treatment as prevention (TasP) may bring the epidemic to a halt. However, their effective implementation requires a profound understanding of the underlying transmission dynamics. We analyzed parameters of the German HIV epidemic based on phylogenetic clustering of viral sequences from recently infected seroconverters with known infection dates. Viral baseline and follow-up pol sequences (n = 1943) from 1159 drug-naïve individuals were selected from a nationwide long-term observational study initiated in 1997. Putative transmission clusters were computed based on a maximum likelihood phylogeny. Using individual follow-up sequences, we optimized our clustering threshold to maximize the likelihood of co-clustering individuals connected by direct transmission. The sizes of putative transmission clusters scaled inversely with their abundance and their distribution exhibited a heavy tail. Clusters based on the optimal clustering threshold were significantly more likely to contain members of the same or bordering German federal states. Interinfection times between co-clustered individuals were significantly shorter (26 weeks; interquartile range: 13-83) than in a null model. Viral intraindividual evolution may be used to select criteria that maximize co-clustering of transmission pairs in the absence of strong adaptive selection pressure. Interinfection times of co-clustered individuals may then be an indicator of the typical time to onward transmission. Our analysis suggests that onward transmission may have occurred early after infection, when individuals are typically unaware of their serological status. The latter argues that TasP should be combined with HIV testing campaigns to reduce the possibility of transmission before TasP initiation.
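    The threshold-optimization idea above — pick the clustering cutoff that best co-clusters known transmission pairs — can be sketched with single-linkage clustering over pairwise distances. All sequences, distances, and "linked" pairs below are invented for illustration:

```python
import itertools, random

random.seed(2)
# Hypothetical pairwise distances between 8 viral sequences; small distances
# for pairs assumed to be linked by direct transmission, larger otherwise.
n = 8
linked = {(0, 1), (2, 3), (4, 5)}                 # assumed transmission pairs
dist = {}
for i, j in itertools.combinations(range(n), 2):
    dist[(i, j)] = random.uniform(0.005, 0.02) if (i, j) in linked \
                   else random.uniform(0.03, 0.10)

def clusters(threshold):
    """Single-linkage clustering via union-find at a distance threshold."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, j), d in dist.items():
        if d <= threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

def score(threshold):
    """Co-clustered linked pairs minus co-clustered unlinked pairs."""
    c = clusters(threshold)
    tp = sum(c[i] == c[j] for i, j in linked)
    fp = sum(c[i] == c[j] for i, j in itertools.combinations(range(n), 2)
             if (i, j) not in linked)
    return tp - fp

# Sweep candidate thresholds and keep the best (score, threshold) pair.
best = max((score(t / 1000), t / 1000) for t in range(5, 100, 5))
print(best)
```

    The real analysis optimizes a likelihood over intraindividual follow-up sequences rather than this toy score, but the sweep-and-score structure is the same.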

  11. Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method

    NASA Astrophysics Data System (ADS)

    Huang, Feng; Li, Jing

    2017-12-01

    The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cables and is not the optimal operating state for a cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the constraint that the temperature of every cable remains below the maximum permissible temperature. The interior point method, which is very effective for nonlinear problems, is applied to solve this constrained extremum problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method increases the total load current by about 5%, greatly improving the economic performance of the cable cluster.
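    The constrained formulation can be sketched with SciPy's trust-constr solver (an interior-point-style method) on a hypothetical three-cable thermal coupling model. The quadratic thermal model and every number below are illustrative stand-ins, not the paper's cable data:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

T_amb, T_max = 25.0, 90.0                      # ambient / permissible temp (°C)
# Hypothetical thermal coupling matrix: self-heating on the diagonal,
# mutual heating between neighbouring cables off the diagonal.
R = np.array([[0.060, 0.020, 0.010],
              [0.020, 0.060, 0.020],
              [0.010, 0.020, 0.060]])

def temperatures(I):
    return T_amb + R @ (I ** 2)

# Maximize total load current subject to every cable staying below T_max.
con = NonlinearConstraint(temperatures, -np.inf, T_max)
res = minimize(lambda I: -np.sum(I), x0=np.full(3, 10.0),
               method="trust-constr", constraints=[con],
               bounds=[(0, None)] * 3)
I_opt = res.x
print(np.round(I_opt, 2), np.round(temperatures(I_opt), 1))
```

    In this toy model the outer cables, which heat their neighbours less, end up carrying more current than the middle one — the kind of unequal loading a unified ampacity cannot exploit.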

  12. Development of a novel optimization tool for electron linacs inspired by artificial intelligence techniques in video games

    NASA Astrophysics Data System (ADS)

    Meier, E.; Biedron, S. G.; LeBlanc, G.; Morgan, M. J.

    2011-03-01

    This paper reports the results of an advanced algorithm for the optimization of electron beam parameters in Free Electron Laser (FEL) linacs. In the novel approach presented in this paper, the system uses state-of-the-art developments in video games to mimic an operator's decisions to perform an optimization task when no prior knowledge, other than constraints on the actuators, is available. The system was tested for the simultaneous optimization of the energy spread and the transmission of the Australian Synchrotron Linac. The proposed system successfully increased the transmission of the machine from 90% to 97% and decreased the energy spread of the beam from 1.04% to 0.91%. Results of a control experiment performed at the new FERMI@Elettra FEL are also reported, suggesting the adaptability of the scheme for beam-based control.

  13. Improved transmission of electrostatic accelerator in a wide range of terminal voltages by controlling the focal strength of entrance acceleration tube

    NASA Astrophysics Data System (ADS)

    Lobanov, Nikolai R.; Tunningley, Thomas; Linardakis, Peter

    2018-04-01

    Tandem electrostatic accelerators often require the flexibility to operate at a variety of terminal voltages to accommodate various user requirements. However, the ion beam transmission will only be optimal for a limited range of terminal voltages. This paper describes the operational performance of a novel focusing system that expands the range of terminal voltages for optimal transmission. This is accomplished by controlling the gradient of the entrance of the low-energy tube, providing an additional focusing element. In this specific case it is achieved by applying up to 150 kV to the fifth electrode of the first unit of the accelerator tube. Numerical simulations and beam transmission tests have been performed to confirm the effectiveness of the lens. An analytical expression has been derived describing its focal properties. These tests demonstrate that the entrance lens control eliminates the need to short out sections of the tube for operation at low terminal voltage.

  14. Game theoretic power allocation and waveform selection for satellite communications

    NASA Astrophysics Data System (ADS)

    Shu, Zhihui; Wang, Gang; Tian, Xin; Shen, Dan; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2015-05-01

    Game theory is a useful method to model interactions between agents with conflicting interests. In this paper, we set up a Game Theoretic Model for Satellite Communications (SATCOM) to solve the interaction between the transmission pair (blue side) and the jammer (red side) to reach a Nash Equilibrium (NE). First, the IFT Game Application Model (iGAM) for SATCOM is formulated to improve the utility of the transmission pair while considering the interference from a jammer. Specifically, in our framework, the frame error rate performance of different modulation and coding schemes is used in the game theoretic solution. Next, the game theoretic analysis shows that the transmission pair can choose the optimal waveform and power given the received power from the jammer. We also describe how the jammer chooses the optimal power given the waveform and power allocation from the transmission pair. Finally, simulations are implemented for the iGAM and the simulation results show the effectiveness of the SATCOM power allocation, waveform selection scheme, and jamming mitigation.
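    The transmitter–jammer interaction can be sketched as iterated best response over discrete power and waveform grids. The utility functions, power levels, waveform parameters, and costs below are invented for illustration and are not the iGAM formulation:

```python
import math, itertools

P_TX = [1.0, 2.0, 4.0]                 # transmit power levels (W), hypothetical
P_JAM = [0.5, 1.0, 2.0]                # jammer power levels (W), hypothetical
WAVEFORMS = [(1.0, 0.5), (2.0, 1.5)]   # (spectral efficiency, SNR gap) pairs
NOISE, COST_TX, COST_JAM = 0.1, 0.3, 0.4

def tx_utility(p_tx, wf, p_jam):
    """Throughput of the transmission pair minus a power cost."""
    eff, gap = wf
    sinr = p_tx / (NOISE + p_jam)
    return eff * math.log2(1.0 + sinr / gap) - COST_TX * p_tx

def jam_utility(p_tx, wf, p_jam):
    """Jammer wants to hurt the pair, but power is costly."""
    return -tx_utility(p_tx, wf, p_jam) - COST_JAM * p_jam

# Iterated best response until neither side wants to deviate (a pure-strategy
# Nash equilibrium, when one exists on these discrete grids).
tx, jam = (P_TX[0], WAVEFORMS[0]), P_JAM[0]
for _ in range(20):
    tx = max(itertools.product(P_TX, WAVEFORMS),
             key=lambda a: tx_utility(a[0], a[1], jam))
    jam = max(P_JAM, key=lambda p: jam_utility(tx[0], tx[1], p))

# Verify the fixed point: the transmitter has no profitable deviation.
assert tx == max(itertools.product(P_TX, WAVEFORMS),
                 key=lambda a: tx_utility(a[0], a[1], jam))
print(tx, jam)
```

    On this grid the process settles on the pair transmitting at full power with the higher-efficiency waveform while the jammer also saturates its power budget, mirroring the mutual best-response logic of the NE described above.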

  15. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time-series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem, which is then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information on wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.
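    The co-optimization structure — binary build decisions plus an economic dispatch inside the cost function — can be sketched by brute-force enumeration on a toy system. Real EI-scale models need an MIP solver; every number below is invented:

```python
import itertools

# Candidate generators: (capital cost, capacity MW, fuel cost per MWh served)
GENS = [(100, 60, 20), (80, 40, 35), (150, 100, 25)]
# Candidate transmission lines: (capital cost, extra deliverable capacity MW)
LINES = [(50, 50), (90, 120)]
DEMAND = 120          # peak demand to serve, MW
BASE_TRANSFER = 40    # existing transfer capability, MW

def plan_cost(build_g, build_l):
    """Capital plus dispatch cost of a build decision, or None if infeasible."""
    cap = sum(GENS[i][1] for i in build_g)
    transfer = BASE_TRANSFER + sum(LINES[i][1] for i in build_l)
    if min(cap, transfer) < DEMAND:
        return None   # cannot generate or deliver the demand
    capital = sum(GENS[i][0] for i in build_g) + sum(LINES[i][0] for i in build_l)
    # dispatch cheapest built generators first to meet demand
    need, fuel = DEMAND, 0.0
    for cap_cost, cap_g, fuel_c in sorted((GENS[i] for i in build_g),
                                          key=lambda g: g[2]):
        used = min(need, cap_g)
        fuel += used * fuel_c
        need -= used
    return capital + fuel

best = None
for gmask in itertools.product([0, 1], repeat=len(GENS)):
    for lmask in itertools.product([0, 1], repeat=len(LINES)):
        bg = [i for i, b in enumerate(gmask) if b]
        bl = [i for i, b in enumerate(lmask) if b]
        c = plan_cost(bg, bl)
        if c is not None and (best is None or c < best[0]):
            best = (c, bg, bl)
print(best)  # (total cost, generators built, lines built)
```

    Co-optimizing matters here: evaluating generation and lines jointly is what lets the search trade cheap generation against the transmission needed to deliver it, exactly the coupling the MIP formulation captures at scale.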

  16. Bell Helicopter Advanced Rotorcraft Transmission (ART) program

    NASA Technical Reports Server (NTRS)

    Henry, Zachary S.

    1995-01-01

    Future rotorcraft transmissions require key emerging material and component technologies using advanced and innovative design practices in order to meet the requirements for a reduced weight to power ratio, a decreased noise level, and a substantially increased reliability. The specific goals for the future rotorcraft transmission when compared with a current state-of-the-art transmission (SOAT) are: (1) a 25 percent weight reduction; (2) a 10 dB reduction in the transmitted noise level; and (3) a system reliability of 5000 hours mean-time-between-removal (MTBR) for the transmission. This report summarizes the work conducted by Bell Helicopter Textron, Inc. to achieve these goals under the Advanced Rotorcraft Transmission (ART) program from 1988 to 1995. The reference aircraft selected by BHTI for the ART program was the Tactical Tiltrotor which is a 17,000 lb gross weight aircraft. A tradeoff study was conducted comparing the ART with a Selected SOAT. The results showed the ART to be 29 percent lighter and up to 13 dB quieter with a calculated MTBR in excess of 5000 hours. The results of the following high risk component and material tests are also presented: (1) sequential meshing high contact ratio planetary with cantilevered support posts; (2) thin dense chrome plated M50 NiL double row spherical roller planetary bearings; (3) reduced kinematic error and increased bending strength spiral bevel gears; (4) high temperature WE43 magnesium housing evaluation and coupon corrosion tests; (5) flexure fatigue tests of precision forged coupons simulating precision forged gear teeth; and (6) flexure fatigue tests of plasma carburized coupons simulating plasma carburized gear teeth.

  17. Design and Optimization of Ultrasonic Wireless Power Transmission Links for Millimeter-Sized Biomedical Implants.

    PubMed

    Meng, Miao; Kiani, Mehdi

    2017-02-01

    Ultrasound has recently been proposed as an alternative modality for efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions. This paper presents the theory and design methodology of ultrasonic WPT links that involve mm-sized receivers (Rx). For a given load (RL) and powering distance (d), the optimal geometries of the transmitter (Tx) and Rx ultrasonic transducers, including their diameter and thickness, as well as the optimal operation frequency (fc), are found through a recursive design procedure to maximize the power transmission efficiency (PTE). First, a range of realistic fc values is found based on the Rx thickness constraint. For a chosen fc within the range, the diameter and thickness of the Rx transducer are then swept together to maximize PTE. Then, the diameter and thickness of the Tx transducer are optimized to maximize PTE. Finally, this procedure is repeated for different fc values to find the optimal fc and its corresponding transducer geometries that maximize PTE. A design example of an ultrasonic link has been presented and optimized for WPT to a 1 mm3 implant, including a disk-shaped piezoelectric transducer on a silicon die. In simulations, a PTE of 2.11% at fc of 1.8 MHz was achieved for RL of 2.5 [Formula: see text] at [Formula: see text]. In order to validate our simulations, an ultrasonic link was optimized for a 1 mm3 piezoelectric transducer mounted on a printed circuit board (PCB), which led to simulated and measured PTEs of 0.65% and 0.66% at fc of 1.1 MHz for RL of 2.5 [Formula: see text] at [Formula: see text], respectively.

  18. Energy neutral and low power wireless communications

    NASA Astrophysics Data System (ADS)

    Orhan, Oner

    Wireless sensor nodes are typically designed to have low cost and small size. These design objectives impose restrictions on the capacity and efficiency of the transceiver components and energy storage units that can be used. As a result, energy becomes a bottleneck and continuous operation of the sensor network requires frequent battery replacements, increasing the maintenance cost. Energy harvesting and energy efficient transceiver architectures are able to overcome these challenges by collecting energy from the environment and utilizing the energy in an intelligent manner. However, due to the nature of the ambient energy sources, the amount of useful energy that can be harvested is limited and unreliable. Consequently, optimal management of the harvested energy and design of low power transceivers pose new challenges for wireless network design and operation. The first part of this dissertation is on energy neutral wireless networking, where optimal transmission schemes under different system setups and objectives are investigated. First, throughput maximization for energy harvesting two-hop networks with decode-and-forward half-duplex relays is studied. For a system with two parallel relays, various combinations of the following four transmission modes are considered: Broadcast from the source, multi-access from the relays, and successive relaying phases I and II. Next, the energy cost of the processing circuitry as well as the transmission energy are taken into account for communication over a broadband fading channel powered by an energy harvesting transmitter. Under this setup, throughput maximization, energy maximization, and transmission completion time minimization problems are studied. Finally, source and channel coding for an energy-limited wireless sensor node is investigated under various energy constraints including energy harvesting, processing and sampling costs. 
For each objective, optimal transmission policies are formulated as the solutions of a convex optimization problem, and the properties of these optimal policies are identified. In the second part of this thesis, low power transceiver design is considered for millimeter wave communication systems. In particular, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance.
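    The energy-harvesting throughput problems described above share a canonical convex form: maximize the sum rate subject to energy causality (energy spent up to any slot cannot exceed energy harvested up to that slot). A sketch with SciPy's SLSQP on an invented harvest profile; the optimum exhibits the non-decreasing, "directional water-filling" power structure known for this problem:

```python
import numpy as np
from scipy.optimize import minimize

E = np.array([4.0, 1.0, 6.0, 1.0])   # energy harvested before each slot (J)
n = len(E)

def neg_throughput(p):
    return -np.sum(np.log1p(p))       # unit-length slots, rate log(1 + p)

# Energy causality: cumulative energy spent cannot exceed cumulative harvest.
cons = [{"type": "ineq",
         "fun": (lambda p, k=k: np.sum(E[:k + 1]) - np.sum(p[:k + 1]))}
        for k in range(n)]

res = minimize(neg_throughput, x0=np.full(n, 1.0), method="SLSQP",
               constraints=cons, bounds=[(0, None)] * n)
print(np.round(res.x, 3), round(-res.fun, 3))
```

    For this profile the optimal policy averages energy within the windows the causality constraints allow (2.5 J in the first two slots, 3.5 J in the last two), never decreasing over time — water can only "flow forward."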

  19. On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending

    NASA Astrophysics Data System (ADS)

    Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong

    2017-11-01

    A real-time optimization control method is proposed to extend turbo-fan engine service life. The real-time optimization is based on an on-board engine model devised with MRR-LSSVR (multi-input multi-output recursive reduced least-squares support vector regression). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process, and a thermal mechanical fatigue model of the engine acceleration process is established to describe engine life decay. The optimization objective function contains not only a term that yields fast engine response, but also a term for the total mechanical strain range, which is positively related to engine fatigue life. Finally, simulations of both the conventional optimization control, which considers only engine acceleration performance, and the proposed optimization method have been conducted. The simulations demonstrate that the time of the two control methods from idle to 99.5 % of the maximum power is equal; however, the engine life using the proposed optimization method is increased by 36.17 % compared with that using conventional optimization control.

  20. Mutations in the haemagglutinin protein and their effect in transmission of highly pathogenic avian influenza (HPAI) H5N1 virus in sub-optimally vaccinated chickens.

    PubMed

    Sitaras, Ioannis; Rousou, Xanthoula; Peeters, Ben; de Jong, Mart C M

    2016-11-04

    Transmission of highly pathogenic avian influenza (HPAI) viruses in poultry flocks is associated with huge economic losses, culling of millions of birds, as well as human infections and deaths. In the cases where vaccination against avian influenza is used as a control measure, it has been found to be ineffective in preventing transmission of field strains. Reports suggest that one of the reasons for this is the use of vaccine doses much lower than the ones recommended by the manufacturer, resulting in very low levels of immunity. In a previous study, we selected for immune escape mutants using homologous polyclonal sera and used them as vaccines in transmission experiments. We concluded that provided a threshold of immunity is reached, antigenic distance between vaccine and challenge strains due to selection need not result in vaccine escape. Here, we evaluate the effect that the mutations in the haemagglutinin protein of our most antigenically-distant mutant may have on the transmission efficiency of this mutant to chickens vaccinated against the parent strain, under sub-optimal vaccination conditions resembling those often found in the field. In this study we employed reverse genetics techniques and transmission experiments to examine whether the HA mutations of our most antigenically-distant mutant affect its ability to transmit to vaccinated chickens. In addition, we simulated sub-optimal vaccination conditions in the field by using a very low vaccine dose. We find that the mutations in the HA protein of our most antigenically-distant mutant are not enough to allow it to evade even low levels of vaccination-induced immunity. Our results suggest that - for the antigenic distances we investigated - vaccination can reduce transmission of an antigenically-distant strain compared to the unvaccinated groups, even when low vaccine doses are used, resulting in low levels of immunity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Parallel Information Processing (Image Transmission Via Fiber Bundle and Multimode Fiber)

    NASA Technical Reports Server (NTRS)

    Kukhtarev, Nicholai

    2003-01-01

    Growing demand for visual, user-friendly representation of information inspires a search for new methods of image transmission. Currently used in-series (sequential) methods of information processing are inherently slow and are designed mainly for transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers with arrays of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images. An image is formed on the fiber-optic bundle entrance surface and each fiber transmits the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way. Each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, so that each mode represents a different pixel element. Direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength- and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method of three-dimensional imaging over single-mode fibers has been demonstrated using laser light of reduced spatial coherence. Coherence encoding, needed for transmission of images by this method, was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally this method was successfully tested for a single multimode fiber.
    In this research we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or fiberscope).

  2. Environmentally friendly microwave-assisted sequential extraction method followed by ICP-OES and ion-chromatographic analysis for rapid determination of sulphur forms in coal samples.

    PubMed

    Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine

    2018-05-15

    A rapid three-step sequential extraction method was developed under microwave radiation followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized by using multivariate mathematical tools. Pareto charts generated from a 2^3 full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%) and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps has shown consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To safeguard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notorious acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem of dengue fever transmission. The model divides the population into human and vector (mosquito) classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler, susceptible, and infected classes, so the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two optimal control variables in the model: fogging and vaccination. The objective function of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. Vaccination is considered as a control variable because it is one of the measures being developed to reduce the spread of dengue fever. We used the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
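    A forward simulation of a compartmental model of this general type (without the wiggler class, and with constant controls rather than Pontryagin-optimal ones) illustrates how vaccination u1 and fogging u2 suppress infections. All parameter values below are invented for illustration:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative parameter values, not those of the paper.
beta, gamma, mu = 0.0005, 0.1, 0.0001   # infection, recovery, birth/death
A, lam, d = 50.0, 0.0002, 0.1           # vector recruitment, infection, death

def model(x, t, u1, u2):
    """Human S/I/R plus vector Sv/Iv; u1 = vaccination, u2 = fogging rate."""
    S, I, R, Sv, Iv = x
    dS = mu * (S + I + R) - beta * S * Iv - u1 * S - mu * S
    dI = beta * S * Iv - gamma * I - mu * I
    dR = gamma * I + u1 * S - mu * R
    dSv = A - lam * Sv * I - d * Sv - u2 * Sv
    dIv = lam * Sv * I - d * Iv - u2 * Iv
    return [dS, dI, dR, dSv, dIv]

x0 = [1000.0, 10.0, 0.0, 500.0, 20.0]
t = np.linspace(0, 100, 1001)

no_control = odeint(model, x0, t, args=(0.0, 0.0))
controlled = odeint(model, x0, t, args=(0.02, 0.05))  # vaccination + fogging

print(round(no_control[:, 1].max(), 1), round(controlled[:, 1].max(), 1))
```

    The optimal control problem then replaces the constant (u1, u2) with time-varying functions chosen, via the Pontryagin Minimum Principle, to trade peak suppression against control cost.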

  4. Creating Ruddlesden-Popper phases by hybrid molecular beam epitaxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haislmaier, Ryan C.; Stone, Greg; Alem, Nasim

    2016-07-25

    The synthesis of a 50 unit cell thick n = 4 Sr{sub n+1}Ti{sub n}O{sub 3n+1} (Sr{sub 5}Ti{sub 4}O{sub 13}) Ruddlesden-Popper (RP) phase film is demonstrated by sequentially depositing SrO and TiO{sub 2} layers in an alternating fashion using hybrid molecular beam epitaxy (MBE), where Ti was supplied using titanium tetraisopropoxide (TTIP). A detailed calibration procedure is outlined for determining the shuttering times to deposit SrO and TiO{sub 2} layers with precise monolayer doses using in-situ reflection high energy electron diffraction (RHEED) as feedback. Using optimized Sr and TTIP shuttering times, a fully automated growth of the n = 4 RP phase was carried out over a period of >4.5 h. Very stable RHEED intensity oscillations were observed over the entire growth period. The structural characterization by X-ray diffraction and high resolution transmission electron microscopy revealed that a constant periodicity of four SrTiO{sub 3} perovskite unit cell blocks separating the double SrO rocksalt layer was maintained throughout the entire film thickness, with a very small amount of planar faults oriented perpendicular to the growth front direction. These results illustrate that hybrid MBE is capable of layer-by-layer growth with atomic level precision and excellent flux stability.

  5. [Non-destructive detection research for hollow heart of potato based on semi-transmission hyperspectral imaging and SVM].

    PubMed

    Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong

    2015-01-01

    The quality of potato is directly related to its edible and industrial value. Hollow heart of potato, a physiological disease occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method using semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared to reflection and transmission hyperspectral images, a semi-transmission hyperspectral image is clearer and contains the internal quality information of agricultural products. In this study, 224 potato samples (149 normal samples and 75 hollow samples) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images (390-1 040 nm) of the potato samples; the average spectrum of the region of interest was then extracted for spectral characteristics analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wavebands; the recognition rate on the test set was only 87.5%. To simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projections algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition rate of 94.64% on the test set was obtained by using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and the grid search algorithm, were used to optimize the SVM model parameters: penalty parameter c and kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was found to give the optimal model parameters (c = 10.6591, g = 0.3497), and a recognition accuracy of 100% was obtained for the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
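    As an illustrative sketch of the parameter-search idea, the following minimal, simplified artificial-fish-swarm optimizer (follow and prey behaviors only) minimizes a toy quadratic stand-in for the cross-validated error surface over (c, g). The objective, bounds, and all tuning values here are assumptions for demonstration, not the paper's data or implementation.

```python
import random

def afsa_minimize(f, bounds, n_fish=20, visual=2.0, step=0.6,
                  try_number=5, iters=200, seed=42):
    """Simplified artificial fish swarm: each fish either follows the best
    neighbour within its visual range or preys (local random search)."""
    rng = random.Random(seed)
    fish = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fish)]
    best = min(fish, key=f)[:]
    for _ in range(iters):
        for i, x in enumerate(fish):
            nbrs = [y for y in fish if y is not x and
                    sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5 < visual]
            cand = min(nbrs, key=f) if nbrs else None
            if cand is not None and f(cand) < f(x):
                x = [a + step * (b - a) for a, b in zip(x, cand)]  # follow
            else:
                for _ in range(try_number):                        # prey
                    t = [a + rng.uniform(-visual, visual) for a in x]
                    t = [min(max(v, lo), hi) for v, (lo, hi) in zip(t, bounds)]
                    if f(t) < f(x):
                        x = t
                        break
            fish[i] = x
            if f(x) < f(best):
                best = x[:]
    return best, f(best)

# Toy stand-in for the cross-validated error surface over (c, g), with its
# minimum placed at the paper's reported optimum c = 10.6591, g = 0.3497.
err = lambda p: (p[0] - 10.6591) ** 2 + (p[1] - 0.3497) ** 2
(best_c, best_g), best_err = afsa_minimize(err, [(0.0, 100.0), (0.0, 10.0)])
```

    A real replication would instead evaluate cross-validated SVM accuracy at each fish position, which this sketch replaces with a cheap surrogate.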

  6. Infant survival, HIV infection, and feeding alternatives in less-developed countries.

    PubMed Central

    Kuhn, L; Stein, Z

    1997-01-01

    OBJECTIVES: This study examines, in the context of the human immunodeficiency virus (HIV) epidemic, the effects of optimal breast-feeding, complete avoidance of breast-feeding, and early cessation of breast-feeding. METHODS: The three categories of breast-feeding were weighed in terms of HIV transmission and infant mortality. Estimates of the frequency of adverse outcomes were obtained by simulation. RESULTS: Avoidance of all breast-feeding by the whole population always produces the worst outcome. The lowest frequency of adverse outcomes occurs if no HIV-seropositive women breast-feed and all seronegative women breast-feed optimally, given infant mortality rates below 100 per 1000 and relative risks of dying set at 2.5 for non-breast-fed compared with optimally breast-fed infants. For known HIV-seropositive mothers, fewer adverse outcomes result from early cessation than from prolonged breast-feeding if the hazard of HIV transmission through breast-feeding after 3 months is 7% or more, even at high mortality rates, given relative risks of dying set at 1.5 for early cessation compared with optimal duration of breast-feeding. CONCLUSIONS: The risk of HIV transmission through breast-feeding at various ages needs to be more precisely quantified. The grave issues that may accompany a possible decline in breast-feeding in the less developed world demand evaluation. PMID:9224171

  7. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

    In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint while taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze its characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case that the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.
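    The paper solves a constrained MDP by linear programming; as a hedged sketch of the underlying decision problem, the toy below uses plain value iteration (the constraint and LP machinery are omitted) on an invented two-state channel-access MDP: the state is the licensed channel's occupancy and the action is whether the CR user transmits.

```python
# Toy MDP: state = channel occupancy (0 idle, 1 busy); action = transmit
# or stay idle. All transition probabilities and rewards are illustrative.
states = [0, 1]
actions = ["transmit", "idle"]

# P[s][a][s']: assumed Markov occupancy dynamics (independent of action)
P = {0: {"transmit": {0: 0.8, 1: 0.2}, "idle": {0: 0.8, 1: 0.2}},
     1: {"transmit": {0: 0.4, 1: 0.6}, "idle": {0: 0.4, 1: 0.6}}}

# Reward: throughput 1.0 for transmitting on an idle channel, a collision
# penalty for transmitting on a busy one, 0 for idling.
R = {0: {"transmit": 1.0, "idle": 0.0},
     1: {"transmit": -2.0, "idle": 0.0}}

gamma = 0.9
V = {s: 0.0 for s in states}
for _ in range(500):                      # value iteration to a fixed point
    V = {s: max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in states)
                for a in actions) for s in states}

policy = {s: max(actions,
                 key=lambda a: R[s][a] + gamma * sum(P[s][a][t] * V[t]
                                                     for t in states))
          for s in states}
```

    With these numbers the optimal policy transmits only when the channel is idle; the paper's LP formulation additionally enforces the power constraint across the stationary state-action distribution.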

  8. Modeling and optimization of lime-based stabilization in high alkaline arsenic-bearing sludges with a central composite design.

    PubMed

    Lei, Jie; Peng, Bing; Min, Xiaobo; Liang, Yanjie; You, Yang; Chai, Liyuan

    2017-04-16

    This study focuses on the modeling and optimization of lime-based stabilization in high alkaline arsenic-bearing sludges (HAABS) and describes the relationship between the arsenic leachate concentration (ALC) and stabilization parameters to develop a prediction model for obtaining the optimal process parameters and conditions. A central composite design (CCD) along with response surface methodology (RSM) was conducted to model and investigate the stabilization process with three independent variables: the Ca/As mole ratio, reaction time and liquid/solid ratio, along with their interactions. The obvious characteristic changes of the HAABS before and after stabilization were verified by X-ray diffraction (XRD), scanning electron microscopy (SEM), particle size distribution (PSD) and the community bureau of reference (BCR) sequential extraction procedure. A prediction model Y (ALC) with a statistically significant P-value <0.01 and high correlation coefficient R² = 93.22% was obtained. The optimal parameters were successfully predicted by the model for the minimum ALC of 0.312 mg/L, which was validated with the experimental result (0.306 mg/L). The XRD, SEM and PSD results indicated that the formation of the crystalline calcium arsenates Ca5(AsO4)3OH and Ca4(OH)2(AsO4)2·4H2O played an important role in minimizing the ALC. The BCR sequential extraction results demonstrated that the treated HAABS were stable in a weakly acidic environment for a short time but posed a potential environmental risk after a long time. The results clearly confirm that the proposed three-factor CCD is an effective approach for modeling the stabilization of HAABS. However, further solidification technology is suggested for use after lime-based stabilization treatment of arsenic-bearing sludges.
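    The core of a CCD/RSM study is fitting a second-order polynomial response surface by least squares. The sketch below does this for two coded factors on a small grid; the generating coefficients and design points are invented for illustration and are not the paper's data.

```python
def design_row(x1, x2):
    # terms of the second-order (quadratic) response-surface model
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def lstsq(X, y):
    """Least squares via the normal equations X^T X c = X^T y,
    solved by Gaussian elimination with partial pivoting."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yk for r, yk in zip(X, y)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            A[r] = [u - m * v for u, v in zip(A[r], A[i])]
            b[r] -= m * b[i]
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

# Face-centred CCD-style grid for two coded factors in [-1, 1]; the
# generating surface (coefficients below) is purely illustrative.
true = [0.31, -0.12, 0.05, 0.08, 0.02, -0.04]
X = [design_row(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
y = [sum(c * t for c, t in zip(true, X_row)) for X_row in X]
coef = lstsq(X, y)
```

    On noiseless synthetic responses the fit recovers the generating coefficients exactly; with real leachate measurements the residuals would feed the significance tests (P-value, R²) quoted in the abstract.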

  9. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.

  10. A two-phase investment model for optimal allocation of phasor measurement units considering transmission switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousavian, Seyedamirabbas; Valenzuela, Jorge; Wang, Jianhui

    2015-02-01

    Ensuring the reliability of an electrical power system requires wide-area monitoring and full observability of the state variables. Phasor measurement units (PMUs) collect in real time synchronized phasors of voltages and currents which are used for the observability of the power grid. Due to the considerable cost of installing PMUs, it is not possible to equip all buses with PMUs. In this paper, we propose an integer linear programming model to determine the optimal PMU placement plan in two investment phases. In the first phase, PMUs are installed to achieve full observability of the power grid, whereas additional PMUs are installed in the second phase to guarantee the N - 1 observability of the power grid. The proposed model also accounts for transmission switching and single contingencies such as failure of a PMU or a transmission line. Results are provided on several IEEE test systems which show that our proposed approach is a promising enhancement to the methods available for the optimal placement of PMUs.
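    A minimal sketch of the first-phase placement problem, under the simplest observability rule (a PMU observes its own bus and, through branch currents, every adjacent bus). The 7-bus network is invented, and exhaustive search stands in for the paper's integer linear program; transmission switching and N-1 contingencies are not modeled here.

```python
from itertools import combinations

# Illustrative 7-bus network (edges = transmission lines); not one of the
# paper's IEEE test systems.
lines = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (4, 5), (4, 6), (5, 6)]
n_bus = 7
adj = {b: {b} for b in range(n_bus)}
for u, v in lines:
    adj[u].add(v)
    adj[v].add(u)

def observable(pmus):
    """Full observability: every bus carries a PMU or neighbours one."""
    seen = set()
    for p in pmus:
        seen |= adj[p]
    return len(seen) == n_bus

# Smallest PMU set achieving full observability (brute force in place of ILP)
best = None
for k in range(1, n_bus + 1):
    for combo in combinations(range(n_bus), k):
        if observable(combo):
            best = combo
            break
    if best:
        break
```

    For this toy grid two PMUs (at buses 1 and 4) suffice; an ILP formulation would minimize the same count subject to per-bus coverage constraints, which scales far better than enumeration.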

  11. A Novel Optimal Joint Resource Allocation Method in Cooperative Multicarrier Networks: Theory and Practice

    PubMed Central

    Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng

    2016-01-01

    With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation within capacity constrained backhaul in uplink cooperative wireless networks, where two base stations (BSs), each equipped with a single antenna, serve multiple single-antenna users via multi-carrier transmission mode. In this work, we propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among different pairs (subcarriers for users). A novel robust and efficient centralized algorithm based on an alternating optimization strategy and perfect mapping is proposed. Simulations show that our novel method can improve the system capacity significantly under the constraint of the backhaul resource compared with the blind alternatives. PMID:27077865

  12. Optimal control of the gear shifting process for shift smoothness in dual-clutch transmissions

    NASA Astrophysics Data System (ADS)

    Li, Guoqiang; Görges, Daniel

    2018-03-01

    The control of the transmission system in vehicles is significant for driving comfort. In order to design a controller for smooth shifting and comfortable driving, a dynamic model of a dual-clutch transmission is presented in this paper. A finite-time linear quadratic regulator is proposed for the optimal control of the two friction clutches in the torque phase of the upshift process. An integral linear quadratic regulator is introduced to regulate the relative speed difference between the engine and the slipping clutch under the optimization of the input torque during the inertia phase. The control objective focuses on smoothing the upshift process so as to improve driving comfort. Considering the available sensors in vehicles for feedback control, an observer design is presented to track the immeasurable variables. Simulation results show that the jerk can be reduced in both the torque phase and the inertia phase, indicating good shift performance. Furthermore, compared with conventional controllers for the upshift process, the proposed control method can reduce shift jerk and improve shift quality.
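    The finite-time LQR machinery reduces to a backward Riccati recursion for time-varying gains. The scalar toy below (a stand-in for clutch slip-speed dynamics; all numbers are invented, not the paper's dual-clutch model) computes the gains and simulates the regulated response.

```python
# Scalar finite-horizon discrete-time LQR: x[t+1] = a*x[t] + b*u[t],
# cost = sum Q*x^2 + Rw*u^2 over the horizon, plus terminal Qf*x^2.
a, b = 1.0, 0.5              # illustrative slip-speed dynamics
Q, Rw, Qf = 1.0, 0.1, 5.0    # state, input and terminal weights
N = 20                       # horizon (discrete steps of the torque phase)

# Backward Riccati recursion for the time-varying feedback gains K[t]
P = Qf
K = [0.0] * N
for t in reversed(range(N)):
    K[t] = (a * b * P) / (Rw + b * b * P)
    P = Q + a * a * P - (a * b * P) ** 2 / (Rw + b * b * P)

# Closed-loop simulation: u[t] = -K[t]*x[t] drives the slip speed to zero
x = 10.0
traj = [x]
for t in range(N):
    x = a * x + b * (-K[t] * x)
    traj.append(x)
```

    The same recursion, with matrices in place of scalars, yields the two-clutch torque-phase controller; the integral LQR of the inertia phase adds an integrator state to the model.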

  13. Theory of optimal information transmission in E. coli chemotaxis pathway

    NASA Astrophysics Data System (ADS)

    Micali, Gabriele; Endres, Robert G.

    Bacteria live in complex microenvironments where they need to make critical decisions fast and reliably. These decisions are inherently affected by noise at all levels of the signaling pathway, and cells are often modeled as an input-output device that transmits extracellular stimuli (input) to internal proteins (channel), which determine the final behavior (output). Increasing the amount of transmitted information between input and output allows cells to better infer extracellular stimuli and respond accordingly. However, in contrast to electronic devices, the separation into input, channel, and output is not always clear in biological systems. Output might feed back into the input, and the channel, made by proteins, normally interacts with the input. Furthermore, a biological channel is affected by mutations and can change under evolutionary pressure. Here, we present a novel approach to maximize information transmission: given cell-external and internal noise, we analytically identify both input distributions and input-output relations that optimally transmit information. Using E. coli chemotaxis as an example, we conclude that its pathway is compatible with an optimal information transmission device despite the ultrasensitive rotary motors.
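    Maximizing transmitted information over input distributions, as described above, is classically computed with the Blahut-Arimoto algorithm. The sketch below applies it to a binary symmetric channel as a crude stand-in for a noisy signaling pathway; the channel matrix is an assumption for illustration, not the paper's chemotaxis model.

```python
from math import log2

def blahut_arimoto(W, iters=200):
    """Input distribution maximizing mutual information through a discrete
    memoryless channel with transition matrix W[x][y]."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx
    for _ in range(iters):
        q = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        # d[x] = D(W(.|x) || q): information density of each input letter
        d = [sum(W[x][y] * log2(W[x][y] / q[y])
                 for y in range(ny) if W[x][y] > 0) for x in range(nx)]
        w = [p[x] * 2.0 ** d[x] for x in range(nx)]
        z = sum(w)
        p = [v / z for v in w]
    cap = sum(p[x] * d[x] for x in range(nx))
    return p, cap

# Binary symmetric channel, flip probability 0.1: the capacity is
# 1 - H2(0.1) ≈ 0.531 bits per use, achieved by the uniform input.
W = [[0.9, 0.1], [0.1, 0.9]]
p_opt, cap = blahut_arimoto(W)
```

    The paper's contribution goes further, jointly optimizing input distribution and input-output relation under cell-external and internal noise; this block only illustrates the input-distribution half of that problem.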

  14. Inductive power transmission to millimeter-sized biomedical implants using printed spiral coils.

    PubMed

    Ibrahim, Ahmed; Kiani, Mehdi

    2016-08-01

    The operation frequency (f) has been a key parameter in optimizing wireless power transmission links for biomedical implants with millimeter (mm) dimensions. This paper studies the feasibility of using printed spiral coils (PSCs) for powering mm-sized implants with high power transmission efficiency (PTE) at different values of f. Compared to wire-wound coils (WWCs), using a PSC on the implant side allows batch fabrication on rigid or flexible substrates, which can also be used as a platform for integrating implant components. For powering an implant with 1 mm diameter, located 10 mm inside the tissue, the geometries of the transmitter (Tx) and receiver (Rx) PSCs were optimized at f values of 50 MHz, 200 MHz, and 500 MHz using a commercial field solver (HFSS). In simulations, the PSC- and WWC-based links achieved maximum PTEs of 0.13% and 3.3%, and delivered power of 65.7 μW and 720 μW under specific absorption rate (SAR) constraints at optimal f values of 50 MHz and 100 MHz, respectively, suggesting that the performance of the PSC-based link is significantly inferior to that of the WWC-based link.
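    A hedged sketch of why mm-sized receivers yield sub-percent PTEs, using the widely used steady-state efficiency expression for a resonant two-coil link (the quality factors and coupling coefficients below are invented for illustration, not the paper's optimized PSC geometries).

```python
# Two-coil resonant link efficiency: with loaded secondary quality factor
# Q2L = Q2*QL/(Q2 + QL), the link PTE is commonly written as
#   PTE = [k^2*Q1*Q2L / (1 + k^2*Q1*Q2L)] * (Q2L / QL).
def link_pte(k, Q1, Q2, QL):
    Q2L = Q2 * QL / (Q2 + QL)
    m = k * k * Q1 * Q2L
    return m / (1.0 + m) * (Q2L / QL)

# A millimeter-sized receiver coil 10 mm deep couples very weakly to the
# transmitter, so k is tiny and the achievable PTE is a fraction of a
# percent even with decent coil quality factors.
etas = {k: link_pte(k, Q1=120.0, Q2=15.0, QL=50.0)
        for k in (0.002, 0.005, 0.01)}
```

    The expression makes the two levers explicit: PTE grows monotonically with the figure of merit k²Q₁Q₂L, which is exactly what the paper's geometry optimization at each frequency tries to maximize.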

  15. A novel nanoscaled Schottky barrier based transmission gate and its digital circuit applications

    NASA Astrophysics Data System (ADS)

    Kumar, Sunil; Loan, Sajad A.; Alamoud, Abdulrahman M.

    2017-04-01

    In this work we propose and simulate a compact nanoscaled transmission gate (TG) employing a single Schottky barrier based transistor in the transmission path and a single-transistor Sajad-Sunil-Schottky (SSS) device as an inverter. Therefore, just two transistors are employed to realize a complete transmission gate, which normally consumes four transistors in the conventional technology. The transistors used to realize the transmission path and the SSS inverter in the proposed TG are double-gate Schottky barrier devices employing stacks of two metal silicides, platinum silicide (PtSi) and erbium silicide (ErSi). It has been observed that the realization of the TG by the proposed technology results in a compact structure, with reduced component count, junctions, interconnections and regions in comparison to the conventional technology. The further focus of this work is on the application of the proposed technology. For the first time, the proposed technology has been used to realize various combinational circuits: a two-input AND gate, a 2:1 multiplexer and a two-input XOR circuit. It has been observed that the transistor count is reduced by half in a TG, a two-input AND gate, a 2:1 multiplexer and a two-input XOR gate. Therefore, a significant reduction in transistor count and area requirement can be achieved by using the proposed technology. The proposed technology can also be used for the compact realization of other combinational and sequential circuitry in the future.

  16. Adaptive time-sequential binary sensing for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Hu, Chenhui; Lu, Yue M.

    2012-06-01

    We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in terms of sensor dynamic range.
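    The oracle argument above can be made concrete: for a Poisson photon count N with mean λ, the one-bit output 1{N ≥ q} carries Fisher information (dP/dλ)²/(P(1−P)), with dP/dλ = pmf(q−1; λ). The sketch below (λ and the threshold range are arbitrary choices for illustration) sweeps the threshold and confirms the best one sits near λ while staying below the unquantized sensor's 1/λ.

```python
from math import exp, factorial

def pois_pmf(j, lam):
    return exp(-lam) * lam ** j / factorial(j)

def fisher_one_bit(lam, q):
    """Fisher information about lam in the binary output 1{N >= q},
    N ~ Poisson(lam): I = (dP/dlam)^2 / (P*(1 - P)), using the identity
    d/dlam P(N >= q) = pmf(q - 1; lam)."""
    P = 1.0 - sum(pois_pmf(j, lam) for j in range(q))
    dP = pois_pmf(q - 1, lam)
    return dP * dP / (P * (1.0 - P))

# Sweep thresholds for a known intensity: the optimal q is near lam, and
# even there the one-bit information stays below the ideal 1/lam.
lam = 20.0
info = {q: fisher_one_bit(lam, q) for q in range(1, 60)}
q_star = max(info, key=info.get)
```

    This is the quantity behind the oracle curve in the abstract; the adaptive rule's job is to steer the threshold state toward q_star without knowing λ.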

  17. Melioration as rational choice: sequential decision making in uncertain environments.

    PubMed

    Sims, Chris R; Neth, Hansjörg; Jacobs, Robert A; Gray, Wayne D

    2013-01-01

    Melioration, defined as choosing a lesser, local gain over a greater long-term gain, is a behavioral tendency that people and pigeons share. As such, the empirical occurrence of meliorating behavior has frequently been interpreted as evidence that the mechanisms of human choice violate the norms of economic rationality. In some environments, the relationship between actions and outcomes is known. In this case, the rationality of choice behavior can be evaluated in terms of how successfully it maximizes utility given knowledge of the environmental contingencies. In most complex environments, however, the relationship between actions and future outcomes is uncertain and must be learned from experience. When the difficulty of this learning challenge is taken into account, it is not evident that melioration represents suboptimal choice behavior. In the present article, we examine human performance in a sequential decision-making experiment that is known to induce meliorating behavior. In keeping with previous results using this paradigm, we find that the majority of participants in the experiment fail to adopt the optimal decision strategy and instead demonstrate a significant bias toward melioration. To explore the origins of this behavior, we develop a rational analysis (Anderson, 1990) of the learning problem facing individuals in uncertain decision environments. Our analysis demonstrates that an unbiased learner would adopt melioration as the optimal response strategy for maximizing long-term gain. We suggest that many documented cases of melioration can be reinterpreted not as irrational choice but rather as globally optimal choice under uncertainty.

  18. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.

  19. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories.
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
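    The key ingredient above, gradient descent with momentum driven by very noisy per-sample information, can be sketched on a toy problem: fitting beamlet weights to a prescribed dose using a stochastic gradient from one randomly sampled voxel per step (a crude stand-in for "very few histories"). The geometry, dimensions, and learning parameters are all invented; no MC transport is performed.

```python
import random

# Toy "fluence optimization": find weights w with D @ w ≈ d, where D maps
# beamlet weights to voxel doses. D and the target d are synthetic.
rng = random.Random(0)
n_vox, n_beam = 40, 6
D = [[rng.uniform(0.0, 1.0) for _ in range(n_beam)] for _ in range(n_vox)]
w_true = [rng.uniform(0.5, 2.0) for _ in range(n_beam)]
d = [sum(Dij * wj for Dij, wj in zip(row, w_true)) for row in D]

w = [1.0] * n_beam        # initial fluence weights
v = [0.0] * n_beam        # momentum buffer
lr, mom = 0.05, 0.9
for _ in range(3000):
    i = rng.randrange(n_vox)                    # one sampled voxel per step
    r = sum(Dij * wj for Dij, wj in zip(D[i], w)) - d[i]
    g = [2.0 * r * Dij for Dij in D[i]]         # stochastic gradient
    v = [mom * vj + gj for vj, gj in zip(v, g)]
    w = [wj - lr * vj for wj, vj in zip(w, v)]

mse = sum((a - b) ** 2 for a, b in zip(w, w_true)) / n_beam
```

    Because the synthetic system is consistent, the single-voxel gradients vanish at the solution and the momentum iteration converges despite their noise, which is the same qualitative behavior the paper relies on; its gradient rescaling and renormalization handle the much harsher noise of real particle transport.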

  20. Energy-Water Nexus: Balancing the Tradeoffs between Two-Level Decision Makers

    DOE PAGES

    Zhang, Xiaodong; Vesselinov, Velimir Valentinov

    2016-09-03

    Energy-water nexus has substantially increased in importance in recent years. Synergistic approaches based on systems-analysis and mathematical models are critical for helping decision makers better understand the interrelationships and tradeoffs between energy and water. In energy-water nexus management, various decision makers with different goals and preferences, which are often conflicting, are involved. These decision makers may have different controlling power over the management objectives and the decisions. They make decisions sequentially from the upper level to the lower level, challenging decision making in energy-water nexus. In order to address such planning issues, a bi-level decision model is developed, which improves upon the existing studies by integrating bi-level programming into energy-water nexus management. The developed model represents a methodological contribution to the challenge of sequential decision-making in energy-water nexus through provision of an integrated modeling framework/tool. An interactive fuzzy optimization methodology is introduced to seek a satisfactory solution to meet the overall satisfaction of the two-level decision makers. The tradeoffs between the two-level decision makers in energy-water nexus management are effectively addressed and quantified. Application of the proposed model to a synthetic example problem has demonstrated its applicability in practical energy-water nexus management. Optimal solutions for electricity generation, fuel supply, water supply including groundwater, surface water and recycled water, capacity expansion of the power plants, and GHG emission control are generated. In conclusion, these analyses are capable of helping decision makers or stakeholders adjust their tolerances to make informed decisions to achieve the overall satisfaction of energy-water nexus management where a bi-level sequential decision-making process is involved.

  2. Relevant factors for the optimal duration of extended endocrine therapy in early breast cancer.

    PubMed

    Blok, Erik J; Kroep, Judith R; Meershoek-Klein Kranenbarg, Elma; Duijm-de Carpentier, Marjolijn; Putter, Hein; Liefers, Gerrit-Jan; Nortier, Johan W R; Rutgers, Emiel J Th; Seynaeve, Caroline M; van de Velde, Cornelis J H

    2018-04-01

    For postmenopausal patients with hormone receptor-positive early breast cancer, the optimal subgroup and duration of extended endocrine therapy is not clear yet. The aim of this study using the IDEAL patient cohort was to identify a subgroup for which longer (5 years) extended therapy is beneficial over shorter (2.5 years) extended endocrine therapy. In the IDEAL trial, 1824 patients who completed 5 years of adjuvant endocrine therapy (either 5 years of tamoxifen (12%), 5 years of an AI (29%), or a sequential strategy of both (59%)) were randomized between either 2.5 or 5 years of extended letrozole. For each prior therapy subgroup, the value of longer therapy was assessed for both node-negative and node-positive patients using Kaplan Meier and Cox regression survival analyses. In node-positive patients, there was a significant benefit of 5 years (over 2.5 years) of extended therapy (disease-free survival (DFS) HR 0.67, p = 0.03, 95% CI 0.47-0.96). This effect was only observed in patients who were treated initially with a sequential scheme (DFS HR 0.60, p = 0.03, 95% CI 0.38-0.95). In all other subgroups, there was no significant benefit of longer extended therapy. Similar results were found in patients who were randomized for their initial adjuvant therapy in the TEAM trial (DFS HR 0.37, p = 0.07, 95% CI 0.13-1.06), although this additional analysis was underpowered for definite conclusions. This study suggests that node-positive patients could benefit from longer extended endocrine therapy, although this effect appears isolated to patients treated with sequential endocrine therapy during the first 5 years and needs validation and long-term follow-up.

  3. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

    The determination of in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual-bone approach registering one bone at a time, each with optimization over six degrees of freedom (6DOF); 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual-bone approach, with less than 0.68 mm root-mean-square error (RMSE) in translation and less than 1.12° RMSE in rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly for patella registration (74% success rate, versus 34% for the individual-bone approach; femur and tibia-fibula registration had a 100% success rate with each approach).

  4. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
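
    The pre-sampling simulation idea can be illustrated with a toy stand: draw clumped (negative binomial) gall counts per tree, then estimate by Monte Carlo the error of a simple random-sample mean at several sample sizes. All parameters below are hypothetical, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand: negative binomial (clumped) gall counts per tree.
true_mean, k = 4.0, 0.8                 # mean density and clumping parameter k
p = k / (k + true_mean)                 # NumPy's (n, p) parameterization
stand = rng.negative_binomial(k, p, size=2000)

def mean_abs_error(sample_size, reps=500):
    """Monte Carlo error of the sample mean under simple random sampling."""
    errs = [abs(rng.choice(stand, sample_size, replace=False).mean() - stand.mean())
            for _ in range(reps)]
    return float(np.mean(errs))

# Error shrinks with sample size; n ~ 25-40 trees was adequate in the study.
for n in (10, 25, 40):
    print(n, round(mean_abs_error(n), 3))
```

    The same loop, run over candidate sampling plans (random, belt transect, sequential stopping rules), is the essence of assessing a plan's efficiency before field deployment.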

  5. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

    Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
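
    The sequentially reweighted TV idea can be sketched in a simplified setting, recast here as image denoising rather than the paper's constrained CT reconstruction (the projection-data constraint, metal-trace handling, and projection-onto-convex-sets steps are omitted; weights, step size, and iteration counts are illustrative):

```python
import numpy as np

def grad(u):
    gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def reweighted_tv_denoise(noisy, outer=3, inner=80, step=0.1, lam=0.15, eps=0.05):
    """Sequentially reweighted TV, sketched as denoising: minimize
    lam * sum(w * |grad u|) + 0.5 * ||u - noisy||^2 by gradient descent,
    recomputing the weights from the current solution after each outer pass
    as w = eps / (|grad u| + eps) -- the usual 1/(|grad u| + eps)
    reweighting normalized into (0, 1]."""
    u = noisy.astype(float).copy()
    w = np.ones_like(u)
    for _ in range(outer):
        for _ in range(inner):
            gx, gy = grad(u)
            mag = np.sqrt(gx ** 2 + gy ** 2 + 1e-6)
            px, py = w * gx / mag, w * gy / mag
            # divergence of w * (grad u / |grad u|): negative TV subgradient
            div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
            u -= step * ((u - noisy) - lam * div)
            u = np.clip(u, 0.0, None)           # non-negativity, as in the paper
        gx, gy = grad(u)
        w = eps / (np.sqrt(gx ** 2 + gy ** 2) + eps)   # reweighting step
    return u
```

    The reweighting step is what makes the scheme "sequential": strong gradients found in the current solution get small weights in the next pass, so edges are penalized less while flat regions keep being smoothed.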

  6. Sequential Use of Anaplastic Lymphoma Kinase Inhibitors in Japanese Patients With ALK-Rearranged Non-Small-Cell Lung Cancer: A Retrospective Analysis.

    PubMed

    Asao, Tetsuhiko; Fujiwara, Yutaka; Itahashi, Kota; Kitahara, Shinsuke; Goto, Yasushi; Horinouchi, Hidehito; Kanda, Shintaro; Nokihara, Hiroshi; Yamamoto, Noboru; Takahashi, Kazuhisa; Ohe, Yuichiro

    2017-07-01

    Second-generation anaplastic lymphoma kinase (ALK) inhibitors, such as alectinib and ceritinib, have recently been approved for treatment of ALK-rearranged non-small-cell lung cancer (NSCLC). An optimal strategy for using 2 or more ALK inhibitors has not been established. We sought to investigate the clinical impact of sequential use of ALK inhibitors on these tumors in clinical practice. Patients with ALK-rearranged NSCLC treated from May 2010 to January 2016 at the National Cancer Center Hospital were identified, and their outcomes were evaluated retrospectively. Fifty-nine patients with ALK-rearranged NSCLC had been treated and 37 cases were assessable. Twenty-six received crizotinib, 21 received alectinib, and 13 (35.1%) received crizotinib followed by alectinib. Response rates and median progression-free survival (PFS) on crizotinib and alectinib (after crizotinib failure) were 53.8% (95% confidence interval [CI], 26.7%-80.9%) and 38.4% (95% CI, 12.0%-64.9%), and 10.7 (95% CI, 5.3-14.7) months and 16.6 (95% CI, 2.9-not calculable), respectively. The median PFS of patients on sequential therapy was 35.2 months (95% CI, 12.7 months-not calculable). The 5-year survival rate of ALK-rearranged patients who received 2 sequential ALK inhibitors from diagnosis was 77.8% (95% CI, 36.5%-94.0%). The combined PFS and 5-year survival rates in patients who received sequential ALK inhibitors were encouraging. Making full use of multiple ALK inhibitors might be important to prolonging survival in patients with ALK-rearranged NSCLC. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. A Method To Determine the Kinetics of Solute Mixing in Liquid/Liquid Formulation Dual-Chamber Syringes.

    PubMed

    Werk, Tobias; Mahler, Hanns-Christian; Ludwig, Imke Sonja; Luemkemann, Joerg; Huwyler, Joerg; Hafner, Mathias

    Dual-chamber syringes were originally designed to separate a solid substance and its diluent. However, they can also be used to separate liquid formulations of two individual drug products, which cannot be co-formulated due to technical or regulatory issues. A liquid/liquid dual-chamber syringe can be designed to achieve homogenization and mixing of both solutions prior to administration, or it can be used to sequentially inject both solutions. While sequential injection can be easily achieved by a dual-chamber syringe with a bypass located at the needle end of the syringe barrel, mixing of the two fluids may provide more challenges. Within this study, the mixing behavior of surrogate solutions in different dual-chamber syringes is assessed. Furthermore, the influence of parameters such as injection angle, injection speed, agitation, and sample viscosity were studied. It was noted that mixing was poor for the commercial dual-chamber syringes (with a bypass designed as a longitudinal ridge) when the two liquids significantly differ in their physical properties (viscosity, density). However, an optimized dual-chamber syringe design with multiple bypass channels resulted in improved mixing of liquids. © PDA, Inc. 2017.

  8. Active control of transmission loss with smart foams.

    PubMed

    Kundu, Abhishek; Berry, Alain

    2011-02-01

    Smart foams combine the complementary advantages of a passive foam material and a spatially distributed piezoelectric actuator embedded in it for active noise control applications. In this paper, the problem of improving the transmission loss of smart foams using active control strategies has been investigated both numerically and experimentally inside a waveguide under the condition of plane wave propagation. The finite element simulation of a coupled noise control system has been undertaken with three different smart foam designs, and their effectiveness in cancelling the transmitted wave downstream of the smart foam has been studied. The simulation results provide insight into the physical phenomenon of active noise cancellation and explain the impact of the smart foam designs on the optimal active control results. Experimental studies aimed at implementing real-time control for transmission loss optimization have been performed using the classical single input/single output filtered-reference least mean squares algorithm. The active control results with broadband and single frequency primary source inputs demonstrate a good improvement in the transmission loss of the smart foams. The study gives a comparative description of the transmission and absorption control problems in light of the modification of the vibration response of the piezoelectric actuator under active control.
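
    The filtered-reference (filtered-x) LMS algorithm used for the real-time control experiments can be sketched for a single channel. The secondary-path impulse response, step size, and signals below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fxlms(x, d, sec_path, n_taps=16, mu=0.01):
    """Single-channel filtered-x LMS: adapt a control filter w so that the
    anti-noise, after passing through the (assumed known) secondary path,
    cancels the disturbance d at the error sensor."""
    w = np.zeros(n_taps)                      # adaptive control filter
    xf = np.convolve(x, sec_path)[:len(x)]    # reference filtered by secondary path
    y = np.zeros(len(x))                      # control signal
    e = np.zeros(len(x))                      # residual at the error sensor
    for n in range(n_taps, len(x)):
        xb = x[n - n_taps:n][::-1]
        y[n] = w @ xb
        # anti-noise reaching the error sensor through the secondary path
        ys = sec_path @ y[n - len(sec_path) + 1:n + 1][::-1]
        e[n] = d[n] - ys
        xfb = xf[n - n_taps:n][::-1]
        w += mu * e[n] * xfb                  # FxLMS weight update
    return e

# Tonal primary disturbance: the residual should shrink as the filter adapts.
t = np.arange(4000)
x = np.sin(2 * np.pi * 0.05 * t)              # reference signal
sec = np.array([0.0, 0.5, 0.3])               # assumed secondary-path response
d = np.convolve(x, [0.0, 0.8, 0.4])[:len(x)]  # disturbance at the error sensor
e = fxlms(x, d, sec)
print(round(np.mean(e[100:600] ** 2), 4), round(np.mean(e[-500:] ** 2), 6))
```

    Filtering the reference by the secondary-path model before the weight update is what distinguishes FxLMS from plain LMS; without it, the phase lag of the actuator-to-sensor path destabilizes the adaptation.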

  9. An Energy Balanced and Lifetime Extended Routing Protocol for Underwater Sensor Networks.

    PubMed

    Wang, Hao; Wang, Shilian; Zhang, Eryang; Lu, Luxi

    2018-05-17

    Limited energy is a challenging problem in designing routing protocols for underwater sensor networks (UWSNs). To prolong the network lifetime with limited battery power, an energy balanced and efficient routing protocol, called the energy balanced and lifetime extended routing protocol (EBLE), is proposed in this paper. The proposed EBLE not only balances traffic loads according to residual energy, but also optimizes data transmissions by selecting low-cost paths. Two phases are operated in the EBLE data transmission process: (1) a candidate forwarding set selection phase and (2) a data transmission phase. In the candidate forwarding set selection phase, nodes update their candidate forwarding nodes by broadcasting position and residual energy level information. The cost value of available nodes is calculated and stored in each sensor node. Then, in the data transmission phase, paths with high residual energy and relatively low cost are selected based on the cost function and residual energy level information. We also present a detailed analysis of optimal energy consumption in UWSNs. Numerical simulation results on a variety of node distributions and data load distributions show that EBLE outperforms other routing protocols (BTM, BEAR and direct transmission) in terms of network lifetime and energy efficiency.
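
    Cost-based forwarder selection of the kind described can be sketched as follows. The cost below is an illustrative combination of residual energy and distance advance toward the sink, not EBLE's exact cost function:

```python
import math

def select_next_hop(node, candidates, sink, w_energy=0.5, w_dist=0.5):
    """Pick a forwarder by a cost trading residual energy against distance
    advance toward the sink (illustrative; EBLE defines its cost function
    differently in detail). Nodes are dicts:
    {'pos': (x, y, z), 'energy': residual fraction in [0, 1]}."""
    d_self = math.dist(node['pos'], sink)
    best, best_cost = None, float('inf')
    for c in candidates:
        advance = d_self - math.dist(c['pos'], sink)
        if advance <= 0:                  # only forward toward the sink
            continue
        cost = (w_dist * (d_self - advance) / d_self
                + w_energy * (1.0 - c['energy']))
        if cost < best_cost:
            best, best_cost = c, cost
    return best

sink = (0.0, 0.0, 0.0)
node = {'pos': (0.0, 0.0, 100.0), 'energy': 1.0}
cands = [{'pos': (0.0, 0.0, 60.0), 'energy': 0.2},   # closer but depleted
         {'pos': (0.0, 0.0, 70.0), 'energy': 0.9}]   # farther but fresh
print(select_next_hop(node, cands, sink)['pos'])
```

    With equal weights, the fresher node wins even though it offers less geometric advance; this is the load-balancing effect that extends network lifetime.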

  10. Thermal and Structural Analysis of Helicopter Transmission Housings Using NASTRAN

    NASA Technical Reports Server (NTRS)

    Howells, R. W.; Sciarra, J. J.; Ng, G. S.

    1976-01-01

    The application of NASTRAN to improve the design of helicopter transmission housings is described. A finite element model of the complete forward rotor transmission housing for the Boeing Vertol CH-47C helicopter was used to study thermal distortion and stress, stress and deflection due to static and dynamic loads, load paths, and design optimization by the control of structural energy distribution. The analytical results are correlated with test data and used to reduce weight and to improve strength, service life, failsafety, and reliability. The techniques presented, although applied herein to helicopter transmissions, are sufficiently general to be applicable to any power transmission system.

  11. Use of combined radar and radiometer systems in space for precipitation measurement: Some ideas

    NASA Technical Reports Server (NTRS)

    Moore, R. K.

    1981-01-01

    A brief survey is given of some fundamental physical concepts of optimal polarization characteristics of a transmission path or scatter ensemble of hydrometeors. It is argued that, based on this optimization concept, definite advances in remote atmospheric sensing are to be expected. Basic properties of Kennaugh's optimal polarization theory are identified.

  12. Optimal design and use of retry in fault tolerant real-time computer systems

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1983-01-01

    A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations so as to minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on the Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second approach. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions. These are standard distributions commonly used for the modeling and analysis of faults.
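
    The flavor of an optimal retry policy can be shown with a deliberately simplified model (exponential transient-fault durations and a fixed restart penalty; this is not the paper's formulation): the expected recovery time is minimized over the maximum allowable retry duration.

```python
import math

def expected_recovery_time(r, lam=1.0, p_transient=0.8, t_restart=10.0):
    """Expected time to clear a fault when retrying for at most r time units.
    Transient faults vanish after an Exp(lam) active duration; permanent
    faults (probability 1 - p_transient) never clear, so an exhausted retry
    always costs r + t_restart. (A simplified model in the spirit of the
    paper, not its exact formulation.)"""
    p_clear = p_transient * (1.0 - math.exp(-lam * r))
    # mean of Exp(lam) truncated to [0, r] (the successful-retry duration)
    mean_trunc = (1.0 / lam) - r * math.exp(-lam * r) / (1.0 - math.exp(-lam * r))
    t_fail = r + t_restart                  # retry exhausted -> restart
    return p_clear * mean_trunc + (1.0 - p_clear) * t_fail

# Grid search for the retry duration minimizing expected recovery time.
grid = [i * 0.05 for i in range(1, 200)]
r_opt = min(grid, key=expected_recovery_time)
print(round(r_opt, 2), round(expected_recovery_time(r_opt), 3))
```

    Retrying too briefly forfeits cheap recoveries from transients; retrying too long wastes time on permanent faults before the inevitable restart, which is exactly the trade-off the optimal policy balances.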

  13. The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers

    NASA Astrophysics Data System (ADS)

    Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.

    1992-01-01

    Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.

  14. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
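
    A heuristic sequential allocation in the spirit of OCBA for top-m selection can be sketched as follows. The priority rule below is a simplification, not the paper's asymptotically optimal allocation, and the designs are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
true_means = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical designs
m, n0, total_budget = 2, 10, 600                         # select the top-2

def simulate(i):
    """One noisy replication of design i."""
    return true_means[i] + rng.normal(0.0, 1.0)

k = len(true_means)
samples = [[simulate(i) for _ in range(n0)] for i in range(k)]
spent = k * n0
while spent < total_budget:
    means = np.array([np.mean(s) for s in samples])
    stds = np.array([np.std(s, ddof=1) for s in samples])
    order = np.argsort(means)[::-1]
    c = 0.5 * (means[order[m - 1]] + means[order[m]])    # top-m boundary
    # OCBA-style priority: noisy designs close to the boundary need
    # replications most (a heuristic sketch, not the paper's exact
    # asymptotically optimal allocation).
    priority = stds / (np.abs(means - c) + 1e-9)
    counts = np.array([len(s) for s in samples])
    i = int(np.argmax(priority / counts))
    samples[i].append(simulate(i))
    spent += 1

best = set(np.argsort([np.mean(s) for s in samples])[::-1][:m])
print(sorted(best))
```

    Compared with spreading the budget equally, this concentrates replications on the designs straddling the top-m boundary, which is where the correct-selection probability is actually decided.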

  15. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and an image processing technique for measuring the motion of a lathe tool from two-dimensional sequential images, captured using a charge-coupled device (CCD) camera with a resolution of 250 microns, are described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
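
    A plain global-best PSO, of the kind compared in the paper, can be sketched on a toy error surface (the study's machine-vision error data are not available, so a simple quadratic error stands in; inertia and acceleration coefficients are common textbook values):

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, seed=0):
    """Global-best PSO (inertia 0.7, cognitive/social coefficients 1.5)
    minimizing an error function f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))

# Toy stand-in for the positioning-error model: minimum at (1.2, 1.2).
sphere = lambda p: float(np.sum((p - 1.2) ** 2))
best, err = pso(sphere)
print(np.round(best, 3), round(err, 6))
```
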

  16. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.

  17. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.

  18. Parametric analysis of the variables influencing the bond behaviour of pretensioned reinforcement in concrete [Análisis paramétrico de las variables que influyen en el comportamiento adherente de las armaduras pretesas en el hormigón]

    NASA Astrophysics Data System (ADS)

    Arbelaez Jaramillo, Cesar Augusto

    Prestressed concrete technology based on pretensioned reinforcement is widespread in the precast concrete industry. The technique consists of casting a concrete element over previously tensioned reinforcement, which is released once the concrete has reached a specified strength so that the prestress introduced into the reinforcement is transmitted, by bond, to the concrete. The bond behaviour of pretensioned reinforcement involves two phenomena: transmission of the prestress from the reinforcement to the concrete, and anchorage of the reinforcement. This bond behaviour is characterized by means of two lengths: the transmission length and the anchorage length. Proper design of these lengths is a basic and fundamental aspect of the design of precast prestressed concrete elements, to guarantee the appropriate transmission of prestress and the anchorage of the reinforcement throughout the structural element's service life. The influence of the parameters related to the concrete mix design on the transmission and anchorage lengths of prestressing strands has been analyzed. The ECADA test method has been applied; with this method, the operations of prestress transmission and reinforcement anchorage are performed sequentially. The transmission and anchorage lengths are determined by monitoring the force supported by the reinforcement while testing series of specimens with different embedment lengths. A distinction between the concepts of anchorage length without slip and with slip has been proposed. The relationship of the mix-design parameters to the bond stress and the slips registered during the transmission and anchorage processes has been studied. Expressions to evaluate the slip distribution of the reinforcement in the transmission zone and in the anchorage zone have been proposed.
A study on the determination of the transmission length from the slip at the free end of the reinforcement has been carried out, and the viability of experimentally determining the transmission length from the sequence of slips at the pull-out end as a function of the embedment length has been verified. The experimental results have been compared with results and predictions from other authors and standards, and an expression to calculate the transmission length has been proposed. Finally, the bond behaviour of self-compacting concretes has been compared with that of traditional concretes.

  19. Optimized Signaling Method for High-Speed Transmission Channels with Higher Order Transfer Function

    NASA Astrophysics Data System (ADS)

    Ševčík, Břetislav; Brančík, Lubomír; Kubíček, Michal

    2017-08-01

    In this paper, selected results from the testing of an optimized CMOS-friendly signaling method for high-speed communications over cables and printed circuit boards (PCBs) are presented and discussed. The proposed signaling scheme uses a modified pulse-width-modulated (PWM) signal concept, which makes it possible to better equalize the significant channel losses incurred during high-speed data transmission. This very effective signaling method for overcoming losses in transmission channels with a higher order transfer function, typical of long cables and multilayer PCBs, is analyzed in the time and frequency domains. The experimental measurement results include a performance comparison with the conventional PWM scheme and clearly show the great potential of the modified signaling method for use in low-power CMOS-friendly equalization circuits, commonly considered in modern communication standards such as PCI-Express and SATA and in multi-gigabit SerDes interconnects.

  20. Research on hybrid transmission mode for HVDC with optimal thermal power and renewable energy combination

    NASA Astrophysics Data System (ADS)

    Zhang, Jinfang; Yan, Xiaoqing; Wang, Hongfu

    2018-02-01

    With the rapid development of renewable energy in Northwest China, curtailment is becoming more and more severe owing to a lack of adjustment capability and sufficient transmission capacity. Building on existing HVDC projects, exploring a hybrid transmission mode combining thermal power and renewable power is both necessary and important. This paper proposes a method for determining the optimal combination of thermal power and renewable energy for HVDC lines, based on multi-scheme comparison. Having established a mathematical model for electric power balance in time-series mode, ten different schemes were evaluated by test simulation to identify the most suitable one. Using the proposed discrimination criteria, including generation device utilization hours, renewable energy electricity proportion and curtailment level, the recommended scheme was found. The result also validates the efficiency of the method.

  1. Investigation of 16 × 10 Gbps DWDM System Based on Optimized Semiconductor Optical Amplifier

    NASA Astrophysics Data System (ADS)

    Rani, Aruna; Dewra, Sanjeev

    2017-08-01

    This paper investigates the performance of an optical system based on an optimized semiconductor optical amplifier (SOA) at 160 Gbps with 0.8 nm channel spacing. Transmission distances up to 280 km at -30 dBm input signal power and up to 247 km at -32 dBm input signal power with acceptable bit error rate (BER) and Q-factor are examined. The analysis also shows that a transmission distance of up to 292 km can be covered at -28 dBm input signal power using Dispersion-Shifted (DS)-Normal fiber without any power compensation methods.

  2. Geometrical optimization of the transmission and dispersion properties of arrayed waveguide gratings using two stigmatic point mountings.

    PubMed

    Muñoz, P; Pastor, D; Capmany, J; Martínez, A

    2003-09-22

    In this paper, a procedure to optimize flat-top Arrayed Waveguide Grating (AWG) devices in terms of transmission and dispersion properties is presented. The systematic procedure consists of the stigmatization and minimization of the Light Path Function (LPF) used in classical planar spectrograph theory. The resulting geometry for the Arrayed Waveguides (AW) and the Output Waveguides (OW) is not the classical Rowland mounting but an arbitrary arrangement. Simulations using previously published enhanced modeling show how this geometry reduces the passband ripple, asymmetry and dispersion in a design example.

  3. Optimized organic photovoltaics with surface plasmons

    NASA Astrophysics Data System (ADS)

    Omrane, B.; Landrock, C.; Aristizabal, J.; Patel, J. N.; Chuo, Y.; Kaminska, B.

    2010-06-01

    In this work, a new approach for optimizing organic photovoltaics using nanostructure arrays exhibiting surface plasmons is presented. Periodic nanohole arrays were fabricated on gold- and silver-coated flexible substrates and were thereafter used as light-transmitting anodes for solar cells. Transmission measurements on the gold and silver plasmonic thin films revealed enhanced transmission at specific wavelengths matching those of the photoactive polymer layer. Compared to the indium tin oxide-based photovoltaic cells, the plasmonic solar cells showed overall improvements in efficiency of up to 4.8-fold for gold and 5.1-fold for silver.

  4. 78 FR 18974 - Increasing Market and Planning Efficiency Through Improved Software; Notice of Technical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... bring together experts from diverse backgrounds and experiences including electric system operators... transmission switching; AC optimal power flow modeling; and use of active and dynamic transmission ratings. In... variability of the system, including forecast error? • How can outage probability be captured in...

  5. The Cost-Optimal Distribution of Wind and Solar Generation Facilities in a Simplified Highly Renewable European Power System

    NASA Astrophysics Data System (ADS)

    Kies, Alexander; von Bremen, Lüder; Schyska, Bruno; Chattopadhyay, Kabitri; Lorenz, Elke; Heinemann, Detlev

    2016-04-01

    The transition of the European power system from fossil generation towards renewable sources is driven by various motivations such as decarbonisation and sustainability. Renewable power sources like wind and solar have, due to their weather dependency, fluctuating feed-in profiles, which makes their system integration a difficult task. To overcome this issue, several solutions have been investigated in the past, such as the optimal mix of wind and PV [1] and the extension of the transmission grid or storage [2]. In this work, the optimal distribution of wind turbines and solar modules in Europe is investigated. For this purpose, feed-in data with an hourly temporal resolution and a spatial resolution of 7 km covering Europe for the renewable sources wind, photovoltaics and hydro was used. Together with historical load data and a transmission model, a simplified pan-European power system was simulated. Under the cost assumptions of [3], the levelized cost of electricity (LCOE) for this simplified system consisting of generation, consumption, transmission and backup units is calculated. With respect to the LCOE, the optimal distribution of generation facilities in Europe is derived. It is shown that by optimal placement of renewable generation facilities the LCOE can be reduced by more than 10% compared to a meta-study scenario [4] and a self-sufficient scenario (every country produces on average as much from renewable sources as it consumes). This is mainly caused by a shift of generation facilities towards highly suitable locations, reduced backup need and increased transmission need. The results of the optimization will be shown and implications for the extension of renewable shares in the European power mix will be discussed. The work is part of the RESTORE 2050 project (Wuppertal Institute, Next Energy, University of Oldenburg), financed by the Federal Ministry of Education and Research (BMBF, Fkz. 03SFF0439A). [1] Kies, Alexander, et al. "Investigation of balancing effects in long term renewable energy feed-in with respect to the transmission grid." Advances in Science and Research 12.1 (2015): 91-95, doi:10.5194/asr-12-91-2015. [2] Heide, Dominik, et al. "Reduced storage and balancing needs in a fully renewable European power system with excess wind and solar power generation." Renewable Energy 36.9 (2011): 2515-2523. [3] Rodriguez, R.A.: Weather-driven power transmission in a highly renewable European electricity network, PhD Thesis, Aarhus University, November 2014. [4] Pfluger, B. et al.: Tangible ways towards climate protection in the European Union (EU long-term scenarios 2050), Fraunhofer ISI, Karlsruhe, September 2011.

  6. Efficiency optimization of wireless power transmission systems for active capsule endoscopes.

    PubMed

    Zhiwei, Jia; Guozheng, Yan; Jiang, Pingping; Zhiwu, Wang; Hua, Liu

    2011-10-01

    Multipurpose active capsule endoscopes have drawn considerable attention in recent years, but these devices continue to suffer from energy limitations. A wireless power supply system is regarded as a practical way to overcome the power shortage problem in such devices. This paper focuses on the efficiency optimization of a wireless energy supply system with size and safety constraints. A mathematical programming model in which these constraints are considered is proposed for transmission efficiency, optimal frequency and current, and overall system effectiveness. To verify the feasibility of the proposed method, we use a wireless active capsule endoscope as an illustrative example. The achieved efficiency can be regarded as an index for evaluating the system, and the proposed approach can be used to direct the design of transmitting and receiving coils.

  7. New estimation architecture for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Covino, Joseph M.; Griffiths, Barry E.

    1991-07-01

    This paper describes a novel method of hierarchical asynchronous distributed filtering called the Net Information Approach (NIA). The NIA is a Kalman-filter-based estimation scheme for spatially distributed sensors which must retain their local optimality yet require a nearly optimal global estimate. The key idea of the NIA is that each local sensor-dedicated filter tells the global filter 'what I've learned since the last local-to-global transmission,' whereas in other estimation architectures the local-to-global transmission consists of 'what I think now.' An algorithm based on this idea has been demonstrated on a small-scale target-tracking problem with many encouraging results. Feasibility of this approach was demonstrated by comparing NIA performance to an optimal centralized Kalman filter (lower bound) via Monte Carlo simulations.
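    The "what I've learned since the last transmission" idea is natural in information (inverse-covariance) form, where new evidence is additive, so a local filter can report only its increment. The static-state scalar toy below is an illustrative reading of that idea, not the NIA algorithm itself; all names and noise figures are assumptions:

    ```python
    import random

    class LocalFilter:
        """Scalar information-form filter that reports only the information
        gained since its last report (a toy reading of the NIA idea)."""
        def __init__(self):
            self.Y = 0.0   # accumulated information (1/variance)
            self.y = 0.0   # information state (estimate / variance)
            self._sent_Y = 0.0
            self._sent_y = 0.0

        def measure(self, z, r):
            self.Y += 1.0 / r
            self.y += z / r

        def report(self):
            dY, dy = self.Y - self._sent_Y, self.y - self._sent_y
            self._sent_Y, self._sent_y = self.Y, self.y
            return dY, dy

    random.seed(3)
    truth = 4.0
    global_Y = global_y = 0.0
    sensors = [LocalFilter(), LocalFilter()]
    for step in range(200):
        for s in sensors:
            s.measure(truth + random.gauss(0.0, 1.0), 1.0)
        if step % 10 == 9:              # asynchronous, infrequent reports
            for s in sensors:
                dY, dy = s.report()
                global_Y += dY
                global_y += dy
    estimate = global_y / global_Y
    print(round(estimate, 2))           # close to the true value 4.0
    ```

    Because increments add, no measurement is double-counted no matter how irregular the reporting schedule is, which is what lets the global estimate stay near-optimal.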

  8. Analytical Method for Determining Tetrazene in Water.

    DTIC Science & Technology

    1987-12-01

    ...decanesulfonic acid sodium salt. The mobile phase pH was adjusted to 3 with glacial acetic acid. The modified mobile phase was optimal for separating of...modified with sodium tartrate, gave a well-defined reduction wave at the dropping mercury electrode. The height of the reduction wave was proportional to...antimony trisulphide, nitrocellulose, PETN, powdered aluminum and calcium silicide. The primer samples were sequentially extracted, first with

  9. The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems

    NASA Astrophysics Data System (ADS)

    Granmo, Ole-Christoffer

    The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
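    The Bayesian Learning Automaton balances this exploration/exploitation trade-off by keeping a Beta posterior per arm and pulling the arm whose posterior sample is largest, i.e. Thompson-style sampling. A minimal sketch with assumed reward probabilities (the class name and numbers are illustrative, not from the paper):

    ```python
    import random

    class BetaBernoulliAgent:
        """Each arm keeps a Beta(wins+1, losses+1) posterior over its reward
        probability; arm selection samples once from each posterior."""
        def __init__(self, n_arms=2):
            self.wins = [0] * n_arms
            self.losses = [0] * n_arms

        def select_arm(self):
            samples = [random.betavariate(w + 1, l + 1)
                       for w, l in zip(self.wins, self.losses)]
            return samples.index(max(samples))

        def update(self, arm, reward):
            if reward:
                self.wins[arm] += 1
            else:
                self.losses[arm] += 1

    # Simulated bandit with hidden reward probabilities 0.8 and 0.4.
    random.seed(0)
    probs = [0.8, 0.4]
    agent = BetaBernoulliAgent()
    for _ in range(2000):
        arm = agent.select_arm()
        agent.update(arm, random.random() < probs[arm])
    print(agent.wins[0] + agent.losses[0])  # pulls of the better arm dominate
    ```

    Early on the wide posteriors force exploration; as evidence accumulates they sharpen and the agent exploits the better arm almost exclusively.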

  10. Modular synthesis of a dual metal-dual semiconductor nano-heterostructure

    DOE PAGES

    Amirav, Lilac; Oba, Fadekemi; Aloni, Shaul; ...

    2015-04-29

    Reported is the design and modular synthesis of a dual metal-dual semiconductor heterostructure with control over the dimensions and placement of its individual components. Analogous to molecular synthesis, colloidal synthesis is now evolving into a series of sequential synthetic procedures with separately optimized steps. Here we detail the challenges and parameters that must be considered when assembling such a multicomponent nanoparticle, and their solutions.

  11. Goal-Directed Decision Making with Spiking Neurons.

    PubMed

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level.

  12. Goal-Directed Decision Making with Spiking Neurons

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636

  13. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
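    The l1 objective is itself linear-programmable via the standard split into bound variables, which is presumably what makes the succession-of-LPs heuristic possible. One way to write the per-iteration LP (the notation here is assumed, not taken from the paper):

    ```latex
    \min_{\Delta x,\, t} \; \sum_i t_i
    \quad \text{s.t.} \quad
    -t_i \le \Delta x_i \le t_i \;\; \forall i,
    \qquad
    F(x_0 + \Delta x) \le f^{\max},
    ```

    where $\Delta x$ collects the line-inductance modifications, $t$ the auxiliary bound variables, and $F(\cdot) \le f^{\max}$ the linearized line-flow (thermal-limit) constraints around the current operating point $x_0$. At the optimum each $t_i = |\Delta x_i|$, so the LP objective equals the l1 norm, and its tendency to put most $t_i$ at zero explains the sparse placements the authors observe.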

  14. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. 
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
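    The core numerical device here, gradient descent driven by very noisy few-history gradient estimates and stabilized by momentum, can be sketched generically. Everything below (the quadratic objective standing in for the dose mismatch, the noise level, the step sizes) is an illustrative stand-in, not the authors' transport-coupled implementation:

    ```python
    import random

    def noisy_grad(w, target, noise=0.5):
        """Gradient of 0.5*sum((w - target)^2), corrupted by heavy noise to
        mimic gradients estimated from very few Monte Carlo histories."""
        return [(wi - ti) + random.gauss(0.0, noise) for wi, ti in zip(w, target)]

    def momentum_descent(target, steps=500, lr=0.05, beta=0.9):
        random.seed(1)
        w = [0.0] * len(target)   # fluence-like weights being optimized
        v = [0.0] * len(target)   # momentum buffer
        for _ in range(steps):
            g = noisy_grad(w, target)
            # Momentum keeps an exponential average of the stochastic
            # gradients, smoothing the per-step noise across iterations.
            v = [beta * vi + (1 - beta) * gi for vi, gi in zip(v, g)]
            w = [wi - lr * vi for wi, vi in zip(w, v)]
        return w

    target = [1.0, 2.0, 3.0]
    w = momentum_descent(target)
    print([round(x, 1) for x in w])  # w settles near the target weights
    ```

    The averaging is what makes descent on near-useless single-step gradients viable, the same reason the paper can optimize during transport rather than after a full precalculation.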

  16. Level-Set Topology Optimization with Aeroelastic Constraints

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  17. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design carried a weight penalty. Optimization techniques were applied to determine whether the weight could be reduced while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed from existing software that coupled structural analysis with optimization and executed on a network of computer workstations. To shorten turnaround, this system took advantage of the parallelism offered by the finite difference technique of computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the time to complete one optimization cycle from two hours to one-half hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour of turnaround per optimization cycle; this would take four hours for the sequential system.
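    The exploited parallelism comes from the structure of finite-difference gradients: each perturbed analysis is independent of the others, so the evaluations can run concurrently. A sketch with a toy objective in place of the structural analysis (function names and the thread-based executor are illustrative choices, not the paper's system):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def finite_difference_gradient(f, x, h=1e-6, workers=4):
        """Forward-difference gradient; each partial derivative needs one
        independent evaluation, so the runs are farmed out to workers."""
        fx = f(x)
        def partial(i):
            xp = list(x)
            xp[i] += h
            return (f(xp) - fx) / h
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(partial, range(len(x))))

    # Cheap quadratic standing in for a structural analysis run.
    f = lambda x: sum(v * v for v in x)
    g = finite_difference_gradient(f, [1.0, 2.0, 3.0])
    print([round(v, 4) for v in g])  # ≈ [2.0, 4.0, 6.0]
    ```

    With n design variables and n workers, the gradient costs roughly one analysis of wall-clock time instead of n, which is the source of the two-hour-to-half-hour reduction described above.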

  18. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  19. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    A fully integrated aerodynamic/dynamic optimization procedure is described for helicopter rotor blades. The procedure combines performance and dynamic analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuvers; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case, the objective function involves power required (in hover, forward flight and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  20. Focusing of light through turbid media by curve fitting optimization

    NASA Astrophysics Data System (ADS)

    Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi

    2016-12-01

    The construction of wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, similar to the genetic algorithm (GA), the proposed CFA technique is far less susceptible to the experimental noise than the CSA. Furthermore, only three measurements of feedback signals are enough for CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focus intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
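    For a single phase segment, the focal intensity varies sinusoidally with the segment's phase, which is why three measurements suffice to pin down the optimum. A sketch of that three-point fit under the assumed model I(φ) = A + B·cos(φ − φ0), with equally spaced probe phases (an illustrative reading of the CFA, not the authors' code):

    ```python
    import math

    def fit_optimal_phase(intensities, phases):
        """Fit I(phi) = A + B*cos(phi - phi0) through measurements taken at
        equally spaced phases and return the intensity-maximizing phi0."""
        n = len(phases)
        a = sum(intensities) / n  # DC offset (unused, shown for completeness)
        c = 2.0 / n * sum(i * math.cos(p) for i, p in zip(intensities, phases))
        s = 2.0 / n * sum(i * math.sin(p) for i, p in zip(intensities, phases))
        return math.atan2(s, c)   # phase that maximizes the fitted cosine

    # Synthetic feedback signal with a hidden optimum at phi0 = 1.0 rad.
    true_phi0 = 1.0
    phases = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    signal = [5.0 + 2.0 * math.cos(p - true_phi0) for p in phases]
    print(round(fit_optimal_phase(signal, phases), 6))  # recovers phi0 ≈ 1.0
    ```

    Because the fit uses all three samples jointly, a noisy individual measurement perturbs the estimated φ0 only mildly, consistent with the reported robustness over stepwise search.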
